Prob-Cog: An Adaptive Filtering Model for Trust Evaluation
Abstract
Trust and reputation systems are central to resisting threats posed by malicious agents in decentralized systems. In previous work we introduced the Prob-Cog model, a multi-layer filtering approach for consumer agents in e-marketplaces that provides mechanisms for identifying participants who disseminate unfair ratings by cognitively eliciting the behavioural characteristics of e-marketplace agents. We have argued that the notion of unfairness does not exclusively refer to deception but can also arise from differences in dispositions. The proposed filtering approach goes beyond inflexible judgements of participant quality and instead allows environmental circumstances and the human dispositions that we call optimism, pessimism and realism to be incorporated into our trustworthiness evaluation procedures. In this paper we briefly outline the two layers before providing a detailed exposition of our experimental results, comparing Prob-Cog with FIRE and the personalized approach under various attack scenarios and under normal conditions.
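To make the layered idea concrete, the following minimal Python sketch shows one way such a two-layer filter could be organised: a first layer that discards advisors whose reports deviate too far from the consumer's own experience, and a second layer that compensates for an advisor's disposition rather than discarding them. This is an illustrative assumption-based sketch, not the Prob-Cog formulation itself; names such as Advisor, DEVIATION_THRESHOLD and disposition_bias are introduced here purely for illustration.

```python
# Illustrative sketch only: a two-layer rating filter in the spirit described above.
# All names and thresholds are hypothetical, not taken from the Prob-Cog paper.
from dataclasses import dataclass
from statistics import mean

DEVIATION_THRESHOLD = 0.3  # assumed layer-1 cut-off on rating deviation


@dataclass
class Advisor:
    name: str
    ratings: list[float]           # ratings of a provider, in [0, 1]
    disposition_bias: float = 0.0  # >0 optimistic, <0 pessimistic, ~0 realistic


def layer_one_filter(advisors: list[Advisor], own_experience: float) -> list[Advisor]:
    """Drop advisors whose average rating deviates too far from the
    consumer's own experience (treated here as possible deception)."""
    return [a for a in advisors
            if abs(mean(a.ratings) - own_experience) <= DEVIATION_THRESHOLD]


def layer_two_adjust(advisor: Advisor) -> list[float]:
    """Compensate for the advisor's disposition instead of discarding them:
    optimists' ratings are shifted down, pessimists' up."""
    return [min(1.0, max(0.0, r - advisor.disposition_bias)) for r in advisor.ratings]


def trustworthiness(advisors: list[Advisor], own_experience: float) -> float:
    """Aggregate the disposition-adjusted ratings of the surviving advisors."""
    kept = layer_one_filter(advisors, own_experience)
    adjusted = [r for a in kept for r in layer_two_adjust(a)]
    return mean(adjusted) if adjusted else own_experience


if __name__ == "__main__":
    advisors = [
        Advisor("optimist", [0.90, 0.95, 0.85], disposition_bias=0.15),
        Advisor("realist", [0.70, 0.75, 0.72]),
        Advisor("unfair", [0.10, 0.05, 0.20]),  # removed by the first layer
    ]
    print(f"Estimated trustworthiness: {trustworthiness(advisors, 0.72):.2f}")
```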
Domains
Computer Science [cs]