RELYING ON TRUST TO FIND RELIABLE INFORMATION

Alfarez Abdul-Rahman and Stephen Hailes
Department of Computer Science, University College London
Gower Street, London WC1E 6BT, United Kingdom
E-mail: {F.AbdulRahman,S.Hailes}@cs.ucl.ac.uk

Abstract

Information retrieved from open distributed systems can be of uncertain reliability, and sifting through a large amount of data can be a complex procedure. In real life, we handle such problems through the social mechanisms of trust and word-of-mouth, or reputation. We propose a sociologically motivated trust and reputation model for the problem of reliable information retrieval and present an example of its use in this paper.

Keywords: reputation; recommendation; trust; information retrieval; open distributed systems.

Introduction

We deal with the complexities and uncertainties of life by relying on two important social mechanisms: trust and reputation. However, our ability to use them is impaired when engaging in online interactions in large distributed systems such as the Internet, because no satisfactory trust model is currently available to guide us through this complexity and to automate the sifting out of reliable information. Without a trust model, the task of identifying reputable and reliable information sources becomes daunting, as any network user can be a source of information. This is the motivation underlying the work described in this paper. It is based on a more general trust model presented in (Abdul-Rahman and Hailes, 1997), which was designed for reasoning about trust in distributed systems security. This paper therefore outlines a multi-disciplinary approach to the problem of information reliability, combining motivations rooted in distributed systems security with elementary foundations in sociology; such is the far-reaching nature of trust. The goal of this paper is to show that we can make information retrieval return more reliable results by applying the social mechanisms of trust and reputation.

Information Retrieval Problems

In general, increasing complexity and uncertainty are the main problems faced by users of the large and expanding Internet. Specifically, we focus on three problem areas:

1. Reliability of retrieved information: Search engines return references to arbitrary pages containing the queried keyword(s), but they are unable to qualify the retrieved information with data about the information sources themselves.

2. Semantic mapping between agents (we use the term "agent" to refer to any producer or consumer of information): Keywords may have different meanings depending on context and even language (i.e. when a keyword is treated as simply a string of bytes). Furthermore, some keywords, like "good" and "beautiful", have subjective interpretations.

3. Complexity reduction: It is simply beyond an average person's resources to filter through all the information available on the Internet to select the items best suited to his or her needs.

We argue that trust and reputation mechanisms can help reduce these problems in the following way. Socially, we tend to attach greater weight to the opinions of people we know and trust (Hardin, 1991). Thus, by navigating our information retrieval process through trusted sources, it is possible to obtain information that is more reliable than that from arbitrary sources. This also automatically reduces the problem of information overload, since we only obtain information from sources we know we can trust and rely on; trust thus also serves as a mechanism for complexity reduction (Luhmann, 1979). Furthermore, with the ability to handle reputational information on the Internet, a form of social control can be put in place, clustering results towards those sources that are positively reputable (Rasmusson, 1996).

Trust

Trust is a social phenomenon. As such, any artificial model of trust must be based on how trust works in society. In this work, a survey of the social sciences was carried out, which highlighted the following properties of trust. We find it useful to use the following definition by Gambetta: "trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent will perform a particular action, both before [we] can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects [our] own action" (Gambetta, 1990).

Trust is not an objective property of an agent but a subjective degree of belief about specific agents (Misztal, 1996; McKnight and Chervany, 1996), within a specific context, ranging from complete distrust to complete trust (Gambetta, 1990). However, trust is neither a prediction nor some measure of probability (Luhmann, 1979). A trusting action is taken despite uncertainty about the outcome, but in anticipation of a positive outcome (Baier, 1985; Misztal, 1996; Barber, 1983). Nevertheless, such an action may not follow the rules of rational choice theory (Barber, 1983; Gambetta, 1990), as an agent may have motivations other than risk and utility. Trust is also non-monotonic: the degree of trust is reappraised with additional experience and information. Lastly, trust decisions are made based on the agent's knowledge and experiences (Hardin, 1993; Jøsang, 1996).

Reputation

In the words of Misztal (1996), "[Reputation] helps us to manage the complexity of social life by singling out trustworthy people in whose interest it is to meet promises." In this work, a reputation is an expectation about an agent's behaviour based on information about, or observations of, its past behaviour. Reputational information need not be solely the opinion of others; we also include reputational information based entirely on an individual agent's own personal experiences.

This allows us to generalise reputational information so as to combine personal opinions and the opinions of others about the same reputation subject.

The Trust-Reputation Model

We propose a model for determining the trustworthiness of agents based on each agent's collected statistics on 1) direct experiences and 2) recommendations about other agents received from other agents (see Figure 1). Agents do not maintain a database of specific trust statements in the form "a trusts b with respect to context c". Instead, at any given time, the trustworthiness of a particular agent is obtained by summarising the relevant subset of recorded experiences.

Direct and Recommender Trust

[Figure 1: The Trust-Reputation Model.]

An agent's belief in another agent's (a) trustworthiness within a certain context (c) to a certain degree (td) is represented by

    t(a, c, td), where td ∈ {vt, t, u, vu}

The semantics of td are given in Table 1 below. We leave the context variable c open, so that agents are able to define their own contexts when using this trust model.

    td   Meaning
    vt   Very Trustworthy
    t    Trustworthy
    u    Untrustworthy
    vu   Very Untrustworthy

    Table 1: Trust degrees and their meanings.

An agent may also believe that another agent (b) is trustworthy to a certain degree (rtd) for giving recommendations about other agents (we assume that recommenders may lie, or may give contradictory recommendations to different agents) with respect to a context (c), represented as

    rt(b, c, rtd)

The value of rtd indicates the semantic closeness between the recommendation and the evaluating agent x's own perception of the recommended agent's trustworthiness. For example, the recommender's perception of "very trustworthy" may only equate to what x perceives to be "trustworthy". As with direct trust, we leave the context variable c open.
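To make the notation concrete, here is a minimal Python sketch of the two statement forms. It is our illustration, not part of the paper, and the names TrustDegree, DirectTrust and RecommenderTrust are our own.

```python
from dataclasses import dataclass
from enum import IntEnum

class TrustDegree(IntEnum):
    """The four trust degrees of Table 1, ordered worst to best."""
    VU = 0  # very untrustworthy
    U = 1   # untrustworthy
    T = 2   # trustworthy
    VT = 3  # very trustworthy

@dataclass(frozen=True)
class DirectTrust:
    """t(a, c, td): agent a is trusted to degree td in context c."""
    agent: str
    context: str      # left open: agents define their own contexts
    degree: TrustDegree

@dataclass(frozen=True)
class RecommenderTrust:
    """rt(b, c, rtd): how far b's recommendations in context c usually
    sit from our own evaluations (0 = semantically identical)."""
    agent: str
    context: str
    degree: int       # rtd, a semantic-closeness distance, not a TrustDegree
```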

Experience Sets

An agent records its experiences with other agents in two separate sets: Q for direct trust experiences and R for recommender trust experiences. Assume that C = {c_1, ..., c_n} is the set of contexts known to an agent x, and that A = {a_1, ..., a_n} is the set of agents that x has interacted with (either directly or as recommenders). An experience, e, is a member of the ordered set E = {vg, g, b, vb}, representing "very good", "good", "bad" and "very bad" respectively; these values correspond to the trust degrees given in Table 1. For each agent a and context c there is an associated 4-tuple s = (s_vg, s_g, s_b, s_vb), where s_j is an accumulator for experiences e = j. Let S = {(s_vg, s_g, s_b, s_vb)}. Q is then defined as

    Q ⊆ C × A × S

Experiences with recommender agents are recorded differently. The goal is to obtain a measure of similarity between an agent's recommendation and x's perception of the outcome. As a simple example, if a recommends to x that agent b is a "very good" source of information with respect to context c, and x's evaluation of its own experience with b is merely "good", then future recommendations from a can be adjusted accordingly. In this example, we say that x's experience with b downgrades a's recommendation by one, or -1. The domain of possible adjustment values is the set G = {-3, -2, -1, 0, 1, 2, 3}. For each recommending agent a and context c there are four sets of adjustment experiences, T_vg, T_g, T_b and T_vb; each T_e, where e ∈ E, holds the adjustments for a's recommendations of degree e, with values drawn from G. Let T = {(T_vg, T_g, T_b, T_vb)}. R is then defined as

    R ⊆ C × A × T

Evaluating Direct Trust

To determine the direct trust degree td in an agent a with respect to context c (in other words, the reputation of a in context c), first obtain the relation (c, a, s) from Q, with s = (s_vg, s_g, s_b, s_vb). The value of td is the subscript, or index, e of the largest element s_e in s:

    td = e  such that  s_e = max(s), td ∈ E    (1)

If max(s) is attained by more than one element, then td is assigned an uncertainty value according to Table 2 below (the symbol "?" indicates zero or one other tied value):

    Tied maxima              td    Meaning
    vg, g, ?                 µ+    Mostly good, and some bad.
    vb, b, ?                 µ-    Mostly bad, and some good.
    All other combinations   µ0    Equal amounts of good and bad.

    Table 2: Uncertainty values.

Evaluating Recommender Trust

To determine the recommender trust degree rtd for an agent a in context c, we first find the relation (c, a, t), with t = (T_vg, T_g, T_b, T_vb), in R. The value of rtd is obtained by taking the mode (written mod) of the absolute values of the members of the combined set T_a = T_vg ∪ T_g ∪ T_b ∪ T_vb. This tells us how close most of a's recommendations are to the evaluating agent's own experiences, by identifying how far a's recommendations usually are from the actual experience of relying on them:

    rtd = mod({ |x| : x ∈ T_a })    (2)
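The following sketch, again ours rather than the authors', shows one way to realise equations (1) and (2) and the tie-breaking of Table 2 in Python. The dictionary shapes chosen for Q and R are assumptions; the paper prescribes no implementation.

```python
from collections import Counter
from typing import Dict, List, Tuple

E = ("vg", "g", "b", "vb")  # experience values: very good, good, bad, very bad

# Assumed shapes for the experience sets:
# Q maps (context, agent) to the counter tuple s = (s_vg, s_g, s_b, s_vb);
# R maps (context, agent) to the adjustment sets (T_vg, T_g, T_b, T_vb).
Q: Dict[Tuple[str, str], Dict[str, int]] = {}
R: Dict[Tuple[str, str], Dict[str, List[int]]] = {}

def direct_trust_degree(context: str, agent: str) -> str:
    """Eq. (1) with the tie-breaking of Table 2."""
    s = Q.get((context, agent), {e: 0 for e in E})
    top = max(s.values())
    maxima = {e for e in E if s[e] == top}
    if len(maxima) == 1:
        return maxima.pop()                 # unique largest element: td = e
    # Table 2: "?" allows at most one other tied value.
    if {"vg", "g"} <= maxima and len(maxima) <= 3:
        return "mu+"                        # mostly good, and some bad
    if {"vb", "b"} <= maxima and len(maxima) <= 3:
        return "mu-"                        # mostly bad, and some good
    return "mu0"                            # equal good and bad (incl. no data)

def recommender_trust_degree(context: str, agent: str):
    """Eq. (2): the mode of the absolute adjustment values in T_a."""
    t = R.get((context, agent), {e: [] for e in E})
    t_a = [abs(x) for t_e in t.values() for x in t_e]
    if not t_a:
        return None                         # unknown recommender
    return Counter(t_a).most_common(1)[0][0]
```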

Evaluating the Semantic Closeness of Recommenders

Let sc be a 4-tuple (sc_vg, sc_g, sc_b, sc_vb). To evaluate the semantic closeness, sc, of a recommender a in context c, first find the relation (c, a, t) in R, where t = (T_vg, T_g, T_b, T_vb). Then assign to each member sc_e of sc the mode of the corresponding member set T_e of t:

    ∀e ∈ E, sc_e = mod(T_e)    (3)

If T_e is multimodal, this means that there is uncertainty in a's recommendations, and further experience is required to resolve it. In this case we let sc_e = 0, so that no adjustment is made to a's recommendations. This allows us to take future recommendations at face value and decide on the difference after the experience of relying on those uncertain recommendations.

Evaluating a Recommendation

To evaluate a recommendation of degree rd from a about b's trustworthiness in context c, represented by rec(a, b, c, rd) where rd ∈ E, first evaluate the semantic closeness sc = (sc_vg, sc_g, sc_b, sc_vb) of a for context c as shown in the previous section. Then adjust rd using the appropriate member of sc to obtain the adjusted recommended trust degree, rd*:

    rd* = rd ⊕ sc_rd    (4)

where ⊕ denotes the operation "is increased by the order of". For example, if rd = vg and sc_vg = -1 (i.e. downgrade by one), then rd* = vg ⊕ -1 = g.

Updating Experiences

After an experience with an agent a, the experience relation for a in Q is updated by incrementing the appropriate experience-type counter. For example, if (c, a, s = (s_vg, s_g, s_b, s_vb)) is the appropriate relation in Q and it was a good experience (e = g), then s_g is incremented. Formally, given an experience of e,

    s_e = s_e + 1    (5)

Furthermore, if the experience was the result of relying on a recommendation from agent b, then we also update the experience record for b in R by obtaining the appropriate relation (c, b, t) in R, where t = (T_vg, T_g, T_b, T_vb), and adding the difference between the recommended trust degree, rd, and the experience, e, to T_rd:

    T_rd = T_rd ∪ {e ⊖ rd}    (6)

The difference, computed by the operator ⊖, is the number of levels by which rd must be upgraded or downgraded to reach e; e.g. if rd = vg ("very good") and e = g ("good") then e ⊖ rd = -1, or downgrade by one.

Combining Recommendations

Sometimes an agent x may encounter more than one recommender for a particular recommended agent y. To evaluate the final trust degree of y, ct_y, by combining the recommendations, we first obtain the recommender trust values for all known recommenders of y (recommendations from unknown agents are discarded). If a_1, ..., a_n are the recommenders, we obtain rtd_k in each rt(a_k, c, rtd_k) for k = 1...n. Each recommender is then assigned a weight according to its rtd_k value using Table 3 below (the weightings were chosen almost arbitrarily, but in such a way as to reflect the greater importance of recommendations from agents who are semantically closer):

    rtd_k    0   1   2   3   unknown
    weight   9   5   3   1   0

    Table 3: Recommender weights.

We then adjust the recommendations according to (4). Now, for each recommended trust degree e ∈ E, we sum the weightings of the recommenders who recommended e. Assume that a_1, ..., a_n are the recommenders of y, that w_1, ..., w_n are their corresponding individual weightings (i.e. w_n is the weighting for a_n), and that rec(a_n, y, c, rd_n) is a_n's recommendation. Let L_e be the set whose members are the weights associated with the recommenders who recommended e; then

    ∀e ∈ E, sum_e = Σ_{w_i ∈ L_e} w_i    (7)

ct_y is the degree e whose sum_e has the highest value. If there is more than one largest sum_e, then ct_y is assigned an uncertainty value according to Table 2.
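The sketch below follows equations (3) through (7) under the same assumed data shapes as before; it is our reading of the model, not the authors' implementation. Clamping rd* at the ends of the scale, and returning an explicit "uncertain" marker on ties, are our assumptions where the paper is silent.

```python
from collections import Counter

RANK = {"vb": 0, "b": 1, "g": 2, "vg": 3}        # the ordered experience scale
DEGREE = {v: k for k, v in RANK.items()}

def semantic_closeness(t_e):
    """Eq. (3): mod(T_e), taken as 0 when T_e is empty or multimodal."""
    if not t_e:
        return 0
    counts = Counter(t_e).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return 0                                  # multimodal: take face value
    return counts[0][0]

def adjust_recommendation(t, rd):
    """Eq. (4): rd* = rd (+) sc_rd, clamped to the scale (our assumption)."""
    sc_rd = semantic_closeness(t[rd])
    return DEGREE[min(max(RANK[rd] + sc_rd, 0), 3)]

def record_experience(s, t, e, rd=None):
    """Eqs. (5) and (6): s is the experienced agent's counters in Q; t is
    the recommender's adjustment sets in R, used only when the experience
    followed a recommendation of degree rd."""
    s[e] += 1
    if rd is not None and t is not None:
        t[rd].append(RANK[e] - RANK[rd])          # e (-) rd

WEIGHT = {0: 9, 1: 5, 2: 3, 3: 1, None: 0}       # Table 3; None = unknown

def combine(recommendations):
    """Eq. (7): recommendations is a list of (rtd_k, rd_star) pairs;
    the adjusted degree with the largest summed weight wins."""
    sums = {e: 0 for e in RANK}
    for rtd_k, rd_star in recommendations:
        sums[rd_star] += WEIGHT.get(rtd_k, 0)
    best = max(sums.values())
    winners = [e for e in sums if sums[e] == best]
    return winners[0] if len(winners) == 1 else "uncertain"   # ties: Table 2
```

For instance, a recommender whose "very good" recommendations have usually turned out merely "good" accumulates -1 entries in T_vg, so its future vg recommendations are downgraded to g before being weighted and combined.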

Conclusion

We have discussed why trust and reputation are important for reliable information retrieval, and we have proposed a model of trust and reputation motivated by their real-world characteristics as identified in the social sciences. We acknowledge, however, that the approach involves some ad hoc design decisions, namely the members of E and the weightings in Table 3. Future work will include further investigation into a more concrete basis for these values.

References

Abdul-Rahman, Alfarez and Hailes, Stephen (1997); A Distributed Trust Model; Proceedings of the New Security Paradigms '97 Workshop.

Baier, Annette (1985); Trust and Antitrust; Ethics, 96 (pp. 231-260).

Barber, Bernard (1983); The Logic and Limits of Trust; New Brunswick, New Jersey.

Gambetta, Diego (1990); Can We Trust Trust?; in Trust: Making and Breaking Cooperative Relations, Gambetta, D. (ed.), Basil Blackwell, Oxford (pp. 213-237).

Hardin, Russell (1993); The Street Level Epistemology of Trust; Politics and Society, 21 (pp. 505-531).

Jøsang, Audun (1996); The Right Type of Trust for Distributed Systems; Proceedings of the New Security Paradigms '96 Workshop.

Luhmann, Niklas (1979); Trust and Power; Wiley, Chichester.

McKnight, D. Harrison and Chervany, Norman L. (1996); The Meanings of Trust; Technical Report 94-04, Carlson School of Management, University of Minnesota.

Misztal, Barbara (1996); Trust in Modern Societies; Polity Press, Cambridge MA.

Rasmusson, Lars (1996); Socially Controlled Global Agent Systems; Master's Thesis, Swedish Institute of Computer Science.