In open multi-agent systems, trust models are an important tool for agents to achieve effective interactions; however, trust is an inherently subjective concept, and thus for the agents to communicate about trust meaningfully, additional information is required. This thesis focuses on Trust Alignment and Trust Adaptation, two approaches for communicating about trust.
The first approach is to model the problem of communicating trust as a problem of alignment. We show that currently proposed solutions, such as common ontologies or ontology alignment methods, lead to additional problems, and propose trust alignment as an alternative, using the interactions that two agents share as the basis for learning an alignment. We model this using the mathematical framework of Channel Theory, which allows us to formalise how two agents' subjective trust evaluations are related through the interactions that support them. Because the agents do not have access to each other's trust evaluations, they must communicate; we specify relevance and consistency, two necessary properties for this communication. The receiver of the communicated trust evaluations can generalise the messages using θ-subsumption, leading to a predictive model that allows an agent to translate future communications from the same sender.
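To make the generalisation step concrete, the following is a minimal sketch of θ-subsumption in Python: a general clause subsumes a ground interaction description if some substitution maps its variables so that every literal of the clause appears in the description. The clause representation, predicate names, and example literals are illustrative assumptions, not the thesis's actual formalisation.

```python
# Minimal theta-subsumption check: literals are tuples (predicate, arg1, ...);
# capitalised argument strings are variables, lower-case strings are constants.

def is_variable(term):
    """Variables are written as capitalised strings (e.g. 'Seller')."""
    return isinstance(term, str) and term[:1].isupper()

def subsumes(general, specific, theta=None):
    """Return a substitution theta such that general{theta} is a subset of
    specific, or None if no such substitution exists (backtracking search)."""
    theta = dict(theta or {})
    if not general:
        return theta
    first, rest = general[0], general[1:]
    pred, args = first[0], first[1:]
    for lit in specific:
        if lit[0] != pred or len(lit) - 1 != len(args):
            continue
        candidate = dict(theta)
        ok = True
        for a, b in zip(args, lit[1:]):
            if is_variable(a):
                if candidate.get(a, b) != b:   # inconsistent binding
                    ok = False
                    break
                candidate[a] = b
            elif a != b:                       # constants must match exactly
                ok = False
                break
        if ok:
            result = subsumes(rest, specific, candidate)
            if result is not None:
                return result
    return None

# Illustrative use: a generalised rule relating a sender's message to the shared
# interactions subsumes a concrete interaction description, so the rule applies.
rule = [("rated_good", "Seller"), ("delivered_on_time", "Seller", "Item")]
interaction = [("rated_good", "bob"), ("delivered_on_time", "bob", "book42"),
               ("price_fair", "bob", "book42")]
print(subsumes(rule, interaction))  # {'Seller': 'bob', 'Item': 'book42'}
```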
We demonstrate this alignment process in practice, using TILDE, a first-order regression algorithm, to learn alignments in an example scenario. We find empirically that: (1) the difficulty of learning an alignment depends on the relative complexity of the trust models involved; (2) our method outperforms other methods for trust alignment; and (3) our alignment method deals well with deception.
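As a rough illustration of this learning step, the sketch below uses scikit-learn's DecisionTreeRegressor as a propositional stand-in for the first-order regression trees that TILDE induces: it learns a mapping from the sender's communicated evaluation plus features of the shared interaction to the receiver's own evaluation. The feature names and the synthetic data-generating rule are assumptions made only for the example.

```python
# Propositional stand-in for the alignment-learning step (the thesis uses TILDE,
# a first-order regression tree learner; this toy version is purely illustrative).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Each shared interaction: sender's evaluation, delivery delay, price deviation.
X = np.column_stack([
    rng.uniform(0, 1, 500),       # sender's trust evaluation of the third party
    rng.integers(0, 10, 500),     # days the delivery was late
    rng.uniform(-0.2, 0.2, 500),  # relative price deviation
])

# The receiver's own (subjective) evaluation of the same interactions:
# in this synthetic rule it weighs lateness more heavily than the sender does.
y = np.clip(0.7 * X[:, 0] - 0.08 * X[:, 1] + 0.2, 0, 1)

# Learn the alignment: a mapping from the sender's message plus the shared
# interaction description to the receiver's own evaluation.
alignment = DecisionTreeRegressor(max_depth=4).fit(X, y)

# Translate a future communication from the same sender about a new interaction.
print(alignment.predict([[0.9, 6, 0.05]]))
```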
The second approach to communicating about trust is to allow agents to reason about their trust model and personalise communications to better suit the other agent's needs. Contemporary models do not allow for enough introspection into, or adaptation of, the trust model, so we present AdapTrust, a method for incorporating a computational trust model into the cognitive architecture of the agent. In AdapTrust, the agent's beliefs and goals influence the priorities between the factors that matter to the trust calculation. These priorities, in turn, define the values of the trust model's parameters, so the agent can effect changes in its computational trust model by reasoning about its beliefs and goals. This way it can proactively change its model to produce trust evaluations that are better suited to its current needs. We give a declarative formalisation of this system by integrating it into a multi-context system representation of a belief-desire-intention (BDI) agent architecture. We show that three contemporary trust models can be incorporated into an agent's reasoning system using our framework.
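The following toy sketch illustrates this idea under assumed factor names, goals, and a rank-based weighting scheme that are not taken from the thesis: a goal induces a priority ordering over factors, the ordering fixes the parameters (weights) of a simple weighted-average trust model, and changing the goal therefore changes the resulting evaluations.

```python
# Toy version of the goal-driven parametrisation described above; the factors,
# goals, and rank-based weighting scheme are illustrative assumptions.

def weights_from_priorities(priorities):
    """Turn a priority ordering (most important first) into normalised weights."""
    ranks = {f: len(priorities) - i for i, f in enumerate(priorities)}
    total = sum(ranks.values())
    return {f: r / total for f, r in ranks.items()}

# Goals -> priority orderings over trust factors (assumed for the example).
priorities_by_goal = {
    "buy_cheap_book":  ["price", "honesty", "timeliness"],
    "urgent_delivery": ["timeliness", "honesty", "price"],
}

def trust(evaluations, goal):
    """Weighted-average trust evaluation, parametrised by goal-driven priorities."""
    w = weights_from_priorities(priorities_by_goal[goal])
    return sum(w[f] * evaluations[f] for f in w)

seller = {"price": 0.9, "honesty": 0.6, "timeliness": 0.3}
print(trust(seller, "buy_cheap_book"))   # price dominates -> higher evaluation
print(trust(seller, "urgent_delivery"))  # timeliness dominates -> lower evaluation
```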
Subsequently, we use AdapTrust in an argumentation framework that allows agents to create a justification for their trust evaluations. Agents justify their evaluations in terms of priorities between factors, which in turn are justified by their beliefs and goals. These justifications can be communicated to other agents in a formal dialogue, and by arguing and reasoning about other agents' priorities, goals and beliefs, the agent may adapt its trust model to provide a personalised trust recommendation for another agent. We test this system empirically and find that it outperforms the current state-of-the-art system for arguing about trust evaluations.
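As a loose illustration of this justification structure, the sketch below represents a trust evaluation supported by a priority claim, which is itself supported by a goal and a belief, together with a trivial dialogue move in which the other agent rejects one of those supports; the node types, example claims, and adaptation trigger are all assumptions made for the example, not the thesis's argumentation framework.

```python
# Toy justification tree: evaluation <- priority claim <- goal and belief.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    claim: str
    supports: List["Argument"] = field(default_factory=list)

# Justify "trust(bob) is low" via the priority that produced it, and justify the
# priority via the goal and belief behind it.
justification = Argument(
    "trust(bob) = 0.5 (low)",
    supports=[Argument(
        "timeliness takes priority over price",
        supports=[Argument("goal: urgent_delivery"),
                  Argument("belief: late deliveries ruin urgent orders")],
    )],
)

def challenged(arg: Argument, counter_claim: str) -> bool:
    """A very simplified dialogue move: the other agent rejects a supporting claim
    somewhere in the justification tree."""
    return any(counter_claim in s.claim or challenged(s, counter_claim)
               for s in arg.supports)

# The asker reveals it does not hold the urgent-delivery goal, so the priority's
# support is undercut and the recommender should adapt its priorities before
# producing a personalised recommendation.
print(challenged(justification, "goal: urgent_delivery"))  # True -> adapt
```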