The standard model of an extensive form game rules out two phenomena of importance in situations of dynamic strategic interaction: deception and unreliability. We show how the model can be generalized to incorporate these phenomena, and give some examples of the richer model in use.
We say that deception takes place when one player tricks another into believing that she has done something other than what she actually did. The standard model does allow moves to be uninformative (whenever information sets are non-singleton), in that it is not revealed which of several moves has been made. But they cannot be deceptive: the actual move made is never ruled out. Our extension of extensive form games relaxes the assumption that the information sets partition the set of nodes. In particular, the set of nodes believed possible after a certain move is made might not include the actual node.
Consider the story of the wooden horse in Virgil's Aeneid Book II. The Greeks have two choices, to go home and give up the war or to stay and attempt to sack Troy. The latter seems hopeless until Odysseus suggests the following plan: they should sail their ships out of sight behind the island of Tenedos and leave a gigantic wooden horse in front of the city. Believing the Greeks have really gone home, the Trojans accept the horse as a gift and break down their walls to wheel it into Troy. The Greeks then leap out of the horse and successfully sack the city. What we have here is a genuine case of deception:
one move is made, but another is observed. Note that while standard game theory does not rule out an agent having false beliefs (for example the beliefs that justify a rationalizable strategy may well be mistaken), nothing in the structure of the game itself forces her to have these beliefs. When deception takes place, on the other hand, the agent is forced to have false beliefs. So deception is not simply a case of an agent coming to a false belief on the basis of a mistaken prior. Rather, the agent receives false information about what move has been made, and updates her beliefs according to that information.
Formally, we replace the information sets of the standard model with an information correspondence I_i for each player. I_i is a function from the set X of nodes of the game tree to 2^X, the set of subsets of X. I_i(x) is to be interpreted as the set of nodes which player i considers possible when actually at node x. A player is deceived at node x if x is not in I_i(x). To model the Greeks' deception of the Trojans described above, we allow the Greeks three choices at the beginning of the game: to go home (reaching node h); to sail behind the island of Tenedos (t); and to stay put (s). In each case, the Trojans can choose to accept or reject the wooden horse. The Trojans' information at these three nodes is defined as follows: I_i(h) = {h}; I_i(t) = {h}; I_i(s) = {s}. It is clear that this correspondence cannot be represented by a partition.
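To make the construction concrete, here is a minimal sketch in Python of the Trojans' information correspondence and the deception test; the node labels and the dictionary representation are illustrative assumptions rather than the paper's formalism.

```python
# Nodes reached by the Greeks' first move: go home (h), hide behind
# Tenedos (t), or stay put (s).
nodes = {"h", "t", "s"}

# The Trojans' information correspondence I_i: for each actual node,
# the set of nodes the Trojans consider possible.
I_trojans = {
    "h": {"h"},  # Greeks go home; Trojans believe they have gone home
    "t": {"h"},  # Greeks hide behind Tenedos; Trojans still believe they have gone home
    "s": {"s"},  # Greeks stay put; Trojans see them staying put
}

def is_deceived(info, node):
    """A player is deceived at `node` if the actual node is not among
    the nodes she considers possible there."""
    return node not in info[node]

print([n for n in sorted(nodes) if is_deceived(I_trojans, n)])  # ['t']
```

Because t does not belong to its own cell, no partition of the nodes can generate this correspondence, which is exactly the sense in which the model goes beyond standard information sets.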
If deception is allowed, an agent may receive contradictory information about where she is in the game. Further on in Aeneid Book II, we find that Sinon and Laocoon give the King of Troy two opposing accounts of the true purpose of the wooden horse. In such cases, some of the information must be unreliable.
We further enrich the standard model by attaching reliability weights to each node. These weights have quantitative as well as qualitative significance, and they tell us how an agent will evaluate various conflicting pieces of information.
Her beliefs are updated using a modified version of the belief revision system proposed by Spohn (1988) (used in a game-theoretic context by Board 1998, TARK VII). The new system implies a relaxation of the AGM success axiom, so that the latest piece of information is not always accepted. In the example of Sinon vs. Laocoon, although Laocoon spoke second and warned King Priam that the wooden horse was a trick, the king decided that Sinon was the more reliable witness and accepted the horse as a genuine gift. We model this situation by attaching a greater reliability weight to the nodes following Sinon's move than to those following Laocoon's move.
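The following sketch illustrates, under simplifying assumptions, how reliability weights might arbitrate between conflicting reports. It captures only the spirit of the idea, not Spohn's full ranking-function machinery or the paper's modified system; the weights and the winner-take-all acceptance rule are assumptions made for the example.

```python
def revise(belief, reports):
    """Accept the report backed by the greatest reliability weight,
    rather than (as the AGM success postulate would require) the most
    recent one. `reports` is a list of (content, weight) pairs in the
    order they are received; ties favor the earlier report."""
    if not reports:
        return belief
    content, _ = max(reports, key=lambda r: r[1])
    return content

# King Priam's problem: Sinon speaks first and says the horse is a
# genuine gift; Laocoon speaks second and says it is a trick. The
# numerical weights are purely illustrative.
reports = [("gift", 2), ("trick", 1)]
print(revise("undecided", reports))  # 'gift' -- the later report is rejected
```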
In order to analyze rational behavior, we show how to construct epistemic models for these games. These models tell us what the players believe about the game and about each other, and also how these beliefs will be revised as the game progresses. This allows us to give a formal definition of rationality as expected utility maximization. We then consider various solution concepts.
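As a toy illustration of rationality as expected utility maximization against possibly mistaken beliefs, the sketch below evaluates the Trojans' accept/reject decision; the payoff numbers and function names are hypothetical and chosen only to show the calculation.

```python
def expected_utility(action, beliefs, payoff):
    """beliefs: probabilities over the nodes the player considers possible;
    payoff[(node, action)]: utility of taking `action` at `node`."""
    return sum(p * payoff[(node, action)] for node, p in beliefs.items())

def rational_choice(actions, beliefs, payoff):
    """An action is rational if it maximizes expected utility given beliefs."""
    return max(actions, key=lambda a: expected_utility(a, beliefs, payoff))

# The deceived Trojans put all probability on node h (Greeks gone home),
# even though the true node is t. Payoffs are hypothetical.
payoff = {
    ("h", "accept"): 1,   ("h", "reject"): 0,  # harmless if the Greeks have left
    ("t", "accept"): -10, ("t", "reject"): 0,  # disastrous if the Greeks are hiding
}
beliefs = {"h": 1.0}
print(rational_choice(["accept", "reject"], beliefs, payoff))  # 'accept'
```

The point of the example is that the Trojans' choice is rational relative to their beliefs, yet those beliefs are forced on them by the deceptive move.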
It can be shown that a player cannot be deceived along the path of a (perfect Bayesian) equilibrium, since the requirement that beliefs be consistent with strategies dictates that those beliefs be correct. In certain games, deception is inconsistent with common belief in rationality as well. The intuition behind this result is as follows: for a player to believe that her opponent is rational requires her to have beliefs about the structure of the game, and in particular about the moves that are available to him. If these beliefs are correct, and if only one of these moves is rational, then she can infer what he will do. She cannot therefore be deceived at the resulting node.