Wednesday, December 4, 2013

Agreeing to Disagree?

Before Thanksgiving, I flippantly made the point to the class that "agreeing to disagree is not an equilibrium." This is actually a pretty classic result in game theory: it was originally "proven" in 1976 by Robert Aumann and has become known as "Aumann's agreement theorem." A shorter and slightly more intuitive exposition (read: less topological mathematics) can be found here.
Here is the basic argument: two people start with the same "priors" (i.e., they would believe the same thing if given the same information), but then receive different information (and therefore hold different posterior beliefs). Assume that these beliefs are not represented by single points, but rather by probability distributions over a range of possible "truths."[1] Also assume that the agents' posteriors are "common knowledge" (i.e., each agent knows the other's opinion, knows that the other knows his opinion, and so on). If this is the case, then honest agents who exchange information (even though each had different information at the start) can never disagree: their posteriors must be equal.
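The convergence behind this argument can actually be simulated. Below is a minimal sketch, in the spirit of the Geanakoplos–Polemarchakis "we can't disagree forever" dialogue (not Aumann's original construction; the states, partitions, and event are my own illustrative choices). Two agents share a uniform prior over nine states; each observes only which cell of a private partition contains the true state; they alternately announce their posterior probability of an event, and each announcement refines the other's information until the posteriors coincide.

```python
from fractions import Fraction

def cell(partition, w):
    # The cell of the partition containing state w
    return next(c for c in partition if w in c)

def posterior(c, event, prior):
    # P(event | c) under the common prior
    total = sum(prior[w] for w in c)
    return sum(prior[w] for w in c if w in event) / total

def refine(partition, announced):
    # Split each cell against the announced set (public information)
    out = []
    for c in partition:
        for piece in (c & announced, c - announced):
            if piece:
                out.append(piece)
    return out

def dialogue(true_state, event, part1, part2, prior, max_rounds=50):
    k1 = [set(c) for c in part1]
    k2 = [set(c) for c in part2]
    for _ in range(max_rounds):
        q1 = posterior(cell(k1, true_state), event, prior)
        # Announcing q1 reveals the union of agent 1's cells with that posterior
        m1 = set().union(*[c for c in k1 if posterior(c, event, prior) == q1])
        k2 = refine(k2, m1)
        q2 = posterior(cell(k2, true_state), event, prior)
        if q1 == q2:  # posteriors are now common knowledge and equal
            return q1, q2
        m2 = set().union(*[c for c in k2 if posterior(c, event, prior) == q2])
        k1 = refine(k1, m2)
    return q1, q2

prior = {w: Fraction(1, 9) for w in range(1, 10)}
event = {1, 5, 9}
p1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
p2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]
q1, q2 = dialogue(1, event, p1, p2, prior)
```

In this example the agents start with posteriors of 1/3 and 1/4, but after a few rounds of announcements both end at the same value; the exchange of opinions itself is information, which is exactly why honest agents cannot "agree to disagree."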
There is some criticism of the result with respect to the "common knowledge" assumption, but there have been some extensions of the theorem that relax this assumption slightly and still retain the basic result. 
Another interesting point, made by Robin Hanson and Tyler Cowen, is that most disagreements are likely to be dishonest because people deceive themselves into thinking that they are "better arguers" than others. This means that people are not, in fact, truth-seekers in arguments. Hanson and Cowen conclude with tips for being more truth-seeking:
1. Do not assume that you are more rational than others.
2. Adopt a moderate opinion (of course, now we can disagree about what constitutes a "moderate" opinion!).
3. Become aware of signs of self-deception. Signs include: self-interest, emotionality, informality, difficulty articulating arguments, stubbornness, and ignorance of cognitive biases.
I would also add that the greater the degree to which someone is arguing on the basis of conviction rather than fact, the higher the likelihood that they are self-deceived. But I suppose that's an argument for another day.




[1] It is important to represent the posterior beliefs as probability distributions because the whole point of the theorem is that the agents are honest truth seekers (which implies that they do not believe themselves to know "truth"). More on honesty later.

2 comments:

  1. So after class I was mulling over more controversial topics than who might be the best quarterback. One such topic that really got my mind reeling was abortion. Might this be a topic that we can agree to disagree on? This is a pretty gray area where I think the full “truth” might not be attainable. When I think about this my mind goes straight to the book Freakonomics. Their findings suggest that with more abortions the crime rate drops because unwanted children are aborted. After hearing in class today that a life is worth around $3 million, and that the number of aborted children since Roe v. Wade is 55 million, I calculate a ballpark estimate of $1.65 × 10^14 (about $165 trillion) lost because of abortion. But the gains of having a stable marketplace might be much greater because of a drop in the crime rate due to those aborted. Also, the women who might not have wanted to become mothers can be more productive in the economy. So what really is the best policy? Putting aside all the religious aspects on both sides (i.e. women’s right to choose, God says a life is a life) and looking at just the numbers, I do not think the truth of this matter can be solved. So we might “agree to disagree”? Alas! I am still open to conversation and hope we might chip away through dialogue and find the TRUTH!
    “THE TRUTH IS OUT THERE” – X Files

  2. In terms of the basic issue of whether "honest truth-seeking agents" can agree to disagree, we are not ruling out any so-called "gray area" if we recall that agents are not necessarily "agreeing" on a single point, but rather are agreeing on a probability distribution over the sample space of possible truths, and which ranges of "possible truth" are more or less likely than others.
    In terms of the specific gains or losses of a particular policy (such as for example Roe v. Wade), I would remind you again to make sure that you are considering not only costs but also benefits. While it might seem appalling to think that there are "benefits" to such a thing, many people have argued that outlawing abortion has been responsible for many more maternal deaths than abortions it may have prevented.
    So, in my mind, the question is, "what is the least costly way (in terms of human life) to reduce the undesirable thing (in your example abortions)?" Some might argue that a less costly way to reduce abortions might be to provide greater and cheaper access to other forms of contraception (setting aside for the moment the religious implications), while others might argue that increasing welfare assistance to families with children would give poor mothers-to-be the means to choose life. If the main goal is reducing abortions (and not something else like pursuing your chosen religion's definitions of sexual morality or punishing female promiscuity disproportionately to that of males), then these arguments seem reasonable.
