Wednesday, 3 January 2018

Two Critical Remarks on The Elephant in the Brain

UPDATE: See my response to Robin Hanson's response.

The Elephant in the Brain, the new book by Robin Hanson and Kevin Simler, is a fantastic synthesis of subversive social scientific insight into hidden (or less apparent) motives of human behaviour, and hidden (or less apparent) functions of institutions. Just understanding these matters is an intellectual thrill, and helpful in thinking about how the world works. Furthermore - and I didn't sufficiently appreciate this point until reading the book, despite being exposed to some of the ideas on Hanson's blog and elsewhere - better understanding the real functions of our institutions can help us improve them and prevent us from screwing them up. Lots of reform efforts, I have been convinced (especially in the case of schooling), are likely to make a hash of things because they take orthodox views of institutions' functions too seriously.

Without trying to summarise the book here, I want to point out what I think are two conceptual shortcomings in it. This is friendly criticism. Straightening these confusions out will, I think, help us make the most of the insights the book contains. Also, these errors may make some readers unduly hostile, so avoiding them in future or revised presentations of these insights may aid their dissemination.

I'm not sure how important the first shortcoming is. It may be fairly trifling, so I'll be quick. The second one I suspect might be more important.

1. Being Common Knowledge Confused With Being Out in the Open

One conceptual issue came up for me in Chapter 4, 'Cheating'. Here, around pp. 55-57, the authors seem to conflate the concept of common knowledge with the idea of being "out in the open" or "on the record".

A group of people have common knowledge of P if everyone in the group knows that P, and knows that everyone in the group knows that P, and knows that everyone in the group knows that everyone in the group knows that P, and so on.

On the other hand, a bit of knowledge is on the record or out in the open if it is 'available for everyone to see and discuss openly' (p. 55). 

The authors conflate these ideas, asserting that 'Common knowledge is information that's fully "on the record," available for everyone to see and discuss openly' (p. 55). (This comes shortly after the proper recursive explanation of 'common knowledge'.)

This seems wrong to me. Something may satisfy the conditions for being common knowledge, but people may still not be OK talking about it openly. The popular notion of an open secret gets at this point (somewhat confusingly for present purposes, since here the word 'open' gets used on the other side of the distinction). Something may be widely known, indeed even commonly known in the special recursive sense, while being taboo or otherwise unavailable for free discussion.
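
To make the recursive condition concrete, here is a toy possible-worlds sketch in Python, in the standard style of epistemic models (my own illustration, not from the book; the people, partitions and proposition are all made up). A person knows an event at a world if every world she cannot distinguish from it lies inside that event; 'everyone knows' is then iterated to capture the hierarchy in the definition above.

# Toy possible-worlds model of the recursive definition of common knowledge.
# All names and numbers are invented for illustration; nothing here is from the book.

from functools import reduce

WORLDS = {1, 2, 3, 4}

# Each person's partition of the worlds: worlds in the same cell are worlds
# that person cannot tell apart.
PARTITIONS = {
    "alice": [{1, 2}, {3}, {4}],
    "bob":   [{1}, {2, 3}, {4}],
}

def knows(person, event):
    """Worlds at which the person knows the event: their whole cell lies inside it."""
    return {w for cell in PARTITIONS[person] if cell <= event for w in cell}

def everyone_knows(event):
    """Worlds at which every member of the group knows the event."""
    return reduce(set.intersection, (knows(p, event) for p in PARTITIONS))

def common_knowledge(event):
    """Iterate 'everyone knows' until nothing more drops out: in a finite model,
    the surviving worlds are those at which the whole infinite hierarchy holds."""
    current = event
    while True:
        nxt = everyone_knows(current)
        if nxt == current:
            return current
        current = nxt

P = {1, 2}                    # the proposition P, as the set of worlds where it is true
print(everyone_knows(P))      # {1}: at world 1, everyone knows P...
print(common_knowledge(P))    # set(): ...but P is common knowledge nowhere

The point to notice is that everything this model settles is purely epistemic. Whether P is 'on the record' - available for everyone to discuss openly - is simply not a fact of this kind: you could bolt a 'discussable' flag onto the model, but it would be an independent social fact, not something derivable from the knowledge structure. That, I take it, is exactly why the two notions can come apart, as in an open secret.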

In addition to muddying the proper recursive explanation by asserting that common knowledge is that which is on the record and out in the open, the authors give supplementary example-based explanations of 'common knowledge' which seem to pull this expression further towards being unhelpfully synonymous with 'out in the open' and 'on the record'. For instance, when they write: 'Common knowledge is the difference between (...) a lesbian who's still in the closet (though everyone suspects her of being a lesbian), and one who's open about her sexuality; between an awkward moment that everyone tries to pretend didn't happen and one that everyone acknowledges' (p. 55). If we stick to the proper recursive explanation of 'common knowledge', these claims just seem wrong. There could be cases where a lesbian is not open about being a lesbian, yet the hierarchy of conditions for common knowledge is fulfilled. Likewise for the awkward moment that everyone wants swept under the rug.

2. Excessive Preconditions Posited for Adaptive 'Self-Sabotage'

The authors give fascinating, instructive explanations of how what they call 'self-sabotage' can be adaptive in some situations (pp. 66-67). One example they give is visibly removing and throwing out your steering wheel in a game of chicken (provided you do it first, this is a good strategy, since your opponent then knows that their only hope of avoiding a collision is to turn away themselves, losing the game of chicken). Another is closing or degrading a line of communication, e.g. turning your phone off when you think you might be asked a favour you don't want to grant. Another is avoiding seeing your kidnapper's face, so that they don't kill you to prevent you from identifying them to the authorities. Another example is a general believing, despite contrary evidence, that they are in a good position to win a battle - while epistemically bad, this belief may make the general (and in turn the troops) more confident and intimidating, and could even change the outcome in the general's favour.

But some of the things they then say about this sort of thing seem confused or wrong to me. The underlying problem, I think, is hasty generalisation. For instance:
Classical decision theory has it right: there's no value in sabotaging yourself per se. The value lies in convincing other players that you've sabotaged yourself. (p. 67).
This fits the game of chicken example pretty well.
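
To see why it fits, here is a minimal sketch (my own, not from the book) with made-up payoff numbers in the usual chicken ordering. The only thing that moves the opponent is the probability they assign to your going straight; the physical absence of your steering wheel matters only through that belief.

# Minimal chicken sketch; payoffs and probabilities are invented for illustration.

PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("swerve",   "swerve"):   (0, 0),
    ("swerve",   "straight"): (-1, 1),
    ("straight", "swerve"):   (1, -1),
    ("straight", "straight"): (-10, -10),  # head-on crash
}

def their_best_response(p_i_go_straight):
    """The opponent picks whichever move maximises their expected payoff,
    given the probability they assign to my going straight."""
    def expected(their_move):
        return (p_i_go_straight * PAYOFFS[("straight", their_move)][1]
                + (1 - p_i_go_straight) * PAYOFFS[("swerve", their_move)][1])
    return max(["swerve", "straight"], key=expected)

# If they are convinced the wheel is gone and I can only go straight, they swerve:
print(their_best_response(1.0))    # -> 'swerve'
# If they think I still have my wheel and will probably chicken out, they go straight:
print(their_best_response(0.05))   # -> 'straight'

In this toy model, holding the opponent's belief fixed, actually having thrown the wheel out adds nothing by itself - which is just what the quote says.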

But it doesn't really fit the turning-your-phone-off example: what matters there is that your phone is off - it doesn't matter if the person wanting the favour thinks that your phone malfunctioned and turned itself off, rather than that you turned it off. Indeed, having them think it malfunctioned may be even better. But still, it might be right in this case that it's important that the person calling believes you were uncontactable. If you have your phone off but they somehow nevertheless believe they succeeded in speaking to you and asking the favour, you may not have gained anything by turning it off.

It similarly doesn't fit the example of the kidnapper. It doesn't really matter how the kidnapper thinks it came about that you failed to see them - they don't need to believe you brought the failure on yourself for the strategy to be good. But still, it seems right in this case that it's important that they believe you didn't see their face.

Now it really doesn't fit the example of the general, and here the failure of fit is worse than in the previous two cases. If the point is that the epistemically dodgy belief of the general makes them more confident and intimidating, potentially causing them to win, then it doesn't matter how the general got the belief. The "sabotage" could just as well be due to an elaborate ruse carried out by a small cadre of the general's subordinates. And here there's not even a 'but still' of the sort in the two previous cases. The general's epistemically dodgy belief does not have to be known to be epistemically dodgy by the enemy in order for it to intimidate them and cause them to lose. Indeed, that would undermine the effectiveness of the strategy!

So, things are not as simple as the above quote suggests. Realising this and appreciating the nuances here could pay dividends.

Another claim made about this sort of thing which may at first seem striking and insightful, but which I think does not hold up, is this:
Sabotaging yourself works only when you're playing against an opponent with a theory-of-mind (p. 68).
(Theory-of-mind is the ability to attribute mental states to oneself and others.)

This doesn't really fit the game of chicken example, or at least it doesn't fit possible cases with a similar structure. It may be that to truly have a game of chicken, you need theory-of-mind on both sides, but you could have a situation where you're up against a robotic car with no theory-of-mind, and it may still be best to throw out your steering wheel. (As to why you wouldn't just forfeit the "game of chicken": there may be (theory-of-mind-less) systems monitoring you which will bring about your death if you swerve.)
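
For what it's worth, here is a toy version (my own, and purely hypothetical) of such a robotic opponent: a reflex controller that swerves when its sensors report that the other vehicle cannot steer. It attributes no mental states to anyone, yet throwing out your wheel is still what makes it give way.

def robot_car_policy(sensors):
    """A fixed reflex policy with no theory-of-mind: it reacts only to what its
    sensors report about the other vehicle's physical state."""
    if sensors["other_vehicle_can_steer"]:
        return "straight"   # hold course while the other vehicle can still give way
    return "swerve"         # the other vehicle physically cannot turn away

# Wheel still attached: the robot holds its course.
print(robot_car_policy({"other_vehicle_can_steer": True}))   # -> 'straight'
# Wheel visibly thrown out: the robot gives way, without ever modelling my beliefs.
print(robot_car_policy({"other_vehicle_can_steer": False}))  # -> 'swerve'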

I don't think it really fits the kidnapper case in a deep way. It may be a contingent fact that this sort of thing only works in our world with kidnappers who have theory-of-mind, but one can easily imagine theory-of-mind-less animals who have evolved, rather than worked out by thinking, the behaviour of killing captives who have seen them.

I think it quite clearly doesn't fit the general example. Imagine the general and their army were fighting beasts with no theory-of-mind. All that matters is that the beasts can be intimidated by the confident behaviour caused by the general's dodgy belief. No theory-of-mind in the opponent required.

This seems like more than a quibble, for going along with this mistaken overgeneralisation may stop us from seeing this kind of mechanism at work in lots of situations where there is no theory-of-mind on the other end of the adaptive sabotage.
