Wednesday, 20 June 2018

Forthcoming in The Philosophical Forum

My paper 'Propositions, Meaning and Names' is forthcoming in the Winter 2018 edition of The Philosophical Forum. It derives from a chapter of my PhD thesis and sketches an approach to the topics mentioned in the title. The approach to propositions and meaning it develops has other applications besides the question of the meaning of proper names, but that will have to wait (for the most part). This is my longest publication to date and is fairly ambitious. I've blogged here over the years about some of the ideas in it, and am glad to see them making their way to publication. It seems that philosophers who are especially interested in these topics each have to start from the beginning, and this is my start.

Saturday, 5 May 2018

Epistemic Modals and Whether 'True' and 'False' are Ambiguous

Around 2011 I began to develop a view about epistemic modals which I have always wanted to return to. Motivated by dissatisfaction with the rival approaches of contextualism and relativism, the core ideas of the view were:

(1) That an utterance of 'It might be that p' can be thought of as being correct iff the proposition p is robust with respect to a contextually relevant amount and kind of inquiry. That is, iff the proposition is not falsified by the relevant amount and kind of inquiry.

(2) Sometimes, when we assess epistemic modals, we assess whether their prejacents - the propositions p they say might obtain - meet the contextually relevant robustness requirement, but other times, we assess whether they are ultimately correct - i.e. whether they would survive all possible inquiry.

To get a feel for why you'd want something like (1), consider this phenomenon: sometimes when asked whether something might be true, we say 'I don't know'. That suggests that we don't think present evidence settles the question. Nevertheless, we might be happy after a certain amount of inquiry to say 'It turns out that p could indeed be true', and we might be happy to say this even while allowing that yet more inquiry may still falsify p. So when we were asked whether it might be that p, we took ourselves to need some more evidence than we currently have, but still not all possible evidence. Thus, we had some idea in the background of a contextually relevant amount or kind of inquiry.

To get a feel for why you'd want something like (2), consider the following. Suppose I'm asked whether it might be that p, and say something like 'I don't know, we'd better wait for the results of the police report'. Later, once the report is in, I say 'The report is in now, and yes, it might be that p'. Still later, when even more is known and we now know p is false, I may say 'Ah, so it turns out that our belief that p might be the case was wrong'.

My explanation for this was that the judgement made when the report came in - and perhaps the agnosticism before it - was about robustness up to a contextually relevant point, but that the later judgement once even more inquiry is done, and we know that p is false, was about ultimate correctness.

This led to the idea that, with respect to epistemic modals, expressions like 'true'/'false', 'right'/'wrong', 'correct'/'incorrect', are ambiguous. Or more carefully, that statements about epistemic modals made with these terms can be understood in different ways. (You could perhaps maintain a unified semantics for these terms by having a contextual parameter - very roughly, a view on which 'true' means something like 'cognitively good in way X'.)

Now, a forthcoming paper by Justin Khoo and Jonathan Phillips considers a version of relativism which responds to certain difficult data with a kind of ambiguity view of 'true' and 'false'. They raise a problem for this response which would also be a problem for my idea about terms like these, when epistemic modals are in play, sometimes being about contextually relevant robustness and sometimes being about ultimate correctness.

Against any such ambiguity approach - or more carefully, against any approach on which these assessment claims about epistemic modals are equivocal, i.e. can be understood in different ways - Khoo and Phillips adduce the following exchange:
A: Fat Tony might be dead.
B: What A said is false.
C: #I agree with you B – what A said is false; but also, A’s claim is true since A didn’t have the evidence proving Fat Tony is alive.
They judge that what C says at the end seems 'irreparably incoherent' (p. 12 of the archived draft).

I want to suggest that this is too quick. I agree that C's last utterance sounds very bad. But I don't think that is all that hard for an ambiguity approach to account for.

In service of this argument against an ambiguity approach, they consider an uncontroversially ambiguous word, 'book', and note that the following sounds fine (p. 12 of the archived draft):
D: (pointing to a bound volume with blank pages) This is a book.
E: I agree with you D – that is a book; but it also isn’t a book since it’s not a literary work.
But this isn't a good comparison. I suggest that there's something like a rule of language which is flouted in the dialogue they complain about between A, B and C, but not in the dialogue between D and E. Something like: when you use both members of a contrast pair like 'true' and 'false', then, absent explicit markers to the contrary, you're using them in the same way.

In support of this: if we add something like 'in a sense' and 'in another sense' to C's utterance, it begins to sound a lot better.

This has reinvigorated me: new considerations are putting pressure on relativism to look more like the view I began to develop in 2011, and that move on relativism's part then faces published (or forthcoming) objections to which I think I have a good answer.

Tuesday, 27 March 2018

Forthcoming in Acta Analytica


My paper 'Linking Necessity to Apriority' is forthcoming in Acta Analytica. It grew out of this blog post, although the proposal in the post is slightly different and, if not false, seems to require Millianism about proper names, which I don't want to have to require. (Two anonymous referees for Acta Analytica made me see this.) I arrived at the basic idea of the paper during my PhD work. It's a kind of stripped-down, partial version of the more ambitious account of necessity developed in my thesis. If it's right, it shows a clear, straightforward way in which our knowledge of modal status can always be traced back to a crucial a priori factor.

It's my first publication to contain a clear and positive theoretical proposal. My previous publications have either been mainly negative, or in the case of my paper on identity statements, positive in a way but hard to get a grip on, and the sort of thing that only sympathetic minds would accept. This, on the other hand, seems more "mathematical", more like something that quite differently minded philosophers might accept. It impinges on recent work by Kipper and Strohminger & Yli-Vakkuri, which I have discussed quite a bit here on this blog.

Thursday, 8 March 2018

'Metaphorical Truth'? Three Frontiers for a Sharper Metalinguistic Negotiation Toolkit

I have recently been thinking about verbal disputes, a topic which has seen an increase in philosophical attention over the last few years. I began to think more about them when confronted with some debates involving public intellectuals on the idea of truth. Roughly, some prefer what I call an 'austere' conception of truth, which does not for instance allow that a single claim with a given meaning is false in some "literal" sense while true in some other "metaphorical", mythological, metaphysical, or higher sense. Others prefer a more capacious conception of truth, which does let us say things like 'That may not be literally or scientifically true, but it is metaphorically true: this idea can guide us in the world and help us'. (This is interestingly different from pragmatism about truth as normally discussed within philosophy, since it retains a plain conception of truth which need not be thought of in pragmatic terms at all, but can be thought of as correspondence to the facts, and then has a separate stratum of truths, or a separate way of being true.)

Now, I would like to discuss this particular dispute further in future, applying in detail some of the thoughts which follow. In this post I will outline some general ideas about what I call 'tenacious verbal disputes', which I think show how complex and multi-faceted such disputes can be, and may help furnish us with a toolkit for better engaging with, arbitrating, and understanding them.

Tenacious verbal disputes are different from 'merely' verbal disputes - roughly, ones which dissolve once their nature becomes clear. Tenacious verbal disputes do not so dissolve. I think that many philosophical disputes are tenacious verbal disputes, and that many (if not all) tenacious verbal disputes are normative, ethical, or pragmatic disputes about how to use words (and symbols, and pictures).

It is merely verbal disputes that Chalmers spends most of his time on in his agenda-setting 'Verbal Disputes' paper of 2011. He does, however, say some insightful things about what Peirce called "the ethics of terminology" - how we should use words - but this is a very brief glance, made in the fourth entry of a list of four things that ordinary language philosophy can do.

The sort of thing I am interested in here has been discussed, especially by Plunkett and Thomasson, under the umbrella of 'metalinguistic negotiation'. Some of their discussions are framed in terms of what concepts we should use, rather than how we should use certain given representations, but such discussions are obviously still relevant. (Here it's helpful to distinguish between repertoire and deployment. If I want to use a term X in a certain way but my opponent wants to use it in another, I may still want to have the concept they want to attach X to on board, just in a less central place.) Others are framed in terms of what concepts we should use for some 'task at hand'. (But sometimes there's no very circumscribed task, and our disagreement surrounds a term which comes up all over the place.) In view of repertoire vs. deployment issues, and the open-ended nature of some 'jobs' that representations do, I think it is often more helpful for many disputes to frame the issue in terms of how we should use certain words, symbols, or pictures.

Here are three frontiers I see for sharpening our metalinguistic negotiation toolkit:

(1) We should not just think of the issues in terms of some indefinitely expandable 'we' - as in 'How should we use word X?'. Some ways of using words, symbols and pictures will be best for some people, others better for others. And this holds not just across different persons but across different times, situations, and communities.

(2) The above point also makes it clear that there are issues of what I call contagion to consider. For example, two thinkers having a tenacious verbal dispute may actually be disposed to agree that it's horses for courses, and that one disputant's usage could give them some value in certain ways, but the other party may nevertheless worry that this sort of usage, which has certain virtues in some range of application, may catch on too widely and have bad effects which outweigh the good. Realising that this is the issue, when it is the issue, could help the disputants to settle it, or at least to stop wasting time and effort arguing about it in the wrong way.

(3) Just as, with normative issues more generally, we recognise distinctions between differences in values versus differences over how things are, and between basic and instrumental values (or at least relatively basic vs. relatively instrumental ones), we should take this sophistication and apply it to tenacious verbal disputes. Two disputants, for instance, may be unclear about whether the crux of their disagreement is that they have different views of what would actually happen if the usage at issue caught on, or that they differ in their preferences regarding a given such outcome. Getting clear about this could pay real dividends. For instance, if they manage to agree that it's largely due to different views about what would actually happen, they could then move on to investigating more thoroughly what would actually happen, using the wisdom of relevant disciplines instead of confused, frustrating arguments.

Monday, 29 January 2018

Update on my Necessity and Propositions account (and my haste to declare it false)

In some recent posts here I have discussed, in relation to the account of necessity defended in my thesis, propositions like 'Air is airy' (due to Jens Kipper), which we know to be necessarily true, but only because we know empirically that air is not a natural kind, and hence that all there is to being air is being airy, and 'Eminem is not taller than Marshall Mathers' (due to Strohminger and Yli-Vakkuri), which we know to be necessarily true, but only because we know empirically that Eminem is Marshall Mathers. That account says that a proposition is necessarily true iff it is in the deductive closure of the set of true inherently counterfactually invariant (ICI) propositions. (Roughly, a proposition is ICI if it does not vary across counterfactual scenarios when held true. For more detail see Chapter 5 of my thesis.)
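In rough notation (a schematic rendering of mine, not the thesis's own formulation), the account can be put like this:

```latex
% p is necessarily true iff p lies in the deductive closure (Cl) of the
% set of propositions which are both true and inherently counterfactually
% invariant (ICI).
\Box p \;\iff\; p \in \mathrm{Cl}\bigl(\{\, q : q \text{ is true and } \mathrm{ICI}(q) \,\}\bigr)
```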

At first, I reacted by thinking that such propositions show that account to be false. I then came up with another account, based on the idea of a counterfactual invariance (CI) decider. I still find this new account more elegant, but I soon came to have doubts about just how threatening these propositions really are to the ICI-based account in my thesis.

I have recently realised that the ICI-account fares even better in the face of these examples than those posts suggested. There, I suggested in effect that 'All there is to being air is being airy' could be argued to imply 'Air is airy' on a suitably rich notion of implication, thus saving the ICI-account, and similarly that 'Eminem is Marshall Mathers' could be argued to imply 'Eminem is not taller than Marshall Mathers' on such a notion.

But, I have realised, no such rich notion of implication is required! We just need to conjoin the empirical proposition which decides the modal matter with the proposition whose modal status is in question. 'Air is not a natural kind and air is airy', or 'All there is to being air is being airy and air is airy', are both true and ICI, and they both - very straightforwardly, by conjunction elimination - imply the desired proposition. For the Eminem case we have 'Eminem is Marshall Mathers and Eminem is not taller than Marshall Mathers'. So there was never a serious problem for the ICI-account after all! 
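The logical work being done here is nothing more than conjunction elimination, which can even be checked mechanically; for instance, in Lean (the proposition names are mine, standing in for the empirical decider and the target proposition):

```lean
-- The conjunctive implier 'decider ∧ target' yields the target
-- proposition by conjunction elimination alone: no rich notion of
-- implication is needed.
example (decider target : Prop) (h : decider ∧ target) : target :=
  h.right
```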

Admittedly, these impliers do perhaps seem a bit "clever", a bit artificial in some way, and this - together with not requiring any appeal to implication at all - is why I still think the CI decider account is more elegant. 

One thing that I think went wrong in my thought process around this is that I got a kind of kick out of concluding that my original account was false. Doing so made me feel like a virtuous philosopher, open to changing their views. But I am glad that I now have a more elegant account, and the notion of a CI decider. (I wonder: Would the CI decider account still have come to me if I had not overreacted and thought my original account falsified? Or did my foolishness here cause me to come up with the CI decider account?)

Tuesday, 9 January 2018

Robin Hanson Responds

I recently posted criticisms of Robin Hanson and Kevin Simler's excellent new social science book The Elephant in the Brain. Hanson responds here. The response is short so I will reproduce it here:

The fourth blog review was 1500 words, and is the one on a 4-rank blog, by philosopher Tristan Haze. He starts with praise:

A fantastic synthesis of subversive social scientific insight into hidden (or less apparent) motives of human behaviour, and hidden (or less apparent) functions of institutions. Just understanding these matters is an intellectual thrill, and helpful in thinking about how the world works. Furthermore – and I didn’t sufficiently appreciate this point until reading the book, … better understanding the real function of our institutions can help us improve them and prevent us from screwing them up. Lots of reform efforts, I have been convinced (especially for the case of schooling), are likely to make a hash of things due to taking orthodox views of institutions’ functions too seriously.
But as you might expect from a philosopher, he has two nits to pick regarding our exact use of words.
I want to point out what I think are two conceptual shortcomings in the book. … The authors seem to conflate the concept of common knowledge with the idea of being “out in the open” or “on the record”. … This seems wrong to me. Something may satisfy the conditions for being common knowledge, but people may still not be OK talking about it openly. … They write: ‘Common knowledge is the difference between (…) a lesbian who’s still in the closet (though everyone suspects her of being a lesbian), and one who’s open about her sexuality; between an awkward moment that everyone tries to pretend didn’t happen and one that everyone acknowledges’ (p. 55). If we stick to the proper recursive explanation of ‘common knowledge’, these claims just seem wrong.
We agree that the two concepts are in principle distinct. In practice the official definition of common knowledge almost never applies, though a related concept of common belief does often apply. But we claim that in practice a lack of common belief is the main reason for widely known things not being treated as “out in the open”. While the two concepts are not co-extensive, one is the main cause of the other. Tristan’s other nit:
Classical decision theory has it right: there’s no value in sabotaging yourself per se. The value lies in convincing other players that you’ve sabotaged yourself. (p. 67).
This fits the game of chicken example pretty well. But it doesn’t really fit the turning-your-phone-off example: what matters there is that your phone is off – it doesn’t matter if the person wanting the favour thinks that your phone malfunctioned and turned itself off, rather than you turning it off. … It doesn’t really matter how the kidnapper thinks it came about that you failed to see them – they don’t need to believe you brought the failure on yourself for the strategy to be good.
Yes, yes, in the quote above we were sloppy, and should have instead said “The value lies in convincing other players that you’ve been sabotaged.” It matters less who exactly caused you to be sabotaged.
So Hanson paints me as a nitpicky philosopher, but nevertheless takes the points. He didn't mention the second point under the second heading, about theory of mind, which I think is maybe the most important. This omission makes it easier for him to paint me as a nitpicky philosopher. But I am happy to see the response, and will not be daunted in making conceptual points that in fast-and-loose mode may seem like mere nitpicks.

What may seem like mere nitpicks at the stage of airing these ideas and getting them a hearing can turn into important substantive points in the context of actually trying to develop them further and make them more robust. 

Wednesday, 3 January 2018

Two Critical Remarks on The Elephant in the Brain

UPDATE: See my response to Robin Hanson's response.

The Elephant in the Brain, the new book by Robin Hanson and Kevin Simler, is a fantastic synthesis of subversive social scientific insight into hidden (or less apparent) motives of human behaviour, and hidden (or less apparent) functions of institutions. Just understanding these matters is an intellectual thrill, and helpful in thinking about how the world works. Furthermore - and I didn't sufficiently appreciate this point until reading the book, despite being exposed to some of the ideas on Hanson's blog and elsewhere - better understanding the real function of our institutions can help us improve them and prevent us from screwing them up. Lots of reform efforts, I have been convinced (especially for the case of schooling), are likely to make a hash of things due to taking orthodox views of institutions' functions too seriously.

Without trying to summarise the book here, I want to point out what I think are two conceptual shortcomings in the book. This is friendly criticism. Straightening these confusions out will, I think, help us make the most of the insights contained in this book. Also, avoiding these errors, which may cause some to be unduly hostile, in future or revised presentations of these insights may aid in their dissemination.

I'm not sure how important the first shortcoming is. It may be fairly trifling, so I'll be quick. The second one I suspect might be more important.

1. Being Common Knowledge Confused With Being Out in the Open

One conceptual issue came up for me in Chapter 4, 'Cheating'. Here, around p. 55 - 57, the authors seem to conflate the concept of common knowledge with the idea of being "out in the open" or "on the record".

A group of people have common knowledge of P if everyone in the group knows that P, and knows that everyone in the group knows that P, and knows that everyone in the group knows that everyone in the group knows that P, and so on.
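The recursion can be made vivid with a toy possible-worlds model (the model and all the names here are illustrative sketches of mine, not from the book): 'everyone knows P' is computed world by world, and common knowledge is the fixpoint of iterating it.

```python
# Toy possible-worlds sketch of the recursive definition. A proposition
# is a set of worlds; each agent's accessibility map sends a world to
# the set of worlds the agent cannot rule out there.

def everyone_knows(prop, accessibility):
    """Worlds at which every agent knows prop: every world any agent
    considers possible there is a prop-world."""
    return {
        w for w in prop
        if all(acc[w] <= prop for acc in accessibility.values())
    }

def common_knowledge(prop, accessibility):
    """Iterate 'everyone knows' to a fixpoint: worlds where prop is
    known, known to be known, and so on, at every level."""
    current = prop
    while True:
        nxt = everyone_knows(current, accessibility)
        if nxt == current:
            return current
        current = nxt

# P holds at worlds 0 and 1, and at world 0 everyone knows P; but the
# hierarchy gives out at the second level, so P is not common knowledge
# anywhere.
P = {0, 1}
accessibility = {
    'a': {0: {0}, 1: {1, 2}, 2: {1, 2}},
    'b': {0: {0, 1}, 1: {0, 1}, 2: {2}},
}
print(everyone_knows(P, accessibility))    # {0}
print(common_knowledge(P, accessibility))  # set()
```

The example illustrates how the full recursive condition is strictly stronger than 'everyone knows P': here P is widely known, yet not common knowledge.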

On the other hand, a bit of knowledge is on the record or out in the open if it is 'available for everyone to see and discuss openly' (p. 55). 

The authors conflate these ideas, asserting that 'Common knowledge is information that's fully "on the record," available for everyone to see and discuss openly' (p. 55). (This comes shortly after the proper recursive explanation of 'common knowledge'.)

This seems wrong to me. Something may satisfy the conditions for being common knowledge, but people may still not be OK talking about it openly. The popular notion of an open secret gets at this point (somewhat confusingly for present purposes, since here the word 'open' gets used on the other side of the distinction). Something may be widely known, indeed even commonly known in the special recursive sense, while being taboo or otherwise unavailable for free discussion.

In addition to muddying the proper recursive explanation by asserting that common knowledge is that which is on the record and out in the open, the authors give supplementary example-based explanations of 'common knowledge' which seem to pull this expression further towards being unhelpfully synonymous with 'out in the open' and 'on the record'. For instance when they write: 'Common knowledge is the difference between (...) a lesbian who's still in the closet (though everyone suspects her of being a lesbian), and one who's open about her sexuality; between an awkward moment that everyone tries to pretend didn't happen and one that everyone acknowledges' (p. 55). If we stick to the proper recursive explanation of 'common knowledge', these claims just seem wrong. There could be cases where a lesbian is not open about being a lesbian, yet the hierarchy of conditions for common knowledge is fulfilled. Likewise for the awkward moment that everyone wants swept under the rug.

2. Excessive Preconditions Posited for Adaptive 'Self-Sabotage'

The authors give fascinating, instructive explanations of how what they call 'self-sabotage' can be adaptive in some situations (pp. 66 - 67). One example they give is visibly removing and throwing out your steering wheel in a game of chicken (provided you do it first, this is a good strategy, since your opponent then knows that their only hope of avoiding collision is to turn away themselves, losing the game of chicken). Another is closing or degrading a line of communication, e.g. turning your phone off when you think you might be asked a favour you don't want to grant. Another is avoiding seeing your kidnapper's face so that they don't kill you in order to prevent you identifying them to authorities. Another example is a general believing, despite contrary evidence, that they are in a good position to win a battle - while epistemically bad, this may cause the general (and in turn the troops) to be more confident and intimidating, and could even change the outcome in the general's favour.

But some of the things they then say about this sort of thing seem confused or wrong to me. The underlying problem, I think, is hasty generalisation. For instance:
Classical decision theory has it right: there's no value in sabotaging yourself per se. The value lies in convincing other players that you've sabotaged yourself. (p. 67).
This fits the game of chicken example pretty well.

But it doesn't really fit the turning-your-phone-off example: what matters there is that your phone is off - it doesn't matter if the person wanting the favour thinks that your phone malfunctioned and turned itself off, rather than you turning it off. Indeed having them think the former thing may be even better. But still, it might be right in this case that it's important that the person calling believes that you were uncontactable. If you have your phone off but they somehow nevertheless believe they succeeded in speaking to you and asking the favour, you may not have gained anything by turning it off.

It similarly doesn't fit the example of the kidnapper. It doesn't really matter how the kidnapper thinks it came about that you failed to see them - they don't need to believe you brought the failure on yourself for the strategy to be good. But still, it seems right in this case that it's important that they believe you didn't see their face.

It really doesn't fit the example of the general, and here the failure of fit is worse than in the previous two cases. If the point is that the epistemically dodgy belief of the general makes them more confident and intimidating, potentially causing them to win, then it doesn't matter how the general got the belief. The "sabotage" could just as well be due to an elaborate ruse carried out by a small cadre of the general's subordinates. And here there's not even a 'but still' of the sort found in the two previous cases. The enemy does not have to know that the general's belief is epistemically dodgy in order for it to intimidate them and cause them to lose. Indeed, their knowing that would undermine the effectiveness of the strategy!

So, things are not as simple as the above quote suggests. Realising this and appreciating the nuances here could pay dividends.

Another claim made about this sort of thing which may at first seem striking and insightful, but which I think does not hold up, is this:
Sabotaging yourself works only when you're playing against an opponent with a theory-of-mind (p. 68).
(Theory-of-mind is the ability to attribute mental states to oneself and others.)

This doesn't really fit the game of chicken example, or at least it doesn't fit possible cases with a similar structure. It may be that to truly have a game of chicken, you need theory-of-mind on both sides, but you could have a situation where you're up against a robotic car with no theory-of-mind, and it may still be best to throw out your steering wheel. (As to why you wouldn't just forfeit the "game of chicken": there may be (theory-of-mind-less) systems monitoring you which will bring about your death if you swerve.)

I don't think it really fits the kidnapper case in a deep way. It may be a contingent fact that this sort of thing only works in our world with kidnappers with theory-of-mind, but one can easily imagine theory-of-mind-less animals who have evolved, rather than worked out by thinking, the behaviour of killing captives when seen by them.

I think it quite clearly doesn't fit the general example. Imagine the general and their army were fighting beasts with no theory-of-mind. All that matters is that the beasts can be intimidated by the confident behaviour caused by the general's dodgy belief. No theory-of-mind in the opponent required.

This seems like more than a quibble, for going along with this mistaken overgeneralisation may stop us from seeing this kind of mechanism operating in lots of situations where there is no theory-of-mind on the other end of the adaptive sabotage.