My recent Logos & Episteme paper, 'Two New Counterexamples to the Truth-Tracking Theory of Knowledge', has come in for criticism by Fred Adams and Murray Clarke. (Here is my original blog post about the paper.)
A recent issue of the journal contains a discussion note by Adams and Clarke entitled 'Two Non-Counterexamples to Truth-Tracking Theories of Knowledge'. Peter Baumann drew my attention to this and discussed it with me. I have also been happy to learn that Adams has given presentations where he talks about my paper and what he thinks is wrong with it.
The latest issue of the journal contains my reply to them, as well as a rejoinder by them.
The following is in response to their rejoinder:
Regarding the first counterexample, I am beginning to consider the possibility that the real lesson of our disagreement over it is that there are two possible concepts of knowledge which deliver different verdicts on this case. Perhaps some people have one of these concepts, and some have the other, but this hasn't become clearly evident yet. Intuitively, I think that if a belief, even if it be true and truth-tracking in Nozick's sense, rests on a delusion in such a way that the delusion continues to be associated with it, and in such a way that if the delusion were removed, the belief would be relinquished, then that belief doesn't count as knowledge. If you have such a belief, I feel, you do not possess the truth about the matter in question, since your belief can be taken away from you just by correcting a delusion that you have. Adams and Clarke do not agree, and claim that everyone they have put the question to is on their side. I have certainly had people on my side too. So perhaps it is time to consider the possibility that there are two different concepts of knowledge here, and perhaps neither party is misapplying their concept in judging the example.
(Also, it may not be simply that some people have one concept of knowledge and other people have another. Some people, or even most, may have both. Or be disposed to form either (i.e. they may, in advance of considering this type of case, have just one concept which is in some sense indeterminate with respect to the case). Perhaps when I am talking to someone, urging the case as a counterexample, charity makes them select the concept of knowledge which behaves as I maintained in my original article, and perhaps when Adams or Clarke are talking to someone, urging their contrary judgement about my case, then charity makes their audience select the concept of knowledge which behaves as they maintain in their reply to me.)
Regarding the second counterexample, their latest argument seems to me very weak. In short, they contend that 'it seems intuitively likely that if p weren’t true, it might not be the case that Nutt speaks the truth regarding p [sic]' (p. 229).
This talk about what is likely true of a partially specified scenario is methodologically flawed. All I require is that there is a possible scenario, however unlikely, in which, if p weren't true, then Nutt - their name for the neighbour in my example - would still speak the truth about p, telling me that it is false. And this is the scenario I have tried to describe, by emphasizing that Nutt had a counterfactually robust desire - and ability, I might have added - to have me believe the truth about whether p is true or not. And my idea was that this desire and ability are counterfactually robust with respect to whether or not p is true. Surely this is possible. And in that case, it seems to me, Nozick's conditions are fulfilled and yet I do not have knowledge. That there is also a possible case nearby which doesn't make trouble for Nozick's theory is irrelevant.
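For reference, here is a rough statement of the tracking conditions at issue - my own paraphrase of the method-relativized version of Nozick's account, using '□→' for the subjunctive conditional, not a quotation from Nozick or from the exchange:

\[
\begin{aligned}
&S \text{ knows that } p \text{ via method } M \text{ only if:}\\
&\quad (1)\ p \text{ is true;}\qquad (2)\ S \text{ believes that } p \text{ via } M;\\
&\quad (3)\ \neg p \;\Box\!\!\rightarrow\; \neg(S \text{ believes that } p \text{ via } M);\qquad
(4)\ p \;\Box\!\!\rightarrow\; (S \text{ believes that } p \text{ via } M).
\end{aligned}
\]

On this reading, the dispute concerns condition (3): the scenario is stipulated so that, if p were false, Nutt would tell me so, I would therefore not believe p via this method, and (3) is satisfied.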
Toward the end of their rejoinder, they seem to fall again into the misunderstanding of Nozick's theory that I tried to ward off in my reply:
[i]f one is to know something about tax law from a tax lawyer, it had better be the case that the tax lawyer would not say “p” about tax law unless p. (p. 230)
It looks here as though they mean that, if someone is to know something about tax law from a tax lawyer, it had better be the case that the tax lawyer would not assert any proposition about tax law unless it were true. As it happens, I am inclined to think this is false. But the point is that this requirement is not part of Nozick's theory. What Nozick's theory requires is that, to know something about tax law from a tax lawyer, it had better be the case that the tax lawyer would not assert that very thing unless it were true. And this is clearly a weaker requirement.
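To make the difference in scope explicit (again a rough formalization of my own, not something either party writes):

\[
\text{Nozick-style requirement, for the particular proposition } p\text{:}\qquad
\neg p \;\Box\!\!\rightarrow\; \neg(\text{the lawyer asserts that } p)
\]
\[
\text{Adams and Clarke's gloss, for every tax-law proposition } q\text{:}\qquad
\forall q\,\big(\neg q \;\Box\!\!\rightarrow\; \neg(\text{the lawyer asserts that } q)\big)
\]

The second entails the first (take q = p) but not conversely, which is why the first is the weaker requirement.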
So, what Adams and Clarke say here does not succeed in neutralizing my counterexample to Nozick's theory. And their justification for saying it is cryptic and strained. They write:
Haze says that we are going rogue, and not staying true to Nozick's conditions. But as every constitutional lawyer knows, the letter of the law does not cover every application to every case. Some interpretation is required. Nozick's theory does not anticipate Haze's attempted counterexamples. But it is not hard to figure out how to apply the theory to the example and it goes as we suggest. (p. 230)
What does this claim, that 'Nozick's theory does not anticipate Haze's attempted counterexamples', mean? I don't really understand this, but it seems weaselly to me.
Hello Tristan (if I may),
Regarding your first counterexample, Adams and Clarke say: "Of course, if the delusion is only about whether or not the neighbor is a lawyer, and not about anything the neighbor says to Haze about tax law, then the delusion does not infect Haze's belief-forming methods about propositions uttered by the neighbor..."
And you reply to that (and what follows) by saying they got it right: "Firstly, the assumption that they make is right: in the example as I intended it, the main delusion I have is that my neighbour is not a lawyer but a divine oracle."
I'm not sure you got their assumption right.
Saying that the delusion is about whether or not the neighbor is a lawyer is not the same as saying that the delusion is that the neighbor is not a lawyer but a divine oracle.
The delusion that he is a divine oracle does seriously infect your belief-forming methods about propositions uttered by your neighbor, since the delusion-infected method is to believe whatever he says, regardless of its content, unless perhaps the content is something you believe to be false with a conviction similar to the conviction behind your delusion (that exception might work to some extent with the lesser delusion they seem to assume, but it's more difficult here).
The fact that your neighbor is usually reliable (with respect to tax law, or in general) is not the point, it seems to me, because your delusional belief is what causes you to believe what he says under almost any circumstances, with regard to tax law or anything else. But Nozick's theory (in your wording, and as I understand it) seems to imply that even if the method is generally unreliable (though generally reliable with regard to tax law), you actually know the tax-law proposition in question.
Still, if there is a conflict of intuitions here perhaps the way around it is to modify the example a little bit.
Let's say your delusion is just the same. But now your neighbor, say Bob, is also a generally very deluded person.
In fact, while Bob is an expert in American tax law, he knows only a little about, say, American criminal law or constitutional law (except with regard to taxes), let alone the laws of other countries. But he likes to believe (and believes) that he knows a lot about all of that. He also likes to believe (and believes) that he knows about astronomy, biology, etc., even though he does not.
In short, he believes himself to be a polymath, but he's far from it. He is still an expert in American tax law.
You and your neighbor have conversations about many things, from extrasolar planets to the legal system of the Roman Empire.
There are many things he has always wanted to tell you, and most of them are false even though he believes they're true. He usually gives you false information, and you buy it because you believe he's a divine oracle (and you trust God, etc.).
On the other hand, when you talk about American tax law, he gets it right, and you believe him not because he's an expert on that (which you don't know, and you shouldn't know anyway, since you don't have any info indicating he's an expert in American tax law), but because you believe he's a divine oracle. But only a very small portion of your conversations with him are about American tax law.
So, he tells you p (which he's always wanted to tell you), which you believe via method M (which is to believe whatever he tells you, because you firmly believe he's a divine oracle).
It seems even clearer to me that you don't know p. I don't know whether Adams and Clarke (and all of the philosophers who agreed with them) would still say that you know that p. But one can find even more radical examples, if needed.
Then again, I guess I might have misunderstood Nozick's theory.
Hi Angra, interesting! I'm inclined to agree that it is, if anything, even clearer that I do not know that p in your case.
It seems like there's a cumulative, piling-up effect at work. Or maybe the epistemic problems coming from the lawyer's side just carry the day all by themselves, and the ones on my side just go along for the ride. In any case, the example feels like a hybrid of two things.
As to whether this would help me in the argument between me on the one hand, and Adams and Clarke on the other, I suspect that with this case they would just rehearse their (to my mind plainly inadequate) arguments against my *second* counterexample, and maintain that Nozick's theory doesn't actually count your case as a case of knowledge.
Going by your description of the theory - which they accept - I don't see how that wouldn't count as knowledge under Nozick's theory, and I agree with your assessment of their reply to your second example.
Still, if Adams and Clarke are suggesting a theory in which the method has to be reliable, I think that wouldn't work, either.
For example, what about a case in which the person is epistemically irrational to a great degree, and yet their method is extremely reliable?
Granted, externalists might insist that in those cases, there is knowledge. But if that is actually the proper answer under the concept of knowledge that externalists have (or at least, some externalists), then it seems that, as you suggest, different people have different concepts of knowledge, and while the differences may not be noticeable in daily life (though who knows?), they become clearer in such cases.
For example, here's a weird cult scenario:
A cult leader talks about other planets, alien civilizations, etc., describing them in great detail, and promises his followers that those who persevere in their belief until 01/01/2030 will be taken to another world (also described in detail) on a spaceship, which will pick them up on that day. He gives no evidence beyond his own testimony, as is usual for cult leaders.
His followers believe him without any more evidence than what's usual for cult followers, so in particular, they're being epistemically irrational to a huge degree.
More precisely, if they were being epistemically rational, they would assign an astronomically low probability to the hypothesis that the cult leader is telling them the truth about alien civilizations, etc., to the proposition that those specific civilizations (described in great detail by the cult leader) exist, to the claim that a spaceship will come pick them up on that specific date, etc.
As it turns out, the cult leader is an alien cyborg AI, designed to look human and capable of fooling humans at that. His behavior is part of a big alien experiment observing humans, and indeed, any reasonable person would just think he's an ordinary cult leader (assuming they don't crack his skull open or something like that!).
The leader/alien/cyborg is committed to its programming:
It will only make true statements about other civilizations, about which he (or "it") has a lot of very accurate information, which it gives to its followers. It gives them no knowledge of physics, engineering, etc., though; he actually behaves like a cult leader as far as any epistemically rational person can tell (with the difference that those alien civilizations exist and are as described, the spaceship is actually coming, etc., but no human has any good reason to even suspect that), and will not give any evidence in support of his claims.
The method used by the cult followers (namely, trust whatever the cult leader tells them about anything alien) is very reliable (as is their leader), and it gives them very accurate info about all of that. But they're being epistemically irrational to a huge degree by believing him (which is part of what the alien experiment is about).
Do they actually know all of those things about the alien civilizations? Do they actually know that the alien ship is coming to pick them up?
In my assessment, they clearly don't know any of that. Maybe they will once the ship picks them up (though that depends on a number of other factors, I think), but until 2030, they surely don't know any of that, at least not as I understand and use the word "know".
'Going by your description of the theory - which they accept - I don't see how that wouldn't count as knowledge under Nozick's theory' - Same here, but then that's also true of my second counterexample.
'I agree with your assessment of their reply to your second example.' - That is good to know! I was starting to wonder if I'm really losing it.
Interesting scenario with the cult leader. Like with your other one, there's a lot going on in it.
"Same here, but then that's also true of my second counterexample."
True.
In the context of your discussion with Adams and Clarke, the alternative scenario (i.e., the one about the deluded tax lawyer who believes he knows about all of the other stuff) blocks their reply that "[i]f one is to know something about tax law from a tax lawyer, it had better be the case that the tax lawyer would not say "p" about tax law unless p" (p. 230), by making him reliable about American tax law (which is the relevant tax law, assuming he's a tax lawyer in America; still, one can adjust the scenario if they demand that the tax lawyer is always correct about the tax law of other countries too).
That said, I find your reply to their rejoinder persuasive, so their reply is mistaken anyway, and in a sense, blocking their reply by making the lawyer reliable about tax law is not an advantage. However, it might be an advantage in the context of a discussion, and in another sense - namely, that they wouldn't be able to insist on their reply, as it wouldn't be applicable even if it were a correct reply to your example (granted, they might come up with another reply that misinterprets the theory).
"Interesting scenario with the cult leader. Like with your other one, there's a lot going on in it."
I was trying to focus on a specific issue, namely that the cult followers have a system that reliably makes correct claims about alien stuff, but such that given the information available to them, the epistemically rational conclusion would be that the claims are almost certainly false. So, the belief-forming mechanism consisting in believing all of those claims (for the wrong reasons) is reliable, but (in my assessment) those beliefs are not knowledge. I thought it would work against a theory in which reliability is required but no internal rationality is (then again, I suppose someone might judge that the cult followers have knowledge).
Maybe a more "pure" example can be constructed? (I'm not sure what other factors I would need to eliminate.)
'I was trying to focus on a specific issue, namely that the cult followers have a system that reliably makes correct claims about alien stuff, but such that given the information available to them, the epistemically rational conclusion would be that the claims are almost certainly false'
This is actually very helpful. That property of the claims you have in mind - that the rational conclusion would be that they are almost certainly false - is an even stronger property, so to speak, than is required for a counterexample. And your example shows something quite striking, namely that beliefs which are truth-tracking (in Nozick's sense) can have even this property.