This is a bit rough and rambling. A much more focused rewritten version, taking into account more of the relevant literature (especially pieces by Bouwsma and Chalmers), is here.
Much ink has been spilled on skeptical arguments like the following:
- (1) If you're a brain in a vat then you don't have hands
- (2) You don't know that you're not a brain in a vat
- (3) Therefore you don't know that you have hands
There are
many variations on this sort of argument, and many issues have been
raised about it, for example the issues of the closure of knowledge
under implication, and the closure of knowledge under known
implication.
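For concreteness, those two principles can be stated schematically as follows (this is my own gloss in standard epistemic-logic shorthand, where Kp reads 'one knows that p'; nothing in what follows hangs on this particular formalization):

$$\text{Closure under implication:}\quad (Kp \land (p \rightarrow q)) \rightarrow Kq$$
$$\text{Closure under known implication:}\quad (Kp \land K(p \rightarrow q)) \rightarrow Kq$$

The skeptical argument relies on something in this vicinity: if knowledge were closed in this way, then knowing you have hands, together with (1), would put you in a position to know you are not a BIV, contradicting the second premise.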
I have
long felt that there is something very dubious about the first
premise in the above formulation (and the analogous premises in the
variations). I suspect that coming to terms with this would involve
throwing a spanner in the works of some of our ways of thinking about
how language functions (we being analytic philosophers, broadly
speaking). Not necessarily at the level of explicit commitments,
either. And perhaps that is part of the reason why this premise has
not been questioned, in anything like the way I want to question it,
much in the literature[1];
it seems very difficult.
In this
essay I want to begin to explore this issue. It is a pretty large and
confusing issue, and this is only meant to be a beginning, so I will
try to be fairly non-technical and to avoid bringing in any very
particular overarching theoretical framework, out of fear that if I
did so the issue or important aspects of it would get lost.
I will
suggest that (1) is not true – or at least, that it isn't true on
its most natural readings. If I am right, then we may have a way of
avoiding the repugnant conclusions of skeptical arguments like the
above, without having to show that we do somehow know that we are not
brains in vats. This seems attractive to me – it does intuitively
seem to me that I do know that I have hands and that I do not
know that I'm not a BIV. Or perhaps better, and from a broadly
Lewisian contextualist perspective on knowledge-ascriptions (a
perspective I find independently attractive)[2],
it seems to me that there are levels of strictness – or sets of
relevant alternatives – on which 'I know I have hands' comes out
true while 'I know I'm not a BIV' comes out false.
So, one
thing which may come out of this discussion is a way of defusing a
large class of skeptical arguments. But my motivation isn't primarily
epistemological – isn't to safeguard certain bits of presumed knowledge. For my part, I don't think that sort of philosophical anxiety about whether we really know such-and-such should always be indulged, with attempts made to alleviate it by the straightforward course of coming up with reasons to be reassured. Such anxiety seems largely pathological to me – something which ought to be scrutinized and dissolved, as I think Wittgenstein tried to do.[3]
No, I'm
more interested in (1) for its own sake, and for the sake of the
issues which come up once we begin to question it. As we will see,
these are fundamental issues in the philosophy of language – for
example, the issue of what propositions are, and the issue of whether
we ought to think of propositions as sorting all possibilities into
two categories (one of which may be empty): those in which the
proposition holds, and those in which it does not. (We will not have
space to go deeply into these issues in a general way, but we will
end up seeing that we have here found one good path into them.)
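As a reference point only – a gloss of my own, not something the discussion below presupposes – that picture is often made precise by modelling a proposition as a total function from possible worlds to truth-values:

$$p : W \rightarrow \{\mathrm{true}, \mathrm{false}\}$$

so that the set of possibilities W is partitioned into $\{w : p(w) = \mathrm{true}\}$ and $\{w : p(w) = \mathrm{false}\}$, one of which may be empty.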
So much
for scene-setting. To kick off the investigation, let us note that
(1) is generally supposed to be accepted readily, as though it were
obvious. It just appears in skeptical arguments as a premise, which
we're meant (by the skeptic) to accept without argument. Once you
start scrutinizing it as I have done, this quickly begins to seem
very odd and confusing. So, to try to avoid being hampered by such
confusion when we do scrutinize it, let us first ask the question:
why might (1) seem true?
Silly as
it sounds – silly as it is – the answer appears to be something
like: when (1) strikes us as true, we are as it were picturing a
brain sitting in a vat, and observing that there are no hands in that
picture. Or we are picturing a brain sitting in a vat, and a mad
scientist tending it, and noticing a striking contrast between the
two figures – the scientist-figure has a body and hands, whereas
the other figure is just an organ (in a vat). Something along those
lines.
And here
is a good place to consider and put aside one particular line of
attack on (1). It is a comically literal-minded objection. I did not
think of it myself, but found it when I was searching the literature
for previous attempts at calling propositions like (1) into question.
(And this is all I found.)
Roush (2010) argues that it is not true that if you're a brain
in a vat then you don't have hands, on the grounds that you might be
a brain in a vat with hands just stuck on (!) – that is, where
there are attached hands in the environment which contains the brain
and the vat, not the environment simulated for the brain. Maybe the
hands are just stuck on with glue and dangle there, or maybe they are
delicately connected up neurologically with the brain, making for a
queer straddling of two “worlds” (or environments, or levels of
reality) on the part of the BIV.
There is
something very frustrating about this objection. It is frustrating, I
think, because if you just accepted this objection, deciding on its
basis that (1) is false, and then walked away, you would have
bypassed all the deep issues in the philosophy of language which we
can dig up by scrutinizing (1), thus losing a valuable opportunity.
Calling
the objection into question – scrutinizing it
– may lead somewhere, however. For instance, we may ask whether a
BIV with the envisaged appendages really counts as 'having hands', or
whether this is really the best candidate meaning for 'having hands'
in connection with a BIV. And here we begin to get the sense of an
abyss: a sense of unforeseen ambiguity or indeterminacy, and a sense that this sort of thing, since it's not very clear how we
should think about it, has ominous implications for how we think
philosophically about language.
These spectres raised just now are at the heart of what I am
concerned with here, but they come up with (1) itself anyway, quite
apart from the literal-minded objection we have just looked at. So,
what I propose is that we put this objection to one side, go back to
what we said about why (1) might seem plausible, and proceed from
there.
(If
you think the objection does show (1) to be simply false, you may be
more comfortable with the ensuing discussion if you exchange (1) for
something like 'If you're a brain in a vat without
appendages as envisaged in Roush (2010),
then you don't have hands'. But really, it hardly matters, since the
point of this discussion is not really to determine the
truth-value of (1). Indeed, the idea that (1) as used in skeptical
arguments has a definite meaning, and a definite truth-value, may not
survive scrutiny. And our idea of what a 'definite meaning' is, and
of what role the notion of definiteness should play in thinking about
meaning, may have to be altered too.)
So, we picture a BIV and there are no hands in the picture. And we
picture a scientist tending the BIV and see a contrast between the
figure of the scientist and the BIV-figure. And with this in mind, we
might be tempted to say 'The BIV doesn't have hands and the scientist
does'.
But consider a different situation, in which we have two BIVs. It
doesn't matter whether or not they are plugged into the same
simulation. What does matter is that, in their lives in their
simulation(s), one of them is an anatomically normal human, while the
other has been in an accident and lost their hands. Mightn't we, if
this was the first case we had considered, be tempted to say 'One BIV
has hands, the other does not'? And if we would be right in so
saying, then we would be wrong to say (without shifting the meanings
of relevant terms) that if you're a brain in a vat you don't have
hands; the first BIV would then be a counterexample to (1).
Here
it might be objected that it would not
be correct to say, unqualifiedly, 'One BIV has hands, the other does
not' – rather, one would, to be both right and completely explicit,
have to say something like 'One BIV has hands in its
simulated environment, the other
does not'.
Suppose
we go along with the objection. We can still ask about things the
BIVs may say, in their
simulations. And we could reason as follows: Surely, if the first BIV
says, in the simulation, 'I have hands', they are, in the simulation,
saying something true. And surely if they say, in the simulation, 'I
am a BIV', they are, in the simulation, saying something true (even
if they could never know it to be true). And thus, if they said 'If
you're a brain in a vat then you don't have hands', they would be
saying something false
– something to which their very case is a counterexample. And if
that's right, how could (1) fail to be false? How could our situation
differ from the BIV in question's situation in such a way that (1) is
true, whereas their utterance – in the simulation – of 'If you're a brain in a vat then you don't have hands' is not? I can see no way. And furthermore, if there were a way, surely it would turn on us
not being BIVs, not
living in a simulation – and in that case, we wouldn't be able to
know (1) without first knowing that we are not BIVs, and so the
skeptical argument could no longer be run.
Now,
the above reasoning seems natural, but of course it could be
challenged. The most salient way it could be challenged would be to
follow Putnam's notorious paper (1981) in saying that, when the BIV
says, in their simulation, 'I am a BIV', they are saying, in their
simulation, something false,
contra the above
reasoning.
I do not have space here to lay out Putnam's arguments in full, and
to discredit them in detail, but I think it is important to realize
that Putnam is completely wrong on this point, and to see why. I will
now briefly try to defend this, and to say something about what was
going on with Putnam for him to be led so far astray.
Putnam begins with a causal theory of reference, according to which
what you're talking about when you say something is what stands in an
appropriate causal relation with your utterance. He argues, from the
causal theory, that since a BIV could have no causal contact with the
brain they are, and the vat they are in, they could not be talking
about that when they say 'I am a brain in a vat' – rather, their
utterance is, according to Putnam, about 'vats-in-the-image', 'or
something related (electronic impulses or program features)'. And
since they, the utterers, are not vats-in-the-image, i.e. not vats
belonging to their simulation, nor the relevant 'related' things,
what they thus say comes out false.
There are lots of things about this we could argue with – the idea
that the BIV's talk might literally refer to electronic impulses or
program features seems to me very crude and objectionable, for
instance – but I will confine myself to three points, the first two
of which are closely related to each other.
Firstly,
note that singular reference – reference to particular objects –
isn't what is in question here. Putnam isn't saying that to say
something which is made true by the state of some particular object O
requires that we have causal connections to O itself. That, after
all, would yield absurd consequences (not that that tends to stop
Putnam, but ignore that; these
absurd consequences aren't as cool or interesting). For example, I
may say, let us suppose on a whim, 'A man will walk into the room
now', and if a man immediately walks in, what I said is true, in
virtue of that particular man's walking in. But of course the man
need not have any causal connection with what I said. All Putnam
would insist on is that, in order to be about men at all, my talk
needed to have an appropriate causal connection with some
man or men. Likewise, in order for a BIV to think or say they are a
BIV, their thought or talk doesn't have to be causally connected with
the brain they are or the vat they are in. It just has to be
connected with some
brain(s) and vat(s).
Secondly,
why can't there be a general category marked with the word 'vat'
which includes as members both “vats-in-the-image” – vats in
simulations – and vats outside simulations? (Likewise for 'brain'.)
I think there can. Consider things like happiness and intelligence: a
BIV with a rich life is surely acquainted with these things, and
causally connected with exemplars of them – and so they can have a
category, for example marked 'expressions of happiness', and this
category would include both things in their simulated environment and
any appropriate things outside the simulation. And so Putnam's
argument falls down here, by implicitly holding that the relevant
reference classes – the 'brain' class and the 'vat' class – can
only include things in the utterer's “world”. Once we see this is
not so, we can go along with Putnam's basic causal-theoretic starting
point, but maintain that there is nothing stopping a BIV from thinking they are a BIV, because they can
form categories – by means of causal connections to brains and vats
in their environment – which manage to include the brain they are
and the vat they are in, despite those particular instances not
being in their environment.
Thirdly, and stepping back a bit, note how implausible and crude Putnam's interpretation of the BIV's utterance 'I am a BIV' is; it's supposed by Putnam to assert something which to the BIV would be obviously wrong – namely, that they are a brain in their environment, in a vat in their environment. And yet a reflective BIV might not find their utterance of it obviously wrong at all. This suggests that something has gone badly wrong. At a very general level, we may say that Putnam's problem is that he has inappropriately treated the language-game of talking about being a BIV as being just like an ordinary one about things in our environment. But it is plainly not that. Language, we might say, is here playing an entirely new trick. We may not be able to come up with a theoretical understanding of it which would satisfy Putnam, but that does not mean he gets to falsify it.
So, if I'm right about Putnam here, then the reasoning we went
through just before considering him seems hard to argue with. And
thus, it seems that (1) isn't true, at least on the most natural ways
of understanding it. At the very least, it should certainly seem by
now that (1) is not the straightforward truth it may have looked to
be at first. There are serious challenges to be raised against the
naïve, unreflective procedure of just (doing something like)
picturing a brain in a vat, observing that there are no hands in the
picture, and drawing (1) as a conclusion.
But
we are in a bit of a muddle now, only halfway through the essay. A
lot of arguments and worries have piled up. I want now to try to
restore our energies by clearing the table and approaching the issue
from the other side: why might we think (1) is false?
I have an intuitive case to make for thinking that (1) is false. It
involves considering statements made in ordinary, everyday
conversation, statements which intuitively seem to imply that the
utterer has hands, but which intuitively seem not to imply that the
utterer is not living in a simulation. For example, suppose someone
asks me to help them with something and I say 'OK, one second - I'm
just washing my hands'.
This
statement – that I'm washing my hands – surely implies that I
have hands. Furthermore, I find it very intuitive that it does
not imply that I'm not living in
a simulation, or that I'm not a BIV; that simply isn't at issue at
all. Whether or not I'm a BIV is completely independent of the truth of what I said.
Having hands is compatible with it not being the case that I'm not a
BIV. And so, having hands is compatible with my being a BIV. And so
it can't be true that if you're a BIV then you don't have hands.
The key intuition there – that my ordinary statement does not imply
that I'm not living in a simulation – can perhaps be bolstered by
thinking a bit about the space of scenarios in which I am living in a
simulation, and seeing that it is possible to take an attitude to
many of these scenarios which is quite unlike regarding them as
epistemic nightmares, i.e. situations in which we're in really bad
shape epistemically – where much of what we ordinarily think we
know fails to even be true.
Certainly we can imagine simulation-scenarios which are
epistemic nightmares. We may be BIVs whose tending scientists are
engaging in all kinds of foul play, planting false memories and
moving things around on us. Also diabolical would be if some or all
of the apparent agents we are interacting with are not sentient, or
not as fully sentient as we think. I don't so much mean that they may
not be constituted the way we are, or the way we think they are –
after all, multiple realizability might be the case – but rather
that maybe all there is to these agents is what's required to
generate our interactions with them. And in lots of cases, corners
may be cut, so to speak – when we think they're off by themselves
having a rich mental life, perhaps often nothing of the sort is true.
But nightmarish scenarios like this are clearly a special subset of
all simulation scenarios; in many of the latter, we may not be wrong
about much of anything. It just might be the case that, unbeknownst
to us, there is a higher level of reality “hosting” the one we
inhabit, and this level may involve brains in vats.
From this point of view, we can see that there is no need to respond
to the news that you're a brain in a vat by revising your belief that
you have hands. Why not treat the news instead as telling you, among
other things, something new about your hands (and everything else in
your environment), namely that they are “hosted” at a higher
level of reality, or, speaking crudely, are constituted by electronic impulses or program features? (I say 'crudely' because the relation is obviously not the ordinary relation of constitution familiar from physical inquiry. Physics can be done in a simulation, too, and facts about
the simulation being a simulation need not be regarded as
belonging to it.)
I contend, then, that once we reflect a bit, we can see that (1) is
false, at least on the most natural ways of construing it.
Why the hedge about 'most natural ways'? Well, there is one way of
construing 'hands' I can think of which is not totally discontinuous
with what 'hands' really means and which would make (1) come
out true. Namely, a way on which hands are taken as a matter of
definition to be things which exist only at the highest level of
reality. (Note, in case it seems woolly or unclear, that this notion
of levels of reality I've been throwing around does not precede, or
exist apart from, considerations of simulations. It is a special
notion for talking about these very special matters. Despite possible
appearances, there's no more general story about it which could be
missing or unsatisfactorily hand-waved to here.)
So there is this construal. But when we adopt it, the conclusion of
the skeptical argument, that we don't know that we have hands, isn't
particularly repugnant any more. And this, by the way, shows that the
construal in question isn't very natural, since we do feel the
conclusion as ordinarily understood to be highly repugnant.
That repugnant conclusion can't be put at the end of the skeptical argument without either rendering (1) false, or keeping (1) true but equivocating on 'hands'.
So much for (1) and its role in the skeptical argument. I will now
begin to conclude, with some more general remarks about meaning and
propositions.
It seems like there's something artificial about pinning a particular
resolution of these issues of 'What exactly does it take to be a
hand, anyway?' on ordinary talk about hands, no matter which one we
pick. Rather, something along the lines of there being no fact of the
matter seems to be the case. Consider in this connection
Wittgenstein's case of the disappearing chair:
§80. I say "There is a
chair". What if I go up to it, meaning to fetch it, and it
suddenly disappears from sight?—"So it wasn't a chair, but
some kind of illusion".—But in a few moments we see it again
and are able to touch it and so on.—"So the chair was there
after all and its disappearance was some kind of illusion".—But
suppose that after a time it disappears again—or seems to
disappear. What are we to say now? Have you rules ready for such
cases—rules saying whether one may use the word "chair"
to include this kind of thing? But do we miss them when we use the
word "chair"; and are we to say that we do not really
attach any meaning to this word, because we are not equipped with
rules for every possible application of it?
So,
what of propositions? What of meaning? Should we say that hand-talk
is somehow incomplete, failing to express determinate propositions?
Well, we could say that, but this is taking the notions of a
proposition and of meaning pretty far from home. And what for?
Perhaps the only answer is: to preserve certain ways of thinking
about how propositions work, and what they do (for example, the idea
we mentioned at the outset that propositions sort all possibilities
into two categories). But is that wise? Were these ways of thinking
the results of investigation, or a priori
requirements? (Cf. §107 of the Investigations.)
In any case, does the breakdown of these ways of thinking here mean
they have to be chucked out entirely? No – we could think of them
as offering an idealized perspective. A perspective which is robust
in some areas of thinking, useless perhaps in others, and worse than
useless in others again.
Now
to stop and take stock. Firstly, (1) is no straightforward truth.
Secondly, there's a lot more to it than there might seem to be at
first glance. Thirdly, it is very arguably false on the most natural
ways of understanding it. There's one somewhat
natural way on which it's true, but on that one the conclusion isn't
very repugnant. Finally, we have looked fleetingly at what all this
might mean for the fundamentals of philosophy of language, and
suggested that certain ways of thinking about language which run into
trouble here are either just bad, or at best are idealizations which
have some value but can easily break down and become inappropriate.
And they do break down
and become inappropriate very quickly once we scrutinize (1).
References
Lewis, D. K. (1996). Elusive Knowledge. Australasian Journal of Philosophy 74 (4): 549–567.

Putnam, H. (1981). 'Brains in a Vat'. Chapter 1 of Reason, Truth, and History. Cambridge University Press.

Roush, S. (2010). Closure on Skepticism. Journal of Philosophy 107 (5): 243–256.

Wittgenstein, L. (1953/2003). Philosophical Investigations: The German Text, with a Revised English Translation. Blackwell.
[1] I say 'much in the
literature' in case there are documents I am unaware of which
question it in something like the way I have in mind; I haven't been
able to find any. On the other hand, this does seem to me to be the
sort of thing that a philosopher might register in passing in a
document mainly about something else. I wouldn't be at all surprised
therefore to find myself anticipated to some extent in that way.
[2] Cf. Lewis (1996).
[3] I am uneasy about this though, since running with such
epistemological worries and trying to meet them straightforwardly
and on their own ground has borne spectacular philosophical
fruit, as it were along the way, even if the worries are ultimately
never thus met. Russell's quest for certainty and his work, in
service of this quest, in mathematical logic and philosophy of
language, seems a spectacular example. The quest seems a sad vestige
of a screwed up childhood, while the result of the quest includes
spectacular advances in logic, the theory of descriptions, and
long-overdue attention to Frege. This kind of alchemy seems
unsettlingly rife in philosophy – at least, it's a bit unsettling
if you hope to do fundamental work in the subject yourself. Another
example would be Nietzsche's writing Zarathustra
in the wake of his humiliating falling out with Paul Rée
and Lou Salomé.