
Saturday, 28 August 2021

Validity as (Material!) Truth-Preservation in Virtue of Form

Forthcoming in Analytic Philosophy and available on PhilPapers.

Abstract:

According to a standard story, part of what we have in mind when we say that an argument is valid is that it is necessarily truth-preserving (NTP): if the premises are true, the conclusion must also be true. But—the story continues—that’s not enough, since ‘Roses are red, therefore roses are coloured’, for example, while it may be necessarily truth-preserving, is not so in virtue of form. Thus we arrive at a standard contemporary characterisation of validity: an argument is valid when it is NTP in virtue of form. Here I argue that we can and should drop the N; the resulting account is simpler, less problematic, and performs just as well with examples.



Sunday, 7 April 2019

Contradictory Premises and the Notion of Validity

When evaluating arguments in philosophy, it can be tempting to call an argument 'invalid' if you determine that it has contradictory premises. For example, in an introductory philosophy course at the University of Sydney, students are taught that a particular argument for the existence of God - called the Argument from Causation - is invalid because two of its premises contradict each other. It is tempting to call such an argument invalid because we can determine a priori that it is not sound, i.e. that it isn't both valid and such that its premises are true. But on a classical conception of validity, any argument with contradictory premises counts as valid, since it is impossible for all the premises of an argument with contradictory premises to be true, and so a fortiori impossible for the argument to have true premises and false conclusion.

I have heard this anomaly explained away by appeal to the fact that, while an argument with contradictory premises may count as formally valid, we are looking at informal validity, and in an informal sense perhaps any argument with contradictory premises should count as invalid. But I don't think that's right. If 'formal' is meant to signal that we are not interested in the meanings of non-logical terms and are only interested in what can be shown on the basis of the form of the argument, then that is clearly a different issue: premises could be determined to be contradictory on the basis of form alone, or in part on the basis of the meanings of the non-logical terms. The issue of contradictory premises is similarly orthogonal to the issue of 'formality' if 'formal' is instead meant to signify something like 'in an artificial language' or 'in a precise mathematical sense'. 

In fact, it's arguable that the standard treatment of validity of arguments in classical formal logic should be supplemented, so that an argument counts as valid iff it has no countermodel and its premises are jointly satisfiable.
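Spelling the suggestion out in the usual model-theoretic terms (the notation here is mine, offered only as a sketch): where $\Gamma$ is the set of premises and $C$ the conclusion,

$$\Gamma \vDash_{\text{aug}} C \quad \text{iff} \quad \Gamma \cup \{\neg C\} \text{ is unsatisfiable} \ \text{ and } \ \Gamma \text{ is satisfiable,}$$

the first conjunct being the usual classical clause ('no countermodel') and the second the added requirement.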

If we defined 'valid' that way in classical logic, then to test an argument for validity using the tree method, you might have to do two trees. First, one to see whether the premises can all be true together: if the tree says No, the argument is invalid and we can stop. If the tree says Yes, we do another tree to see whether the premises together with the negation of the conclusion can all be true together, and if that tree says No, the argument is valid.
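For propositional arguments, this two-stage check can be mimicked without trees by brute-forcing truth-value assignments. Here is a minimal sketch in Python; the tuple encoding of formulas and the helper names are illustrative choices of mine, not anything drawn from a textbook:

```python
from itertools import product

# Formulas are nested tuples: ('atom', 'P'), ('not', f), ('and', f, g),
# ('or', f, g), ('implies', f, g). For propositional arguments, exhaustively
# checking truth-value assignments does the same job as a finished tree.

def atoms(f):
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(sub) for sub in f[1:]))

def value(f, v):
    op = f[0]
    if op == 'atom':
        return v[f[1]]
    if op == 'not':
        return not value(f[1], v)
    if op == 'and':
        return value(f[1], v) and value(f[2], v)
    if op == 'or':
        return value(f[1], v) or value(f[2], v)
    if op == 'implies':
        return (not value(f[1], v)) or value(f[2], v)

def satisfiable(formulas):
    names = sorted(set().union(*(atoms(f) for f in formulas)))
    for combo in product([True, False], repeat=len(names)):
        v = dict(zip(names, combo))
        if all(value(f, v) for f in formulas):
            return True
    return False

def valid_classical(premises, conclusion):
    # Classical: no valuation makes the premises true and the conclusion false.
    return not satisfiable(list(premises) + [('not', conclusion)])

def valid_augmented(premises, conclusion):
    # Augmented: the premises are jointly satisfiable AND there is no countermodel.
    return satisfiable(list(premises)) and valid_classical(premises, conclusion)

# An argument with contradictory premises: P, not-P, therefore Q.
P, Q = ('atom', 'P'), ('atom', 'Q')
print(valid_classical([P, ('not', P)], Q))   # True
print(valid_augmented([P, ('not', P)], Q))   # False
```

Run on 'P, not-P, therefore Q', the classical test says valid while the augmented test says invalid, which is just the divergence discussed above.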

Whether or not it's worth adopting in practice, it is worth noting that this augmented definition of 'valid' in classical logic seems to correspond more closely to the ordinary, informal notion of deductive validity than the usual definition does. This even delivers at least one of the desiderata which motivate relevance logic.

However, note that while we seem pretty disposed to call an argument invalid if it has contradictory premises, there is no equally strong tendency to say corresponding things using 'follows from', 'is a consequence of', or 'implies'. This is interesting in itself. It looks like, when we're talking about implication, our focus is on the putative implier or impliers and what can be got out of them, whether or not they're true. By contrast, when we talk about arguments, we're often more focused on the conclusion and whether it is shown to be true by the argument in question, so that validity is treated as one of the things we need to verify along the way. If validity is playing that role, it makes sense to declare an argument invalid if we work out that its premises can't all be true.

Monday, 21 October 2013

Formal Logic Isn't a General Science of Validity

'Tell me how you are searching, and I will tell you what you are searching for.' - Wittgenstein.

Introduction 

A curious feature of the philosophical landscape is that the most fundamental and influential discussions of several key issues in the philosophy of formal logic occur, in large part, in introductory logic textbooks. There's something Wild-West-ish and exciting about this, even if it stems partly from regrettable neglect of these issues. While proficient philosophically-trained logic teachers all impart a certain core of technical understanding, they differ greatly on what they tell their students formal logic is all about.

Here I want to give an argument for a thesis in the philosophy of formal logic:

(NGV) Formal logic is not aptly regarded as having, among its aims, that of giving a general account of validity.

This argument, if any good, should be of considerable interest, since the view which (NGV) denies – call it (GV) – is widely held, and plays a large role in our collective understanding of the nature of formal logic. I will use as a foil N.J.J. Smith's 2012 textbook Logic: The Laws of Truth, which endorses (GV).

My argument in compressed form is as follows: there are glaring lacunae in formal logic construed as a general account of validity; if that were one of its aims, it would be blatantly neglecting certain basic phenomena which it is meant to account for. But no one with any close relationship to formal logic cares, and we cannot put this down to laziness or stupidity. Therefore, it is inappropriate to construe formal logic as having a general account of validity among its aims. 

What I Mean by 'Formal Logic' and 'Validity' 

Before explaining the argument further with an example, a brief word about what I mean by 'formal logic' and 'validity'. 'Formal logic' means the formal, mathematized discipline of logic – syntax, formal semantics, proof-theory. Observations about the validity of natural language arguments, even if validity can be aptly conceived as turning on the form of propositions, are not themselves bits of formal logic, on this usage. By 'validity' I mean the thing which gets explicated sometimes as 'necessary truth-preservation in virtue of form', or 'necessary truth-preservation in virtue of the meanings of logical (or subject-matter neutral) vocabulary', but also in other ways. A key point is that the valid inferences are a subset of the deductive ones (in a common sense of 'deductive'). We may deduce 'There is a vehicle' from 'There is a car', but this is not a valid inference in the relevant sense, since it turns on the meanings of 'vehicle' and 'car'. 'There is a car, therefore there is a car or a vehicle', on the other hand, is valid. We say that the conclusion of a valid argument is a logical consequence of the premises. (Often what I am calling 'validity' here gets called 'logical validity', or even 'narrowly logical validity'. This may not be a bad practice, but here I follow the logicians who use 'validity' in this already-narrow way.)
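For illustration, with $Cx$ read as '$x$ is a car' and $Vx$ as '$x$ is a vehicle' (predicate letters chosen here just for the example), the contrast is:

$$\exists x\, Cx \;\therefore\; \exists x\, Vx \qquad \text{(truth-preserving, but only in virtue of the meanings of 'car' and 'vehicle')}$$

$$\exists x\, Cx \;\therefore\; \exists x\,(Cx \lor Vx) \qquad \text{(valid: the form alone guarantees truth-preservation)}$$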

There are different ways of spelling out what validity consists in. 'Valid' may also be a vague or indeterminate predicate (if, for example, the form/content distinction or the distinction between logical and non-logical vocabulary is vague or indeterminate), but that doesn't matter.

This should suffice to identify the relevant basic intuitive idea of validity, an idea which plays a major role in thinking about logic. It is not our purpose here to investigate the notion of validity, or the nature of validity, any further. We just needed to get it clear enough for the purpose of giving the argument for (NGV).

(GV)

Here is a clear expression of a conception involving (GV) - the view that formal logic is aptly regarded as having, among its aims, that of giving a general account of validity - in The Laws of Truth (pp.20-21, Chapter 1: Propositions and Arguments):

When it comes to validity, then, we now have two goals on the table. One is to find a precise analysis of validity. (Thus far we have given only a rough, guiding idea of what validity is: NTP [necessary truth-preservation - TH] guaranteed by form. As we noted, this does not amount to a precise analysis.) The other is to find a method of assessing arguments for validity that is both
1. foolproof: it can be followed in a straightforward, routine way, without recourse to intuition or imagination—and it always gives the right answer;
and
2. general: it can be applied to any argument.
Note that there will be an intimate connection between the role of form in the definition of validity (an argument is valid if it is NTP by virtue of its form) and the goal of finding a method of assessing arguments for validity that can be applied to any argument, no matter what its subject matter. It is the fact that validity can be assessed on the basis of form, in abstraction from the specific content of the propositions involved in an argument (i.e., the specific claims made about the world—what ways, exactly, the propositions that make up the argument are representing the world to be), that will bring this goal within reach.

One of the philosophically most important sections of the book is 14.4, 'Expressive Power', where four kinds of propositions are discussed which may seem to create difficulties for first-order logic with identity (FOL=) construed as a general account of validity (what follows is summary and paraphrase, not quotation):

(1) 2 + 2 = 4. (Translatable into FOL=, but the result is considerably more complex, and this may make us hope for something more elegant; see the sketch after this list.)
 

(2) She will see the doctor but she hasn't yet. (Quantification over times is required to translate this adequately into FOL=, and, as with (1), we might hope for something else. And there is something else: tense logic.)

(3) Propositions involving vague predicates. (FOL= requires predicates to have definite extensions. But fuzzy set theory and other formal tools could deal with these.)

(4) 'There are finitely many Fs'. (Can't be expressed in FOL=, but there are extensions in which it can be.)
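To give a feel for case (1): one standard way of rendering '2 + 2 = 4' in FOL= (not necessarily Smith's own rendering; the predicate letters are illustrative) treats it as a claim about counting, using numerically definite quantifiers:

$$\big(\exists_{=2}x\,Fx \wedge \exists_{=2}x\,Gx \wedge \neg\exists x(Fx \wedge Gx)\big) \rightarrow \exists_{=4}x\,(Fx \vee Gx)$$

where $\exists_{=2}x\,Fx$ abbreviates $\exists x\exists y\,(Fx \wedge Fy \wedge x \neq y \wedge \forall z(Fz \rightarrow (z = x \vee z = y)))$ and $\exists_{=4}$ is spelled out analogously with four variables. Written out without the abbreviations, the result is exactly the sort of unwieldy formula that might make us hope for something more elegant.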

 

Note carefully that Smith is not claiming that classical first-order logic is enough for a general account of validity. He admits, for example, that it cannot adequately represent propositions of type (4). But none of this casts any real doubt on (GV).

I want to bring a different kind of example into the mix. It will perhaps seem a bit boring or basic, but that's actually what makes it philosophically important.


Example: From 'Only' to 'All' 

Consider this argument:

Only horses gallop, therefore all gallopers are horses.

This is at least as non-trivial as 'A and B, therefore A', which formal logic deigns to capture.

 

I think this is, in a way, a more philosophically instructive sort of "problem for classical logic", in that it doesn't seem like much of a problem, and this gives us a way of seeing that (NGV) is true. The argument isn't "fancy", and it doesn't seem to make us want new systems; the propositions involved are translatable, in a loose sense, with no worries at all. But there is no way of capturing the inference in first-order logic. You either wind up with the premise the same as the conclusion, or with a premise which differs from the conclusion (e.g. by using existential quantification) but is no more similar to the natural-language premise than the formal conclusion is, and so fails to capture the original inference at all.
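To see the difficulty concretely, take $Hx$ for '$x$ is a horse' and $Gx$ for '$x$ gallops' (an illustrative choice of letters). The standard translations of premise and conclusion coincide:

$$\text{Only horses gallop} \;\leadsto\; \forall x(Gx \rightarrow Hx) \qquad \text{All gallopers are horses} \;\leadsto\; \forall x(Gx \rightarrow Hx)$$

so the formalized argument is just $\forall x(Gx \rightarrow Hx) \;\therefore\; \forall x(Gx \rightarrow Hx)$, and the step from 'only' to 'all' has disappeared.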

This translatability in a loose sense is important. Since we can quite easily see that only Xs Y if and only if all Y'ers are Xs, we just translate 'only' propositions using ordinary first-order quantifiers. But the logical insight we needed for that was at least as real and non-trivial as that which we need to see that 'A and B' implies 'A'.
 

We must ask the logician: if you're bothering to codify things like 'A and B, therefore A', and you're quite generally interested in the validity of arguments in natural and other languages, why are you so content to leave 'Only horses gallop, therefore all gallopers are horses' unaccounted for by your science?

The fact that we are so content shows (GV) to be misleading, indeed false. In doing formal logic, we are actually doing something quite different from: trying to construct a general account of validity.

It seems like what we are really doing is something more like: developing artificial languages fulfilling certain desiderata, arguments in which can be easily inspected for validity. (Quine, regimentation.) Or perhaps, getting a sense of what validity turns on, without trying to give a general account. (Realistically, it's probably both of these plus many others, right down to aesthetic or even social motivations.) At least, if we were doing either of those things, our formal neglect of the Only-to-All argument would be quite intelligible.


Again: if we're so keen on a general account of validity, how can we be so ready not to care about having no formal account of 'Only Xs Y, therefore all Y'ers are Xs'? Because it is so trivial? That can't be right, since things we do try to capture are no less trivial. No, there is no way out - the fact that we don't care shows that we really aren't so keen on a general formal theory of validity. That idea doesn't capture the real life of formal logic.

We might tell ourselves we want a general account of validity, but then when we're underway, we think 'Why bother formalizing "only", when we've already formalized "all" and can use that?' - the fact that we take that attitude gives the lie to the idea that we're really after a general account of validity.


We sense that it would be fairly uninstructive to formalize 'only' once we've got 'all' etc. - we should let that help guide our thinking about what we're really doing when we do formal logic (hence the Wittgenstein quote at the beginning).

That is my argument for (NGV) and against (GV).

Other Kinds of Examples
 
The above example may be resisted on the grounds that 'only' has been studied from within Montague semantics, and perhaps from other perspectives. My reply to that, in the first instance, is that these contributions were not generally thought of as part of formal logic, nor have they come to be incorporated into it. I think that's the right reply, but it isn't entirely satisfying - it might seem overly legalistic and trusting in the status quo. Fortunately, other examples are to hand.

What other kinds of examples are there, besides the Only-to-All argument, which show the same thing? A couple more I have thought of are:

- 'All men are mortal, therefore everything is such that its being a man materially implies its being mortal', or 'There are men, therefore there is something which is a man'. That is, arguments which take us from basic natural quantifier-constructions to something of the quantifier-variable form. These arguments embody logical insights due to the inventors of quantification theory (such as Frege and Peirce), and yet it seems we don't care about giving formal accounts of the arguments themselves.

- Arguments involving truth-functional compounds where the premise is written in some non-standard notation (such as: Venn diagrams, Gardner-Landini shuttle diagrams, Wittgensteinian ab-notation, truth-tables construed as propositional signs), and the conclusion is written with regular connectives (or vice versa).

These two kinds of examples differ from the Only-to-All argument in involving technically-formed propositions, whereas 'All gallopers are horses' and 'Only horses gallop' are forms that come naturally to us.

Monday, 10 October 2011

Deduction and the Necessary A Posteriori

Consider: There is a cat here, therefore there is an animal here.

Assuming we want to say that this inference is valid in some sense, here are three things we might say about it:

1) It is an elliptical argument, involving an unarticulated premise, namely that cats are animals.

2) It is an enthymematic or gappy argument, involving unarticulated reasoning.

3) It is a complete argument in itself - neither (1) nor (2) is the case.

On the first approach, the deduction is clearly a priori. But this is not the only possible attitude. While the belief that cats are animals could conceivably be overturned by experience, it is arguably not a contingent fact that all the cats around are animals (cf. Kripke 1980). So rather than regarding this as part of the matter being reasoned from, we might regard it as part of the deductive apparatus.

Thus, on the second interpretation, we might regard the move from 'cat' to 'animal' as being licensed by an unarticulated principle of reasoning (which may be expressed in the form of an inference rule, or an axiom such as 'All cats are animals'). Or we might resist even this, and say that nothing is unarticulated - perhaps still allowing that the argument can be justified by the principles of reasoning which are held to be unarticulated on the second interpretation.
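The difference between the first two readings can be made vivid with a rough formalization (the predicate letters, and the use of $Hx$ for '$x$ is here', are my illustrative choices). On reading (1), the argument is really

$$\exists x(Cx \wedge Hx),\ \forall x(Cx \rightarrow Ax) \;\therefore\; \exists x(Ax \wedge Hx),$$

which is straightforwardly valid in first-order logic. On reading (2), the extra material figures not as a premise but as part of the deductive apparatus, something like a rule licensing the move from $Cx$ to $Ax$, while the articulated premise remains $\exists x(Cx \wedge Hx)$ on its own.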

Note how natural these latter two approaches are; there does seem to be some sense in which the conclusion follows from the single articulated premise. Note also that the argument by itself satisfies the natural (admittedly problematic) modal characterization of validity, if we take the relevant modality to be subjunctive or metaphysical modality ("what could have been the case") rather than a priori possibility or epistemic modality ("what could be the case"): it could not have been the case that there was a cat here but no animal here.

This may suggest that, according to some intuitive and central concept of deduction, some facts about what can be deduced from what are empirical (i.e. not knowable a priori).

However, this flies rather completely in the face of previous philosophical thinking about deduction. It also raises the following puzzle: on one way of thinking about necessity, our holding it to be necessary that all cats are animals means that we have made a certain kind of connection between our cat-concept and our animal-concept (an empirically defeasible connection, held constant when describing counterfactual scenarios). But it is natural to think of this conceptual connection as partly constitutive of the content of thoughts involving these concepts - thoughts such as 'There is a cat here' and 'There is an animal here'. Thus, someone who doesn't have such a connection arguably isn't in a position to have those two thoughts at all: there would appear to be little room for them to have those exact thoughts and yet not be able to work out a priori that one implies the other.

My suggestion is that, when faced with this sort of puzzle, one should try to distinguish different ways of individuating content, some more fine-grained than others. When talking about thoughts in, e.g., the context of communication, a relatively coarse-grained individuation scheme is often most appropriate. When talking about the epistemology of deduction - and not just the epistemology - a more fine-grained approach is called for. (That is, an approach where what are for many purposes two instances of the same thought get treated as distinct structures.)

Insofar as this is right, the more general moral is perhaps something like: when doing philosophy, be willing to put multiple modes of content-individuation on the table - don't let one obsess you to the exclusion of all others.

Tristan Haze

Reference

Saul A. Kripke (1980). Naming and Necessity. Harvard University Press.