I've decided to follow the recent lead of others and migrate this blog to a new WordPress site.
The big appeal for me is the added functionality---in particular, I'll be able to typeset logical notation using LaTeX commands. That should make things prettier and easier.
Hope to see people over at the new site!
Friday, March 28, 2008
Monday, March 17, 2008
Paracompleteness and credences in contradictions.
The last few posts have discussed non-classical approaches to indeterminacy.
One of the big stumbling blocks about "folklore" non-classicism, for me, is the suggestion that contradictions (A&~A) be "half true" where A is indeterminate.
Here's a way of putting a constraint that appeals to me: I'm inclined to think that an ideal agent ought to fully reject such contradictions.
(Actually, I'm not quite as unsympathetic to contradictions as this makes it sound. I'm interested in the dialetheic/paraconsistent package. But in that setting, the right thing to say isn't that A&~A is half-true, but that it's true (and probably also false). Attitudinally, the ideal agent ought to fully accept it.)
Now the no-interpretation non-classicist has the resources to satisfy this constraint. She can maintain that the ideal degree of belief in A&~A is always 0. Given that:
p(A)+p(B)=p(AvB)+p(A&B)
we have (setting p(A&~A)=0):
p(A)+p(~A)=p(Av~A)
And now, whenever we fail to fully accept Av~A, it will follow that our credences in A and ~A don't sum to one. That's the price we pay for continuing to utterly reject contradictions.
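A toy numerical rendering of that bookkeeping (a sketch in Python; the 0.7 is purely illustrative, not anything the nonclassicist is committed to):

```python
# Credence bookkeeping for the no-interpretation nonclassicist.
p_A_or_notA = 0.7    # credence in Av~A falls short of 1 when A may be indeterminate
p_A_and_notA = 0.0   # contradictions are utterly rejected

# The additivity law p(A)+p(B) = p(AvB)+p(A&B), instantiated with B = ~A,
# then fixes the sum of the credences in A and ~A:
print(p_A_or_notA + p_A_and_notA)  # 0.7 -- p(A) and p(~A) no longer sum to 1
```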
The *natural* view in this setting, it seems to me, is that accepting indeterminacy of A corresponds to rejecting Av~A. So someone fully aware that A is indeterminate should fully reject Av~A. (Here and in the above I'm following Field's "No fact of the matter" presentation of the nonclassicist).
But now consider the folklore nonclassicist, who does take talk of indeterminate propositions being "half true" (or more generally, degree-of-truth talk) seriously. This is the sort of position that the Smith paper cited in the last post explores. The idea there is that indeterminacy corresponds to half-truth, and fully informed ideal agents should set their partial beliefs to match the degree-of-truth of a proposition (e.g. in a 3-valued setting, an indeterminate A should be believed to degree 0.5). [NB: obviously partial beliefs aren't going to behave like a probability function if truth-functional degrees of truth are taken as an "expert function" for them.]
Given the usual min/max take on how these multiple truth values get settled over conjunction, disjunction and negation, for the fully informed agent we'll get p(Av~A) set equal to the degree of truth of Av~A, i.e. 0.5. And exactly the same value will be given to A&~A. So contradictions, far from being rejected, are appropriately given the same doxastic attitude as I assign to "this fair coin will land heads".
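This is easy to check directly from the min/max clauses (a quick sketch; setting an indeterminate A to degree 0.5 is the Smith-style stipulation):

```python
# Strong Kleene clauses over degrees of truth in {0, 0.5, 1}.
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

a = 0.5                  # A indeterminate
print(disj(a, neg(a)))   # Av~A -> 0.5
print(conj(a, neg(a)))   # A&~A -> 0.5: the same value as a fair coin toss
```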
Another way of putting this: the difference between our overall attitude to "the coin will land heads" and "Jim is bald and not bald" only comes out when we consider attitudes to contents in which these are embedded. For example, I fully disbelieve B&~B when B=the coin lands heads; but I half-accept it when B=A&~A. That doesn't at all ameliorate the implausibility of the initial identification, for me, but it's something to work with.
In short, the Field-like nonclassicist sets the credence in A&~A to 0; and that seems exactly right. Given this and one or two other principles, we get a picture where our confidence in Av~A can take any value---right down to 0; and as flagged before, the probabilities of A and ~A carve up this credence between them, so in the limit where Av~A has probability 0, they take probability 0 too.
But the folklore nonclassicist I've been considering, for whom degrees-of-truth are an expert function for degrees-of-belief, has 0.5 as a pivot. For the fully informed, Av~A always exceeds this by exactly the amount that A&~A falls below it---and where A is indeterminate, we assign them all probability 0.5.
As will be clear, I'm very much on the Fieldian side here (if I were to be a nonclassicist in the first place). It'd be interesting to know whether folklore nonclassicists do in general have a picture about partial beliefs that works as Smith describes. Consistently with taking semantics seriously, they might think of the probability of A as equal to the measure of the set of possibilities where A is perfectly true. That will always make the probability of A&~A 0 (since it's never perfectly true), and will meet various of the other Fieldian descriptions of the case. What it does put pressure on is the assumption (more common in degree theorists than 3-value theorists, perhaps) that we should describe degree-of-truth-0.5 as a way of being "half true"---why, in a situation where we know A is half true, would we be compelled to fully reject it? So it does seem to me that the rhetoric of folklore degree theorists fits a lot better with Smith's suggestions about how partial beliefs work. And I think it's objectionable on that account.
[Just a quick update. First observation: to get a fix on the "pivot" view, think of the constraint as being that P(A)+P(~A)=1. Then we can see that P(Av~A)=1-P(A&~A), which summarizes the result.

Second observation: I mentioned above that something that treats the degrees of truth as an expert function "won't behave like a probability function". One reflection of that is that the logic-probability link will be violated, given certain choices for the logic. E.g. suppose we require valid arguments to preserve perfect truth (e.g. we're working with the K3 logic). Then A&~A will be inconsistent. And, for example, P(A&~A) can be 0.5 while, for some unrelated B, P(B) is 0. But in the logic A&~A|-B, so probability has decreased over a valid argument. Likewise if we're preserving non-perfect-falsity (e.g. we're working with the LP system): Av~A will then be a validity, but P(Av~A) can be 0.5 while P(B) is 1. These are the 3-valued cases, but clearly the point generalizes to the analogous definitions of validity in a degree-valued setting. One of the tricky things about thinking about the area is that there are lots of choice-points around, and one is the definition of validity. So, for example, one might demand that valid arguments preserve both perfect truth and non-perfect falsity; then the two arguments above drop away, since neither |-Av~A nor A&~A|-B holds in this logic. The generalization of this to the many-valued setting is to demand e-truth preservation for every e. Clearly these logics are far more constrained than K3 or LP, and so there's a better chance of avoiding violations of the logic-probability link. Whether one gets away with it is another matter.]
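Here is a brute-force rendering of those two violations (a sketch under the strong Kleene tables; the probability values are the expert-function ones from the main post):

```python
from itertools import product

VALUES = (0.0, 0.5, 1.0)
neg = lambda a: 1 - a
conj = lambda a, b: min(a, b)
disj = lambda a, b: max(a, b)

def valid(premises, conclusion, designated):
    # Validity = preservation of designated value over all 3-valued valuations.
    return all(designated(conclusion(a, b))
               for a, b in product(VALUES, repeat=2)
               if all(designated(p(a, b)) for p in premises))

contradiction = lambda a, b: conj(a, neg(a))  # A&~A
lem = lambda a, b: disj(a, neg(a))            # Av~A
unrelated_B = lambda a, b: b

# K3: designated value is perfect truth. A&~A is explosive (vacuously valid),
# yet the expert function can set P(A&~A)=0.5 while P(B)=0.
print(valid([contradiction], unrelated_B, lambda v: v == 1.0))  # True

# LP: designated values are the non-perfectly-false ones. Av~A is a validity,
# yet the expert function can set P(Av~A)=0.5 while P(B)=1.
print(valid([], lem, lambda v: v > 0.0))                        # True
```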
Regimentation (x-post).
Here's something you frequently hear said about ontological commitment. First, that to determine the ontological commitments of some sentence S, one must look not at S, but at a regimentation or paraphrase of S, S*. Second (very roughly), you determine the ontological commitments of S by looking at what existential claims follow from S*.
Leave aside the second step of this. What I'm perplexed about is how people are thinking about the first step. Here's one way to express the confusion. We're asked about the sentence S, but to determine the ontological commitments we look at features of some quite different sentence S*. But what makes us think that looking at S* is a good way of finding out about what's required of the world for S to be true?
Reaction (1). The regimentation may be constrained so as to make the relevance of S* transparent. Silly example: regimentation could be required to be null, i.e. every sentence has to be "regimented" as itself. No mystery there. Less silly example: the regimentation might be required to preserve meaning, or truth-conditions, or something similar. If that's the case, then one could plausibly argue that the OC's of S and S* coincide, and looking at the OC's of S* is a good way of figuring out what the OC's of S are.
(The famous "symmetry" objections are likely to kick in here; i.e. if certain existential statements follow from S but not from S*, and what we know is that S and S* have the same OC's, why take it that S* reveals those OC's better than S?---so for example if S is "prime numbers exist" and S* is a nominalistic paraphrase, we have to say something about whether S* shows that S is innocent of OC to prime numbers, or whether S shows that S* is in a hidden way committed to prime numbers).
Obviously this isn't plausibly taken as Quine's view---the appeal to synonymy is totally unQuinean (moreover, in Word and Object he's pretty explicit that the regimentation relationship is constrained by whether S* can play the same theoretical role as we initially thought S played---and that'll allow for lots of paraphrases where the sentences don't even have the appearance of being truth-conditionally equivalent).
Reaction (2). Adopt a certain general account of the nature of language. In particular, adopt a deflationism about truth and reference. Roughly: T- and R-schemes are in effect introduced into the object language as defining a disquotational truth-predicate. Then note that a truth-predicate so introduced will struggle to explain the predications of truth for sentences not in one's home language. So appeal to translation, and let the word "true" apply to a sentence in a non-home language iff that sentence translates to some sentence of the home language that is true in the disquotational sense. Truth for non-home languages is then the product of translation and disquotational truth. (We can take the "home language" for present purposes to be each person's idiolect).
I think from this perspective the regimentation steps in the Quinean characterization of ontological commitment have an obvious place. Suppose I'm a nominalist, and refuse to speak of numbers. But the mathematicians go around saying things like "prime numbers exist". Do I have to say that what they say is untrue (am I going to go up to them and tell them this?)? Well, they're not speaking my idiolect; so according to the deflationary conception under consideration, what I need to do is figure out whether their sentences translate to something that's deflationarily true in my idiolect. And if I translate them according to a paraphrase on which their sentences pair with something that is "nominalistically acceptable", then it'll turn out that I can call what they say true.
This way of construing the regimentation step of ontological commitment identifies it with the translation step of the translation-disquotation treatment of truth sketched above. So obviously what sorts of constraints we have on translation will transfer directly to constraints on regimentation. One *could* appeal to a notion of truth-conditional equivalence to ground the notion of translatability---and so get back to a conception whereby synonymy (or something close to it) was central to our analysis of language.
It's in the Quinean spirit to take translatability to stand free of such notions (to make an intuitive case for separation here, one might note, for example, that synonymy should be an equivalence relation, whereas translatability is plausibly non-transitive). There are several options. Quine, I guess, focuses on preservation of patterns of assent and dissent to translated pairs; Field appeals to his projectivist treatment of norms and takes "good translation" as something to be explained in projective terms. No doubt there are other ways to go.
This way of defending the regimentation step in treatments of ontological commitment turns essentially on deflationism about truth; and more than that, on a non-universal part of the deflationary project: the appeal to translation as a way to extend usage of the truth-predicate to non-home languages. If one has some non-translation story about how this should go (and there are some reasons for wanting one, to do with applying "true" to languages whose expressive power outstrips that of one's own) then the grounding for the regimentation step falls away.
So the Quinean regimentation-involving treatment of ontological commitment makes perfect sense within a Quinean translation-involving treatment of language in general. But I can't imagine that people who buy into the received view of ontological commitment really mean to be taking a stance on deflationism vs. its rivals, or on the exact implementation of deflationism.
Of course, regimentation or translatability (in a more Quinean, preservation-of-theoretical-role sense, rather than a synonymy-sense) can still be significant for debates about ontological commitments. One might think that arithmetic was ontologically committing, but that the existence of some nominalistic paraphrase suited to play the same theoretical role gives one some reassurance that one doesn't *have* to use the committing language. Maybe overall these kinds of relationships will undermine the case for believing in dubious entities---not because ordinary talk isn't committed to them, but because for theoretical purposes talk needn't be committed to them. But unlike the earlier role for regimentation, this isn't a "hermeneutic" result. E.g. on the Quinean way of doing things, some non-home sentence "there are prime numbers" can be true, despite there being no numbers---just because the best translation pairs the quoted sentence with something other than the home sentence "there are prime numbers". This kind of flexibility is apparently lost if you ditch the Quinean use of regimentation.
Saturday, March 15, 2008
Arche talks
In a few weeks' time (31st March-5th April) I'm going to be visiting the Arche research centre in St Andrews, and giving a series of talks. I studied at Arche for my PhD, so it'll be really good to go back and see what's going on.
The talks I'm giving relate to the material on indeterminacy and probability (in particular, evidential probability or partial belief). The titles are as follows:
- Indeterminacy and partial belief I: The open future and future-directed belief.
- Indeterminacy and partial belief II: Conditionals and conditional belief.
- Indeterminacy and partial belief III: Vague survival and de se belief.
But why should you believe that key principle about how attitudes to indeterminacy constrain attitudes to p? The case I've been focussing on up till now has concerned a truth-value gappy position on indeterminacy. With a broadly classical logic governing the object language, one postulates truth-value gaps in indeterminate cases. There's then an argument directly from this to the sort of revisionism associated with supervaluationist positions in vagueness. And from there, together with a certain consistency requirement on rational partial belief (or evidence), we get the result. The consistency requirement is simply the claim that if q follows from p, one cannot rationally invest more confidence in p than one invests in q (given, of course, that one is aware of the relevant facts).
The only place I appeal to what I've previously called the "Aristotelian" view of indeterminacy (truth value gaps but LEM retained) is in arguing for the connection between attitudes to determinately-p and attitudes to p. But I've just realized something that should have been obvious all along---which is that there's a quick argument to something similar for someone who thinks indeterminacy is marked by a rejection of excluded middle. Assume, to begin with, that the paracompletist nonclassicist will think that in borderline cases, characteristically, one should reject the relevant instance of excluded middle. So if one is fully convinced that p is borderline, one should utterly reject pv~p.
It's dangerous to generalize about non-classical systems, but the ones I'm thinking of all endorse the claim p|-pvq---i.e. disjunction introduction. So in particular, an instance of excluded middle will follow from p.
But if we utterly reject pv~p in a borderline case (assign it credence 0), then by the probability-logic link we should utterly reject (assign credence 0) anything from which it follows.
In particular, we should assign credence 0 to p. And by parallel reasoning, we should assign credence 0 to ~p.
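Spelled out, with the probability-logic link in the form "if p entails q then P(p) is no greater than P(q)", the argument is just this two-liner (a worked rendering of what's in the text):

```latex
% Disjunction introduction: p \vdash p \vee \neg p. By the probability-logic
% link and full rejection of excluded middle at a borderline case:
P(p) \;\le\; P(p \vee \neg p) \;=\; 0 \quad\Longrightarrow\quad P(p) = 0,
% and symmetrically \neg p \vdash p \vee \neg p, so P(\neg p) = 0.
```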
[Edit: there's a question, I think, about whether the non-classicist should take us to utterly reject LEM in a borderline case (i.e. degree of partial belief = 0). The folklore non-classicist, at least, might suggest that on her conception degrees of truth should be expert functions for partial beliefs---i.e. absent uncertainty about what the degrees of truth are, one should conform one's partial beliefs to the degrees of truth. Nick J. J. Smith has a paper where he works out a view that has this effect, from what I can see. It's available here and is well worth a read. If a paradigm borderline case for the folklore nonclassicist is one where the degrees of truth of p, ~p and pv~p are all 0.5, then one's degree of belief in all of them should be 0.5. And there's no obvious violation of the probability-logic link here. (At least in this specific case. The logic will have to be pretty constrained if it isn't to violate the probability-logic connection somewhere.)]
If all this is correct, then I don't need to restrict myself to discussing the consequences of the Aristotelian/supervaluational sort of view. Everything will generalize to cover the nonclassical cases---and will cover both the folklore nonclassicist and the no-interpretation nonclassicist discussed in the previous posts (here's a place where there's convergence).
[A folklore nonclassicist might object that for them, there isn't a unique "logic" for which to run the argument. If one focuses on truth-preservation, one gets say a Kleene logic; if one focuses on non-falsity preservation, one gets an LP logic. But I don't think this thought really goes anywhere...]
Friday, March 14, 2008
Non-classical logics: the no interpretation account
In the previous post, I set out what I took to be one folklore conception of a non-classicist treatment of indeterminacy. Essential elements were (a) the postulation of not two, but several truth statuses; (b) the treatment of "it is indeterminate whether" (or degreed variants thereof) as an extensional operator; (c) the generalization to this setting of a classicist picture, where logic is defined as truth preservation over a range of reinterpretations, one amongst which is the interpretation that gets things right.
I said in that post that I thought folklore non-classicism was a defensible position, though there are some fairly common maneuvers which I think the folklore non-classicist would be better off ditching. One of these is the idea that the intended interpretation is describable "only non-classically".
However, there's a powerful alternative way of being a non-classicist. The last couple of weeks I've had a sort of road to Damascus moment about this, through thinking about non-classicist approaches to the Liar paradox---and in particular, by reading Hartry Field's articles and new book where he defends a "paracomplete" (excluded-middle rejecting) approach to the semantic paradoxes and work by JC Beall on a "paraconsistent" (contradiction-allowing) approach.
One interpretative issue with the non-classical approaches to the Liar and the like is that a crucial element is a truth-predicate that works very unlike the notion of "truth" or "perfect truth" ("semantic value 1", if you want neutral terminology) that features in the many-valued semantics. But that's not necessarily a reason by itself to start questioning the folklore picture. For it might be that "truth" is ambiguous---sometimes picking up on a disquotational notion, sometimes tracking the perfect-truth notion featuring in the nonclassicist's semantics. But in fact there are tensions here, and they run deep.
Let's warm up with a picky point. I was loosely throwing around terms like "3-valued logic" in the last post, and mentioned the (strong) Kleene system. But then I said that we could treat "indeterminate whether p" as an extensional operator (the "tertium operator" that makes "indet p" true when p is third-valued, and otherwise false). But that operator doesn't exist in the Kleene system---the Kleene system isn't expressively complete with respect to the truth functions definable over three values, and this operator is one of the truth-functions that isn't there. (Actually, I believe if you add this operator, you do get something that is expressively complete with respect to the three valued truth-functions).
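One way to make the expressive-incompleteness claim vivid: every unary operator you can build from the Kleene connectives maps the middle value to itself, and the tertium operator doesn't. A brute-force sketch:

```python
# Close the unary truth-functions on {0, 0.5, 1} under the Kleene connectives,
# starting from the bare variable p. Functions are stored as value-tables
# (f(0), f(0.5), f(1)).
VALUES = (0.0, 0.5, 1.0)

def close_under_kleene():
    fns = {VALUES}  # the identity table: the formula "p" itself
    while True:
        new = set(fns)
        for f in fns:
            new.add(tuple(1 - x for x in f))                     # ~
            for g in fns:
                new.add(tuple(min(x, y) for x, y in zip(f, g)))  # &
                new.add(tuple(max(x, y) for x, y in zip(f, g)))  # v
        if new == fns:
            return fns
        fns = new

definable = close_under_kleene()
print(len(definable))                       # 4 of the 27 possible unary functions
print(all(f[1] == 0.5 for f in definable))  # True: all of them fix 0.5
print((0.0, 1.0, 0.0) in definable)         # False: the tertium operator isn't there
```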
One might take this to be just an expressive limitation of the Kleene system. After all, one might think, in the intended interpretation there is a truth-function behaving in the way just described lying around, and we can introduce an expression that picks up on it if we like.
But it's absolutely crucial to the nonclassical treatments of the Liar that we can't do this. The problem is that if we have this operator in the language, then "exclusion negation" is definable---an operator "neg" such that "neg p" is true when p is false or indeterminate, and otherwise false (this will correspond to "not determinately p"---i.e. ~p v indeterminate p, where ~ is so-called "choice" negation, i.e. |~p|=1-|p|). "p v neg p" will be a tautology; and arbitrary q will follow from the pair {p, neg p}. But this is exactly the sort of device that leads to so-called "revenge" puzzles---Liar paradoxes that are paradoxical even in the 3-valued system. Very roughly, it looks as if on reasonable assumptions a system with exclusion negation can't have a transparent truth predicate in it (something where p and T(p) are intersubstitutable in all extensional contexts). It's the whole point of Field and Beall's approaches to retain something with this property. So they can't allow that there is such a notion around (so, for example, Beall calls such notions "incoherent").
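For concreteness, here's the definability claim computed out (a sketch: with the tertium operator around, exclusion negation is ~p v indet p, and "p v neg p" comes out designated on every valuation):

```python
def choice_neg(a): return 1 - a                  # |~p| = 1 - |p|
def indet(a): return 1.0 if a == 0.5 else 0.0    # the tertium operator

def excl_neg(a):                                 # "neg p" := ~p v indet p
    return max(choice_neg(a), indet(a))

for a in (0.0, 0.5, 1.0):
    print(a, excl_neg(a), max(a, excl_neg(a)))   # last column: p v neg p
# neg p is true exactly when p is false or indeterminate, and
# "p v neg p" takes value 1 on every row: a 3-valued tautology.
```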
What's going on? Aren't these approaches just denying us the resources to express the real Liar paradox? The key, I think, is a part of the nonclassicist picture that Beall and Field are quite explicit about and which totally runs against the folklore conception. They do not buy into the idea that model theory is ranging over a class of "interpretations" of the language among which we might hope to find the "intended" interpretation. The core role of the model theory is to give an extensionally adequate characterization of the consequence relation. But the significance of this consequence relation is not to be explained in model-theoretic terms (in particular, in terms of one among the models being intended, so that truth-preservation on every model automatically gives us truth-preservation simpliciter).
(Field sometimes talks about the "heuristic value" of this or that model, and explicitly says that there is something more going on than just the use of model theory as an "algebraic device". But while I don't pretend to understand exactly what is being invoked here, it's quite clear that the "added value" doesn't consist in some classical 3-valued model being "intended".)
Without appeal to the intended interpretation, I just don't see how the revenge problem could be argued for. The key thought was that there is a truth-function hanging around just waiting to be given a name, "neg". But without the intended interpretation, what does this even mean? Isn't the right thought simply that we're characterizing a consequence relation using rich set-theoretic resources---in terms of which we can draw distinctions that correspond to nothing in the phenomenon being modelled?
So it's absolutely essential to the nonclassicist treatment of the Liar paradox that we drop the "intended interpretation" view of language. Field, for one, has a ready-made alternative approach to suggest---a Quinean combination of deflationism about truth and reference, with perhaps something like translatability being invoked to explain how such predicates can be applied to expressions in a language other than one's own.
I'm therefore inclined to think of the non-classicism---at least about the Liar---as a position that *requires* something like this deflationist package. Whereas the folklore non-classicist I was describing previously is clearly someone who takes semantics seriously, and who buys into a generalization of the powerful connections between truth and consequence that a semantic theory of truth affords.
When we come to the analysis of vagueness and other (non-semantic-paradox-related) kinds of indeterminacy, it's now natural to consider this "no interpretation" non-classicism. (Field does exactly this---he conceives of his project as giving a unified account of the semantic paradoxes and the paradoxes of vagueness. So at least *this* kind of nonclassicism we can confidently attribute to a leading figure in the field.) All the puzzles described previously for the non-classicist position are thrown into a totally new light once we make this move.
To begin with, there's no obvious place for the thought that there are multiple truth statuses. For you get that by looking at a many-valued model and imagining it to be an image of what the intended interpretation of the language must be like. And that is exactly the move that's now illegitimate. Notice that this undercuts one motivation for going towards a fuzzy logic---the idea that one represents vague predicates as somehow smoothly varying in truth status. Likewise, the idea that we're just "iterating a bad idea" in multiplying truth values doesn't hold water on this conception---since the many values assigned to sentences in models just don't correspond to truth statuses.
Connectedly, one shouldn't say that contradictions can be "half true" (nor that excluded middle is "half true"). It's true (on, say, the Kleene approach) that you won't have ~(p&~p) as a tautology. Maybe you could object to *that* feature. But that to me doesn't seem nearly as difficult to swallow as a contradiction having "some truth to it", despite the fact that from a contradiction everything follows.
One shouldn't assume that "determinately" should be treated as the tertium operator. Indeed, if you're shooting for a combined non-classical theory of vagueness and semantic paradoxes, you *really* shouldn't treat it this way, since as noted above this would give you paradox back.
There is therefore a central and really important question: what is the non-classical treatment of "determinately" to be? Sample answer (lifted from Field's discussion of the literature): define D(p) as p&~(p-->~p), where --> is a certain fuzzy logic conditional. This, Field argues, has many of the features we'd intuitively want a determinately operator to have; and in particular, it allows for non-trivial iterations. So if something like this treatment of "determinately" were correct, then higher-order indeterminacy wouldn't be obviously problematic (Field himself thinks this proposal is on the right lines, but that one must use another kind of conditional to make the case).
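The text leaves the conditional unspecified ("a certain fuzzy logic conditional"). Taking the Łukasiewicz conditional as a stand-in, which is my assumption here purely for illustration, the proposal computes as follows, and iterating D really is non-trivial:

```python
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def imp(a, b): return min(1.0, 1 - a + b)  # Lukasiewicz conditional (assumed here)

def D(a):
    # D(p) := p & ~(p --> ~p)
    return conj(a, neg(imp(a, neg(a))))

for a in (0.0, 0.5, 0.7, 0.9, 1.0):
    print(a, D(a), D(D(a)))
# D sends everything up to 0.5 to 0, and |p| above that to 2|p|-1; since
# D(D(p)) differs from D(p), iterations don't collapse, leaving room for
# non-trivial higher-order indeterminacy.
```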
"No interpretation" nonclassicism is an utterly, completely different position from the folklore nonclassicism I was talking about before. For me, the reasons to think about indeterminacy and the semantic and vagueness-related paradoxes in the first place, is that they shed light on the nature of language, representation, logic and epistemology. And on these sorts of issues, the no interpretation nonclassicism and the folklore version take diametrically opposed positions on such issues, and flowing from this, the appropriate ways to arguing for or against these views are just very very different.
Non-classical logics: some folklore
Having just finished the final revisions to my Phil Compass survey article on Metaphysical indeterminacy and ontic vagueness (penultimate draft available here), I started thinking some more about how those who favour non-classical logics think of their proposal (in particular, people who think that something like the Kleene 3-valued logic, or some continuum-valued generalization of it, is the appropriate setting for analyzing vagueness or indeterminacy).
The way that I've thought of non-classical treatments in the past is, I think, a natural interpretation of one non-classical picture, and one that's reasonably widely shared. In this post, I'm going to lay out some of that folklore-y conception of non-classicism. (I won't attribute views to authors, since I'm starting to wonder whether elements of the folklore conception are characterizations offered by opponents, rather than something the nonclassicists should accept---ultimately I want to go back through the literature and check exactly what people really do say in defence of non-classicism.)
Here's my take on folklore nonclassicism. While classicists think there are two truth-statuses, non-classicists believe in three, four or continuum-many truth-statuses (let's focus on the 3-valued system for now). They might have various opinions about the structure of these truth-statuses---the most common being that they're linearly ordered (so for any two truth-statuses, one is truer than the other). Some sentences (say, "Jimmy is bald") get a status that's intermediate between perfect truth and perfect falsity. And if we want to understand the operator "it is indeterminate whether" in such settings, we can basically treat it as a one-place extensional connective: "indeterminate(p)" is perfectly true just in case p has the intermediate status; otherwise it is perfectly false.
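One consequence worth flagging now, since it matters for the higher-order worries below (a sketch): treated as an extensional connective in this way, "indeterminate" iterates trivially, because "indeterminate(p)" itself never takes the intermediate status:

```python
def indet(a):
    # "indeterminate(p)": perfectly true iff p has the intermediate status
    return 1.0 if a == 0.5 else 0.0

for a in (0.0, 0.5, 1.0):
    print(a, indet(a), indet(indet(a)))
# indet(indet(p)) is 0 on every row: on this treatment there is never
# indeterminacy about whether something is indeterminate.
```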
So interpreted, non-classicism generalizes classicism smoothly. Just as the classicist can think there is an intended interpretation of language (a two-valued model which gets the representational properties of words right), the non-classicist can think there's an intended interpretation (say, a three-valued model getting the representational features right). And that then dovetails very nicely with a model-theoretic characterization of consequence as truth-preservation under (almost) arbitrary reinterpretations of the language. For if one knows that some pattern is truth-preserving under arbitrary reinterpretations of the language, then that pattern is truth-preserving in particular in the intended interpretation---which is just to say that it preserves truth simpliciter. This forges a connection between validity and preserving a status we have all sorts of reason to be interested in---truth. (Of course, one just has to write down this thought to start worrying about the details. Personally, I think this integrated package is tremendously powerful and interesting, deserves detailed scrutiny, and should be given up only as an option of last resort---but maybe others take a different view.) All this carries over to the non-classicist position described. So for example, on a Kleene system, validity is a matter of preserving perfect truth under arbitrary reinterpretations---and to the extent we're interested in reasoning which preserves that status, we've got the same reasons as before to be interested in consequence. Of course, one might also think that reasoning that preserves non-perfect-falsity is an interesting thing to think about. And very nicely, we have a systematic story about that too---this non-perfect-falsity sense of validity would be the paraconsistent logic LP (though of course not under an interpretation where contradictions get to be true).
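To see how one set of tables yields the two consequence relations, here's a small brute-force sketch: disjunctive syllogism is valid when validity is perfect-truth preservation (the Kleene logic K3), but invalid when it's non-perfect-falsity preservation (LP):

```python
from itertools import product

VALUES = (0.0, 0.5, 1.0)
neg = lambda a: 1 - a
disj = lambda a, b: max(a, b)

def valid(premises, conclusion, designated):
    # Validity as preservation of designated status over all reinterpretations.
    return all(designated(conclusion(a, b))
               for a, b in product(VALUES, repeat=2)
               if all(designated(p(a, b)) for p in premises))

premises = [lambda a, b: disj(a, b), lambda a, b: neg(a)]  # AvB, ~A
conclusion = lambda a, b: b                                # B

print(valid(premises, conclusion, lambda v: v == 1.0))  # True:  valid in K3
print(valid(premises, conclusion, lambda v: v > 0.0))   # False: invalid in LP
```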
With this much on board, one can put into position various familiar gambits in the literature.
(2) seems pretty interesting. It looks like the non-classicist's treatment of indeterminacy, if they stick in the 3-valued setting, doesn't allow for "higher-order" indeterminacy at all. Now, if the nonclassicist is aiming to treat determinacy rather than vagueness *in general* (say if they're giving an account of the indeterminacy purportedly characteristic of the open future, or of the status of personal identity across fission cases) then it's not clear one need to posit higher-order indeterminacy.
I should say that there's one response to the "higher order" issues that I don't really understand. That's the move of saying that strictly, the semantics should be done in a non-classical metalanguage, where we can't assume that "x is true or x is indeterminate or x is false" itself holds. I think Williamson's complaints here in the chapter of his vagueness book are justified---I just don't know how what the "non-classical theory" being appealed to here is, or how one would write it down in order to assess its merits (this is of course just a technical challenge: maybe it could be done).
I'd like to point out one thing here (probably not new to me!). The "nonclassical metalanguage" move at best evades the challange that by saying that there's an intended 3-valued interpretation, one is committed to deny higher-order indeterminacy. But we achieve this, supposedly, by saying that the intended interpretation needs to be described non-classically (or perhaps notions like "the intended interpretation" need to be replaced by some more nuanced characterization). The 3-valued logic is standardly defined in terms of what preserves truth over all 3-valued interpretations describable in a classical metalanguage. We might continue with the classical model-theoretic characterization of the logic. But then (a) if the real interpretation is describable only non-classically, it's not at all clear why truth-preservation in all classical models should entail truth-preservation in the real, non-classical interpretation. And moreover, our object-language "determinacy" operator, treated extensionally, will still trivially iterate---that was a feature of the *logic* itself. This last feature in particular might suggest that we should really be characterizing the logic as truth-preservation under all interpretations including those describable non-classically. But that means we don't even have a fix on the *logic*, for who knows what will turn out to be truth-preserving on these non-classical models (if only because I just don't know how to think about them).
To emphasize again---maybe someone could convince me this could all be done. But I'm inclined to think that it'd be much neater for this view to deny higher-order indeterminacy---which as I mentioned above just may not be a cost in some cases. My suggested answer to (4), therefore, is just to take it on directly---to provide independent motivation for wanting however many values that is independent of having higher-order indeterminacy around (I think Nick J.J. Smith's AJP paper "Vagueness as closeness" pretty explicitly takes this tack for the fuzzy logic folk).
Anyway, I take this to be some of the folklore and dialectical moves that people try out in this setting. Certainly it's the way I once thought of the debate shaping up. It's still, I think, something that's worth thinking about. But in the next post I'm going to say why I think there's a far far more attractive way of being a non-classicist.
The way I've thought of non-classical treatments in the past is, I think, a natural interpretation of one non-classical picture, and I suspect it's reasonably widely shared. In this post, I'm going to lay out some of that folklore-y conception of non-classicism. (I won't attribute views to authors, since I'm starting to wonder whether elements of the folklore conception are characterizations offered by opponents rather than something the nonclassicists should accept---ultimately I want to go back through the literature and check exactly what people really do say in defence of non-classicism.)
Here's my take on folklore nonclassicism. While classicists think there are two truth-statuses, non-classicists believe in three, four, or continuum-many truth-statuses (let's focus on the 3-valued system for now). They might have various opinions about the structure of these truth-statuses---the most common being that they're linearly ordered (so for any two truth-statuses, one is truer than the other). Some sentences (say, "Jimmy is bald") get a status that's intermediate between perfect truth and perfect falsity. And if we want to understand the operator "it is indeterminate whether" in such settings, we can basically treat it as a one-place extensional connective: "indeterminate(p)" is perfectly true just in case p has the intermediate status; otherwise it is perfectly false.
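To make the extensional reading of "indeterminate" concrete, here's a minimal sketch in Python---my own illustration, not anyone's official formalism. I'm assuming the usual encoding of the three statuses as 1, 1/2 and 0:

```python
# Three truth-statuses: perfect truth, the intermediate status, perfect falsity.
TRUE, INDET, FALSE = 1.0, 0.5, 0.0

def indeterminate(p):
    """One-place extensional connective: perfectly true just in case p
    has the intermediate status; otherwise perfectly false."""
    return TRUE if p == INDET else FALSE
```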
So interpreted, non-classicism generalizes classicism smoothly. Just as the classicist can think there is an intended interpretation of the language (a two-valued model which gets the representational properties of words right), the non-classicist can think there's an intended interpretation (say, a three-valued model getting the representational features right). And that dovetails very nicely with a model-theoretic characterization of consequence as truth-preservation under (almost) arbitrary reinterpretations of the language. For if one knows that some pattern is truth-preserving under arbitrary reinterpretations, then that pattern is truth-preserving in the intended interpretation in particular---which is just to say that it preserves truth simpliciter. This forges a connection between validity and the preservation of a status we have all sorts of reason to be interested in---truth. (Of course, one only has to write down this thought to start worrying about the details. Personally, I think this integrated package is tremendously powerful and interesting, deserves detailed scrutiny, and should be given up only as an option of last resort---but maybe others take a different view.) All this carries over to the non-classicist position described. So, for example, on a Kleene system, validity is a matter of preserving perfect truth under arbitrary reinterpretations---and to the extent that we're interested in reasoning which preserves that status, we've got the same reasons as before to be interested in consequence. Of course, one might also think that reasoning which preserves non-perfect-falsity is an interesting thing to think about. And very nicely, we have a systematic story about that too---this non-perfect-falsity sense of validity would be the paraconsistent logic LP (though of course not under an interpretation where contradictions get to be true).
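Continuing the toy sketch (again, my illustration, not anything from the literature): with the min/max Kleene connectives, validity-as-preserving-perfect-truth gives the strong Kleene logic (often called K3), while taking the "designated" values to be the non-perfectly-false ones gives LP. A one-variable check:

```python
VALUES = (0.0, 0.5, 1.0)

def neg(p):      # Kleene negation: one minus the value
    return 1.0 - p

def disj(p, q):  # disjunction as max (conjunction would be min)
    return max(p, q)

def valid(formula, designated):
    """A one-variable formula is valid iff it takes a designated value
    on every 3-valued assignment to its variable."""
    return all(formula(a) in designated for a in VALUES)

K3 = {1.0}        # validity as preserving perfect truth
LP = {1.0, 0.5}   # validity as preserving non-perfect-falsity

lem = lambda a: disj(a, neg(a))   # A v ~A
print(valid(lem, K3))   # False: the middle value breaks excluded middle
print(valid(lem, LP))   # True: the middle value is not perfectly false
```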
With this much on board, we can set out various familiar gambits from the literature.
(1) One could say that allowing contradictions to be half-true (i.e. to be indeterminate, to have the middle status) is just terrible. Or that allowing a parity of truth-status between "Jimmy is bald or he isn't" and "Jimmy's both bald and not bald" just gets the intuitions wrong. (The most dialectically powerful way to deploy this is if the non-classicist backs their position primarily with intuitions about cases---e.g. our reluctance to endorse the first sentence in borderline cases. The accusation is that if our game is taking intuitions about sentences at face value, it's not at all clear the non-classicist is doing a good job.)
(2) One could point out that "indeterminacy" for the nonclassicist will trivially iterate. If one defines Determinate(p) as p&~indeterminate(p) (or directly as the one-place connective that is perfectly true if p is, and perfectly false otherwise), then we'll quickly see that "determinately determinately p" follows from "determinately p"; and "determinately indeterminate whether p" follows from "indeterminate whether p". And so on. (There's a toy verification of this just after this list.)
(3) In reaction to this, one might abandon the 3-valued setting for a smooth, "fuzzy" setting. It's not quite so clear what value "indeterminate(p)" should then take (though there are actually some very funky options out there). Perhaps we might just replace such talk with direct talk of "degrees of determinacy", thought of as degrees of truth---with "D(p)=n" again a one-place extensional operator, perfectly true iff p has degree of truth n and otherwise perfectly false.
(4) One might complain that all this multiplying of truth-values is fundamentally misguided. Think of people saying that the "third status" view of indeterminacy is all wrong---indeterminacy is not a status that competes with truth and falsity; or the quip (maybe due to Mark Sainsbury?) that one does "not improve a bad idea by iterating it"---i.e. by introducing finer and finer distinctions.
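Here's the toy verification promised in (2), continuing my earlier sketch (the function names are mine; "determinately" is the extensional connective described above):

```python
def indeterminate(p):    # as in the first sketch
    return 1.0 if p == 0.5 else 0.0

def determinately(p):    # perfectly true iff p is perfectly true
    return 1.0 if p == 1.0 else 0.0

for a in (0.0, 0.5, 1.0):
    # "determinately determinately p" always has the same value as
    # "determinately p" ...
    assert determinately(determinately(a)) == determinately(a)
    # ... and "determinately indeterminate whether p" the same value as
    # "indeterminate whether p": iteration is trivial.
    assert determinately(indeterminate(a)) == indeterminate(a)
```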
Of these, (2) seems pretty interesting. It looks like the non-classicist's treatment of indeterminacy, if they stick in the 3-valued setting, doesn't allow for "higher-order" indeterminacy at all. Now, if the nonclassicist is aiming to treat indeterminacy *in general*, rather than vagueness (say, if they're giving an account of the indeterminacy purportedly characteristic of the open future, or of the status of personal identity across fission cases), then it's not clear one needs to posit higher-order indeterminacy.
I should say that there's one response to the "higher order" issues that I don't really understand. That's the move of saying that, strictly, the semantics should be done in a non-classical metalanguage, where we can't assume that "x is true or x is indeterminate or x is false" itself holds. I think Williamson's complaints here, in the relevant chapter of his vagueness book, are justified---I just don't know what the "non-classical theory" being appealed to here is, or how one would write it down in order to assess its merits (this is of course just a technical challenge: maybe it could be done).
I'd like to point out one thing here (probably not original to me!). The "non-classical metalanguage" move at best evades the challenge that, by saying there's an intended 3-valued interpretation, one is committed to denying higher-order indeterminacy. But we achieve this, supposedly, by saying that the intended interpretation needs to be described non-classically (or perhaps notions like "the intended interpretation" need to be replaced by some more nuanced characterization). Now, the 3-valued logic is standardly defined in terms of what preserves truth over all 3-valued interpretations describable in a classical metalanguage. We might continue with that classical model-theoretic characterization of the logic. But then (a) if the real interpretation is describable only non-classically, it's not at all clear why truth-preservation in all classical models should entail truth-preservation in the real, non-classical interpretation. And (b) our object-language "determinacy" operator, treated extensionally, will still trivially iterate---that was a feature of the *logic* itself. This last feature in particular might suggest that we should really be characterizing the logic as truth-preservation under all interpretations, including those describable only non-classically. But then we don't even have a fix on the *logic*, for who knows what will turn out to be truth-preserving on these non-classical models (if only because I just don't know how to think about them).
To emphasize again---maybe someone could convince me this could all be done. But I'm inclined to think it'd be much neater for this view to deny higher-order indeterminacy---which, as I mentioned above, may just not be a cost in some cases. My suggested answer to (4), therefore, is to take it on directly---to provide a motivation for wanting however many values one posits that is independent of having higher-order indeterminacy around (I think Nick J.J. Smith's AJP paper "Vagueness as closeness" pretty explicitly takes this tack for the fuzzy-logic folk).
Anyway, I take these to be some of the folklore views and dialectical moves that people try out in this setting. Certainly it's the way I once thought of the debate shaping up, and it's still, I think, worth thinking about. But in the next post I'm going to say why I think there's a far, far more attractive way of being a non-classicist.