Monday, March 17, 2008

Paracompleteness and credences in contradictions.

The last few posts have discussed non-classical approaches to indeterminacy.

One of the big stumbling blocks, for me, with "folklore" non-classicism is its suggestion that contradictions (A&~A) come out "half true" when A is indeterminate.

Here's a way of putting a constraint that appeals to me: I'm inclined to think that an ideal agent ought to fully reject such contradictions.

(Actually, I'm not quite as unsympathetic to contradictions as this makes it sound. I'm interested in the dialetheic/paraconsistent package. But in that setting, the right thing to say isn't that A&~A is half-true, but that it's true (and probably also false). Attitudinally, the ideal agent ought to fully accept it.)

Now the no-interpretation non-classicist has the resources to satisfy this constraint. She can maintain that the ideal degree of belief in A&~A is always 0. Given that:

p(A)+p(B)=p(AvB)+p(A&B)

substituting B = ~A and setting p(A&~A) = 0, we have:

p(A)+p(~A)=p(Av~A)

And now, whenever we fail to fully accept Av~A, it will follow that our credences in A and ~A don't sum to one. That's the price we pay for continuing to utterly reject contradictions.
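One concrete way to implement this no-interpretation picture (anticipating a suggestion at the end of this post) is to let the credence in a sentence be the measure of the set of three-valued "worlds" at which it is perfectly true. Here is a minimal sketch in Python, with purely illustrative world-weights:

```python
# A toy model: credence in a sentence is the total weight of the "worlds"
# (here, three-valued valuations of the atom A) where it is perfectly true.
worlds = {1.0: 0.3, 0.5: 0.4, 0.0: 0.3}  # value of A -> weight (illustrative)

def p(sentence):  # sentence: a function from the value of A to a truth value
    return sum(w for v, w in worlds.items() if sentence(v) == 1.0)

neg    = lambda v: 1 - v
A      = lambda v: v
not_A  = lambda v: neg(v)
lem    = lambda v: max(v, neg(v))   # A v ~A
contra = lambda v: min(v, neg(v))   # A & ~A

print(p(contra))        # 0.0: the contradiction is always fully rejected
print(p(lem))           # 0.6: less than 1, since A can be indeterminate
print(p(A) + p(not_A))  # 0.6: sums to p(Av~A), not to 1
```

On this model p(A&~A) is 0 however the weights are chosen, and p(A)+p(~A) always equals p(Av~A), falling short of 1 by exactly the weight of the worlds where A is indeterminate.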

The *natural* view in this setting, it seems to me, is that accepting indeterminacy of A corresponds to rejecting Av~A. So someone fully aware that A is indeterminate should fully reject Av~A. (Here and in the above I'm following Field's "No fact of the matter" presentation of the nonclassicist).

But now consider the folklore nonclassicist, who does take talk of indeterminate propositions being "half true" (or more generally, degree-of-truth talk) seriously. This is the sort of position that the Smith paper cited in the last post explores. The idea there is that indeterminacy corresponds to half-truth, and fully informed ideal agents should set their partial beliefs to match the degree-of-truth of a proposition (e.g. in a 3-valued setting, an indeterminate A should be believed to degree 0.5). [NB: obviously partial beliefs aren't going to behave like a probability function if truth-functional degrees of truth are taken as an "expert function" for them.]

Given the usual min/max treatment of how these truth values get settled over conjunction and disjunction (with 1-x for negation), the fully informed agent will set p(Av~A) equal to the degree of truth of Av~A, i.e. 0.5. And exactly the same value will be given to A&~A. So contradictions, far from being rejected, are appropriately given the same doxastic attitude as I assign to "this fair coin will land heads".

Another way of putting this: the difference between our overall attitude to "the coin will land heads" and "Jim is bald and not bald" only comes out when we consider attitudes to contents in which these are embedded. For example, I fully disbelieve B&~B when B = "the coin lands heads"; but I half-accept it when B = A&~A. That doesn't at all ameliorate the implausibility of the initial identification, for me, but it's something to work with.
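For concreteness, here is a minimal sketch of the min/max clauses (Python; names and values purely illustrative), with the fully informed agent's credences read straight off the degrees of truth, as the expert-function picture demands:

```python
# Min/max clauses for degrees of truth; on the expert-function picture the
# fully informed agent's credence in a sentence just is its degree of truth.
neg  = lambda v: 1 - v
conj = lambda x, y: min(x, y)
disj = lambda x, y: max(x, y)

a = 0.5                          # A indeterminate
print(disj(a, neg(a)))           # Av~A -> 0.5
print(conj(a, neg(a)))           # A&~A -> 0.5: the fair-coin attitude

b = conj(a, neg(a))              # embed the contradiction: B = A&~A
print(conj(b, neg(b)))           # B&~B -> 0.5: still half-accepted

heads = 1.0                      # "the coin lands heads" is determinate (1 or 0)
print(conj(heads, neg(heads)))   # B&~B -> 0.0: fully disbelieved
```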

In short, the Field-like nonclassicist sets the credence in A&~A to 0; and that seems exactly right. Given this and one or two other principles, we get a picture where our confidence in Av~A can take any value---right down to 0; and as flagged before, the probabilities of A and ~A carve up this credence between them, so in the limit where Av~A has probability 0, they take probability 0 too.

But the folklore nonclassicist I've been considering, for whom degrees-of-truth are an expert function for degrees-of-belief, has 0.5 as a pivot. For the fully informed, Av~A always exceeds this by exactly the amount that A&~A falls below it---and where A is indeterminate, we assign them all probability 0.5.
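The pivot claim is easy to check by brute force; a small sketch (exact arithmetic via fractions; the sampled values are illustrative):

```python
from fractions import Fraction

# The "pivot" claim: for the fully informed folklore agent with
# degree-of-truth t for A, P(Av~A) exceeds 1/2 by exactly the amount
# that P(A&~A) falls below it, i.e. P(Av~A) + P(A&~A) = 1.
for t in (Fraction(i, 10) for i in range(11)):
    p_lem    = max(t, 1 - t)   # P(Av~A) = degree of truth of Av~A
    p_contra = min(t, 1 - t)   # P(A&~A) = degree of truth of A&~A
    assert p_lem + p_contra == 1
    assert p_lem - Fraction(1, 2) == Fraction(1, 2) - p_contra
```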

As will be clear, I'm very much on the Fieldian side here (if I were to be a nonclassicist in the first place). It'd be interesting to know whether folklore nonclassicists do in general have a picture about partial beliefs that works as Smith describes. Consistently with taking semantics seriously, they might think of the probability of A as equal to the measure of the set of possibilities where A is perfectly true. That will always make the probability of A&~A 0 (since it's never perfectly true), and will meet various other of the Fieldian descriptions of the case. What it does put pressure on is the assumption (more common in degree theorists than 3-value theorists, perhaps) that we should describe degree-of-truth-0.5 as a way of being "half true"---why, in a situation where we know A is half true, would we be compelled to fully reject it? So it does seem to me that the rhetoric of folklore degree theorists fits a lot better with Smith's suggestions about how partial beliefs work. And I think it's objectionable on that account.

[Just a quick update. First observation: to get a fix on the "pivot" view, think of the constraint as being that P(A)+P(~A)=1. Combined with the addition law above, this gives P(Av~A)=1-P(A&~A), which summarizes the result.

Second observation: I mentioned above that something treating the degrees of truth as an expert function "won't behave like a probability function". One reflection of that is that the logic-probability link will be violated, given certain choices for the logic. E.g. suppose we require valid arguments to preserve perfect truth (i.e. we're working with the K3 logic). Then A&~A will be inconsistent, entailing everything. But P(A&~A) can be 0.5 while, for some unrelated B, P(B) is 0; since A&~A|-B in the logic, probability has decreased over a valid argument. Likewise if we require valid arguments to preserve non-perfect-falsity (i.e. we're working with the LP system): Av~A will then be a validity, but P(Av~A) can be 0.5 while P(B) is 1, so probability again drops over the valid argument B|-Av~A.

These examples are for the 3-valued case, but the point clearly generalizes to the analogous definitions of validity in a degree-valued setting. One of the tricky things about thinking about this area is that there are lots of choice-points around, and one of them is the definition of validity. So, for example, one might demand that valid arguments preserve both perfect truth and non-perfect-falsity; then the two arguments above drop away, since neither |-Av~A nor A&~A|- holds in this logic. The generalization of this to the many-valued setting is to demand e-truth preservation for every e. Clearly these logics are far more constrained than K3 or LP, and so there's a better chance of avoiding violations of the logic-probability link. Whether one gets away with it is another matter.]
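To see these verdicts concretely, here is a minimal sketch (Python; names and helpers are illustrative) that brute-forces the three truth values to check the K3 and LP claims above, and the stricter "no drop in truth value" standard on which both arguments fail:

```python
from itertools import product

VALS = (0.0, 0.5, 1.0)
neg  = lambda v: 1 - v
conj = lambda x, y: min(x, y)
disj = lambda x, y: max(x, y)

# Sentences as functions of the atomic values (a, b).
contradiction = lambda a, b: conj(a, neg(a))   # A & ~A
excluded_mid  = lambda a, b: disj(a, neg(a))   # A v ~A
atom_B        = lambda a, b: b                 # B

def valid(premise, conclusion, designated):
    # Valid iff every valuation designating the premise designates the conclusion.
    return all(designated(conclusion(a, b))
               for a, b in product(VALS, repeat=2)
               if designated(premise(a, b)))

k3 = lambda v: v == 1.0   # designated value: perfect truth
lp = lambda v: v > 0.0    # designated values: anything non-perfectly-false

print(valid(contradiction, atom_B, k3))   # True: A&~A |- B in K3 (vacuously)
print(valid(atom_B, excluded_mid, lp))    # True: B |- Av~A in LP
# Yet the expert-function picture permits P(A&~A)=0.5 with P(B)=0, and
# P(B)=1 with P(Av~A)=0.5: probability drops over both valid arguments.

# "No drop in truth value": v(conclusion) >= v(premise) on every valuation.
no_drop = lambda p, c: all(c(a, b) >= p(a, b) for a, b in product(VALS, repeat=2))
print(no_drop(contradiction, atom_B))     # False (e.g. a=0.5, b=0)
print(no_drop(atom_B, excluded_mid))      # False (e.g. a=0.5, b=1)
```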

3 comments:

Anonymous said...

Hi Robbie,

Wrt your last comment about validity, there are some interesting options here. For instance, there is a (somewhat) common definition of validity in K3 on which an argument A |- B is valid just if v(A) is less than or equal to v(B). This fits more naturally with the "degree of truth" view of the semantic values. I've got a post on what this logic might look like, and some related logics in the area. You might like to check it out: http://cotnoir.wordpress.com/2008/02/01/between-lp-and-k3/

Also, wrt Field's view about degrees of belief, things become even stranger when we start thinking about conditionals. It's certainly available, I suppose, to a "no interpretation" paracomplete theorist to hold the same line for conditionals as well -- especially since Field's conditional collapses into the material conditional in LEM-satisfying contexts. Field seems to think that the degree of belief in a conditional can diverge even more radically from the truth value. He thinks the Ramsey-Lewis stuff is plausible, and tries to drive a wedge between "conditional assertion" and "assertion of a conditional". There are some pretty complicated issues here with embedded conditionals that I don't really understand. But anyway, I bring this up only to point to the fact that Field may have some independent reason to divorce degree of belief in a proposition from its semantic value.

Anonymous said...

Oh and one other thing:

Alan Weir considers a paracomplete theory in which validity is defined in a non-standard way. His reasons? He wants to take the model theory seriously -- indeed he requires a conditional that has a deduction theorem. He specifically doesn't wish to have a "no interpretation" account. He's what (I think) you'd call a "folklore nonclassicist".

In the end, Weir's truth predicate is transparent (T[A] and A are substitutable in all non-opaque contexts.)

See Weir's "Naive Truth Theory and Sophisticated Logic" in JC's _Deflationism and Paradox_ volume.

Robbie Williams said...

Hi there,

Thanks for these! (BTW I left a comment at the post you link to, too).

I'm pretty sure that, in the light of the examples you and others have been raising, I'm going to have to rethink my terminology. (Hooray for blogs helping to clarify thinking...)

It's clear to me what one principled position is---the person who thinks of many-valued models (describable classically) as playing just the same role that a certain kind of classicist might think 2-valued models play. This character thinks that "definitely" and "weak negation" and the like are all available as extensional operators, with no qualms or funny business.

Obviously there are many properties of arguments that you might be interested in if you're going for this setting---the K3-valid arguments have one nice property (preserving perfect truth); the LP-valid arguments have another (avoiding perfect falsity). There's not obviously a good answer to the question of what "the" logic of this setting should be. But if I were to choose, I'd be inclined to pick the "no drop in truth value" logic, since it seems to me that, given certain assumptions about the appropriate doxastic attitudes to borderline (or "half-true") cases, this will be the logic in terms of which we can spell out what combinations of attitudes are coherent.

I'm also clear about an opposite extreme: where someone totally dispenses with "intended interpretations" (or anything in the intended-interpretation role), and just focuses on the logic. The logic isn't many-valued in any interesting sense---it's just a set of valid arguments that can be characterized via a certain technical device, many-valued semantics. I've got a clear sense of what's *not* a legitimate move against someone who adopts that stance---e.g. just because we can write down a truth table for exclusion negation or the tertium operator, we can't assume that these are coherent concepts.

But that leaves a great range of intermediate cases, where people want to regard the model theory as more than an algebraic device but don't think there's any one classical model that can be "intended". And a general thought is that we somehow need to do our reasoning about these non-classical models.

Examples of the intermediate position, I guess, include Priest's use of non-classical set theory for doing models; perhaps the Weir ideas just mentioned; and perhaps some of the proposals that Williamson discusses and criticizes in the degree-theory chapter of his Vagueness book. And Beall and Field both talk about some more-than-algebraic role for model-theoretic constructions---so perhaps they've got an intermediate position, in the end.

I'm not sure I understand everything that is involved in the intermediate positions; whereas I'm very comfortable that I understand the two extreme positions I started with. What prompted this set of posts was my realization that the model-theory-as-algebraic-device option was available, and pretty attractive, and could be totally, utterly different from---not even approximately the same as---views which, e.g., think that borderline sentences have an intermediate degree of truth.

What I need to do now is think about a non-classical model theory. There seem to be lots of options: e.g. retaining classical models but giving up on excluded middle for the predicate "is the intended model"; or going non-classical with the models themselves (say by viewing them as sets characterized by a non-classical set theory---maybe that's a Priestian view). And there seem to be lots of other options besides. These seem very different. E.g. if only the predicate "is an intended interpretation" is non-classical, and the underlying set theory is classical, then classical model theory still gives us the logic. If the models themselves are non-classical, then we'll have to examine arguments that this or that argument is valid, to see if they employ dubious classical reasoning.

What I'm inclined to think is that, prior to doing this, we really need to know what a theory of the "intended model" is supposed to *do* for us. One thing classicists sometimes do is give an axiomatic specification of the intended interpretation, and from that derive canonical T-sentences for the language. You might think of that as underpinning a theory of understanding (understanding = knowing the canonical T-sentences), or you might think of it as giving a theory of what the representational properties of expressions are. If so, we've got at least one clear success-condition for a non-classical model theory---allowing the derivation of T-sentences. But it's *not* clear to me that this is what people are after.

So: lots more to think about! And lots of reading to do...