Friday, November 02, 2007

Degrees of belief and supervaluations

Suppose you've got an argument with one premise and one conclusion, and you think it's valid. Call the premise p and the conclusion q. Plausibly, constraints on rational belief follow: in particular, you can't rationally have a lesser degree of belief in q than you have in p.

The natural generalization of this to multi-premise cases is that if p1...pn|-q, then your degree of disbelief in q (that is, 1 minus your degree of belief in it) can't rationally exceed the sum of your degrees of disbelief in the premises.

FWIW, there's a natural generalization to the multi-conclusion case too (a multi-conclusion argument is valid, roughly, if the truth of all the premises secures the truth of at least one conclusion). If p1...pn|-q1...qm, then the sum of your degrees of disbelief in the premises plus the sum of your credences in the conclusions can't rationally fall below 1. (With a single conclusion, this reduces to the constraint just given.)
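Putting both constraints in symbols (a sketch, writing Cr for credence and u(s) = 1 - Cr(s) for degree of disbelief):

```latex
\begin{align*}
  p_1,\ldots,p_n \vdash q
    &\;\Longrightarrow\; u(q) \le \textstyle\sum_{i} u(p_i)\\
  p_1,\ldots,p_n \vdash q_1,\ldots,q_m
    &\;\Longrightarrow\; \textstyle\sum_{i} u(p_i) + \textstyle\sum_{j} \mathrm{Cr}(q_j) \ge 1
\end{align*}
```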

What I'm interested in at the moment is to what extent this sort of connection can be extended to non-classical settings. In particular (and connected with the last post) I'm interested in what the supervaluationist should think about all this.

There's a fundamental choice to be made at the get-go. Do we think that "degrees of belief" in sentences of a vague language can be represented by a standard classical probability function? Or do we need to be a bit more devious?

Let's take a simple case. Construct the artificial predicate B(x), so that numbers less than 5 satisfy B, and numbers greater than 5 fail to satisfy it. We'll suppose that it is indeterminate whether 5 itself is B, and that supervaluationism gives the right way to model this.

First observation. It's generally accepted that for the standard supervaluationist

p & ~Det(p) |- absurdity.

Given this and the constraints on rational credence mentioned earlier, we'd have to conclude that my credence in B(5)&~Det(B(5)) must be 0. I have credence 0 in absurdity; and the degree of disbelief in the conclusion of this valid argument (namely, 1) must not exceed the sum of the degrees of disbelief in its premises.

Let's think that through. Notice that in this case, my credence in ~Det(B(5)) can be taken to be 1. So given minimal assumptions about the logic of credences (in effect, that they obey the classical probability axioms), my credence in B(5) must be 0.
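Spelling the step out, on that working assumption:

```latex
% Given Cr(B(5) and not-Det(B(5))) = 0 and Cr(not-Det(B(5))) = 1:
\begin{align*}
  \mathrm{Cr}(B(5)) &= \mathrm{Cr}\bigl(B(5) \wedge \neg Det(B(5))\bigr)
                     + \mathrm{Cr}\bigl(B(5) \wedge Det(B(5))\bigr)\\
                    &\le 0 + \mathrm{Cr}\bigl(Det(B(5))\bigr) = 0.
\end{align*}
```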

A parallel argument running from ~B(5) & ~Det(~B(5)) |- absurdity gives us that my credence in ~B(5) must be 0.

Moreover, supervaluational logic validates all classical tautologies. So in particular we have the validity: |-B(5)v~B(5). The standard constraint in this case tells us that rational credence in this disjunction must be 1. And so, we have a disjunction in which we have credence 1, each disjunct of which we have credence 0 in. (Compare the standard observation that supervaluational disjunctions can be non-prime: the disjunction can be true when neither disjunct is.)
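No classical credence function can deliver that combination, since subadditivity would force:

```latex
1 = \mathrm{Cr}\bigl(B(5) \vee \neg B(5)\bigr)
  \le \mathrm{Cr}(B(5)) + \mathrm{Cr}(\neg B(5)) = 0.
```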

This is a fairly direct argument that something non-classical has to be going on with the probability calculus. One move at this point is to consider Shafer functions (which I know little about: but see here). Now maybe that works out nicely, maybe it doesn't. But I find it kinda interesting that the little constraint on validity and credences gets us so quickly into a position where something like this is needed if the constraint is to work. It also gives us a recipe for arguing against standard supervaluationism: argue against the Shafer-function-like behaviour in our degrees of belief, and you'll ipso facto have an argument against supervaluationism. For this, the probabilistic constraint on validity is needed (as far as I can see): for it's this that makes the distinctive features mandatory.
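For concreteness, here's a minimal sketch of how a Dempster-Shafer belief function over the two precisifications of the B(5) case delivers exactly the pattern above (the set-up and names here are illustrative assumptions of mine, not anything from the supervaluationist literature):

```python
# Frame of discernment: the two admissible precisifications of B.
FRAME = frozenset({"5 is B", "5 is not B"})

# A Dempster-Shafer mass function. All the mass sits on the whole frame,
# reflecting that nothing favours one precisification over the other.
mass = {FRAME: 1.0}

def belief(event):
    """Bel(A) = sum of the masses of the focal sets contained in A."""
    return sum(m for focal, m in mass.items() if focal <= frozenset(event))

print(belief({"5 is B"}))                # 0.0 -- credence in B(5)
print(belief({"5 is not B"}))            # 0.0 -- credence in ~B(5)
print(belief({"5 is B", "5 is not B"}))  # 1.0 -- credence in B(5) v ~B(5)
```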

I'd like to connect this to two other issues I've been working on. One is the paper on the logic of supervaluationism cited below. The key thing here is that it raises the prospect of p&~Dp|-absurdity not holding, even for your standard "truth=supertruth" supervaluationist. If that works, the key premise of the argument that forces you to have degree of belief 0 in both an indeterminate sentence 'p' and its negation goes missing.

Maybe we can replace it by some other argument. If you read "D" as "it is true that..." as the standard supervaluationist encourages you to, then "p&~Dp" should be read "p&it is not true that p". And perhaps that sounds to you just like an analytic falsity (it sure sounded to me that way); and analytic falsities are the sorts of things one should paradigmatically have degree of belief 0 in.

But here's another observation that might give you pause (I owe this point to discussions with Peter Simons and John Hawthorne). Suppose p is indeterminate. Then we have ~Dp&~D~p. And given supervaluationism's conservatism, we also have pv~p. So by a bit of jiggery-pokery, we'll get (p&~Dp v ~p&~D~p). But in moods where I'm hyped up thinking that "p&~Dp" is analytically false and terrible, I'm equally worried by this disjunction. But that suggests that the source of my intuitive repulsion here isn't the sort of thing that the standard supervaluationist should be buying. Of course, the friend of Shafer functions could just say that this is another case where our credence in the disjunction is 1 while our credences in each disjunct are 0. That seems dialectically stable to me: after all, they'll have *independent* reason for thinking that p&~Dp should have credence 0. All I want to insist is that the "it sounds really terrible" reason for assigning p&~Dp credence 0 looks like it overgeneralizes, and so should be distrusted.
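The jiggery-pokery, spelled out:

```latex
\begin{align*}
  &\neg Dp \wedge \neg D\neg p,\;\; p \vee \neg p\\
  &\vdash (p \wedge \neg Dp \wedge \neg D\neg p) \vee
          (\neg p \wedge \neg Dp \wedge \neg D\neg p)
    \quad\text{(distributing)}\\
  &\vdash (p \wedge \neg Dp) \vee (\neg p \wedge \neg D\neg p)
    \quad\text{(weakening each disjunct)}
\end{align*}
```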

I also think that if we set aside truth-talk, there's some plausibility in the claim that "p&~Dp" should get non-zero credence. Suppose you're initially in a mindset where you should be about half-confident of a borderline case. Well, one thing that you absolutely want to say about borderline cases is that they're neither true nor false. So why shouldn't you be at least half-confident in the conjunction of the two?

And yet, and yet... there's the fundamental implausibility of "p&it's not true that p" (the standard supervaluationist's reading of "p&~Dp") having anything other than credence 0. But ex hypothesi, we've lost the standard positive argument for that claim. So we're left, I think, with the bare intuition. But it's a powerful one, and something needs to be said about it.

Two defensive maneuvers for the standard supervaluationist:

(1) Say that what you're committed to is just "p& it's not supertrue that p". Deny that the ordinary concept of truth can be identified with supertruth (something that, as many have emphasized, is anyway quite plausible given the non-disquotational nature of supertruth). But crucially, don't seek to replace this with some other gloss on supertruth: just say that supertruth, superfalsity and the gap between them are appropriate successor concepts, and that ordinary truth-talk is appropriate only when we're ignoring the possibility of the third case. If we disclaim conceptual analysis in this way, then it won't be appropriate to appeal to intuitions about the English word "true" to kick away independently motivated theoretical claims about supertruth. In particular, we can't appeal to intuitions to argue that "p&~supertrue that p" should be assigned credence 0. (There's a question of whether this should be seen as an error-theory about English "truth"-ascriptions. I don't see that it needs to be. It might be that the English word "true" latches on to supertruth because supertruth is what best fits the truth-role. On this model, "true" stands to supertruth as "dephlogisticated air", according to some, stands to oxygen. And so this is still a "truth=supertruth" standard supervaluationism.)

(2) The second maneuver is to appeal to supervaluational degrees of truth. Let the degree of supertruth of S be, roughly, the measure of the precisifications on which S is true. S is supertrue simpliciter when it is true on all the precisifications, i.e. on measure 1 of the precisifications. If we then identify degrees of supertruth with degrees of truth, the contention that truth is supertruth becomes something that many find independently attractive: that in the context of a degree theory, truth simpliciter should be identified with truth to degree 1. (I think that this tendency has something deeply in common with the temptation (following Unger) to think that nothing can be flatter than a flat thing: nothing can be truer than a true thing. I've heard people claim that Unger was right to think that a certain class of adjectives in English work this way.)
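A sketch of the construction, assuming a normalized measure mu over the set V of admissible precisifications:

```latex
\begin{align*}
  \deg(S) &= \mu\bigl(\{\, v \in V : S \text{ is true on } v \,\}\bigr),\\
  S \text{ is supertrue} &\iff \deg(S) = 1.
\end{align*}
```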

I think when we understand the supertruth=truth claim in that way, the idea that "p&~true that p" should be something in which we should always have degree of belief 0 loses much of its appeal. After all, compatibly with "p" not being absolutely perfectly true (=true), it might be something that's almost absolutely perfectly true. And it doesn't sound bad or uncomfortable to me to think that one should conform one's credences to the known degree of truth: indeed, that seems to be a natural generalization of the sort of thing that originally motivated our worries.

In summary. If you're a supervaluationist who takes the orthodox line on supervaluational logic, then it looks like there's a strong case for a non-classical take on what degrees of belief look like. That's a potentially vulnerable point for the theory. If you're a (standard, global, truth=supertruth) supervaluationist who's open to the sort of position I sketch in the paper below, prima facie we can run with a classical take on degrees of belief.

Let me finish off by mentioning a connection between all this and some material on probability and conditionals I've been working on recently. I think a pretty strong case can be constructed for thinking that for some conditional sentences S, we should be all-but-certain that S&~DS. But that's exactly of the form that we've been talking about throughout: and here we've got *independent* motivation to think that this should be high-probability, not probability zero.

Now, one reaction is to take this as evidence that "D" shouldn't be understood along standard supervaluationist lines. That was my first reaction too (in fact, I couldn't see how anyone but the epistemicist could deal with such cases). But now I'm thinking that this may be too hasty. What seems right is that (a) the standard supervaluationist with the Shafer-esque treatment of credences can't deal with this case; but that (b) the standard supervaluationist articulated in one of the ways just sketched shouldn't think there's an incompatibility here.

My own preference is to go for the degrees-of-truth explication of all this. Perhaps, once we've bought into that, the "truth=degree 1 supertruth" element starts to look less important, and we'll find other useful things to do with supervaluational degrees of truth (a la Kamp, Lewis, Edgington). But I think the "phlogiston" model of supertruth is just about stable too.

[P.S. Thanks to Daniel Elstein, for a paper today at the CMM seminar which started me thinking again about all this.]

8 comments:

Anonymous said...

Maybe the right thought is that for every admissible precisification there will be a classical probability function which respects the probabilistic constraints on coherence with validity. So we would allow the supervaluationist to supervaluate truth over classical assignments, and to supervaluate credence over classical credence functions.

This will cohere too with the models of imprecise credence that Jeffrey, van Fraassen, et al, have defended, basically the idea that (at least) we should model actual credences by a family of credence functions. (Usually they end up using upper and lower credences in each proposition, which leads to prima facie difficulties if you think that we should model higher-order vagueness this way.) One argument that can be given for this treatment on very classical lines is as follows. Note that Dutch-book arguments justify the thesis that credences are probabilities by pointing to a connection between credences and betting behaviour. However, if one takes this seriously, one will note that if betting behaviour is supposed to operationally define credences, observed betting behaviour is compatible with a great many credences, not the unique function assumed by the classical theory. So we should, all along, have been modelling credence by at least a family of probability functions. Note that, old-fashioned and dodgy as an appeal to operational definitions clearly is, it nevertheless aims to provide a vagueness-independent reason to accept imprecise credences.
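A minimal sketch of the family-of-functions model (the proposition labels and numbers are illustrative only): represent a credal state by a set of classical credence functions, one per admissible precisification, and read off upper and lower credences pointwise.

```python
# A credal state modelled as a family of classical credence functions,
# one per admissible precisification (illustrative numbers only).
family = [
    {"B(5)": 1.0, "~B(5)": 0.0, "B(5) v ~B(5)": 1.0},  # 5 counts as B
    {"B(5)": 0.0, "~B(5)": 1.0, "B(5) v ~B(5)": 1.0},  # 5 counts as not-B
]

def lower(prop):
    """Lower credence: what every function in the family grants."""
    return min(cr[prop] for cr in family)

def upper(prop):
    """Upper credence: what some function in the family grants."""
    return max(cr[prop] for cr in family)

print(lower("B(5)"), upper("B(5)"))  # 0.0 1.0 -- maximally imprecise
print(lower("B(5) v ~B(5)"))         # 1.0 -- the disjunction stays certain
```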

How this maps onto regular full belief is even more complicated than in the original case. And whether supervaluating over classical credences yields anything that deserves the name 'credence' strikes me as even more vexed than whether supervaluating over classical assignments yields anything that deserves the name 'truth'. But on first glance a parallel treatment would seem to have something to recommend it for supervaluationists, especially if there are independent reasons apart from vagueness to move to a richer framework for credences.

Kenny said...

To follow up on Ant's point - I'm pretty sure that using Dempster-Shafer functions is in some sense equivalent to using sets of probability functions, though I can't remember if they're required to be convex sets, or satisfy some other condition. And since Dempster-Shafer functions naturally give you both upper and lower credences, you might think these should somehow correspond to super- and sub-valuations.

Robbie Williams said...

That's very interesting; I was meaning to think about how the sets-of-probability functions thingy fitted in. I agree with Ant that they've got some independent appeal (I quite like the old and outdated operationalization of this sort of stuff, too!)

The interactions are far from transparent to me though, so I'd really appreciate guidance/references on this stuff.

Let's take as a setup the following: propositions (sets of possible worlds) are the primary bearers of probability, and that no vagueness afflicts propositions. If a sentence expresses a unique proposition, one can obviously straightforwardly associate the sentence with a unique probability. But if the sentence corresponds to many different propositions with different probabilities, then there's obviously some slack (just as there is when a sentence corresponds to propositions with different truth values).

I take it the analogue of the truth=supertruth idea is to define the probability (=superprobability) of a sentence to be the minimum of the probabilities of the propositions partially expressed by the sentence. A truth=subtruth idea would be to define the probability (=subprobability) of a sentence to be the maximum of the probabilities of the propositions partially expressed by the sentence. (Or am I misinterpreting the suggestion?)
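In symbols, writing [S] for the set of propositions partially expressed by S:

```latex
\begin{align*}
  \mathrm{superprob}(S) &= \min\{\, \Pr(X) : X \in [S] \,\},\\
  \mathrm{subprob}(S)   &= \max\{\, \Pr(X) : X \in [S] \,\}.
\end{align*}
```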

I'm still unsure of the interactions. I'm about to flip a fair coin. Consider the proposition (H) that the fair coin will land heads; and the proposition (T) that the fair coin will land tails. The probability of each proposition is 1/2. We can suppose it's probability 1 that the disjunctive proposition, H or T, obtains.

Now introduce the word "heils" to be indeterminate in reference between heads and tails. Then the sentence "the fair coin will land heils" will partially express two propositions that are each probability 1/2. So it looks like the sentence has probability 1/2 whether we calculate that by super- or by sub-probability recipes.

What proposition is expressed by the sentence "it is not determinately the case that the fair coin will land heils"? The proposition obtains iff at least one of the propositions expressed by the embedded sentence fails to obtain. So the proposition expressed by this sentence, I reckon, is the proposition that either the fair coin lands heads, or the fair coin lands tails. And we've got probability 1 invested in this proposition, ex hypothesi.

So it seems to me that the upshot of all of this will be that we've got a sentence "S" which is probability 1/2, such that "it is indeterminate whether S" is probability 1. And, working things through, both the propositions expressed by "S and it is indeterminate whether S" are probability 1/2. So, whichever way we choose to interpret probability talk in this sort of setting, it looks like that sentence will get probability 1/2.
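Working the example through mechanically (a sketch; the worlds, candidate propositions, and the clause for "determinately" follow the comment above):

```python
# Two equiprobable worlds; H and T are the candidate propositions
# partially expressed by "the fair coin will land heils".
worlds = {"heads": 0.5, "tails": 0.5}
H, T = {"heads"}, {"tails"}
candidates = [H, T]

def pr(prop):
    """Probability of a proposition = total weight of the worlds in it."""
    return sum(p for w, p in worlds.items() if w in prop)

# ~D(S) obtains at w iff some candidate for S fails at w;
# ~D(~S) obtains at w iff some candidate for S obtains at w.
not_det_S = {w for w in worlds if any(w not in c for c in candidates)}
not_det_not_S = {w for w in worlds if any(w in c for c in candidates)}
indeterminate = not_det_S & not_det_not_S

print([pr(c) for c in candidates])    # [0.5, 0.5] -- "S", super or sub recipe
print(pr(indeterminate))              # 1.0 -- "it is indeterminate whether S"
print([pr(c & indeterminate) for c in candidates])  # [0.5, 0.5] -- "S & indet. S"
```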

Of course, none of that fits with the idea that "S and it is indeterminate whether S" is a logical contradiction (together with the probabilistic constraint on validities I mentioned). So if this is the right way to go, it strikes me that the supervaluationist will need to go against orthodoxy and deny that "S and it is indeterminate whether S" is a logical contradiction. (They don't have to do this in the way that I sketch in the paper: they might take it as an argument for a local definition of supervaluational logical consequence, for example.)

In any case, this looks to me like a different sort of proposal from the particular implementation of Shafer-function machinery that Brian Weatherson describes as "probability theory for supervaluationists" in the paper I linked to. That way of going gives us probability 0 in the sentence I mentioned, not probability 1/2.

But maybe I've messed up or misinterpreted something here.

Robbie Williams said...

Ant:

Rereading your comment, I noticed that your suggestion was that the probabilistic constraint be applied on a precisification-by-precisification level.

First question about this: what's the notion of validity with which the credences in precisifications must cohere? If it's one on which p&~Dp comes out contradictory, then (I guess) it'll be tricky to find any classical probability function to plausibly do the job.

An alternative reading is that the probabilities must cohere with e.g. a locally defined consequence relation (=guaranteed truth preservation on each precisification). It's well known that local validity is fully classical, and in particular doesn't make p&~Dp a contradiction.

I guess at this point I'd like to ask whether what we end up saying about credences in vague sentences simpliciter should impose any constraints on consequence, simpliciter, for a vague language? Of course our language *is* vague, and so the arguments we want to consider (and evaluate for validity/assess whether we believe the premises) *are* usually formulated in terms of vaguely expressed premises and conclusions. So it's extremely natural, I think, to think that the constraint must apply at this level too.

If that's conceded, the dialectic sketched out in the previous comment can get going.

Even if it's not conceded (e.g. if we think that all logical constraints on credences can be captured "locally") then I think we still have some traction on the debate about what formal construction deserves the name "consequence" simpliciter. For if something which is a Q-contradiction can (rationally, determinately) get credence 1/2, I think we've got a big pro tanto case for Q being a formal construction that doesn't play the consequence-role.

In the case at hand, global consequence as traditionally defined looks like it's in this situation. And local consequence (I argue in the paper) looks like it doesn't play the consequence-role for other reasons (you can get a locally valid argument all of whose premises you should accept; and all of whose conclusions you should reject).

Anonymous said...

Robbie: yep, your second thought was right, I was thinking of doing it all locally. The point about global consequence is well taken; not being an expert on this, or particularly enamoured of supervaluationism, I'm tempted to concede it.

Anonymous said...

Robbie, I think I'd always assumed that supervaluationists would go for classical probability and your option (2), which probably goes to show how little I understand the motivations of supervaluationists.

A strange consequence though: if we interdefine degrees of truth with a measure on the admissible precisifications, and also with expert degrees of belief, what we get is that an admissible precisification is just one which an expert would not assign credence 0 to being the correct precisification.

But that makes me think that supervaluationism is introducing problems which weren't there before. For instance, it seems pretty harmless to think that even an expert would have >0 (though very small) credence that a man with a billion hairs is bald. (I.e. the view that expert credence should be asymptotic to 0 as the number of hairs tends to infinity.) But if we buy the measure-on-precisifications thing, that will mean that precisifications counting billion-haired men as bald are admissible, and thus most admissible precisifications make million-haired men bald.

We want to avoid the result that 'A man with a million hairs is bald' has a degree of truth close to 1, so I guess that, for the measure on the precisifications to give the right answer, the precisifications have to have something like intensities (corresponding to the credences that experts assign to them). But it doesn't seem to me that such intensities are part of a natural story about precisifications which doesn't see them as defined by credences. So in the unified credences-verities-supervaluationist position, it looks to be credences that wear the trousers.

Apologies if that's either obvious or obviously wrong, but I really struggle to see what work supervaluationism ends up doing.

Robbie Williams said...

Hi Daniel.

I'm very unsure about supervaluationist sociology here. The idea that p&~Dp is a contradiction is a pretty entrenched doctrine, among those who think that supertruth is truth. And as I emphasised above, it would seem really peculiar to combine *that* with anything other than the non-classical handling of probabilities. So if they do go for (2), then I think there's some tension in their views unless they want to say the sort of things I do about the logic. Quite a few people with supervaluationist sympathies that I've spoken to also have sympathy for the Jeffrey idea about representing credence with sets of probability functions. As Ant and Kenny emphasize, it's a very supervaluationist-sounding idea. As far as I can see, the principled way to fit that into the framework here is to go for (1). It'd be interesting to find examples where supervaluationists discuss this. I don't know of many places where it's discussed.

I'm personally sympathetic to (2), and to the idea that credences of an ideal agent should match the degree of truth in the proposition (contra Edgington, from what you were telling me). But I guess I start to wonder about the utility of even talking about truth/supertruth once we've got degrees of truth on the table. If we need to go for (2) anyway, and so become a degree-supervaluationist, I think that undercuts the motivation for lots of what standard supervaluationism usually says.

I think you're right to emphasize that we can't expect to be able to define degrees of truth by anything like "counting the number of precisifications" on which a given sentence is true.

The way I think of this, the basic notion of the degree-supervaluationist (a la Edgington, Kamp, Lewis) is not that of a class of admissible precisifications, but of the measure of an arbitrary set of classical interpretations. In the finite case, you can think of the measure of a single interpretation as its "degree of intendedness".

In the first instance, this gets you degrees of truth (as the measure of the set of precisifications where the sentence is true). There's then a question of whether and how to analyze the notion of truth simpliciter. On some options, you might end up agreeing with the letter if not the spirit of standard supervaluationism. And that's sort of the position I had in mind for (2) above.

If we're a degree-supervaluationist, we face the foundational question: what establishes degrees of intendedness? And that's exactly parallel to the question that faces a classic semanticist: what fixes one interpretation as the intended one; and the foundational question facing the standard supervaluationist: what fixes a set of interpretations as admissible?

One answer, that you allude to, would be to appeal to some kind of projective story from the credences assigned by ideal agents. Another, which I've thought about a bit, appeals to a degreed story about conventions of truthfulness and trust in uttering sentences. I suppose a robust anti-reductionist could just take the measure as a metaphysical primitive. I'm sure there are other options.

Anonymous said...

Thanks for explaining: I'd kind of guessed that that's what you meant by 'measure' but always best to get these things spelled out when, like me, you aren't a mathmo. I'm really glad that you think that accepting (2) ends up undercutting lots of the motivation for standard supervaluationism - that's my impression too.

I guess that supervaluationists who go for (1) will have a choice: they can either accept something like degrees of intendedness for precisifications to explain the patterns of our credences in the kind of case I describe, or else they can appeal to uncertainty about the admissibility of precisifications, which is the option precluded by (2). But really the first option seems much better, since it avoids some higher-order vagueness stuff. And anyone accepting degrees of intendedness should probably just go along with (2) anyway - thus even accepting (1) is going to lead to some pressure towards (2).

I really want to hear the degreed story about conventions of truthfulness sometime, 'cos I get the feeling that I'm not too keen on the implicit direction of explanation. But that can wait till you can explain in person.

One final comment: it may just be a quirk of philosophical etymology, but the phrase 'degree of intendedness' sounds very congenial to projectivism about meaning and vagueness...
