Does nihilism about ordinary things help us out with puzzles surrounding maximal properties and the problem of the many? It's hard to see how.

First, maximal properties. Suppose that I have a rock. Surprisingly, there seem to be microphysical duplicates of the rock that are not themselves rocks. For suppose we have a microphysical duplicate of the rock (call it Rocky) that is surrounded by extra rocky stuff. Then, plausibly, the fusion of Rocky and the extra rocky stuff is the rock, and Rocky himself isn't, being out-competed for rock-status by his more extensive rival. Not being shared among duplicates, being a rock isn't intrinsic. And cases meeting this recipe can be plausibly constructed for chairs, tables, rivers, nations, human bodies, human animals and (perhaps) even human persons. Most kind-terms, in fact, look maximal and (hence) extrinsic. Sider has argued that non-sortal properties such as consciousness are likewise maximal and extrinsic.
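Two schematic principles are in play here: a rough gloss of maximality in Sider's vein, and the standard duplication test for intrinsicness. The notation is mine, added for orientation:

```latex
\begin{gather*}
F \text{ is maximal} \iff \text{(roughly) large proper parts of an } F
  \text{ are not themselves } F\text{s}\\[1ex]
F \text{ is intrinsic} \;\Rightarrow\; \forall x\,\forall y\,
  \bigl(\mathrm{Duplicates}(x,y) \supset (Fx \leftrightarrow Fy)\bigr)
\end{gather*}
```

The rock case runs the second principle contrapositively: Rocky and the original rock are microphysical duplicates that differ with respect to being a rock, so being a rock isn't intrinsic.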

Second, the problem of the many. In its strongest version, suppose that we have a plenitude of candidates (sums of atoms, say) more or less equally qualified to be a table, cloud, human body or whatever. Suppose further that neither the sum nor the intersection of all these candidates is itself a candidate for being the object. (This is often left out of the description of the case, but (1) there seems no reason to think that the set of candidates will always be closed under summing or intersection, and (2) life is more difficult, and more interesting, if these candidates aren't around.) Which of these candidates is the table, cloud, human body or whatnot?

What puzzles me is why nihilism---rejecting the existence of tables, clouds, human bodies or whatever---should be thought to avoid any puzzles around here. It's true that the nihilist rejects a premise in terms of which these puzzles would normally be stated. So you might imagine that the puzzles give you reason to modus tollens and reject that premise, ending up with nihilism (that's how Unger's original presentation of the POM went, if I recall). But that's no good if we can state equally compelling puzzles in the nihilist's preferred vocabulary.

Take our maximality scenario. Nihilists allow that we have, not a rock, but some things arranged rockwise. And we now conceive of a situation where those things, arranged just as they actually are, still exist (let "Rocky" be a plural term that picks them out). But in this situation, they are surrounded by more things of a qualitatively similar arrangement. Now are the things in Rocky arranged rockwise? Don't consult intuitions at this point---"rockwise" is a term of art. The theoretical role of "rockwise" is to explain how ordinary talk is ok. If some things are in fact arranged rockwise, then ordinary talk should count them as forming a rock. So, for example, van Inwagen's paraphrase of "that's a rock" would be "those things are arranged rockwise". If we point to Rocky and say "that's a rock", intuitively we speak falsely (that underpins the original puzzle). But if the things that are Rocky are in fact arranged rockwise, then this would be paraphrased to something true. What we get is that "are arranged rockwise" expresses a maximal, extrinsic plural property. For a contrast case, consider "is a circle". What replaces this by nihilist lights are plural predicates like "being arranged circularly". But this seems to express a non-maximal, intrinsic plural property. I can't see any philosophically significant difference between the puzzle as transcribed into the nihilist's favoured setting and the original.

Similarly, consider a bunch of (what we hitherto thought were) cloud-candidates. The nihilist says that none of these exist. Still, there are things which are arranged candidate-cloudwise. Call them the As. And there are other things---differing from the first lot---which are also arranged candidate-cloudwise. Call them the Bs. Are the As or the Bs arranged cloudwise? Are there some other objects, including many but not all of the As and the Bs, that *are* arranged cloudwise? Again, the puzzle translates straight through: originally we had to talk about the relation between the many cloud-candidates and the single cloud; now we talk about the many pluralities which are arranged candidate-cloudwise, and how they relate to the plurality that is cloudwise arranged. The puzzle is harder to write down. But so far as I can see, it's still there.

Pursuing the idea for a bit, suppose we decided to say that there were many distinct pluralities that are arranged cloudwise. Then "there are at least two distinct clouds" would be paraphrased to a truth (that there are some xx and some yy, such that not all the xx are among the yy and vice versa, such that the xx are arranged cloudwise and the yy are arranged cloudwise). But of course it's the unassertibility of this sort of sentence (staring at what looks to be a single fluffy body in the sky) that leads many to reject Lewis's "many but almost one" response to the problem of the many.

I don't think that nihilism leaves everything dialectically unchanged. It's not so clear how many of the solutions people propose to the problem of the many can be translated into the nihilist's setting. And more positively, some options may seem more attractive once one is a nihilist than they did taken cold. Example: once you're going in for a mismatch between common sense ontology and what there really is, then maybe you're more prepared for the sort of linguistic-trick reconstructions of common sense that Lewis suggests in support of his "many but almost one". Going back to the case we considered above, let's suppose you think that there are many extensionally distinct pluralities that are all arranged cloudwise. Then perhaps "there are two distinct clouds" should be paraphrased, not as suggested above, but as:

there are some xx and some yy, such that it's not the case that almost all the xx are among the yy and vice versa, such that the xx are arranged cloudwise and the yy are arranged cloudwise.

The thought here is that, given one is already buying into unobvious paraphrase to capture the real content of what's said, maybe the costs of a few extra tweaks to that paraphrase are minimal.
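Setting the two paraphrases side by side may help. I write xx ≼ yy for "the xx are among the yy", C(xx) for "the xx are arranged cloudwise", and AlmostAll(xx, yy) as a placeholder for the Lewisian "almost all the xx are among the yy" (notation mine). As I read the Lewisian tweak, it requires the two pluralities to fail to almost-coincide, so that the paraphrase comes out false when we stare at a single fluffy body:

```latex
\begin{gather*}
\text{strict:}\quad \exists xx\,\exists yy\,\bigl[C(xx) \wedge C(yy) \wedge
  \neg(xx \preccurlyeq yy) \wedge \neg(yy \preccurlyeq xx)\bigr]\\[1ex]
\text{Lewisian:}\quad \exists xx\,\exists yy\,\bigl[C(xx) \wedge C(yy) \wedge
  \neg\bigl(\mathit{AlmostAll}(xx,yy) \wedge \mathit{AlmostAll}(yy,xx)\bigr)\bigr]
\end{gather*}
```

On the strict reading the many overlapping cloudwise pluralities verify "two distinct clouds"; on the Lewisian reading they don't, since each is almost among the others.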

Caveats: notice that this isn't to say that nihilism solves your problems; it's to say that nihilism may make it easier to accept a response that was already on the table (Lewis's "many but almost one" idea). And even this is sensitive to the details of how nihilists want to relate ordinary thought and talk to metaphysics: van Inwagen's paraphrase strategy is one such proposal, and meshes quite neatly with the Lewis idea, but it's not clear that alternatives (such as Dorr's counterfactual version) have the same benefits. So it's not the metaphysical component of nihilism that's doing the work in helping accommodate the problem of the many: it's whatever machinery the nihilist uses to justify ordinary thought and talk.

There's one style of nihilist who might stand their ground. Call nihilists friendly if they attempt to say what's good about ordinary thought and talk (making use of things like "rockwise", or counterfactual paraphrases, or whatever). I'm suggesting that friendly nihilists face transcribed versions of the puzzles that everyone faces. Nihilists might, though, be unfriendly: prepared to say that ordinary thought and talk is largely false, but not to reconstruct some subsidiary norm which ordinary thought and talk meets. Friendly nihilism is an interesting position, I think. Unfriendly nihilism is pushing the nuclear button on all attempts to sort out paradoxes statable in ordinary language. But unfriendly nihilists have at least this virtue: the puzzles they react against don't come back to bite them.

[Update: I've been sent a couple of good references for discussions of nihilism in a similar spirit. First Matt McGrath's paper "No objects, no problem?" argues that the nihilist doesn't escape statue/lump puzzles. Second, Karen Bennett has a forthcoming paper called "Composition, Colocation, and Metaontology" that resurrects problems for nihilists including the problem of the many (though it doesn't now appear to be available online).]

## Tuesday, November 27, 2007

## Tuesday, November 20, 2007

### Logically good inference and the rest

From time to time in my papers, the putative epistemological significance of logically good inference has been cropping up. I've been recently trying to think a bit more systematically about the issues involved.

Some terminology. Suppose that the argument "A therefore B" is logically valid. Then I'll say that reasoning from "A" is true, to "B" is true, is logically good. Two caveats: (1) the logical goodness of a piece of reasoning from X to Y doesn't mean that, all things considered, it's ok to infer Y from X. At best, the case is pro tanto: if Y were a contradiction, for example, all things considered you should give up X rather than come to believe Y; (2) I think the validity of an argument-type won't in the end be sufficient for the logical goodness of a token inference of that type---partly because we probably need to tie it much closer to deductive moves, partly because of worries about the different contexts in play with any given token inference. But let me just ignore those issues for now.

I'm going to blur use-mention a bit by classifying material-mode inferences from A to B (rather than: from "A" is true to "B" is true) as logically good in the same circumstances. I'll also call a piece of reasoning from A to B "modally good" if A entails B, and "a priori good" if it's a priori that if A then B (nb: material conditional). If it's a priori that A entails B, I'll call it "a priori modally good".
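These categories can be tabulated schematically. The display below is just a compressed restatement of the definitions above, with ⊢ for logical consequence, ⊃ for the material conditional, and entailment glossed as strict implication (a gloss the Kaplan cases below put under pressure):

```latex
\begin{align*}
\text{reasoning from } A \text{ to } B \text{ is logically good}
  &\iff A \vdash B\\
\text{modally good}
  &\iff \Box(A \supset B)\\
\text{a priori good}
  &\iff \text{it is a priori that } A \supset B\\
\text{a priori modally good}
  &\iff \text{it is a priori that } \Box(A \supset B)
\end{align*}
```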

Suppose now we perform a competent deduction of B from A. What I'm interested in is whether the fact that the inference is logically good is something that we should pay attention to in our epistemological story about what's going on. You might think this isn't forced on us. For (arguably: see below) whenever an inference is logically good, it's also modally and a priori good. So---the thought runs---for all we've said we could have an epistemology that just looks at whether inferences are modally/a priori good, and simply sets aside questions of logical goodness. If so, logical goodness may not be epistemically interesting as such.

(That's obviously a bit quick: it might be that you can't just stop with declaring something a priori good; rather, any a priori good inference falls into one of a number of subcases, one of which is the class of logically good inferences, and that the real epistemic story proceeds at the level of the "thickly" described subcases. But let's set that sort of issue aside).

Are there reasons to think competent deduction/logically good inference is an especially interesting epistemological category of inference?

One obvious reason to refuse to subsume logically good inference within modally good inferences (for example) is if you thought that some logically good inferences aren't necessarily truth-preserving. There's a precedent for that thought: Kaplan argues in "Demonstratives" that "I am here now" is a logical validity, but isn't necessarily true. If that's the case, then logically good inferences won't be a subclass of the modally good ones, and so the attempt to talk only about the modally good inferences would just miss some of the cases.

I'm not aware of persuasive examples of logically good inferences that aren't a priori good. And I'm not persuaded that the Kaplanian classification is the right one. So let's suppose pro tem that logically good inferences are always modally, a priori, and a priori modally, good.

We're left with the following situation: the logically good inferences are a subclass of inferences that also fall under other "good" categories. In a particular case where we come to believe B on the basis of A, where is the particular need to talk about its logical "goodness", rather than simply about its modal, a priori or whatever goodness?

To make things a little more concrete: suppose that our story about what makes a modally good inference good is that it's ultra-reliable. Then, since we're supposing all logically good inferences are modally good, just from their modal goodness, we're going to get that they're ultra-reliable. It's not so clear that epistemologically, we need say any more. (Of course, their logical goodness might explain *why* they're reliable: but that's not clearly an *epistemic* explanation, any more than is the biophysical story about perception's reliability.)

So long as we're focusing on cases where we deploy reasoning directly, to move from something we believe to something else we believe, I'm not sure how to get traction on this issue (at least, not in such an abstract setting: I'm sure we could fight over the details if they were filled out). But even in this abstract setting, I do think we can see that the idea just sketched ignores one vital role that logically good reasoning plays: namely, reasoning under a supposition in the course of an indirect proof.

Familiar cases: If reasoning from A to B is logically good, then it's ok to believe (various) conditional(s) "if A, B". If reasoning from A1 to B is logically good, and reasoning from A2 to B is logically good, then inferring B from the disjunction A1vA2 is ok. If reasoning from A to a contradiction is logically good, then inferring not-A is good. If reasoning from A to B is logically good, then reasoning from A&C to B is good.
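Put as rules, with ⊢ marking logically good reasoning and ⊥ a contradiction, the four familiar patterns are (roughly) conditional proof, proof by cases, reductio, and monotonicity:

```latex
\begin{gather*}
\frac{A \vdash B}{\vdash A \to B}\ (\text{conditional proof})
\qquad
\frac{A_1 \vdash B \qquad A_2 \vdash B}{A_1 \vee A_2 \vdash B}\ (\text{proof by cases})\\[2ex]
\frac{A \vdash \bot}{\vdash \neg A}\ (\text{reductio})
\qquad
\frac{A \vdash B}{A \wedge C \vdash B}\ (\text{monotonicity})
\end{gather*}
```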

What's important about these sorts of deployments is that if you replace "logically good" by some wider epistemological category of ok reasoning, you'll be in danger of losing these patterns.

Suppose, for example, that there are "deeply contingent a priori truths". One schematic example that John Hawthorne offers is the material conditional "My experiences are of kind H > theory T of the world is true". The idea here is that the experiences specified should be the kind that lead to T via inference to the best explanation. Of course, this'll be a case where the a priori goodness doesn't give us modal goodness: it could be that my experiences are H but the world is such that ~T. Nevertheless, I think there's a pretty strong case that in suitable settings inferring T from H will be (defeasibly but) a priori good.

Now suppose that the correct theory of the world isn't T, and I don't undergo experiences H. Consider the counterfactual "were my experiences to be H, theory T would be true". There's no reason at all to think this counterfactual would be true in the specified circumstances: it may well be that, given the actual world meets description T*, the closest world where my experiences are H is still an approximately T*-world rather than a T-world. E.g. the nearest world where various tests for general relativity come back negative may well be a world where general relativity is still the right theory, but its effects aren't detectable on the kind of scales initially tested (that's just a for-instance: I'm sure better cases could be constructed).

Here's another illustration of the worry. Granted, reasoning from H to T seems a priori. But reasoning from H+X to T seems terrible, for a variety of X. (So: my experiences are of H + my experiences are misleading in way W will plausibly a priori support some T' incompatible with T). But if we were allowed to use a priori good reasoning in indirect proofs, then we could simply argue from H+X to H, and thence (a priori) to T, overall getting an a priori route from H+X to T. The moral is that we can't treat a priori good pieces of reasoning as "lemmas" that we can rely on under the scope of whatever suppositions we like. A priori goodness threatens to be "non-monotonic": which is fine on its own, but I think does show quite clearly that it can completely crash when we try to make it play a role designed for logical goodness.
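Schematically, writing ⊢ₐₚ for a priori good reasoning, the trouble is that the first two claims below are individually plausible, while licensing the first as a lemma under arbitrary suppositions (in effect, a cut rule) delivers a conclusion that clashes with the second:

```latex
\begin{gather*}
H \vdash_{\mathrm{ap}} T
\qquad\qquad
H \wedge X \vdash_{\mathrm{ap}} T' \quad (T' \text{ incompatible with } T)\\[1ex]
\frac{H \wedge X \vdash_{\mathrm{ap}} H \qquad H \vdash_{\mathrm{ap}} T}
     {H \wedge X \vdash_{\mathrm{ap}} T}\ (\text{``lemma'' use, i.e.\ cut})
\end{gather*}
```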

This sort of problem isn't a surprise: the reliability of indirect proofs is going to get *more problematic* the more inclusive the reasoning in play is. Suppose the indirect reasoning says that whenever reasoning of type R is good, one can infer C. The more pieces of reasoning count as "good", the more potential there is to come into conflict with the rule, because there's simply more cases of reasoning that are potential counterexamples.

Of course, a priori goodness is just one of the inferential virtues mentioned earlier: modal goodness is another; and a priori modal goodness a third. Modal goodness already looks a bit implausible as an attempt to capture the epistemic status of deduction: it doesn't seem all that plausible to classify the inferential move from A and B to B as in the same category as the move from "this is water" to "this is H2O". Moreover, we'll again have trouble with conditional proof: this time for indicative conditionals. Intuitively, and (I'm independently convinced) actually, the indicative conditional "if the watery stuff around here is XYZ, then water is H2O" is false. But the inferential move from the antecedent to the consequent is modally good.

Of the options mentioned, this leaves a priori modal goodness. The hope would be that this'll cut out the cases of modally good inference that cause trouble (those based around a posteriori necessities). Will this help?

I don't think so: I think the problems for a priori goodness resurface here. If the move from H to T is a priori good, then it seems that the move from Actually(H) to Actually(T) should equally be a priori good. But in a wide variety of cases, this inference will also be modally good (all cases except H&~T ones). But just as before, thinking that this piece of reasoning preserves its status in indirect proofs gives us very bad results: e.g. that there's an a priori route from Actually(H) and Actually(X) to Actually(T), which for suitably chosen X looks really bad.

Anyway, of course there's wriggle room here, and I'm sure a suitably ingenious defender of one of these positions could spin a story (and I'd be genuinely interested in hearing it). But my main interest is just to block the dialectical maneuver that says: well, all logically good inferences are X-good ones, so we can get everything we want from a decent epistemology of X-good inferences. The cases of indirect reasoning, I think, show that the *limitations* on what inferences are logically good can be epistemologically central: and anyone wanting to ignore logic had better have a story to tell about how their account plays out in these cases.

[NB: one kind of good inference I haven't talked about is that backed by what 2-dimensionalists might call "1-necessary truth preservation": i.e. truth preservation at every centred world considered as actual. I've got no guarantees to offer that this notion won't run into problems, but I haven't as yet constructed cases against it. Happily, for my purposes, logically good inference and this sort of 1-modally good inference give rise to the same issues, so if I had to concede that this was a viable epistemological category for subsuming logically good inference, it wouldn't substantially affect my wider project.]


## Monday, November 05, 2007

### CEM journalism

The literature on conditionals at the linguistics/philosophy interface is full of excellent stuff. Here's just one nice thing we get (directly drawn from a paper by von Fintel and Iatridou). Nothing here is due to me: but it's something I want to put down so I don't forget it, since it looks like it'll be useful all over the place. Think of what follows as a bit of journalism.

Here's a general puzzle for people who like "iffy" analyses of conditionals. Consider:

- No student passes if they goof off.

The obvious first-pass regimentation puts a conditional in the scope of the quantifier:

- [No x: x is a student](if x goofs off, x passes)

But the intuitive truth-conditions are those of the conjunctive regimentation:

- [No x: x is a student](x goofs off and x passes)

What the paper cited above notes is that so long as we've got CEM, we won't go wrong. For [No x:Fx]Gx is equivalent to [All x:Fx]~Gx. And where G is the conditional "if x goofs off, x passes", the negated conditional "not: if x goofs off, x passes" is equivalent to "if x goofs off, x doesn't pass" if we have the relevant instance of conditional excluded middle. What we wind up with is an equivalence between the obvious first-pass regimentation and:

- [All x: x is a student](if x goofs off, x won't pass).

Suppose we're convinced by this that we need the relevant instances of CEM. There remains a question of *how* to secure these instances. The suggestion in the paper is that rules governing legitimate contexts for conditionals give us the result (paired with a contextually shifty strict conditional account of conditionals). An obvious alternative is to hard-wire CEM into the semantics, as Stalnaker does. So unless you're prepared (with von Fintel, Gillies et al) to defend in detail the fine-tuned shiftiness of the contexts in which conditionals can be uttered, it looks like you should smile upon the Stalnaker analysis.

[Update: It's interesting to think how this would look as an argument for (instances of) CEM.

Premise 1: The following are equivalent:

A. No student will pass if she goofs off

B. Every student will fail to pass if she goofs off

Premise 2: A and B can be regimented respectively as follows:

A*. [No x: student x](if x goofs off, x passes)

B*. [Every x: student x](if x goofs off, ~x passes)

Premise 3: [No x: Fx]Gx is equivalent to [Every x: Fx]~Gx

Premise 4: if [Every x: Fx]Hx is equivalent to [Every x: Fx]Ix, then Hx is equivalent to Ix.

We argue as follows. By an instance of premise 3, A* is equivalent to:

C*. [Every x: student x] not(if x goofs off, x passes)

But C* is equivalent to A*, which is equivalent to A (premise 2) which is equivalent to B (premise 1) which is equivalent to B* (premise 2). So C* is equivalent to B*.

But this equivalence is of the form of the antecedent of premise 4, so we get:

(Neg/Cond instances) ~(if x goofs off, x passes) iff if x goofs off, ~x passes.

And we quickly get from the law of excluded middle and a bit of logic:

(CEM instances) (if x goofs off, x passes) or (if x goofs off, ~ x passes). QED.

The present version is phrased in terms of indicative conditionals. But it looks like parallel arguments can be run for CEM for counterfactuals (Thanks to Richard Woodward for asking about this). For one of the controversial cases, for example, the basic premise will be that the following are equivalent:

D. No coin would have landed heads, if it had been flipped.

E. Every coin would have landed tails, if it had been flipped.

This looks pretty good, so the argument can run just as before.]

Here's a general puzzle for people who like "iffy" analyses of conditionals.

- No student passes if they goof off.

- [No x: x is a student](if x goofs off, x passes)

- [No x: x is a student](x goofs off and x passes)

What the paper cited above notes is that so long as we've got CEM, we won't go wrong. For [No x:Fx]Gx is equivalent to [All x:Fx]~Gx. And where G is the conditional "if x goofs off, x passes", the negated conditional "not: if x goofs off, x passes" is equivalent to "if x goofs off, x doesn't pass" if we have the relevant instance of conditional excluded middle. What we wind up with is an equivalence between the obvious first-pass regimentation and:

- [All x: x is a student](if x goofs off, x won't pass).
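The point can be spot-checked in a toy model. The sketch below is my own construction (not from the von Fintel-Iatridou paper): each student gets a set of relevant outcomes under the supposition that they goof off, and we compare a Stalnaker-style conditional (a selection function picks a unique closest outcome, so CEM holds by construction) with a strict conditional (true only if the consequent holds at every relevant outcome, so CEM fails on mixed outcomes). With CEM the [No x] and [All x] regimentations agree; on the strict reading they come apart.

```python
# Toy model check: CEM makes the two regimentations of
# "No student passes if they goof off" agree.

# For each student, the relevant outcomes under the supposition that
# they goof off; True = the student passes in that outcome.
students = {
    "ann": [True, False],    # mixed outcomes
    "bob": [False, False],
    "cat": [False, True],
}

def stalnaker_if(passes):
    """Stalnaker-style conditional: the selection function picks a unique
    closest outcome (here, simply the first). Exactly one of 'if goof,
    pass' and 'if goof, not-pass' is true, so CEM holds."""
    return passes[0]

def strict_if(passes):
    """Strict conditional: true only if the consequent holds at every
    relevant outcome. CEM fails when outcomes are mixed."""
    return all(passes)

def no_student_passes_if_goofs(cond):
    # [No x: student x](if x goofs off, x passes)
    return not any(cond(outcomes) for outcomes in students.values())

def every_student_fails_if_goofs(cond):
    # [Every x: student x](if x goofs off, ~x passes)
    return all(cond([not p for p in outcomes]) for outcomes in students.values())

# With CEM the regimentations agree; the strict reading drives them apart.
assert no_student_passes_if_goofs(stalnaker_if) == every_student_fails_if_goofs(stalnaker_if)
assert no_student_passes_if_goofs(strict_if) != every_student_fails_if_goofs(strict_if)
```

On the strict reading, "ann" makes "No student passes if she goofs off" true (no student strictly-passes) while "Every student fails to pass if she goofs off" comes out false, which is exactly the divergence CEM blocks.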

Suppose we're convinced by this that we need the relevant instances of CEM. There remains a question of *how* to secure these instances. The suggestion in the paper is that rules governing legitimate contexts for conditionals give us the result (paired with a contextually shifty strict conditional account of conditionals). An obvious alternative is to hard-wire CEM into the semantics, as Stalnaker does. So unless you're prepared (with von Fintel, Gillies et al.) to defend in detail the fine-tuned shiftiness of the contexts in which conditionals can be uttered, it looks like you should smile upon the Stalnaker analysis.

[Update: It's interesting to think how this would look as an argument for (instances of) CEM.

Premise 1: The following are equivalent:

A. No student will pass if she goofs off

B. Every student will fail to pass if she goofs off

Premise 2: A and B can be regimented respectively as follows:

A*. [No x: student x](if x goofs off, x passes)

B*. [Every x: student x](if x goofs off, ~x passes)

Premise 3: [No x: Fx]Gx is equivalent to [Every x: Fx]~Gx

Premise 4: if [Every x: Fx]Hx is equivalent to [Every x: Fx]Ix, then Hx is equivalent to Ix.

We argue as follows. By an instance of premise 3, A* is equivalent to:

C*. [Every x: student x] not(if x goofs off, x passes)

But C* is equivalent to A*, which is equivalent to A (premise 2) which is equivalent to B (premise 1) which is equivalent to B* (premise 2). So C* is equivalent to B*.

But this equivalence is of the form of the antecedent of premise 4, so we get:

(Neg/Cond instances) ~(if x goofs off, x passes) iff (if x goofs off, ~x passes).

And we quickly get from the law of excluded middle and a bit of logic:

(CEM instances) (if x goofs off, x passes) or (if x goofs off, ~ x passes). QED.
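The final "bit of logic" can be written out explicitly (my spelling-out, with G for "x goofs off" and P for "x passes"):

```latex
\begin{align*}
1.\;& (\text{if } G,\, P) \lor \lnot(\text{if } G,\, P) && \text{LEM} \\
2.\;& \lnot(\text{if } G,\, P) \leftrightarrow (\text{if } G,\, \lnot P) && \text{Neg/Cond instance} \\
3.\;& (\text{if } G,\, P) \lor (\text{if } G,\, \lnot P) && \text{from 1, 2: CEM instance}
\end{align*}
```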

The present version is phrased in terms of indicative conditionals. But it looks like parallel arguments can be run for CEM for counterfactuals (Thanks to Richard Woodward for asking about this). For one of the controversial cases, for example, the basic premise will be that the following are equivalent:

D. No coin would have landed heads, if it had been flipped.

E. Every coin would have landed tails, if it had been flipped.

This looks pretty good, so the argument can run just as before.]

### Must, Might and Moore.

I've just been enjoying reading a paper by Thony Gillies. One thing that's very striking is the dilemma he poses---quite generally---for "iffy" accounts of "if" (i.e. accounts that see English "if" as expressing a sentential connective, pace Kratzer's restrictor account).

The dilemma is constructed around finding a story that handles the interaction between modals and conditionals. The prima facie data is that the following pairs are equivalent:

- If p, it must be that q
- If p, q

- If p, it might be that q
- It might be that (p&q)

It's a really familiar tactic, when presented with a putative equivalence that causes trouble for your favourite theory, to say that the pairs aren't equivalent at all, but can be "reasonably inferred" from each other (think of various ways of explaining away "or-to-if" inferences). But taken cold such pragmatic explanations can look a bit ad hoc.

So it'd be nice if we could find independent motivation for the inequivalence we need. In a related setting, Bob Stalnaker uses the acceptability of Moorean-patterns to do this job. To me, the Stalnaker point seems to bear directly on the Gillies dilemma above.

Before we even consider conditionals, notice that "p but it might be that not p" sounds terrible. Attractive story: this is because you shouldn't assert something unless you know it to be true; and to say that p might not be the case is (inter alia) to deny you know it. One way of bringing out the pretty obviously pragmatic nature of the tension in uttering the conjunction here is to note that asserting the following sort of thing looks much much better:

- it might be that not p; but I believe that p

(I can sometimes still hear a little tension in the example: what are you doing believing that you'll catch the train if you know you might not? But for me this goes away if we replace "I believe that" with "I'm confident that" (which still, in vanilla cases, gives you Moorean phenomena). I think in the examples to be given below, residual tension can be eliminated in the same way. The folks who work on norms of assertion I'm sure have explored this sort of territory lots.)

That's the prototypical case. Let's move on to examples where there are more moving parts. David Lewis famously alleged that the following pair are equivalent:

- it's not the case that: if it were the case that p, it would have been that q
- if it were that p, it might have been that ~q

But the following combination sounds fine:

- if it were that p, it might have been that not q; but I believe that if it were that p, it would have been that q.

We find pretty much the same cases for "must" and indicative "if".

- It's not true that if p, then it must be that q; but I believe that if p, q.

These sorts of patterns make me very suspicious of claims that "if p, must q" and "if p, q" are equivalent, just as the analogous patterns make me suspicious of the Lewis idea that "if p, might ~q" and "if p, q" are contradictories when the "if" is subjunctive. So I'm thinking the horns of Gillies' dilemma aren't equal: denying the must conditional/bare conditional equivalence is independently motivated.

None of this is meant to undermine the positive theory that Thony Gillies is presenting in the paper: his way of accounting for lots of the data looks super-interesting, and I've got no reason to suppose his positive story won't have a story about everything I've said here. I'm just wondering whether the dilemma that frames the debate should suck us in.

## Friday, November 02, 2007

### Degrees of belief and supervaluations

Suppose you've got an argument with one premise and one conclusion, and you think it's valid. Call the premise p and the conclusion q. Plausibly, constraints on rational belief follow: in particular, you can't rationally have a lesser degree of belief in q than you have in p.

The natural generalization of this to multi-premise cases is that if p1...pn|-q, then your degree of disbelief in q can't rationally exceed the sum of your degrees of disbelief in the premises.

FWIW, there's a natural generalization to the multi-conclusion case too (a multi-conclusion argument is valid, roughly, if the truth of all the premises secures the truth of at least one conclusion). If p1...pn|-q1...qm, then the sum of your degrees of disbelief in the conclusions can't rationally exceed the sum of your degrees of disbelief in the premises plus m-1. (In every world, either some premise is false or some conclusion is true; taking expectations gives the bound, and for m=1 it reduces to the single-conclusion constraint.)
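These constraints can be spot-checked numerically. Here's a small sketch (my own construction, not from any cited source) that samples random classical credence functions over the four p/q worlds and verifies the constraint for a multi-premise argument, a zero-premise tautology, and a multi-conclusion argument (where the bound carries an m-1 slack term, m being the number of conclusions):

```python
# Spot-check: for valid arguments, disbelief in the conclusion(s) is
# bounded by summed disbelief in the premises, where disbelief(s) = 1 - P(s).

import itertools
import random

random.seed(0)

def random_credence():
    """A random classical probability function over the four p/q worlds."""
    weights = [random.random() for _ in range(4)]
    total = sum(weights)
    worlds = list(itertools.product([True, False], repeat=2))  # (p, q) pairs
    return {w: wt / total for w, wt in zip(worlds, weights)}

def disbelief(P, event):
    """1 minus the probability of the worlds where the event holds."""
    return 1 - sum(pr for world, pr in P.items() if event(*world))

for _ in range(1000):
    P = random_credence()
    d = lambda e: disbelief(P, e)
    # Multi-premise, single conclusion:  p, p -> q |- q
    assert d(lambda p, q: q) <= d(lambda p, q: p) + d(lambda p, q: (not p) or q) + 1e-9
    # Zero premises:  |- p v ~p  forces credence 1 in the tautology.
    assert d(lambda p, q: p or not p) <= 1e-9
    # Multi-conclusion:  p v q |- p, q.  Summed disbelief in the conclusions
    # is bounded by the premise disbelief plus m-1 (here m = 2).
    assert d(lambda p, q: p) + d(lambda p, q: q) <= d(lambda p, q: p or q) + 1 + 1e-9
```

The multi-conclusion case shows why the slack term is needed: with a uniform credence function, disbelief in p plus disbelief in q is 1, while disbelief in p-or-q is only 0.25.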

What I'm interested in at the moment is to what extent this sort of connection can be extended to non-classical settings. In particular (and connected with the last post) I'm interested in what the supervaluationist should think about all this.

There's a fundamental choice to be made at the get-go. Do we think that "degrees of belief" in sentences of a vague language can be represented by a standard classical probability function? Or do we need to be a bit more devious?

Let's take a simple case. Construct the artificial predicate B(x), so that numbers less than 5 satisfy B, and numbers greater than 5 fail to satisfy it. We'll suppose that it is indeterminate whether 5 itself is B, and that supervaluationism gives the right way to model this.

First observation. It's generally accepted that for the standard supervaluationist

p & ~Det(p) |- absurdity

Given this and the constraints on rational credence mentioned earlier, we'd have to conclude that my credence in B(5)&~Det(B(5)) must be 0. I have credence 0 in absurdity; and the degree of disbelief in the conclusion of this valid argument (namely, 1) must not exceed the degree of disbelief in its premise.

Let's think that through. Notice that in this case, my credence in ~Det(B(5)) can be taken to be 1. So given minimal assumptions about the logic of credences, my credence in B(5) must be 0.

A parallel argument running from ~B(5)&~Det(~B(5))|-absurdity gives us that my credence in ~B(5) must be 0.

Moreover, supervaluationism entails all classical tautologies. So in particular we have the validity: |-B(5)v~B(5). The standard constraint in this case tells us that rational credence in this disjunction must be 1. And so we have a disjunction in which we have credence 1, each disjunct of which we have credence 0 in. (Compare the standard observation that supervaluational disjunctions can be non-prime: the disjunction can be true when neither disjunct is.)

This is a fairly direct argument that something non-classical has to be going on with the probability calculus. One move at this point is to consider Shafer functions (which I know little about: but see here). Now maybe that works out nicely, maybe it doesn't. But I find it kinda interesting that the little constraint on validity and credences gets us so quickly into a position where something like this is needed if the constraint is to work. It also gives us a recipe for arguing against standard supervaluationism: argue against the Shafer-function-like behaviour in our degrees of belief, and you'll ipso facto have an argument against supervaluationism. For this, the probabilistic constraint on validity is needed (as far as I can see): for it's this that makes the distinctive features mandatory.
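For what it's worth, here's a minimal sketch (my own construction) of the sort of thing a Shafer-style belief function can do that no classical probability function can: assign belief 1 to the disjunction while assigning belief 0 to each disjunct. The trick is that mass is assigned to *sets* of possibilities, and all of it can sit on the undecided two-element set.

```python
# A Dempster-Shafer-style belief function for the B(5) case.

# Two precisifications: one on which B(5) is true, one on which it's false.
W1, W2 = "B(5) true", "B(5) false"
frame = frozenset({W1, W2})

# A mass function assigns weight to sets of possibilities; here all the
# mass sits on the undecided two-element set.
mass = {frame: 1.0}

def bel(event):
    """Belief in an event = total mass of the sets wholly inside it."""
    return sum(m for s, m in mass.items() if s <= frozenset(event))

assert bel({W1}) == 0.0        # credence 0 in B(5)
assert bel({W2}) == 0.0        # credence 0 in ~B(5)
assert bel({W1, W2}) == 1.0    # credence 1 in B(5) v ~B(5)
```

Belief here is superadditive rather than additive: bel(A or B) can exceed bel(A) + bel(B), which is exactly the non-prime-disjunction behaviour the argument above demands.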

I'd like to connect this to two other issues I've been working on. One is the paper on the logic of supervaluationism cited below. The key thing here is that it raises the prospect of p&~Dp|-absurdity not holding, even for your standard "truth=supertruth" supervaluationist. If that works, the key premise of the argument that forces you to have degree of belief 0 in both an indeterminate sentence 'p' and its negation goes missing.

Maybe we can replace it by some other argument. If you read "D" as "it is true that..." as the standard supervaluationist encourages you to, then "p&~Dp" should be read "p&it is not true that p". And perhaps that sounds to you just like an analytic falsity (it sure sounded to me that way); and analytic falsities are the sorts of things one should paradigmatically have degree of belief 0 in.

But here's another observation that might give you pause (I owe this point to discussions with Peter Simons and John Hawthorne). Suppose p is indeterminate. Then we have ~Dp&~D~p. And given supervaluationism's conservatism, we also have pv~p. So by a bit of jiggery-pokery, we'll get (p&~Dp v ~p&~D~p). But in moods where I'm hyped up thinking that "p&~Dp" is analytically false and terrible, I'm equally worried by this disjunction. But that suggests that the source of my intuitive repulsion here isn't the sort of thing that the standard supervaluationist should be buying. Of course, the friend of Shafer functions could just say that this is another case where our credence in the disjunction is 1 while our credence in each disjunct is 0. That seems dialectically stable to me: after all, they'll have *independent* reason for thinking that p&~Dp should have credence 0. All I want to insist is that the "it sounds really terrible" reason for assigning p&~Dp credence 0 looks like it overgeneralizes, and so should be distrusted.

I also think that if we set aside truth-talk, there's some plausibility in the claim that "p&~Dp" should get non-zero credence. Suppose you're initially in a mindset where you should be about half-confident of a borderline case. Well, one thing that you absolutely want to say about borderline cases is that they're neither true nor false. So why shouldn't you be at least half-confident in the combination of these?

And yet, and yet... there's the fundamental implausibility of "p&it's not true that p" (the standard supervaluationist's reading of "p&~Dp") having anything other than credence 0. But ex hypothesi, we've lost the standard positive argument for that claim. So we're left, I think, with the bare intuition. But it's a powerful one, and something needs to be said about it.

Two defensive maneuvers for the standard supervaluationist:

(1) Say that what you're committed to is just "p& it's not supertrue that p". Deny that the ordinary concept of truth can be identified with supertruth (something that, as many have emphasized, is anyway quite plausible given the non-disquotational nature of supertruth). But crucially, don't seek to replace this with some other gloss on supertruth: just say that supertruth, superfalsity and the gap between them are appropriate successor concepts, and that ordinary truth-talk is appropriate only when we're ignoring the possibility of the third case. If we disclaim conceptual analysis in this way, then it won't be appropriate to appeal to intuitions about the English word "true" to kick away independently motivated theoretical claims about supertruth. In particular, we can't appeal to intuitions to argue that "p&~supertrue that p" should be assigned credence 0. (There's a question of whether this should be seen as an error-theory about English "truth"-ascriptions. I don't see it needs to be. It might be that the English word "true" latches on to supertruth because supertruth is what best fits the truth-role. On this model, "true" stands to supertruth as "de-phlogistonated air", according to some, stands to oxygen. And so this is still a "truth=supertruth" standard supervaluationism.)

(2) The second maneuver is to appeal to supervaluational degrees of truth. Let the degree of supertruth of S be, roughly, the measure of the precisifications on which S is true. S is supertrue simpliciter when it is true on all the precisifications, i.e. measure 1 of the precisifications. If we then identify degrees of supertruth with degrees of truth, the contention that truth is supertruth becomes something that many find independently attractive: that in the context of a degree theory, truth simpliciter should be identified with truth to degree 1. (I think that this tendency has something deeply in common with the temptation (following Unger) to think that nothing can be flatter than a flat thing: nothing can be truer than a true thing. I've heard people claim that Unger was right to think that a certain class of adjectives in English work this way.)
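Here's a minimal sketch (my own construction) of supervaluational degrees of truth for the B(x) example from earlier: B has two precisifications, one excluding and one including the borderline number 5, weighted equally.

```python
# Degrees of supertruth for the artificial predicate B(x):
# numbers below 5 are determinately B, numbers above 5 determinately not,
# and the two (equally weighted) precisifications disagree about 5.

precisifications = [lambda n: n < 5, lambda n: n <= 5]

def degree(sentence):
    """Degree of supertruth: the (here uniform) measure of the
    precisifications on which the sentence comes out true."""
    return sum(1 for B in precisifications if sentence(B)) / len(precisifications)

assert degree(lambda B: B(4)) == 1.0    # supertrue: true to degree 1
assert degree(lambda B: B(6)) == 0.0    # superfalse
assert degree(lambda B: B(5)) == 0.5    # borderline

# "B(5) & ~D(B(5))": since B(5) isn't supertrue, ~D(B(5)) holds on every
# precisification, so the conjunction is true exactly where B(5) is.
not_det_B5 = degree(lambda B: B(5)) < 1
assert degree(lambda B: B(5) and not_det_B5) == 0.5
```

Note the last line: "p&~Dp" for borderline p comes out true to degree 1/2, which fits the earlier thought that one might reasonably be half-confident in it.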

I think when we understand the supertruth=truth claim in that way, the idea that "p&~true that p" should be something in which we should always have degree of belief 0 loses much of its appeal. After all, compatibly with "p" not being absolutely perfectly true (=true), it might be something that's almost absolutely perfectly true. And it doesn't sound bad or uncomfortable to me to think that one should conform one's credences to the known degree of truth: indeed, that seems to be a natural generalization of the sort of thing that originally motivated our worries.

In summary. If you're a supervaluationist who takes the orthodox line on supervaluational logic, then it looks like there's a strong case for a non-classical take on what degrees of belief look like. That's a potentially vulnerable point for the theory. If you're a (standard, global, truth=supertruth) supervaluationist who's open to the sort of position I sketch in the paper below, prima facie we can run with a classical take on degrees of belief.

Let me finish off by mentioning a connection between all this and some material on probability and conditionals I've been working on recently. I think a pretty strong case can be constructed for thinking that for some conditional sentences S, we should be all-but-certain that S&~DS. But that's exactly of the form that we've been talking about throughout: and here we've got *independent* motivation to think that this should be high-probability, not probability zero.

Now, one reaction is to take this as evidence that "D" shouldn't be understood along standard supervaluationist lines. That was my first reaction too (in fact, I couldn't see how anyone but the epistemicist could deal with such cases). But now I'm thinking that this may be too hasty. What seems right is that (a) the standard supervaluationist with the Shafer-esque treatment of credences can't deal with this case. But (b) the standard supervaluationist articulated in one of the ways just sketched shouldn't think there's an incompatibility here.

My own preference is to go for the degrees-of-truth explication of all this. Perhaps, once we've bought into that, the "truth=degree 1 supertruth" element starts to look less important, and we'll find other useful things to do with supervaluational degrees of truth (a la Kamp, Lewis, Edgington). But I think the "phlogiston" model of supertruth is just about stable too.

[P.S. Thanks to Daniel Elstein, for a paper today at the CMM seminar which started me thinking again about all this.]

### Supervaluations and logical revisionism paper

Happy news today: the Journal of Philosophy is going to publish my paper on the logic of supervaluationism. Swift moral: it ain't logically revisionary; and if it is, it doesn't matter.

This previous post gives an overview, if anyone's interested...

Now I've just got to figure out how to transmute my beautiful LaTeX symbols into Word...
