Friday, March 28, 2008

Theories n things moves to wordpress

I've decided to follow the recent lead of others and migrate this blog to a new wordpress site.

The big appeal for me is the added functionality---in particular, I'll be able to typeset logical notation using LaTeX commands. That should make things prettier and easier.

Hope to see people over at the new site!

Monday, March 17, 2008

Paracompleteness and credences in contradictions.

The last few posts have discussed non-classical approaches to indeterminacy.

One of the big stumbling blocks about "folklore" non-classicism, for me, is the suggestion that contradictions (A&~A) be "half true" where A is indeterminate.

Here's a way of putting a constraint that appeals to me: I'm inclined to think that an ideal agent ought to fully reject such contradictions.

(Actually, I'm not quite as unsympathetic to contradictions as this makes it sound. I'm interested in the dialethic/paraconsistent package. But in that setting, the right thing to say isn't that A&~A is half-true, but that it's true (and probably also false). Attitudinally, the ideal agent ought to fully accept it.)

Now the no-interpretation non-classicist has the resources to satisfy this constraint. She can maintain that the ideal degree of belief in A&~A is always 0. Given that:

p(A)+p(B)=p(AvB)+p(A&B)

we have (substituting ~A for B, and using the fact that the ideal credence in A&~A is 0):

p(A)+p(~A)=p(Av~A)

And now, whenever we fail to fully accept Av~A, it will follow that our credences in A and ~A don't sum to one. That's the price we pay for continuing to utterly reject contradictions.
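
To see the trade-off concretely, here's a toy assignment (my own illustrative numbers, nothing more) satisfying the additivity principle with contradictions utterly rejected:

```python
# Illustrative credences for an indeterminate A, with p(A & ~A) pinned at 0.
# The labels and numbers here are just mine, for the sketch.
p = {
    "A": 0.25,
    "~A": 0.25,
    "A v ~A": 0.5,   # excluded middle less than fully accepted
    "A & ~A": 0.0,   # the contradiction utterly rejected
}

# Additivity: p(A) + p(~A) = p(A v ~A) + p(A & ~A)
assert p["A"] + p["~A"] == p["A v ~A"] + p["A & ~A"]
# So credence in A and ~A sums only to p(A v ~A) -- here 0.5, not 1.
```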

The *natural* view in this setting, it seems to me, is that accepting indeterminacy of A corresponds to rejecting Av~A. So someone fully aware that A is indeterminate should fully reject Av~A. (Here and in the above I'm following Field's "No fact of the matter" presentation of the nonclassicist).

But now consider the folklore nonclassicist, who does take talk of indeterminate propositions being "half true" (or more generally, degree-of-truth talk) seriously. This is the sort of position that the Smith paper cited in the last post explores. The idea there is that indeterminacy corresponds to half-truth, and fully informed ideal agents should set their partial beliefs to match the degree-of-truth of a proposition (e.g. in a 3-valued setting, an indeterminate A should be believed to degree 0.5). [NB: obviously partial beliefs aren't going to behave like a probability function if truth-functional degrees of truth are taken as an "expert function" for them.]

Given the usual treatment of how these multiple truth values get settled over the connectives (min for conjunction, max for disjunction, 1-minus for negation), for the fully informed agent we'll get p(Av~A) set equal to the degree of truth of Av~A, i.e. 0.5. And exactly the same value will be given to A&~A. So contradictions, far from being rejected, are appropriately given the same doxastic attitude as I assign to "this fair coin will land heads".
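
For concreteness, here's that calculation spelled out (a sketch of the standard clauses, with 1 for perfect truth, 0.5 for the intermediate value, 0 for perfect falsity; the function names are mine):

```python
# Strong Kleene / min-max clauses over {0, 0.5, 1}.
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

A = 0.5  # A is indeterminate

print(disj(A, neg(A)))   # A v ~A -> 0.5
print(conj(A, neg(A)))   # A & ~A -> 0.5
# Taking degree of truth as an expert function, the fully informed agent
# gives the contradiction the same credence as a fair coin toss: 0.5.
```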

Another way of putting this: the difference between our overall attitude to "the coin will land heads" and "Jim is bald and not bald" only comes out when we consider attitudes to contents in which these are embedded. For example, I fully disbelieve B&~B when B="the coin lands heads"; but I half-accept it when B=A&~A. That doesn't at all ameliorate the implausibility of the initial identification, for me, but it's something to work with.

In short, the Field-like nonclassicist sets p(A&~A) to 0; and that seems exactly right. Given this and one or two other principles, we get a picture where our confidence in Av~A can take any value---right down to 0; and as flagged before, the probabilities of A and ~A carve up this credence between them, so in the limit where Av~A has probability 0, they take probability 0 too.

But the folklore nonclassicist I've been considering, for whom degrees-of-truth are an expert function for degrees-of-belief, has 0.5 as a pivot. For the fully informed, Av~A always exceeds this by exactly the amount that A&~A falls below it---and where A is indeterminate, we assign them all probability 0.5.

As will be clear, I'm very much on the Fieldian side here (if I were to be a nonclassicist in the first place). It'd be interesting to know whether folklore nonclassicists do in general have a picture about partial beliefs that works as Smith describes. Consistently with taking semantics seriously, they might think of the probability of A as equal to the measure of the set of possibilities where A is perfectly true. That will always make the probability of A&~A 0 (since it's never perfectly true), and it meets various other of the Fieldian descriptions of the case. What it does put pressure on is the assumption (more common among degree theorists than 3-value theorists, perhaps) that we should describe degree-of-truth 0.5 as a way of being "half true"---why, in a situation where we know A is half true, would we be compelled to fully reject it? So it does seem to me that the rhetoric of folklore degree theorists fits a lot better with Smith's suggestions about how partial beliefs work. And I think it's objectionable on that account.

[Just a quick update. First observation: to get a fix on the "pivot" view, think of the constraint as being that P(A)+P(~A)=1. Then we can see that P(Av~A)=1-P(A&~A), which summarizes the result. Second observation: I mentioned above that something that treats the degrees of truth as an expert function "won't behave like a probability function". One reflection of that is that the logic-probability link will be violated, given certain choices for the logic. E.g. suppose we require valid arguments to preserve perfect truth (e.g. we're working with the K3 logic). Then A&~A will be inconsistent. But P(A&~A) can be 0.5 while, for some unrelated B, P(B) is 0. Since A&~A|-B in the logic, probability has decreased over a valid argument. Likewise if we require valid arguments to preserve non-perfect-falsity (e.g. we're working with the LP system). Av~A will then be a validity, but P(Av~A) can be 0.5 while P(B) is 1---so probability again decreases over the valid argument from B to Av~A. These examples are for the 3-valued case, but the point clearly generalizes to the analogous definitions of validity in a degree-valued setting. One of the tricky things about thinking about the area is that there are lots of choice-points around, and one is the definition of validity. So, for example, one might demand that valid arguments preserve both perfect truth and non-perfect-falsity; then the two arguments above drop away, since on this logic neither |-Av~A nor A&~A|- holds. The generalization of this to the many-valued setting is to demand e-truth preservation for every e. Clearly these logics are far more constrained than K3 or LP, and so there's a better chance of avoiding violations of the logic-probability link. Whether one gets away with it is another matter.]
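
Here's a small sketch of the two violations just described (my own illustration; the letters and numbers are only meant as an example): take degrees of truth as the expert function, let A be indeterminate, and let B be an unrelated determinate falsehood (for the K3 case) or determinate truth (for the LP case).

```python
# Degrees of truth over {0, 0.5, 1} treated as an expert function for credence.
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

A = 0.5        # indeterminate
B_false = 0.0  # unrelated, determinately false
B_true = 1.0   # unrelated, determinately true

# K3: A & ~A never takes value 1, so A & ~A |- B (explosion) is K3-valid.
# Yet credence drops from 0.5 to 0 across the argument:
print(conj(A, neg(A)), "->", B_false)   # 0.5 -> 0.0

# LP: A v ~A never takes value 0, so it's LP-valid, and B |- A v ~A.
# Yet credence drops from 1 to 0.5 across the argument:
print(B_true, "->", disj(A, neg(A)))    # 1.0 -> 0.5
```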

Regimentation (x-post).

Here's something you frequently hear said about ontological commitment. First, that to determine the ontological commitments of some sentence S, one must look not at S, but at a regimentation or paraphrase of S, S*. Second (very roughly), you determine the ontological commitments of S by looking at what existential claims follow from S*.

Leave aside the second step of this. What I'm perplexed about is how people are thinking about the first step. Here's one way to express the confusion. We're asked about the sentence S, but to determine the ontological commitments we look at features of some quite different sentence S*. But what makes us think that looking at S* is a good way of finding out about what's required of the world for S to be true?

Reaction (1). The regimentation may be constrained so as to make the relevance of S* transparent. Silly example: regimentation could be required to be null, i.e. every sentence has to be "regimented" as itself. No mystery there. Less silly example: the regimentation might be required to preserve meaning, or truth-conditions, or something similar. If that's the case, then one could plausibly argue that the OC's of S and S* coincide, and that looking at the OC's of S* is a good way of figuring out what the OC's of S are.

(The famous "symmetry" objections are likely to kick in here; i.e. if certain existential statements follow from S but not from S*, and what we know is that S and S* have the same OC's, why take it that S* reveals those OC's better than S?---so for example if S is "prime numbers exist" and S* is a nominalistic paraphrase, we have to say something about whether S* shows that S is innocent of OC to prime numbers, or whether S shows that S* is in a hidden way committed to prime numbers).

Obviously this isn't plausibly taken as Quine's view---the appeal to synonymy is totally unQuinean (moreover, in Word and Object he's pretty explicit that the regimentation relationship is constrained by whether S* can play the same theoretical role as we initially thought S played---and that'll allow for lots of paraphrases where the sentences don't even have the appearance of being truth-conditionally equivalent).

Reaction (2). Adopt a certain general account of the nature of language. In particular, adopt a deflationism about truth and reference. Roughly: T- and R-schemes are in effect introduced into the object language as defining a disquotational truth-predicate. Then note that a truth-predicate so introduced will struggle to explain the predications of truth for sentences not in one's home language. So appeal to translation, and let the word "true" apply to a sentence in a non-home language iff that sentence translates to some sentence of the home language that is true in the disquotational sense. Truth for non-home languages is then the product of translation and disquotational truth. (We can take the "home language" for present purposes to be each person's idiolect).

I think from this perspective the regimentation steps in the Quinean characterization of ontological commitment have an obvious place. Suppose I'm a nominalist, and refuse to speak of numbers. But the mathematicians go around saying things like "prime numbers exist". Do I have to say that what they say is untrue (am I going to go up to them and tell them this?)? Well, they're not speaking my idiolect; so according to the deflationary conception under consideration, what I need to do is figure out whether their sentences translate to something that's deflationarily true in my idiolect. And if I translate them according to a paraphrase on which their sentences pair with something that is "nominalistically acceptable", then it'll turn out that I can call what they say true.

This way of construing the regimentation step of ontological commitment identifies it with the translation step of the translation-disquotation treatment of truth sketched above. So obviously what sorts of constraints we have on translation will transfer directly to constraints on regimentation. One *could* appeal to a notion of truth-conditional equivalence to ground the notion of translatability---and so get back to a conception whereby synonymy (or something close to it) was central to our analysis of language.

It's in the Quinean spirit to take translatability to stand free of such notions (to make an intuitive case for the separation here, one might note, for example, that synonymy should be an equivalence relation, whereas translatability is plausibly non-transitive). There are several options. Quine, I guess, focuses on preservation of patterns of assent and dissent to translated pairs; Field appeals to his projectivist treatment of norms and takes "good translation" as something to be explained in projective terms. No doubt there are other ways to go.

This way of defending the regimentation step in treatments of ontological commitment turns essentially on deflationism about truth; and more than that, on a non-universal part of the deflationary project: the appeal to translation as a way to extend usage of the truth-predicate to non-home languages. If one has some non-translation story about how this should go (and there are some reasons for wanting one, to do with applying "true" to languages whose expressive power outstrips that of one's own) then the grounding for the regimentation step falls away.

So the Quinean regimentation-involving treatment of ontological commitment makes perfect sense within a Quinean translation-involving treatment of language in general. But I can't imagine that people who buy into the received view of ontological commitment really mean to be taking a stance on deflationism vs. its rivals, or on the exact implementation of deflationism.

Of course, regimentation or translatability (in a more Quinean, preservation-of-theoretical-role sense, rather than a synonymy-sense) can still be significant for debates about ontological commitments. One might think that arithmetic was ontologically committing, but that the existence of some nominalistic paraphrase suited to play the same theoretical role gives one some reassurance that one doesn't *have* to use the committing language; and maybe overall these kinds of relationships will undermine the case for believing in dubious entities---not because ordinary talk isn't committed to them, but because for theoretical purposes talk needn't be committed to them. But unlike the earlier role for regimentation, this isn't a "hermeneutic" result. E.g. on the Quinean way of doing things, some non-home sentence "there are prime numbers" can be true, despite there being no numbers---just because the best translation maps the quoted sentence to something other than the home sentence "there are prime numbers". This kind of flexibility is apparently lost if you ditch the Quinean use of regimentation.

Saturday, March 15, 2008

Arche talks

In a few weeks time (31st March-5th April) I'm going to be visiting the Arche research centre in St Andrews, and giving a series of talks. I studied at Arche for my PhD, so it'll be really good to go back and see what's going on.

The talks I'm giving relate to the material on indeterminacy and probability (in particular, evidential probability or partial belief). The titles are as follows:
  • Indeterminacy and partial belief I: The open future and future-directed belief.
  • Indeterminacy and partial belief II: Conditionals and conditional belief.
  • Indeterminacy and partial belief III: Vague survival and de se belief.
A lot of these are based around exploring the consequences of the view that if p is indeterminate, and one knows this (or is certain of it) then one shouldn't invest any probability in p. In the case of the open future, of conditionals, and in vague survival---for rather different reasons in each case---this seems highly problematic.

But why should you believe that key principle about how attitudes to indeterminacy constrain attitudes to p? The case I've been focussing on up till now has concerned a truth-value gappy position on indeterminacy. With a broadly classical logic governing the object language, one postulates truth-value gaps in indeterminate cases. There's then an argument directly from this to the sort of revisionism associated with supervaluationist positions in vagueness. And from there, and a certain consistency requirement on rational partial belief (or evidence) we get the result. The consistency requirement is simply the claim, for example, that if q follows from p, one cannot rationally invest more confidence in p than one invests in q (given, of course, that one is aware of the relevant facts).

The only place I appeal to what I've previously called the "Aristotelian" view of indeterminacy (truth value gaps but LEM retained) is in arguing for the connection between attitudes to determinately p and attitudes to p. But I've just realized something that should have been obvious all along---which is that there's a quick argument to something similar for someone who thinks indeterminacy is marked by a rejection of excluded middle. Assume, to begin with, that the paracompletist nonclassicist will think that in borderline cases, characteristically, one should reject the relevant instance of excluded middle. So if one is fully convinced that p is borderline, one should utterly reject pv~p.

It's dangerous to generalize about non-classical systems, but the ones I'm thinking of all endorse the claim p|-pvq---i.e. disjunction introduction. So in particular, an instance of excluded middle will follow from p.

But if we utterly reject pv~p in a borderline case (assign it credence 0), then by the probability-logic link we should utterly reject (assign credence 0) anything from which it follows.
In particular, we should assign credence 0 to p. And by parallel reasoning, we should assign credence 0 to ~p.
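
Spelled out (my own rendering, taking the probability-logic link in its monotonic form: if p entails q, credence in p can't exceed credence in q):

```latex
% (i)   p \vdash p \vee \neg p              (disjunction introduction)
% (ii)  if A \vdash B, then P(A) \le P(B)   (probability-logic link)
% (iii) P(p \vee \neg p) = 0                (utter rejection of LEM at a borderline case)
\[
P(p) \le P(p \vee \neg p) = 0,
\qquad
P(\neg p) \le P(p \vee \neg p) = 0.
\]
```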

[Edit: there's a question, I think, about whether the non-classicist should take us to utterly reject LEM in a borderline case (i.e. to have degree of partial belief 0 in it). The folklore non-classicist, at least, might suggest that on her conception degrees of truth should be expert functions for partial beliefs---i.e. absent uncertainty about what the degrees of truth are, one should conform one's partial beliefs to the degrees of truth. Nick J. J. Smith has a paper where he works out a view that has this effect, from what I can see. It's available here and is well worth a read. If a paradigm borderline case for the folklore nonclassicist is one where the degrees of truth of p, ~p and pv~p are all 0.5, then one's degree of belief in each of them should be 0.5. And there's no obvious violation of the probability-logic link here. (At least in this specific case. The logic will have to be pretty constrained if it isn't to violate the probability-logic connection somewhere.)]

If all this is correct, then I don't need to restrict myself to discussing the consequences of the Aristotelian/supervaluational sort of view. Everything will generalize to cover the nonclassical cases---and will cover both the folklore nonclassicist and the no-interpretation nonclassicist discussed in the previous posts (here's a place where there's convergence).

[A folklore nonclassicist might object that for them, there isn't a unique "logic" for which to run the argument. If one focuses on truth-preservation, one gets say a Kleene logic; if one focuses on non-falsity preservation, one gets an LP logic. But I don't think this thought really goes anywhere...]

Friday, March 14, 2008

Non-classical logics: the no interpretation account

In the previous post, I set out what I took to be one folklore conception of a non-classicist treatment of indeterminacy. Essential elements were (a) the postulation of not two, but several truth statuses; (b) the treatment of "it is indeterminate whether" (or degreed variants thereof) as an extensional operator; (c) the generalization to this setting of a classicist picture, where logic is defined as truth preservation over a range of reinterpretations, one amongst which is the interpretation that gets things right.

I said in that post that I thought that folklore non-classicism was a defensible position, though there are some fairly common maneuvers which I think the folklore non-classicist would be better off ditching. One of these is the idea that the intended interpretation is describable "only non-classically".

However, there's a powerful alternative way of being a non-classicist. The last couple of weeks I've had a sort of road-to-Damascus moment about this, through thinking about non-classicist approaches to the Liar paradox---in particular, by reading Hartry Field's articles and new book, where he defends a "paracomplete" (excluded-middle-rejecting) approach to the semantic paradoxes, and work by JC Beall on a "paraconsistent" (contradiction-allowing) approach.

One interpretative issue with the non-classical approaches to the Liar and the like is that a crucial element is a truth-predicate that works in a way very unlike the notion of "truth" or "perfect truth" ("semantic value 1", if you want neutral terminology) that features in the many-valued semantics. But that's not necessarily a reason by itself to start questioning the folklore picture. For it might be that "truth" is ambiguous---sometimes picking up on a disquotational notion, sometimes tracking the perfect-truth notion featuring in the nonclassicist's semantics. But in fact there are tensions here, and they run deep.

Let's warm up with a picky point. I was loosely throwing around terms like "3-valued logic" in the last post, and mentioned the (strong) Kleene system. But then I said that we could treat "indeterminate whether p" as an extensional operator (the "tertium operator" that makes "indet p" true when p is third-valued, and otherwise false). But that operator doesn't exist in the Kleene system---the Kleene system isn't expressively complete with respect to the truth functions definable over three values, and this operator is one of the truth-functions that isn't there. (Actually, I believe if you add this operator, you do get something that is expressively complete with respect to the three valued truth-functions).
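
To make the point about definability vivid, here's a sketch of the strong Kleene clauses plus the tertium operator (values 1, 0.5, 0 for true, third-valued, false; the function names are mine):

```python
# Strong Kleene connectives plus the "tertium" operator over {0, 0.5, 1}.
VALUES = (0, 0.5, 1)

def neg(a): return 1 - a                    # "choice" negation
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)
def indet(a): return 1 if a == 0.5 else 0   # the tertium operator

for a in VALUES:
    print(a, neg(a), indet(a))

# Why indet isn't Kleene-definable: any formula built from neg/conj/disj
# takes value 0.5 when all its atoms do (each clause maps 0.5s to 0.5),
# whereas indet maps 0.5 to 1.
```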

One might take this to be just an expressive limitation of the Kleene system. After all, one might think, in the intended interpretation there is a truth-function behaving in the way just described lying around, and we can introduce an expression that picks up on it if we like.

But it's absolutely crucial to the nonclassical treatments of the Liar that we can't do this. The problem is that if we have this operator in the language, then "exclusion negation" is definable---an operator "neg" such that "neg p" is true when p is false or indeterminate, and otherwise false (this will correspond to "not determinately p"---i.e. ~(p&~indeterminate p), where ~ is so-called "choice" negation, i.e. |~p|=1-|p|). "p v neg p" will be a tautology; and arbitrary q will follow from the pair {p, neg p}. But this is exactly the sort of device that leads to so-called "revenge" puzzles---Liar paradoxes that are paradoxical even in the 3-valued system. Very roughly, it looks as if, on reasonable assumptions, a system with exclusion negation can't have a transparent truth predicate in it (something where p and T(p) are intersubstitutable in all extensional contexts). It's the whole point of Field and Beall's approaches to retain something with this property. So they can't allow that there is such a notion around (so for example, Beall calls such notions "incoherent").
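
And here's the definability point in miniature (again a sketch, with names of my own): with the tertium operator and choice negation on board, exclusion negation drops out, and with it a tautologous instance of excluded middle of the sort that powers revenge.

```python
# Exclusion negation built from choice negation plus the tertium operator.
def neg(a): return 1 - a                     # choice negation
def indet(a): return 1 if a == 0.5 else 0    # tertium operator
def det(a): return min(a, neg(indet(a)))     # "determinately p": p & ~indet(p)
def exneg(a): return neg(det(a))             # exclusion negation: ~(p & ~indet(p))

for a in (0, 0.5, 1):
    print(a, exneg(a), max(a, exneg(a)))
# exneg(p) is 1 exactly when p is false or indeterminate; and "p v exneg(p)"
# takes value 1 on every row, so it is a tautology in the 3-valued setting.
```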

What's going on? Aren't these approaches just denying us the resources to express the real Liar paradox? The key, I think, is a part of the nonclassicist picture that Beall and Field are quite explicit about and which totally runs against the folklore conception. They do not buy into the idea that model theory is ranging over a class of "interpretations" of the language among which we might hope to find the "intended" interpretation. The core role of the model theory is to give an extensionally adequate characterization of the consequence relation. But the significance of this consequence relation is not to be explained in model-theoretic terms (in particular, in terms of one among the models being intended, so that truth-preservation on every model automatically gives us truth-preservation simpliciter).

(Field sometimes talks about the "heuristic value" of this or that model, and explicitly says that there is something more going on than just the use of model theory as an "algebraic device". But while I don't pretend to understand exactly what is being invoked here, it's quite clear that the "added value" doesn't consist in some classical 3-valued model being "intended".)

Without appeal to the intended interpretation, I just don't see how the revenge problem could be argued for. The key thought was that there is a truth-function hanging around just waiting to be given a name, "neg". But without the intended interpretation, what does this even mean? Isn't the right thought simply that we're characterizing a consequence relation using rich set-theoretic resources, in terms of which we can draw distinctions that correspond to nothing in the phenomenon being modelled?

So it's absolutely essential to the nonclassicist treatment of the Liar paradox that we drop the "intended interpretation" view of language. Field, for one, has a ready-made alternative approach to suggest---a Quinean combination of deflationism about truth and reference, with perhaps something like translatability being invoked to explain how such predicates can be applied to expressions in a language other than one's own.

I'm therefore inclined to think of the non-classicism---at least about the Liar---as a position that *requires* something like this deflationist package. Whereas the folklore non-classicist I was describing previously is clearly someone who takes semantics seriously, and who buys into a generalization of the powerful connections between truth and consequence that a semantic theory of truth affords.

When we come to the analysis of vagueness and other (non-semantic-paradox related) kinds of indeterminacy, it's now natural to consider this "no interpretation" non-classicism. (Field does exactly this---he conceives of his project as giving a unified account of the semantic paradoxes and the paradoxes of vagueness. So at least *this* kind of nonclassicism we can confidently attribute to a leading figure in the field.) All the puzzles described previously for the non-classicist position are thrown into a totally new light once we make this move.

To begin with, there's no obvious place for the thought that there are multiple truth statuses. For you get that by looking at a many-valued model, and imagining it to be an image of what the intended interpretation of the language must be like. And that is exactly the move that's now illegitimate. Notice that this undercuts one motivation for going towards a fuzzy logic---the idea that one represents vague predicates as having some smoothly varying truth status. Likewise, the idea that we're just "iterating a bad idea" in multiplying truth values doesn't hold water on this conception---since the many values assigned to sentences in models just don't correspond to truth statuses.

Connectedly, one shouldn't say that contradictions can be "half true" (nor that excluded middle is "half true"). It's true that (on, say, the Kleene approach) you won't have ~(p&~p) as a tautology. Maybe you could object to *that* feature. But that to me doesn't seem nearly as difficult to swallow as a contradiction having "some truth to it", despite the fact that from a contradiction everything follows.

One shouldn't assume that "determinately" should be treated extensionally, as definable from the tertium operator. Indeed, if you're shooting for a combined non-classical theory of vagueness and semantic paradoxes, you *really* shouldn't treat it this way, since as noted above this would give you paradox back.

There is therefore a central and really important question: what is the non-classical treatment of "determinately" to be? Sample answer (lifted from Field's discussion of the literature): define D(p) as p&~(p-->~p), where --> is a certain fuzzy logic conditional. This, Field argues, has many of the features we'd intuitively want a determinately operator to have; and in particular, it allows for non-trivial iterations. So if something like this treatment of "determinately" were correct, then higher-order indeterminacy wouldn't be obviously problematic (Field himself thinks this proposal is on the right lines, but that one must use another kind of conditional to make the case).
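
To see how the iteration can be non-trivial, here's a quick sketch using the Łukasiewicz conditional to stand in for the fuzzy --> (Field's own conditional is different, so this is purely illustrative):

```python
# D(p) := p & ~(p --> ~p), with the Lukasiewicz conditional playing the role
# of "-->" for illustration (Field's preferred conditional differs).
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def cond(a, b): return min(1, 1 - a + b)     # Lukasiewicz conditional

def D(a): return conj(a, neg(cond(a, neg(a))))

for p in (1.0, 0.75, 0.5, 0.25, 0.0):
    print(p, D(p), D(D(p)))
# p=1.0  -> D(p)=1.0, D(D(p))=1.0
# p=0.75 -> D(p)=0.5, D(D(p))=0.0   (iterating D makes a difference)
# p=0.5  -> D(p)=0.0, D(D(p))=0.0
```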

"No interpretation" nonclassicism is an utterly, completely different position from the folklore nonclassicism I was talking about before. For me, the reasons to think about indeterminacy and the semantic and vagueness-related paradoxes in the first place, is that they shed light on the nature of language, representation, logic and epistemology. And on these sorts of issues, the no interpretation nonclassicism and the folklore version take diametrically opposed positions on such issues, and flowing from this, the appropriate ways to arguing for or against these views are just very very different.

Non-classical logics: some folklore

Having just finished the final revisions to my Phil Compass survey article on Metaphysical indeterminacy and ontic vagueness (penultimate draft available here) I started thinking some more about how those who favour non-classical logics think of their proposal (in particular, people who think that something like the Kleene 3-valued logic or some continuum valued generalization of it is the appropriate setting for analyzing vagueness or indeterminacy).

The way that I've thought of non-classical treatments in the past is I think a natural interpretation of one non-classical picture, and I think it's reasonably widely shared. In this post, I'm going to lay out some of that folklore-y conception of non-classicism (I won't attribute views to authors, since I'm starting to wonder whether elements of the folklore conception are characterizations offered by opponents, rather than something that the nonclassicists should accept---ultimately I want to go back through the literature and check exactly what people really do say in defence of non-classicism).

Here's my take on folklore nonclassicism. While classicists think there are two truth-statuses, non-classicists believe in three, four or continuum many truth-statuses (let's focus on the 3-valued system for now). They might have various opinions about the structure of these truth-statuses---the most common ones being that they're linearly ordered (so for any two truth-statuses, one is truer than the other). Some sentences (say, Jimmy is bald) get a status that's intermediate between perfect truth and perfect falsity. And if we want to understand the operator "it is indeterminate whether" in such settings, we can basically treat it as a one-place extensional connective: "indeterminate(p)" is perfectly true just in case p has the intermediate status; otherwise it is perfectly false.

So interpreted, non-classicism generalizes classicism smoothly. Just as the classicist can think there is an intended interpretation of language (a two-valued model which gets the representational properties of words right), the non-classicist can think there's an intended interpretation (say a three-valued model getting the representational features right). And that then dovetails very nicely with a model-theoretic characterization of consequence as truth-preservation under (almost) arbitrary reinterpretations of the language. For if one knows that some pattern is truth-preserving under arbitrary reinterpretations of the language, then that pattern is truth-preserving in particular in the intended interpretation---which is just to say that it preserves truth simpliciter. This forges a connection between validity and preserving a status we have all sorts of reason to be interested in---truth. (Of course, one just has to write down this thought to start worrying about the details. Personally, I think this integrated package is tremendously powerful and interesting, deserves detailed scrutiny, and should be given up only as an option of last resort---but maybe others take a different view.)

All this carries over to the non-classicist position described. So for example, on a Kleene system, validity is a matter of preserving perfect truth under arbitrary reinterpretations---and to the extent we're interested in reasoning which preserves that status, we've got the same reasons as before to be interested in consequence. Of course, one might also think that reasoning that preserves non-perfect-falsity is an interesting thing to think about. And very nicely, we have a systematic story about that too---this non-perfect-falsity sense of validity would be the paraconsistent logic LP (though of course not under an interpretation where contradictions get to be true).
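
Here's a brute-force rendering of the two notions of validity just mentioned (a sketch; the helper names are mine): K3 validity as preservation of perfect truth across all 3-valued assignments, LP validity as preservation of non-perfect-falsity.

```python
from itertools import product

VALUES = (0, 0.5, 1)
def neg(a): return 1 - a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)

def valid(premises, conclusion, designated):
    """Premises/conclusion are functions of the values (a, b) of two atoms;
    validity = whenever all premises are designated, so is the conclusion."""
    return all(
        designated(conclusion(a, b))
        for a, b in product(VALUES, repeat=2)
        if all(designated(p(a, b)) for p in premises)
    )

k3 = lambda v: v == 1     # preserve perfect truth
lp = lambda v: v != 0     # preserve non-perfect-falsity

contradiction = lambda a, b: conj(a, neg(a))   # A & ~A
lem           = lambda a, b: disj(a, neg(a))   # A v ~A
atom_B        = lambda a, b: b                 # an unrelated atom B

print(valid([contradiction], atom_B, k3))  # True: A&~A |- B in K3
print(valid([], lem, k3))                  # False: LEM not K3-valid
print(valid([], lem, lp))                  # True: LEM is LP-valid
print(valid([contradiction], atom_B, lp))  # False: no explosion in LP
```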

With this much on board, one can put into position various familiar gambits in the literature.

  1. One could say that allowing contradictions to be half-true (i.e. to be indeterminate, to have the middle-status) is just terrible. Or that allowing a parity of truth-status between "Jimmy is bald or he isn't" and "Jimmy's both bald and not bald" just gets intuitions wrong (the most powerful way dialectically to deploy this is if the non-classicist backs their position primarily by intuitions about cases---e.g. our reluctance to endorse the first sentence in borderline cases. The accusation is that if our game is taking intuitions about sentences at face value, it's not at all clear that the non-classicist is doing a good job.)
  2. One could point out that "indeterminacy" for the nonclassicist will trivially iterate. If one defines Determinate(p) as p&~indeterminate(p) (or directly as the one-place connective that is perfectly true if p is, and perfectly false otherwise) then we'll quickly see that Determinately determinately p will follow from determinately p; and determinately indeterminate whether p will follow from indeterminate whether p. And so on. (There's a small sketch of this just after the list.)
  3. In reaction to this, one might abandon the 3-valued setting for a smooth, "fuzzy" setting. It's not quite so clear what value "indeterminate p" should take (though there are actually some very funky options out there). Perhaps we might just replace such talk with direct talk of "degrees of determinacy" thought of as degrees of truth---with "D(p)=n" being again a one-place extensional operator perfectly true iff p has degree of truth n; and otherwise perfectly false.
  4. One might complain that all this multiplying of truth-values is fundamentally misguided. Think of people saying that the "third status" view of indeterminacy is all wrong---indeterminacy is not a status that competes with truth and falsity; or the quip (maybe due to Mark Sainsbury?) that one does "not improve a bad idea by iterating it"---i.e. by introducing finer and finer distinctions.
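
Here's the point-(2) triviality in miniature (a sketch; the operator names are mine): treated as the extensional connective just described, "Determinately" always outputs a classical value, so iterating it changes nothing.

```python
# "Determinately" and "indeterminate" as extensional 3-valued connectives:
# perfectly true or perfectly false whatever the input.
def det(a): return 1 if a == 1 else 0
def indet(a): return 1 if a == 0.5 else 0

for a in (0, 0.5, 1):
    assert det(det(a)) == det(a)        # Det Det p collapses to Det p
    assert det(indet(a)) == indet(a)    # Det Indet p collapses to Indet p
# Since the output is always 0 or 1, no hierarchy of higher-order
# indeterminacy can get going in the 3-valued setting.
```
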
I don't think these are knock-down worries. (1) I do find persuasive, but I don't think it's very dialectically forceful---I wouldn't know how to argue against someone who claimed their intuitions systematically followed, say, the Kleene tables. (I also think that the nonclassicist can't really appeal to intuitions against the classicist effectively). Maybe some empirical surveying could break a deadlock. But pursued in this way the debate seems sort of dull to me.

(2) seems pretty interesting. It looks like the non-classicist's treatment of indeterminacy, if they stick in the 3-valued setting, doesn't allow for "higher-order" indeterminacy at all. Now, if the nonclassicist is aiming to treat indeterminacy *in general*, rather than vagueness in particular (say if they're giving an account of the indeterminacy purportedly characteristic of the open future, or of the status of personal identity across fission cases), then it's not clear one needs to posit higher-order indeterminacy.

I should say that there's one response to the "higher order" issues that I don't really understand. That's the move of saying that strictly, the semantics should be done in a non-classical metalanguage, where we can't assume that "x is true or x is indeterminate or x is false" itself holds. I think Williamson's complaints here in the relevant chapter of his vagueness book are justified---I just don't know what the "non-classical theory" being appealed to here is, or how one would write it down in order to assess its merits (this is of course just a technical challenge: maybe it could be done).

I'd like to point out one thing here (probably not original to me!). The "nonclassical metalanguage" move at best evades the challenge that, by saying that there's an intended 3-valued interpretation, one is committed to denying higher-order indeterminacy. But we achieve this, supposedly, by saying that the intended interpretation needs to be described non-classically (or perhaps notions like "the intended interpretation" need to be replaced by some more nuanced characterization). The 3-valued logic is standardly defined in terms of what preserves truth over all 3-valued interpretations describable in a classical metalanguage. We might continue with that classical model-theoretic characterization of the logic. But then (a) if the real interpretation is describable only non-classically, it's not at all clear why truth-preservation in all classical models should entail truth-preservation in the real, non-classical interpretation. And (b) our object-language "determinacy" operator, treated extensionally, will still trivially iterate---that was a feature of the *logic* itself. This last feature in particular might suggest that we should really be characterizing the logic as truth-preservation under all interpretations, including those describable only non-classically. But that means we don't even have a fix on the *logic*, for who knows what will turn out to be truth-preserving on these non-classical models (if only because I just don't know how to think about them).

To emphasize again---maybe someone could convince me this could all be done. But I'm inclined to think that it'd be much neater for this view to deny higher-order indeterminacy---which, as I mentioned above, just may not be a cost in some cases. My suggested answer to (4), therefore, is just to take it on directly---to provide a motivation for wanting however many truth values one posits that is independent of having higher-order indeterminacy around (I think Nick J.J. Smith's AJP paper "Vagueness as closeness" pretty explicitly takes this tack for the fuzzy logic folk).

Anyway, I take this to be some of the folklore and dialectical moves that people try out in this setting. Certainly it's the way I once thought of the debate shaping up. It's still, I think, something that's worth thinking about. But in the next post I'm going to say why I think there's a far far more attractive way of being a non-classicist.

Saturday, February 23, 2008

Metaphysics Conference

Announcing: Perspectives on Ontology

A major international conference on metaphysics to be held at the University of Leeds, Sep 5th-7th 2008.

Speakers:
Karen Bennett (Cornell)
John Hawthorne (Oxford)
Gabriel Uzquiano (Oxford)

Jill North (Yale)
Helen Steward (Leeds)
Jessica Wilson (Toronto)

Commentators:
Benj Hellie (Toronto)
Kris McDaniel (Syracuse)
Ted Sider (NYU)
Jason Turner (Leeds)
Robbie Williams (Leeds)


This will be a great conference: so keep your diaries free, and spread the word!

[Update: The conference website is now up.]

Friday, February 22, 2008

"Supervaluationism": the word

I've got progressively more confused over the years about the word "supervaluations". It seems lots of people use it in slightly different ways. I'm going to set out my understanding of some of the issues, but I'm very happy to be contradicted---I'm really in search of information here.

The first occurrence I know of is van Fraassen's treatment of empty names in a 1960s JP article. IIRC, the view there is that language comes with a partial intended interpretation function, specifying the references of non-empty names. When figuring out what is true in the language, we look at what is true on all the full interpretations that extend the intended partial interpretation. And the result is that "Zeus is blue" will come out neither true nor false, because on some completions of the intended interpretation the empty name "Zeus" will designate a blue object, and on others it won't.

So that gives us one meaning of a "supervaluation": a certain technique for defining truth simpliciter out of the model-theoretic notions of truth-relative-to-an-index. It also, so far as I can see, closes off the question of how truth and "supertruth" (=truth on all completions) relate. Supervaluationism, in this original sense, just is the thesis that truth simpliciter should be defined as truth-on-all-interpretations. (Of course, one could argue against supervaluationism in this sense by arguing against the identification; and one could also consistently with this position argue for the ambiguity view that "truth" is ambiguous between supertruth and some other notion---but what's not open is to be a supervaluationist and deny that supertruth is truth in any sense.)
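
A toy version of the van Fraassen construction (my own sketch; the names, domain and predicate are invented for illustration):

```python
# Supervaluation for an empty name: truth simpliciter = truth on every
# completion of the partial intended interpretation.
BLUE_THINGS = {"sky", "sea"}
DOMAIN = {"sky", "sea", "mars"}

# The intended interpretation leaves "Zeus" undesignated; each completion
# assigns "Zeus" some object from the domain.
completions = [{"Zeus": obj} for obj in DOMAIN]

def true_on(completion, name):
    # Toy language: the only sentences are of the form '<name> is blue'.
    return completion[name] in BLUE_THINGS

supertrue  = all(true_on(c, "Zeus") for c in completions)
superfalse = all(not true_on(c, "Zeus") for c in completions)

print(supertrue, superfalse)   # False False
# "Zeus is blue" comes out neither true nor false: some completions make
# Zeus a blue object, others don't.
```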

Notice that there's nothing in the use of supervaluations in this sense that enforces any connection to "semantic theories of vagueness". But the technique is obviously suggestive of applications to indeterminacy. So, for example, Thomason in 1970 uses the technique within an "open future" semantics. The idea there is that the future is open between a number of currently-possible histories. And what is true now is what is true on all of these histories.

In 1975, Kit Fine published a big and technically sophisticated article mapping out a view of vagueness arising from partially assigned meanings, which used among other things supervaluational techniques. Roughly, the basic move was to assign each predicate an extension (the set of things to which it definitely applies) and an anti-extension (the set of things to which it definitely doesn't apply). An interpretation is "admissible" only if it assigns to each predicate a set of objects that is a superset of its extension and doesn't overlap its anti-extension. There are other constraints on admissibility too: so-called "penumbral connections" have to be respected.

Now, Fine does lots of clever stuff with this basic setup, and explores many options (particularly in dealing with "higher-order" vagueness). But one thing that's been very influential in the folklore is the idea that based on the sort of factors just given, we can get our hands on a set of "admissible" fully precise classical interpretations of the language.

Now the supervaluationist way of working with this would tell you that truth=truth on all admissible interpretations, and falsity=falsity on all such interpretations. But one needn't be a supervaluationist in this sense to be interested in all the interesting technology that Fine introduces, or the distinctive way of thinking about semantic indecision he develops. The supervaluational bit of all this refers only to one stage of the whole process---the step from identifying a set of admissible interpretations to the definition of truth simpliciter.

However, "supervaluationism" has often, I think, been identified with the whole Finean programme. In the context of theories of vagueness, for example, it is often used to refer to the idea that vagueness or indeterminacy arises as a matter of some kind of unsettledness as to what precise extensions are expressions pick out ("semantic indecision"). But even if the topic is indeterminacy, the association with *semantic indecision* wasn't part of the original conception of supervaluations---Thomason's use of them in his account of indeterminacy about future contingents illustrates that.

If one understands "supervaluationism" as tied up with the idea of semantic indecision theories of vagueness, then it does become a live issue whether one should identify truth with truth on all admissible interpretations (Fine himself raises this issue). One might think that the philosophically motivated semantic machinery of partial interpretations, penumbral connections and admissible interpretations is best supplemented by a definition of truth in the way that the original VF-supervaluationists favoured. Or one might think that truth-talk should be handled differently, and that the status of "being true on all admissible assignments" shouldn't be identified with truth simpliciter (say because the disquotational schemes fail).

If you think that the latter is the way to go, you can be a "supervaluationist" in the sense of favouring a semantic indecision theory of vagueness elaborated along Kit Fine's lines, without being a supervaluationist in the sense of using Van Fraassen's techniques.

So we've got at least these two disambiguations of "supervaluationism", potentially cross-cutting:

(A) Formal supervaluationism: the view that truth=truth on each of a range of relevant interpretations (e.g. truth on all admissible interpretations (Fine); on all completions (Van Fraassen); or on all histories (Thomason)).
(B) Semantic indeterminacy supervaluationism: the view that (semantic) indeterminacy is a matter of semantic indecision: there being a range of classical interpretations of the language, which, all-in, have equal claim to be the right one.

A couple of comments on each. (A), of course, needs to be tightened up in each case by saying what the relevant range of classical interpretations quantified over is. Notice that a standard way of defining truth in logic books is actually supervaluationist in this sense. For if you define what it is for a formula "p" to be true as its being true relative to all variable assignments, then open formulae which vary in truth value from variable-assignment to variable-assignment end up exactly analogous to formulae like "Zeus is blue" in Van Fraassen's setting: they will be neither true nor false.

Even when it's clear we're talking about supervaluationism in the sense of (B), there's continuing ambiguity. Kit Fine's article is incredibly rich, and as mentioned above, both philosophically and technically he goes far beyond the minimal idea that semantic vagueness has something to do with the meaning-fixing facts not settling on a single classical interpretation.

So there's room for an understanding of "supervaluationism" in the semantic-indecision sense that is also minimal, and which does not commit itself to Fine's ideas about partial interpretations, conceptual truths as "penumbral constraints" etc. David Lewis in "Many but also one", as I read him, has this more minimal understanding of the semantic indecision view---I guess it goes back to Hartry Field's material on inscrutability and indeterminacy and "partial reference" in the early 1970's, and Lewis's own brief comments on related ideas in his (1969).

So even if your understanding of "supervaluationism" is the (B)-sense, and we're thinking only in terms of semantic indeterminacy, then you still owe elaboration of whether you're thinking of a minimal "semantic indecision" notion a la Lewis, or the far richer elaboration of that view inspired by Fine. Once you've settled this, you can go on to say whether or not you're a supervaluationist in the formal, (A)-sense---and that's the debate in the vagueness literature over whether truth should be identified with supertruth.

Finally, there's the question of whether the "semantic indecision" view (B), should be spelled out in semantic or metasemantic terms. One possible view has the meaning-fixing facts picking out not a single interpretation, but a great range of them, which collectively play the role of "semantic value" of the term. That's a semantic or "first-level" (in Matti Eklund's terminology) view of semantic indeterminacy. Another possible view has the meaning-fixing facts trying to fix on a single interpretation which will give the unique semantic value of each term in the language, but it being unsettled which one they favour. That's a metasemantic or "second-level" view of the case.

If you want to complain that the second view is spelled out quite metaphorically, I've some sympathy (I think at least in some settings it can be spelled out a bit more tightly). One might also want to press the case that the distinction between semantic and metasemantic here is somewhat terminological---a matter of what we choose to label the facts "semantic". Again, I think there might be something to this. There are also questions about how this relates to the earlier distinctions---it's quite natural to think of Fine's elaboration as a paradigmatically semantic (rather than metasemantic) conception of semantic supervaluationism. It's also quite natural to take the metasemantic idea to go with a conception that is non-supervaluational in the (A) sense. (Perhaps the Lewis-style "semantic indecision" rhetoric might be taken to suggest a metasemantic reading all along, in which case it is not a good way of cashing out what the common ground among (B)-theorists is.) But there's room for a lot of debate and negotiation on these and similar points.

Now all this is very confusing to me, and I'm sure I've used the terminology confusingly in the past. It kind of seems to me that ideally, we'd go back to using "supervaluationism" in the (A) sense (on which truth=supertruth is analytic of the notion); and that we'd then talk of "semantic indecision" views of vagueness of various forms, with their formal representation stretching from the minimal Lewis version to the rich Fine elaboration, and their semantic/metasemantic status specified. In any case, by depriving ourselves of commonly used terminology, we'd force ourselves to spell out exactly what the subject matter we're discussing is.

As I say, I'm not sure I've got the history straight, so I'd welcome comments and corrections.

Phlox

I just found out about Phlox, a (relatively) new weblog in philosophy of logic, language and metaphysics. It's attached to a project at Humboldt University in Berlin. As well as following the tradition of philosophy centres with Greek names (this one means "flame", apparently), "Phlox" is a cunning acronym for the group's research interests.

There are several really interesting posts to check out already. Worth heading over!

Thursday, February 14, 2008

Aristotelian indeterminacy and partial beliefs

I've just finished a first draft of the second paper of my research leave---title the same as this post. There are a few different ways to think about this material, but since I hadn't posted for a while I thought I'd write up something about how it connects with/arises from some earlier concerns of mine.

The paper I’m working on ends up with arguments against standard “Aristotelian” accounts of the open future, and standard supervaluational accounts of vague survival. But one starting point was an abstract question in the philosophy of logic: in what sense is standard supervaluationism supposed to be revisionary? So let's start there.

The basic result---allegedly---is that while all classical tautologies are supervaluational tautologies, certain classical rules of inference (such as reductio, proof by cases, conditional proof, etc) fail in the supervaluational setting.

Now I’ve argued previously that one might plausibly evade even this basic form of revisionism (while sticking to the “global” consequence relation, which preserves traditional connections between logical consequence and truth-preservation). But I don’t think it’s crazy to think that global supervaluational consequence is in this sense revisionary. I just think that it requires an often-unacknowledged premise about what should count as a logical constant (in particular, whether “Definitely” counts as one). So for now let’s suppose that there are genuine counterexamples to conditional proof and the rest.

The standard move at this point is to declare this revisionism a problem for supervaluationists. Conditional proof, argument by cases: all these are theoretical descriptions of widespread, sensible and entrenched modes of reasoning. It is objectionably revisionary to give them up.

Of course some philosophers quite like logical revisionism, and would want to face down directly the accusation that there's anything wrong with such revisionism. But there's a more subtle response available. One can admit that the letter of conditional proof etc. is given up, but maintain that the pieces of reasoning we normally call "instances of conditional proof" are all covered by supervaluationally valid inference principles. So there's no piece of inferential practice that's thrown into doubt by the revisionism of supervaluational consequence: it seems that all that happens is that the theoretical representation of that practice has to take a slightly more subtle form than one might expect (but still quite a neat and elegant one).

One thing I mention in that earlier paper but don’t go into is a different way of drawing out consequences of logical revisionism. Forget inferential practice and the like. Another way in which logic connects with the rest of philosophy is in connection to probability (in the sense of rational credences, or Williamson’s epistemic probabilities, or whatever). As I sketched in a previous post, so long as you accept a basic probability-logic constraint, which says that the probability of a tautology should be 1, and the probability of a contradiction should be 0, then the revisionary supervaluational setting quickly forces you to a non-classical theory of probability: one that allows disjunctions to have probability 1 where each disjunct has probability 0. (Maybe we shouldn't call such a thing "probability": I take it that's terminological).
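
One toy way to picture such a non-classical "probability" (my own sketch, not anything from the papers cited): take credence in a sentence to be one's credence that it is supertrue.

```python
# Suppose we're certain Jim is a borderline case: two live precisifications,
# one making "Jim is bald" (A) true, one making it false.
precisifications = [True, False]   # value of A on each precisification

def supertrue(sentence):
    return all(sentence(p) for p in precisifications)

# Certain of the above situation, credence = 1 if supertrue, else 0.
def credence(sentence):
    return 1.0 if supertrue(sentence) else 0.0

A          = lambda p: p
not_A      = lambda p: not p
A_or_not_A = lambda p: p or not p

print(credence(A), credence(not_A), credence(A_or_not_A))
# 0.0 0.0 1.0 -- a disjunction with probability 1, each disjunct probability 0.
```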

Folk like Hartry Field have argued completely independently of this connection to Supervaluationism that this is the right and necessary way to handle probabilities in the context of indeterminacy. I’ve heard others say, and argue, that we want something closer to classicism (maybe tweaked to allow sets of probability functions, etc). And there are Dutch Book arguments to consider in favour of the classical setting (though I think the responses to these from the perspective of non-classical probabilities are quite convincing).

I've got the feeling the debate is at a stand-off, at least at this level of generality. I'm particularly unmoved by people swapping intuitions about the degrees of belief it is appropriate to have in borderline cases of vague predicates, and the like (NB: I don't think that Field ever argues from intuition like this, but others do). Sometimes introspection suggests intriguing things (for example, Schiffer makes the interesting suggestion that one's degree of belief in a conjunction of two vague propositions typically matches one's degree of belief in the propositions themselves). But I can't see any real dialectical force here. In my own case, I don't have robust intuitions about these cases. And if I'm to go on testimonial evidence about others' intuitions, it's just too unclear what people are reporting on for me to feel comfortable taking their word for it. I'm worried, for example, that they might just be reporting the phenomenological level of confidence they have in the proposition in question: surely that needn't coincide with one's degree of belief in the proposition (think of an exam you are highly nervous about, but are fairly certain you will pass... your behaviour may well manifest a high degree of belief, even in the absence of the phenomenological trappings of confidence). In paradigm cases of indeterminacy, it's hard to see how to do better than this.

However, I think in application to particular debates we might be able to make much more progress. Let us suppose that the topic for the day is the open future, construed, minimally, as the claim that while there are definite facts about the past and present, the future is indefinite.

Might we model this indefiniteness supervaluationally? Something like this idea (with possible futures playing the role of precisifications) is pretty widespread, perhaps orthodoxy (among friends of the open future). It’s a feature of MacFarlane’s relativistic take on the open future, for example. Even though he’s not a straightforward supervaluationist, he still has truth-value gaps, and he still treats them in a recognizably supervaluational-style way.

The link between supervaluational consequence and the revisionary behaviour of partial beliefs should now kick in. For if you know with certainty that some P is neither true nor false, we can argue that you should invest no credence at all in P (or in its negation). Likewise, in a framework of evidential probabilities, P gets no evidential probability at all (nor does its negation).

But think what this says in the context of the open future. It’s open which way this fair coin lands: it could be heads, it could be tails. On the “Aristotelian” truth-value conception of this openness, we can know that “the coin will land heads” is gappy. So we should have credence 0 in it, and none of our evidence supports it.

But that’s just silly. This is pretty much a paradigmatic case where we know what partial belief we have and should have in the coin landing heads: one half. And our evidence gives exactly that too. No amount of fancy footwork and messing around with the technicalities of Dempster-Shafer theory leads to a sensible story here, as far as I can see. It’s just plainly the wrong result. (One doesn't improve matters very much by relaxing the assumptions, e.g. taking the degree of belief in a failure of bivalence in such cases to fall short of one: you can still argue for a clearly incorrect degree of belief in the heads-proposition).

Where does that leave us? Well, you might reject the logic-probability link (I think that’d be a bad idea). Or you might try to argue that supervaluational consequence isn’t revisionary in any sense (I sketched one line of thought in support of this in the paper cited). You might give up on it being indeterminate which way the coin will land---i.e. deny the open future, a reasonably popular option. My own favoured reaction, in moods when I’m feeling sympathetic to the open future, is to go for a treatment of metaphysical indeterminacy where bivalence can continue to hold---my colleague Elizabeth Barnes has been advocating such a framework for a while, and it’s taken a long time for me to come round.

All of these reactions will concede the broader point---that at least in this case, we’ve got an independent grip on what the probabilities should be, and that gives us traction against the Supervaluationist.

I think there are other cases where we can find similar grounds for rejecting the structure of partial beliefs/evidential probabilities that supervaluational logic forces upon us. One is simply a case where empirical data on folk judgements has been collected---in connection with indicative conditionals. I talk about this in some other work in progress here. Another, which I talk about in the current paper and which I'm particularly interested in, concerns cases of indeterminate survival. The considerations here are much more involved than in the indeterminacy we find in connection with the open future or conditionals. But I think the case against the sort of partial beliefs supervaluationism induces can be made out.

All these results turn on very local issues. None, so far as I can see, generalizes to paradigmatic borderline cases of baldness and the rest. I think that makes the arguments even more interesting: potentially, they can serve as a kind of diagnostic: this style of theory of indeterminacy is suitable over here; that theory over there. That's a useful thing to have in one's toolkit.