Theories 'n Things: Philosophy 'n stuff. By Robbie Williams (http://www.blogger.com/profile/02081389310232077607).<br /><br />Theories n things moves to wordpress (2008-03-28)<br /><br />I've decided to follow the recent lead of <a href="http://longwordsbotherme.wordpress.com/">others</a> and <a href="http://theoriesnthings.wordpress.com/">migrate this blog to a new wordpress site</a>.<br /><br />The big appeal for me of this is added functionality---in particular, I'll be able to typeset logical notation using LaTeX commands. Should make things prettier and easier.<br /><br />Hope to see people over at the new site!<br /><br />Paracompleteness and credences in contradictions (2008-03-17)<br /><br />The last few posts have discussed non-classical approaches to indeterminacy.<br /><br />One of the big stumbling blocks about "folklore" non-classicism, for me, is the suggestion that contradictions (A&~A) be "half true" where A is indeterminate.<br /><br />Here's a way of putting a constraint that appeals to me: I'm inclined to think that an ideal agent ought to fully reject such contradictions.<br /><br />(Actually, I'm not quite as unsympathetic to contradictions as this makes it sound. I'm interested in the dialetheic/paraconsistent package. But in that setting, the right thing to say isn't that A&~A is half-true, but that it's true (and probably also false). Attitudinally, the ideal agent ought to fully accept it.)<br /><br />Now the no-interpretation non-classicist has the resources to satisfy this constraint. She can maintain that the ideal degree of belief in A&~A is always 0.
Given that:<br /><br />p(A)+p(B)=p(AvB)+p(A&B)<br /><br />we have (setting p(A&~A)=0, as the constraint demands):<br /><br />p(A)+p(~A)=p(Av~A)<br /><br />And now, whenever we fail to fully accept Av~A, it will follow that our credences in A and ~A don't sum to one. That's the price we pay for continuing to utterly reject contradictions.<br /><br />The *natural* view in this setting, it seems to me, is that accepting indeterminacy of A corresponds to rejecting Av~A. So someone fully aware that A is indeterminate should fully reject Av~A. (Here and in the above I'm following Field's "No fact of the matter" presentation of the nonclassicist.)<br /><br />But now consider the folklore nonclassicist, who does take talk of indeterminate propositions being "half true" (or more generally, degree-of-truth talk) seriously. This is the sort of position that the Smith paper cited in the last post explores. The idea there is that indeterminacy corresponds to half-truth, and fully informed ideal agents should set their partial beliefs to match the degree of truth of a proposition (e.g. in a 3-valued setting, an indeterminate A should be believed to degree 0.5). [NB: obviously partial beliefs aren't going to behave like a probability function if truth-functional degrees of truth are taken as an "expert function" for them.]<br /><br />Given the usual take on how these multiple truth values get settled over conjunction, disjunction and negation (min, max and 1-x respectively), for the fully informed agent we'll get p(Av~A) set equal to the degree of truth of Av~A, i.e. 0.5. And exactly the same value will be given to A&~A. So contradictions, far from being rejected, are appropriately given the same doxastic attitude as I assign to "this fair coin will land heads".<br /><br />Another way of putting this: the difference between our overall attitude to "the coin will land heads" and "Jim is bald and not bald" only comes out when we consider attitudes to contents in which these are embedded.
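Since the folklore view makes degrees of truth an expert function for credence, these values can be checked mechanically. Here's a quick illustrative sketch (my own, not from the post; an assumed encoding with 1 = true, 0.5 = indeterminate, 0 = false) of the min/max/1-x clauses and the embedding asymmetry:

```python
# Truth-functional degree clauses: negation 1-x, conjunction min, disjunction max.
def neg(p): return 1 - p
def conj(p, q): return min(p, q)
def disj(p, q): return max(p, q)

A = 0.5  # A indeterminate: degree of truth (and mandated credence) 0.5

# Av~A and A&~A both come out 0.5: the coin-flip attitude to a contradiction.
assert disj(A, neg(A)) == 0.5
assert conj(A, neg(A)) == 0.5

# Embeddings tell the two cases apart: B&~B is perfectly false for a
# determinate B (the coin toss), but 0.5 when B is itself A&~A.
for B in (1, 0):                 # the determinate values "the coin lands heads" can take
    assert conj(B, neg(B)) == 0
B = conj(A, neg(A))
assert conj(B, neg(B)) == 0.5
```

The last pair of checks is just the embedding asymmetry described in the surrounding text.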
For example, I fully disbelieve B&~B when B=the coin lands heads; but I half-accept it when B=A&~A. That doesn't at all ameliorate the implausibility of the initial identification, for me, but it's something to work with.<br /><br />In short, the Field-like nonclassicist sets A&~A to 0; and that seems exactly right. Given this and one or two other principles, we get a picture where our confidence in Av~A can take any value---right down to 0; and as flagged before, the probabilities of A and ~A carve up this credence between them, so in the limit where Av~A has probability 0, they take probability 0 too.<br /><br />But the folklore nonclassicist I've been considering, for whom degrees of truth are an expert function for degrees of belief, has 0.5 as a pivot. For the fully informed, Av~A always exceeds this by exactly the amount that A&~A falls below it---and where A is indeterminate, we assign them all probability 0.5.<br /><br />As will be clear, I'm very much on the Fieldian side here (if I were to be a nonclassicist in the first place). It'd be interesting to know whether folklore nonclassicists do in general have a picture about partial beliefs that works as Smith describes. Consistently with taking semantics seriously, they might think of the probability of A as equal to the measure of the set of possibilities where A is perfectly true. That will always make the probability of A&~A 0 (since it's never perfectly true), and will meet various other of the Fieldian descriptions of the case. What it does put pressure on is the assumption (more common in degree theorists than 3-value theorists, perhaps) that we should describe degree-of-truth-0.5 as a way of being "half true"---for why, in a situation where we know A is half true, would we be compelled to fully reject it? So it does seem to me that the rhetoric of folklore degree theorists fits a lot better with Smith's suggestions about how partial beliefs work.
And I think it's objectionable on that account.<br /><br />[Just a quick update. First observation. To get a fix on the "pivot" view, think of the constraint being that P(A)+P(~A)=1. Then we can see that P(Av~A)=1-P(A&~A), which summarizes the result. Second observation. I mentioned above that something that treated the degrees of truth as an expert function "won't behave like a probability function". One reflection of that is that the logic-probability link will be violated, given certain choices for the logic. E.g. suppose we require valid arguments to preserve perfect truth (e.g. we're working with the K3 logic). Then A&~A will be inconsistent. And, for example, P(A&~A) can be 0.5, while for some unrelated B, P(B) is 0. But in the logic A&~A|-B, so probability has decreased over a valid argument. Likewise if we're preserving non-perfect-falsity (e.g. we're working with the LP system). Av~A will then be a validity, but P(Av~A) can be 0.5, yet P(B) be 1. These are for the 3-valued case, but clearly that point generalizes to the analogous definitions of validity in a degree valued setting. One of the tricky things about thinking about the area is that there are lots of choice-points around, and one is the definition of validity. So, for example, one might demand that valid arguments preserve both perfect truth and non-perfect falsity; and then the two arguments above drop away since neither |-Av~A nor A&~A|- on this logic. The generalization to this in the many-valued setting is to demand e-truth preservation for every e. Clearly these logics are far more constrained than the K3 or LP logics, and so there's a better chance of avoiding violations of the logic-probability link. 
Whether one gets away with it is another matter.]<br /><br />Regimentation (x-post) (2008-03-17)<br /><br />Here's something you frequently hear said about ontological commitment. First, that to determine the ontological commitments of some sentence S, one must look not at S, but at a regimentation or paraphrase of S, S*. Second (very roughly), you determine the ontological commitments of S by looking at what existential claims follow from S*.<br /><br />Leave aside the second step of this. What I'm perplexed about is how people are thinking about the first step. Here's one way to express the confusion. We're asked about the sentence S, but to determine the ontological commitments we look at features of some quite different sentence S*. But what makes us think that looking at S* is a good way of finding out about what's required of the world for S to be true?<br /><br />Reaction (1). The regimentation may be constrained so as to make the relevance of S* transparent. Silly example: regimentation could be required to be null, i.e. every sentence has to be "regimented" as itself. No mystery there. Less silly example: the regimentation might be required to preserve meaning, or truth-conditions, or something similar. If that's the case, then one could plausibly argue that the OC's of S and S* coincide, and looking at the OC's of S* is a good way of figuring out what the OC's of S are.<br /><br />(The famous "symmetry" objections are likely to kick in here; i.e.
if certain existential statements follow from S but not from S*, and what we know is that S and S* have the same OC's, why take it that S* reveals those OC's better than S?---so for example if S is "prime numbers exist" and S* is a nominalistic paraphrase, we have to say something about whether S* shows that S is innocent of OC to prime numbers, or whether S shows that S* is in a hidden way committed to prime numbers.)<br /><br />Obviously this isn't plausibly taken as Quine's view---the appeal to synonymy is totally unQuinean (moreover, in Word and Object he's pretty explicit that the regimentation relationship is constrained by whether S* can play the same theoretical role as we initially thought S played---and that'll allow for lots of paraphrases where the sentences don't even have the appearance of being truth-conditionally equivalent).<br /><br />Reaction (2). Adopt a certain general account of the nature of language. In particular, adopt a deflationism about truth and reference. Roughly: T- and R-schemes are in effect introduced into the object language as defining a disquotational truth-predicate. Then note that a truth-predicate so introduced will struggle to explain the predications of truth for sentences not in one's home language. So appeal to translation, and let the word "true" apply to a sentence in a non-home language iff that sentence translates to some sentence of the home language that is true in the disquotational sense. Truth for non-home languages is then the product of translation and disquotational truth. (We can take the "home language" for present purposes to be each person's idiolect.)<br /><br />I think from this perspective the regimentation steps in the Quinean characterization of ontological commitment have an obvious place. Suppose I'm a nominalist, and refuse to speak of numbers. But the mathematicians go around saying things like "prime numbers exist".
Do I have to say that what they say is untrue? (Am I going to go up to them and tell them this?) Well, they're not speaking my idiolect; so according to the deflationary conception under consideration, what I need to do is figure out whether their sentences translate to something that's deflationarily true in my idiolect. And if I translate them according to a paraphrase on which their sentences pair with something that is "nominalistically acceptable", then it'll turn out that I can call what they say true.<br /><br />This way of construing the regimentation step of ontological commitment identifies it with the translation step of the translation-disquotation treatment of truth sketched above. So obviously what sorts of constraints we have on translation will transfer directly to constraints on regimentation. One *could* appeal to a notion of truth-conditional equivalence to ground the notion of translatability---and so get back to a conception whereby synonymy (or something close to it) was central to our analysis of language.<br /><br />It's in the Quinean spirit to take translatability to stand free of such notions (to make an intuitive case for separation here, one might note, for example, that synonymy should be an equivalence relation, whereas translatability is plausibly non-transitive). There are several options. Quine I guess focuses on preservation of patterns of assent and dissent to translated pairs; Field appeals to his projectivist treatment of norms and takes "good translation" as something to be explained in projective terms. No doubt there are other ways to go.<br /><br />This way of defending the regimentation step in treatments of ontological commitment turns essentially on deflationism about truth; and more than that, on a non-universal part of the deflationary project: the appeal to translation as a way to extend usage of the truth-predicate to non-home languages.
If one has some non-translation story about how this should go (and there are some reasons for wanting one, to do with applying "true" to languages whose expressive power outstrips that of one's own) then the grounding for the regimentation step falls away.<br /><br />So the Quinean regimentation-involving treatment of ontological commitment makes perfect sense within a Quinean translation-involving treatment of language in general. But I can't imagine that people who buy into the received view of ontological commitment really mean to be taking a stance on deflationism vs. its rivals; or about the exact implementation of deflationism.<br /><br />Of course, regimentation or translatability (in a more Quinean, preservation-of-theoretical-role sense, rather than a synonymy sense) can still be significant for debates about ontological commitments. One might think that arithmetic was ontologically committing, but the existence of some nominalistic paraphrase suited to play the same theoretical role gives one some reassurance that one doesn't *have* to use the committing language; and maybe overall these kinds of relationships will undermine the case for believing in dubious entities---not because ordinary talk isn't committed to them, but because for theoretical purposes talk needn't be committed to them. But unlike the earlier role for regimentation, this isn't a "hermeneutic" result. E.g. on the Quinean way of doing things, some non-home sentence "there are prime numbers" can be true, despite there being no numbers---just because the best translation of the quoted sentence translates it to something other than the home sentence "there are prime numbers".
This kind of flexibility is apparently lost if you ditch the Quinean use of regimentation.<br /><br />Arche talks (2008-03-15)<br /><br />In a few weeks' time (31st March-5th April) I'm going to be visiting the Arche research centre in St Andrews, and giving a series of talks. I studied at Arche for my PhD, so it'll be really good to go back and see what's going on.<br /><br />The talks I'm giving relate to the material on indeterminacy and probability (in particular, evidential probability or partial belief). The titles are as follows:<br /><ul><li> Indeterminacy and partial belief I: The open future and future-directed belief. </li><li> Indeterminacy and partial belief II: Conditionals and conditional belief. </li><li> Indeterminacy and partial belief III: Vague survival and de se belief. </li></ul>A lot of these are based around exploring the consequences of the view that if p is indeterminate, and one knows this (or is certain of it), then one shouldn't invest any probability in p. In the case of the open future, of conditionals, and of vague survival---for rather different reasons in each case---this seems highly problematic.<br /><br />But why should you believe that key principle about how attitudes to indeterminacy constrain attitudes to p? The case I've been focussing on up till now has concerned a truth-value-gappy position on indeterminacy. With a broadly classical logic governing the object language, one postulates truth-value gaps in indeterminate cases. There's then an argument directly from this to the sort of revisionism associated with supervaluationist positions in vagueness. And from there, and a certain consistency requirement on rational partial belief (or evidence), we get the result.
The consistency requirement is simply the claim, for example, that if q follows from p, one cannot rationally invest more confidence in p than one invests in q (given, of course, that one is aware of the relevant facts).<br /><br />The only place I appeal to what I've previously called the "Aristotelian" view of indeterminacy (truth-value gaps but LEM retained) is in arguing for the connection between attitudes to determinately-p and attitudes to p. But I've just realized something that should have been obvious all along---which is that there's a quick argument to something similar for someone who thinks indeterminacy is marked by a rejection of excluded middle. Assume, to begin with, that the paracompletist nonclassicist will think that in borderline cases, characteristically, one should reject the relevant instance of excluded middle. So if one is fully convinced that p is borderline, one should utterly reject pv~p.<br /><br />It's dangerous to generalize about non-classical systems, but the ones I'm thinking of all endorse the claim p|-pvq---i.e. disjunction introduction. So in particular, an instance of excluded middle will follow from p.<br /><br />But if we utterly reject pv~p in a borderline case (assign it credence 0), then by the probability-logic link we should utterly reject (assign credence 0) anything from which it follows. In particular, we should assign credence 0 to p. And by parallel reasoning, we should assign credence 0 to ~p.<br /><br />[Edit: there's a question, I think, about whether the non-classicist should take us to utterly reject LEM in a borderline case (i.e. degree of partial belief=0). The folklore non-classicist, at least, might suggest that on her conception degrees of truth should be expert functions for partial beliefs---i.e. absent uncertainty about what the degrees of truth are, one should conform the partial beliefs to the degrees of truth. Nick J. J.
Smith has a paper where he works out a view that has this effect, from what I can see. It's available <a href="http://www.personal.usyd.edu.au/%7Enjjsmith/papers/smith-degrees-truth-belief.pdf">here</a> and is well worth a read. If a paradigm borderline case for the folklore nonclassicist is one where the degrees of truth of p, not-p and pv~p are all 0.5, then one's degree of belief in all of them should be 0.5. And there's no obvious violation of the probability-logic link here. (At least in this specific case. The logic will have to be pretty constrained if it isn't to violate the probability-logic connection somewhere.)]<br /><br />If all this is correct, then I don't need to restrict myself to discussing the consequences of the Aristotelian/supervaluationist sort of view. Everything will generalize to cover the nonclassical cases---and will cover both the folklore nonclassicist and the no-interpretation nonclassicist discussed in the previous posts (here's a place where there's convergence).<br /><br />[A folklore nonclassicist might object that for them, there isn't a unique "logic" for which to run the argument. If one focuses on truth-preservation, one gets say a Kleene logic; if one focuses on non-falsity preservation, one gets an LP logic. But I don't think this thought really goes anywhere...]<br /><br />Non-classical logics: the no interpretation account (2008-03-14)<br /><br />In the previous post, I set out what I took to be one folklore conception of a non-classicist treatment of indeterminacy.
Essential elements were (a) the postulation of not two, but several truth statuses; (b) the treatment of "it is indeterminate whether" (or degreed variants thereof) as an extensional operator; (c) the generalization to this setting of a classicist picture, where logic is defined as truth preservation over a range of reinterpretations, one amongst which is the interpretation that gets things right.<br /><br />I said in that post that I thought that folklore non-classicism was a defensible position, though there are some fairly common maneuvers which I think the folklore non-classicist would be better off ditching. One of these is the idea that the intended interpretation is describable "only non-classically".<br /><br />However, there's a powerful alternative way of being a non-classicist. The last couple of weeks I've had a sort of road-to-Damascus moment about this, through thinking about non-classicist approaches to the Liar paradox---and in particular, by reading Hartry Field's articles and new book where he defends a "paracomplete" (excluded-middle-rejecting) approach to the semantic paradoxes, and work by JC Beall on a "paraconsistent" (contradiction-allowing) approach.<br /><br />One interpretative issue with the non-classical approaches to the Liar and the like is that a crucial element is a truth-predicate that works in a way very unlike the notion of "truth" or "perfect truth" ("semantic value 1", if you want neutral terminology) that features in the many-valued semantics. But that's not necessarily a reason by itself to start questioning the folklore picture. For it might be that "truth" is ambiguous---sometimes picking up on a disquotational notion, sometimes tracking the perfect-truth notion featuring in the nonclassicist's semantics. But in fact there are tensions here, and they run deep.<br /><br />Let's warm up with a picky point. I was loosely throwing around terms like "3-valued logic" in the last post, and mentioned the (strong) Kleene system.
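For concreteness, here's a sketch of the strong Kleene clauses just mentioned, together with the extensional "indeterminate whether" operator from the folklore picture (an illustrative encoding of my own, with 1, 0.5 and 0 as the three values):

```python
# Strong Kleene connectives over the three values {0, 0.5, 1}.
def neg(p): return 1 - p              # "choice" negation: |~p| = 1 - |p|
def conj(p, q): return min(p, q)
def disj(p, q): return max(p, q)

# The "tertium" operator: perfectly true when p is third-valued, else perfectly false.
def indet(p): return 1 if p == 0.5 else 0

# The Kleene connectives all fix the middle value: anything built from
# third-valued sentences is itself third-valued. The tertium operator breaks
# that pattern by sending 0.5 to 1.
assert neg(0.5) == conj(0.5, 0.5) == disj(0.5, 0.5) == 0.5
assert indet(0.5) == 1 and indet(1) == 0 and indet(0) == 0
```

The middle-value-fixing observation in the comments is the quick way to see why the tertium operator can't be built from the Kleene connectives alone.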
But then I said that we could treat "indeterminate whether p" as an extensional operator (the "tertium" operator that makes "indet p" true when p is third-valued, and otherwise false). But that operator doesn't exist in the Kleene system---the Kleene system isn't expressively complete with respect to the truth-functions definable over three values, and this operator is one of the truth-functions that isn't there. (Actually, I believe that if you add this operator, you do get something that is expressively complete with respect to the three-valued truth-functions.)<br /><br />One might take this to be just an expressive limitation of the Kleene system. After all, one might think, in the intended interpretation there is a truth-function behaving in the way just described lying around, and we can introduce an expression that picks up on it if we like.<br /><br />But it's absolutely crucial to the nonclassical treatments of the Liar that we can't do this. The problem is that if we have this operator in the language, then "exclusion negation" is definable---an operator "neg" such that "neg p" is true when p is false or indeterminate, and otherwise false (this will correspond to "not determinately p"---i.e. ~(p&~indeterminate p), where ~ is so-called "choice" negation, i.e. |~p|=1-|p|). "p v neg p" will be a tautology; and arbitrary q will follow from the pair {p, neg p}. But this is exactly the sort of device that leads to so-called "revenge" puzzles---Liar paradoxes that are paradoxical even in the 3-valued system. Very roughly, it looks as if, on reasonable assumptions, a system with exclusion negation can't have a transparent truth predicate in it (something where p and T(p) are intersubstitutable in all extensional contexts). It's the whole point of Field's and Beall's approaches to retain something with this property. So they can't allow that there is such a notion around (so, for example, Beall calls such notions "incoherent").<br /><br />What's going on?
Aren't these approaches just denying us the resources to express the real Liar paradox? The key, I think, is a part of the nonclassicist picture that Beall and Field are quite explicit about, and which totally runs against the folklore conception. They do not buy into the idea that model theory is ranging over a class of "interpretations" of the language among which we might hope to find the "intended" interpretation. The core role of the model theory is to give an extensionally adequate characterization of the consequence relation. But the significance of this consequence relation is not to be explained in model-theoretic terms (in particular, in terms of one among the models being intended, so that truth-preservation on every model automatically gives us truth-preservation simpliciter).<br /><br />(Field sometimes talks about the "heuristic value" of this or that model, and explicitly says that there is something more going on than just the use of model theory as an "algebraic device". But while I don't pretend to understand exactly what is being invoked here, it's quite clear that the "added value" doesn't consist in some classical 3-valued model being "intended".)<br /><br />Without appeal to the intended interpretation, I just don't see how the revenge problem could be argued for. The key thought was that there is a truth-function hanging around just waiting to be given a name, "neg". But without the intended interpretation, what does this even mean? Isn't the right thought simply that we're characterizing a consequence relation using rich set-theoretic resources---in terms of which we can draw distinctions that correspond to nothing in the phenomenon being modelled?<br /><br />So it's absolutely essential to the nonclassicist treatment of the Liar paradox that we drop the "intended interpretation" view of language.
Field, for one, has a ready-made alternative approach to suggest---a Quinean combination of deflationism about truth and reference, with perhaps something like translatability being invoked to explain how such predicates can be applied to expressions in a language other than one's own.<br /><br />I'm therefore inclined to think of the non-classicism---at least about the Liar---as a position that *requires* something like this deflationist package. Whereas the folklore non-classicist I was describing previously is clearly someone who takes semantics seriously, and who buys into a generalization of the powerful connections between truth and consequence that a semantic theory of truth affords.<br /><br />When we come to the analysis of vagueness and other (non-semantic-paradox-related) kinds of indeterminacy, it's now natural to consider this "no interpretation" non-classicism. (Field does exactly this---he conceives of his project as giving a unified account of the semantic paradoxes and the paradoxes of vagueness. So at least *this* kind of nonclassicism we can confidently attribute to a leading figure in the field.) All the puzzles described previously for the non-classicist position are thrown into a totally new light once we make this move.<br /><br />To begin with, there's no obvious place for the thought that there are multiple truth statuses. For you get that by looking at a many-valued model, and imagining that to be an image of what the intended interpretation of the language must be like. And that is exactly the move that's now illegitimate. Notice that this undercuts one motivation for going towards a fuzzy logic---the idea that one represents vague predicates as somehow smoothly varying in truth status.
Likewise, the idea that we're just "iterating a bad idea" in multiplying truth values doesn't hold water on this conception---since the many values assigned to sentences in models just don't correspond to truth statuses.<br /><br />Connectedly, one shouldn't say that contradictions can be "half true" (nor that excluded middle is "half true"). It's true that (on, say, the Kleene approach) you won't have ~(p&~p) as a tautology. Maybe you could object to *that* feature. But that to me doesn't seem nearly as difficult to swallow as a contradiction having "some truth to it", despite the fact that from a contradiction, everything follows.<br /><br />One shouldn't assume that "determinately" should be treated as the tertium operator. Indeed, if you're shooting for a combined non-classical theory of vagueness and the semantic paradoxes, you *really* shouldn't treat it this way, since as noted above this would give you paradox back.<br /><br />There is therefore a central and really important question: what is the non-classical treatment of "determinately" to be? Sample answer (lifted from Field's discussion of the literature): define D(p) as p&~(p-->~p), where --> is a certain fuzzy logic conditional. This, Field argues, has many of the features we'd intuitively want a determinately operator to have; and in particular, it allows for non-trivial iterations. So if something like this treatment of "determinately" were correct, then higher-order indeterminacy wouldn't be obviously problematic. (Field himself thinks this proposal is on the right lines, but that one must use another kind of conditional to make the case.)<br /><br />"No interpretation" nonclassicism is an utterly, completely different position from the folklore nonclassicism I was talking about before. For me, the reason to think about indeterminacy and the semantic and vagueness-related paradoxes in the first place is that they shed light on the nature of language, representation, logic and epistemology.
And on these sorts of issues, the no-interpretation nonclassicism and the folklore version take diametrically opposed positions; and flowing from this, the appropriate ways of arguing for or against these views are just very, very different.<br /><br />Non-classical logics: some folklore (2008-03-14)<br /><br />Having just finished the final revisions to my Phil Compass survey article on Metaphysical indeterminacy and ontic vagueness (penultimate draft available <a href="http://www.personal.leeds.ac.uk/%7Ephljrgw/wip/onticvagueness.pdf">here</a>), I started thinking some more about how those who favour non-classical logics think of their proposal (in particular, people who think that something like the Kleene 3-valued logic, or some continuum-valued generalization of it, is the appropriate setting for analyzing vagueness or indeterminacy).<br /><br />The way that I've thought of non-classical treatments in the past is, I think, a natural interpretation of one non-classical picture, and I think it's reasonably widely shared. In this post, I'm going to lay out some of that folklore-y conception of non-classicism (I won't attribute views to authors, since I'm starting to wonder whether elements of the folklore conception are characterizations offered by opponents, rather than something that the nonclassicists should accept---ultimately I want to go back through the literature and check exactly what people really do say in defence of non-classicism).<br /><br />Here's my take on folklore nonclassicism. While classicists think there are two truth-statuses, non-classicists believe in three, four or continuum-many truth-statuses (let's focus on the 3-valued system for now).
They might have various opinions about the structure of these truth-statuses---the most common one being that they're linearly ordered (so for any two truth-statuses, one is truer than the other). Some sentences (say, "Jimmy is bald") get a status that's intermediate between perfect truth and perfect falsity. And if we want to understand the operator "it is indeterminate whether" in such settings, we can basically treat it as a one-place extensional connective: "indeterminate(p)" is perfectly true just in case p has the intermediate status; otherwise it is perfectly false.<br /><br />So interpreted, non-classicism generalizes classicism smoothly. Just as the classicist can think there is an intended interpretation of language (a two-valued model which gets the representational properties of words right), the non-classicist can think there's an intended interpretation (say, a three-valued model getting the representational features right). And that then dovetails very nicely with a model-theoretic characterization of consequence as truth-preservation under (almost) arbitrary reinterpretations of the language. For if one knows that some pattern is truth-preserving under arbitrary reinterpretations of the language, then that pattern is truth-preserving in particular in the intended interpretation---which is just to say that it preserves truth simpliciter. This forges a connection between validity and preserving a status we have all sorts of reason to be interested in---truth. (Of course, one just has to write down this thought to start worrying about the details. Personally, I think this integrated package is tremendously powerful and interesting, deserves detailed scrutiny, and should be given up only as an option of last resort---but maybe others take a different view.) All this carries over to the non-classicist position described.
So for example, on a Kleene system, validity is a matter of preserving perfect truth under arbitrary reinterpretations---and to the extent we're interested in reasoning which preserves that status, we've got the same reasons as before to be interested in consequence. Of course, one might also think that reasoning that preserves non-perfect-falsity is also an interesting thing to think about. And very nicely, we have a systematic story about that too---this non-perfect-falsity sense of validity would be the paraconsistent logic LP (though of course not under an interpretation where contradictions get to be true).<br /><br />With this much on board, one can rehearse various familiar gambits from the literature.<br /><br /><ol><li>One could say that allowing contradictions to be half-true (i.e. to be indeterminate, to have the middle status) is just terrible. Or that allowing a parity of truth-status between "Jimmy is bald or he isn't" and "Jimmy's both bald and not bald" just gets intuitions wrong (the most dialectically powerful way to deploy this is if the non-classicist backs their position primarily by intuitions about cases---e.g. our reluctance to endorse the first sentence in borderline cases. The accusation is that if our game is taking intuitions about sentences at face value, it's not at all clear that the non-classicist is doing a good job.)</li><li>One could point out that "indeterminacy" for the nonclassicist will trivially iterate. If one defines Determinate(p) as p&~indeterminate(p) (or directly as the one-place connective that is perfectly true if p is, and perfectly false otherwise) then we'll quickly see that Determinately determinately p will follow from determinately p; and determinately indeterminate whether p will follow from indeterminate whether p. And so on.<br /></li><li>In reaction to this, one might abandon the 3-valued setting for a smooth, "fuzzy" setting.
It's not quite so clear what value "indeterminate p" should take (though there are actually some very funky options out there). Perhaps we might just replace such talk with direct talk of "degrees of determinacy" thought of as degrees of truth---with "D(p)=n" being again a one-place extensional operator perfectly true iff p has degree of truth n; and otherwise perfectly false.<br /></li><li>One might complain that all this multiplying of truth-values is fundamentally misguided. Think of people saying that the "third status" view of indeterminacy is all wrong---indeterminacy is not a status that competes with truth and falsity; or the quip (maybe due to Mark Sainsbury?) that one does "not improve a bad idea by iterating it"---i.e. by introducing finer and finer distinctions.<br /></li></ol>I don't think these are knock-down worries. (1) I do find persuasive, but I don't think it's very dialectically forceful---I wouldn't know how to argue against someone who claimed their intuitions systematically followed, say, the Kleene tables. (I also think that the nonclassicist can't really appeal to intuitions against the classicist effectively). Maybe some empirical surveying could break a deadlock. But pursued in this way the debate seems sort of dull to me.<br /><br />(2) seems pretty interesting. It looks like the non-classicist's treatment of indeterminacy, if they stick with the 3-valued setting, doesn't allow for "higher-order" indeterminacy at all. Now, if the nonclassicist is aiming to treat indeterminacy *in general*, rather than vagueness specifically (say if they're giving an account of the indeterminacy purportedly characteristic of the open future, or of the status of personal identity across fission cases) then it's not clear one needs to posit higher-order indeterminacy.<br /><br />I should say that there's one response to the "higher order" issues that I don't really understand.
That's the move of saying that strictly, the semantics should be done in a non-classical metalanguage, where we can't assume that "x is true or x is indeterminate or x is false" itself holds. I think Williamson's complaints here in the chapter of his vagueness book are justified---I just don't know what the "non-classical theory" being appealed to here is, or how one would write it down in order to assess its merits (this is of course just a technical challenge: maybe it could be done).<br /><br />I'd like to point out one thing here (probably not original to me!). The "nonclassical metalanguage" move at best evades the charge that, by saying that there's an intended 3-valued interpretation, one is committed to denying higher-order indeterminacy. But we achieve this, supposedly, by saying that the intended interpretation needs to be described non-classically (or perhaps notions like "the intended interpretation" need to be replaced by some more nuanced characterization). The 3-valued logic is standardly defined in terms of what preserves truth over all 3-valued interpretations describable in a classical metalanguage. We might continue with the classical model-theoretic characterization of the logic. But then (a) if the real interpretation is describable only non-classically, it's not at all clear why truth-preservation in all classical models should entail truth-preservation in the real, non-classical interpretation. And moreover (b) our object-language "determinacy" operator, treated extensionally, will still trivially iterate---that was a feature of the *logic* itself. This last feature in particular might suggest that we should really be characterizing the logic as truth-preservation under all interpretations, including those describable non-classically.
But that means we don't even have a fix on the *logic*, for who knows what will turn out to be truth-preserving on these non-classical models (if only because I just don't know how to think about them).<br /><br />To emphasize again---maybe someone could convince me this could all be done. But I'm inclined to think that it'd be much neater for this view to deny higher-order indeterminacy---which as I mentioned above just may not be a cost in some cases. My suggested answer to (4), therefore, is just to take it on directly---to provide a motivation for wanting however many values one posits that is independent of having higher-order indeterminacy around (I think Nick J.J. Smith's AJP paper "Vagueness as closeness" pretty explicitly takes this tack for the fuzzy logic folk).<br /><br />Anyway, I take these to be some of the folklore and dialectical moves that people try out in this setting. Certainly it's the way I once thought of the debate shaping up. It's still, I think, something that's worth thinking about.
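Since the folklore machinery is easy to make concrete, here is a minimal sketch of the strong Kleene tables and the determinacy operator figuring in gambit (2). The function names and the 0/0.5/1 labels for the three statuses are my own illustrative choices, not anything from the literature:

```python
# A sketch of the strong Kleene 3-valued connectives.
# Truth values: 1 = perfectly true, 0.5 = intermediate, 0 = perfectly false.

def neg(a):        # ~A
    return 1 - a

def conj(a, b):    # A & B: the min rule
    return min(a, b)

def disj(a, b):    # A v B: the max rule
    return max(a, b)

def det(a):        # "Determinately A": perfectly true iff A is, else perfectly false
    return 1 if a == 1 else 0

def indet(a):      # "Indeterminate whether A": true iff A has the middle status
    return 1 if a == 0.5 else 0

A = 0.5            # an indeterminate sentence, e.g. "Jimmy is bald"

# The excluded-middle instance and the contradiction get the same middle value:
assert disj(A, neg(A)) == 0.5   # A v ~A
assert conj(A, neg(A)) == 0.5   # A & ~A

# Gambit (2): the determinacy operator iterates trivially.
for v in (0, 0.5, 1):
    assert det(det(v)) == det(v)
    assert det(indet(v)) == indet(v)
```

Running this confirms the points above: "Jimmy is bald or he isn't" and "Jimmy is bald and not bald" share the middle status, and prefixing further "determinately"s changes nothing.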
But in the next post I'm going to say why I think there's a far far more attractive way of being a non-classicist.Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com0tag:blogger.com,1999:blog-6432111.post-73635467669430015402008-02-23T22:44:00.002+00:002008-03-26T13:43:19.238+00:00Metaphysics Conference<p>Announcing: Perspectives on Ontology<br /><br />A major international conference on metaphysics to be held at the University of Leeds, Sep 5th-7th 2008.<br /><br />Speakers:<br />Karen Bennett (Cornell)<br />John Hawthorne (Oxford)<br />Gabriel Uzquiano (Oxford)</p><p>Jill North (Yale)<br />Helen Steward (Leeds)<br />Jessica Wilson (Toronto)<br /><br />Commentators:<br />Benj Hellie (Toronto)<br />Kris McDaniel (Syracuse)<br />Ted Sider (NYU)<br />Jason Turner (Leeds)<br />Robbie Williams (Leeds)</p><p><br /></p><p>This will be a great conference: so keep your diaries free, and spread the word!</p><p>[Update: The <a href="http://www.personal.leeds.ac.uk/%7Ephlrpc/Perspectives%20on%20Ontology.htm">conference website</a> is now up.]<br /></p>Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com2tag:blogger.com,1999:blog-6432111.post-13722478082583470062008-02-22T15:53:00.003+00:002008-02-22T17:08:43.752+00:00"Supervaluationism": the wordI've got progressively more confused over the years about the word "supervaluations". It seems lots of people use it in slightly different ways. I'm going to set out my understanding of some of the issues, but I'm very happy to be contradicted---I'm really in search of information here.<br /><br />The first occurrence I know of is van Fraassen's treatment of empty names in a 1960's JP article. IIRC, the view there is that language comes with a partial intended interpretation function, specifying the references of non-empty names. 
When figuring out what is true in the language, we look at what is true on all the full interpretations that extend the intended partial interpretation. And the result is that "Zeus is blue" will come out neither true nor false, because on some completions of the intended interpretation the empty name "Zeus" will designate a blue object, and on others it won't.<br /><br />So that gives us one meaning of a "supervaluation": a certain technique for defining truth simpliciter out of the model-theoretic notions of truth-relative-to-an-index. It also, so far as I can see, closes off the question of how truth and "supertruth" (=truth on all completions) relate. Supervaluationism, in this original sense, just is the thesis that truth simpliciter should be defined as truth-on-all-interpretations. (Of course, one could argue against supervaluationism in this sense by arguing against the identification; and one could also consistently with this position argue for the ambiguity view that "truth" is ambiguous between supertruth and some other notion---but what's not open is to be a supervaluationist and deny that supertruth is truth in any sense.)<br /><br />Notice that there's nothing in the use of supervaluations in this sense that enforces any connection to "semantic theories of vagueness". But the technique is obviously suggestive of applications to indeterminacy. So, for example, Thomason in 1970 uses the technique within an "open future" semantics. The idea there is that the future is open between a number of currently-possible histories. And what is true simpliciter is what happens on all these histories.<br /><br />In 1975, Kit Fine published a big and technically sophisticated article mapping out a view of vagueness arising from partially assigned meanings, that used among other things supervaluational techniques.
Roughly, the basic move was to assign each predicate an extension (the set of things to which it definitely applies) and an anti-extension (the set of things to which it definitely doesn't apply). An interpretation is "admissible" only if it assigns a set of objects to a predicate that is a superset of the extension, and which doesn't overlap the anti-extension. There are other constraints on admissibility too: so-called "penumbral connections" have to be respected.<br /><br />Now, Fine does lots of clever stuff with this basic setup, and explores many options (particularly in dealing with "higher-order" vagueness). But one thing that's been very influential in the folklore is the idea that based on the sort of factors just given, we can get our hands on a set of "admissible" fully precise classical interpretations of the language.<br /><br />Now the supervaluationist way of working with this would tell you that truth=truth on each admissible interpretation, and falsity=falsity on all such interpretations. But one needn't be a supervaluationist in this sense to be interested in all the interesting technologies that Fine introduces, or the distinctive way of thinking about semantic indecision he develops. The supervaluational bit of all this refers only to one stage of the whole process---the step from identifying a set of admissible interpretations to the definition of truth simpliciter.<br /><br />However, "supervaluationism" has often, I think, been identified with the whole Finean programme. In the context of theories of vagueness, for example, it is often used to refer to the idea that vagueness or indeterminacy arises as a matter of some kind of unsettledness as to which precise extensions our expressions pick out ("semantic indecision").
But even if the topic is indeterminacy, the association with *semantic indecision* wasn't part of the original conception of supervaluations---Thomason's use of them in his account of indeterminacy about future contingents illustrates that.<br /><br />If one understands "supervaluationism" as tied up with the idea of semantic indecision theories of vagueness, then it does become a live issue whether one should identify truth with truth on all admissible interpretations (Fine himself raises this issue). One might think that the philosophically motivated semantic machinery of partial interpretations, penumbral connections and admissible interpretations is best supplemented by a definition of truth in the way that the original VF-supervaluationists favoured. Or one might think that truth-talk should be handled differently, and that the status of "being true on all admissible assignments" shouldn't be identified with truth simpliciter (say because the disquotational schemes fail).<br /><br />If you think that the latter is the way to go, you can be a "supervaluationist" in the sense of favouring a semantic indecision theory of vagueness elaborated along Kit Fine's lines, without being a supervaluationist in the sense of using Van Fraassen's techniques.<br /><br />So we've got at least these two disambiguations of "supervaluationism", potentially cross-cutting:<br /><br />(A) Formal supervaluationism: the view that truth=truth on each of a range of relevant interpretations (e.g. truth on all admissible interpretations (Fine); on all completions (Van Fraassen); or on all histories (Thomason)).<br />(B) Semantic indeterminacy supervaluationism: the view that (semantic) indeterminacy is a matter of semantic indecision: there being a range of classical interpretations of the language, which, all-in, have equal claim to be the right one.<br /><br />A couple of comments on each. 
(A) of course, needs to be tightened up in each case by saying which are the relevant range of classical interpretations quantified over. Notice that a standard way of defining truth in logic books is actually supervaluationist in this sense. Because if you define what it is for a formula "p" to be true as it being true relative to all variable assignments, then open formulae which vary in truth value from variable-assignment to variable assignment end up exactly analogous to formulae like "Zeus is blue" in Van Fraassen's setting: they will be neither true nor false.<br /><br />Even when it's clear we're talking about supervaluationism in the sense of (B), there's continuing ambiguity. Kit Fine's article is incredibly rich, and as mentioned above, both philosophically and technically he goes far beyond the minimal idea that semantic vagueness has something to do with the meaning-fixing facts not settling on a single classical interpretation.<br /><br />So there's room for an understanding of "supervaluationism" in the semantic-indecision sense that is also minimal, and which does not commit itself to Fine's ideas about partial interpretations, conceptual truths as "penumbral constraints" etc. David Lewis in "Many but also one", as I read him, has this more minimal understanding of the semantic indecision view---I guess it goes back to Hartry Field's material on inscrutability and indeterminacy and "partial reference" in the early 1970's, and Lewis's own brief comments on related ideas in his (1969).<br /><br />So even if your understanding of "supervaluationism" is the (B)-sense, and we're thinking only in terms of semantic indeterminacy, then you still owe elaboration of whether you're thinking of a minimal "semantic indecision" notion a la Lewis, or the far richer elaboration of that view inspired by Fine. 
Once you've settled this, you can go on to say whether or not you're a supervaluationist in the formal, (A)-sense---and that's the debate in the vagueness literature over whether truth should be identified with supertruth.<br /><br />Finally, there's the question of whether the "semantic indecision" view (B) should be spelled out in semantic or metasemantic terms. One possible view has the meaning-fixing facts picking out not a single interpretation, but a great range of them, which collectively play the role of "semantic value" of the term. That's a semantic or "first-level" (in <a href="http://www.people.cornell.edu/pages/me72/levels.pdf">Matti Eklund</a>'s terminology) view of semantic indeterminacy. Another possible view has the meaning-fixing facts trying to fix on a single interpretation which will give the unique semantic value of each term in the language, but it being unsettled which one they favour. That's a metasemantic or "second-level" view of the case.<br /><br />If you want to complain that the second view is spelled out quite metaphorically, I've some sympathy (I think at least in some settings it can be spelled out a bit more tightly). One might also want to press the case that the distinction between semantic and metasemantic here is somewhat terminological---what we choose to label the facts "semantic" or not. Again, I think there might be something to this. There are also questions about how this relates to the earlier distinctions---it's quite natural to think of Fine's elaboration as being a paradigmatically semantic (rather than metasemantic) conception of semantic supervaluationism. It's also quite natural to take the metasemantic idea to go with a conception that is non-supervaluational in the (A) sense. (Perhaps the Lewis-style "semantic indecision" rhetoric might be taken to suggest a metasemantic reading all along---in which case it is not a good way to cash out what the common ground among (B)-theorists is.)
But there's room for a lot of debate and negotiation on these and similar points. <br /><br />Now all this is very confusing to me, and I'm sure I've used the terminology confusingly in the past. It kind of seems to me that ideally, we'd go back to using "supervaluationism" in the (A) sense (on which truth=supertruth is analytic of the notion); and that we'd then talk of "semantic indecision" views of vagueness of various forms, with their formal representation stretching from the minimal Lewis version to the rich Fine elaboration, and their semantic/metasemantic status specified. In any case, by depriving ourselves of commonly used terminology, we'd force ourselves to spell out exactly what the subject matter we're discussing is.<br /><br />As I say, I'm not sure I've got the history straight, so I'd welcome comments and corrections.Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com5tag:blogger.com,1999:blog-6432111.post-64286408998809117472008-02-22T15:44:00.002+00:002008-02-22T15:52:21.372+00:00PhloxI just found out about <a href="http://eppe.wordpress.com/">Phlox</a>, a (relatively) new weblog in philosophy of logic, language and metaphysics. It's attached to a project at Humboldt University in Berlin. As well as following the tradition of philosophy centres with <a href="http://www.st-andrews.ac.uk/%7Earche/">Greek</a> <a href="http://www.ub.es/grc_logos/">names </a>(this one means "flame", apparently) "Phlox" is a cunning acronym for the group's research interests.<br /><br />There are several really interesting posts to check out already.
Worth heading over!Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com0tag:blogger.com,1999:blog-6432111.post-25831255096695448402008-02-14T02:25:00.002+00:002008-02-14T02:40:51.571+00:00Aristotelian indeterminacy and partial beliefs<p class="MsoNormal">I’ve just finished a first draft of the <a href="http://www.personal.leeds.ac.uk/%7Ephljrgw/wip/AristotelianismBelief.pdf">second paper of my research leave</a>---title the same as this post. There’s a few different ways to think about this material, but since I hadn't posted for a while I thought I'd write up something about how it connects with/arises from some earlier concerns of mine.<br /></p> <p class="MsoNormal">The paper I’m working on ends up with arguments against standard “Aristotelian” accounts of the open future, and standard supervaluational accounts of vague survival. But one starting point was an abstract question in the philosophy of logic: in what sense is standard supervaluationism supposed to be revisionary? So let's start there.<br /></p> <p class="MsoNormal">The basic result---allegedly---is that while all classical tautologies are supervaluational tautologies, certain classical rules of inference (such as reductio, proof by cases, conditional proof, etc) fail in the supervaluational setting. </p> <p class="MsoNormal">Now <a href="http://www.personal.leeds.ac.uk/%7Ephljrgw/wip/supervaluationalconsequence.pdf">I’ve argued previously</a> that one might plausibly evade even this basic form of revisionism (while sticking to the “global” consequence relation, which preserves traditional connections between logical consequence and truth-preservation). But I don’t think it’s <i style="">crazy</i> to think that global supervaluational consequence is in this sense revisionary.
I just think that it requires an often-unacknowledged premise about what should count as a logical constant (in particular, whether “Definitely” counts as one). So for now let’s suppose that there are genuine counterexamples to conditional proof and the rest. </p> <p class="MsoNormal">The standard move at this point is to declare this revisionism a problem for supervaluationists. Conditional proof, argument by cases: all these are theoretical descriptions of widespread, sensible and entrenched modes of reasoning. It is objectionably revisionary to give them up. </p> <p class="MsoNormal">Of course some philosophers quite like logical revisionism, and would want to face down the accusation that there’s anything wrong with such revisionism directly. But there’s a more subtle response available. One can admit that the <i style="">letter </i>of conditional proof, etc. is given up, but note that the pieces of reasoning we normally call “instances of conditional proof” are all covered by supervaluationally valid inference principles. So there’s no piece of <i style="">inferential practice</i> that’s thrown into doubt by the revisionism of supervaluational consequence: it seems that all that happens is that the <i style="">theoretical representation</i> of that practice has to take a slightly more subtle form than one might expect (but still quite a neat and elegant one). </p> <p class="MsoNormal">One thing I mention in that earlier paper but don’t go into is a different way of drawing out consequences of logical revisionism. Forget inferential practice and the like. Another way in which logic connects with the rest of philosophy is in connection to probability (in the sense of rational credences, or Williamson’s epistemic probabilities, or whatever).
As <a href="http://theoriesnthings.blogspot.com/2007/11/degrees-of-belief-and-logic.html">I sketched in a previous post</a>, so long as you accept a basic probability-logic constraint, which says that the probability of a tautology should be 1, and the probability of a contradiction should be 0, then the revisionary supervaluational setting quickly forces you to a non-classical theory of probability: one that allows disjunctions to have probability 1 where each disjunct has probability 0. (Maybe we shouldn't call such a thing "probability": I take it that's terminological).<br /></p> <p class="MsoNormal">Folk like Hartry Field have argued completely independently of this connection to supervaluationism that this is the <i style="">right </i>and <i style="">necessary</i> way to handle probabilities in the context of indeterminacy. I’ve heard others say, and argue, that we want something closer to classicism (maybe tweaked to allow sets of probability functions, etc). And there are Dutch Book arguments to consider in favour of the classical setting (though I think the responses to these from the perspective of non-classical probabilities are quite convincing).</p><p class="MsoNormal">I’ve got the feeling the debate is at a stand-off, at least at this level of generality. I’m particularly unmoved by people swapping intuitions about the degrees of belief it is appropriate to have in borderline cases of vague predicates, and the like (NB: I don’t think that Field ever argues from intuition like this, but others do). Sometimes introspection suggests intriguing things (for example, Schiffer makes the interesting suggestion that one’s degree of belief in a conjunction of two vague propositions typically matches one’s degree of belief in the propositions themselves). But I can’t see any real dialectical force here. In my own case, I don’t have robust intuitions about these cases.
And if I'm to go on testimonial evidence about others' intuitions, it’s just too unclear what people are reporting on for me to feel comfortable taking their word for it. I'm worried, for example, they might just be reporting the phenomenological level of confidence they have in the proposition in question: surely that needn’t coincide with one’s degree of belief in the proposition (think of an exam you are highly nervous about, but are fairly certain you will pass… your behaviour may well manifest a high degree of belief, even in the absence of phenomenological trappings of confidence). In paradigm cases of indeterminacy, it’s hard to see how to do better than this.<br /></p> <p class="MsoNormal">However, I think in application to <i style="">particular</i> debates we might be able to make much more progress. Let us suppose that the topic for the day is the open future, construed, minimally, as the claim that while there are definite facts about the past and present, the future is indefinite. </p> <p class="MsoNormal">Might we model this indefiniteness supervaluationally? Something like this idea (with possible futures playing the role of precisifications) is pretty widespread, perhaps orthodoxy (among friends of the open future). It’s a feature of MacFarlane’s relativistic take on the open future, for example. Even though he’s not a straightforward supervaluationist, he still has truth-value gaps, and he still treats them in a recognizably supervaluational-style way. </p> <p class="MsoNormal">The link between supervaluational consequence and the revisionary behaviour of partial beliefs should now kick in. For if you know with certainty that some P is neither true nor false, we can argue that you should invest no credence at all in P (or in its negation). Likewise, in a framework of evidential probabilities, P gets no evidential probability at all (nor does its negation).
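To make this constraint vivid, here is a toy model (entirely my own illustrative sketch, with my own names; not anything from the papers cited). A sentence is modelled by its truth value on each precisification or history, and counts as supertrue iff true on all of them:

```python
# Toy model: a sentence P that is true on one precisification (or history)
# and false on the other, so it is gappy: neither supertrue nor superfalse.

precisifications = [1, 2]

P = lambda i: i == 1
not_P = lambda i: not P(i)
P_or_not_P = lambda i: P(i) or not_P(i)

def supertrue(sentence):
    return all(sentence(i) for i in precisifications)

def ideal_credence(sentence):
    # The constraint just stated: an agent certain that a sentence is gappy
    # invests no credence in it; a sentence known to be supertrue gets credence 1.
    return 1 if supertrue(sentence) else 0

assert ideal_credence(P) == 0
assert ideal_credence(not_P) == 0
assert ideal_credence(P_or_not_P) == 1
```

The last line exhibits the non-classical behaviour flagged earlier: a disjunction with credence 1 whose disjuncts each get credence 0.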
</p> <p class="MsoNormal">But think what this says in the context of the open future. It’s open which way this fair coin lands: it could be heads, it could be tails. On the “Aristotelian” truth-value conception of this openness, we can know that “the coin will land heads” is gappy. So we should have credence 0 in it, and none of our evidence supports it.</p> <p class="MsoNormal">But that’s just silly. This is pretty much a paradigmatic case where we know what partial belief we have and should have in the coin landing heads: one half. And our evidence gives exactly that too. No amount of fancy footwork and messing around with the technicalities of Dempster-Shafer theory leads to a sensible story here, as far as I can see. It’s just plainly the wrong result. (One doesn't improve matters very much by relaxing the assumptions, e.g. taking the degree of belief in a failure of bivalence in such cases to fall short of one: you can still argue for a clearly incorrect degree of belief in the heads-proposition).<br /></p> <p class="MsoNormal">Where does that leave us? Well, you might reject the logic-probability link (I think that’d be a bad idea). Or you might try to argue that supervaluational consequence isn’t revisionary in any sense (I sketched one line of thought in support of this in the paper cited). You might give up on it being indeterminate which way the coin will land---i.e. deny the open future, a reasonably popular option.
My own favoured reaction, in moods when I’m feeling sympathetic to the open future, is to go for a treatment of metaphysical indeterminacy where bivalence can continue to hold---my colleague Elizabeth Barnes has been advocating such a framework for a while, and it’s taken a long time for me to come round.</p> <p class="MsoNormal">All of these reactions will concede the broader point---that at least in this case, we’ve got an independent grip on what the probabilities should be, and that gives us traction against the supervaluationist. </p> <p class="MsoNormal">I think there are other cases where we can find similar grounds for rejecting the structure of partial beliefs/evidential probabilities that supervaluational logic forces upon us. One is simply a case where empirical data on folk judgements has been collected---in connection with indicative conditionals. I talk about this in some other work in progress <a href="http://www.personal.leeds.ac.uk/%7Ephljrgw/wip/vagcond.pdf">here</a>. Another which I talk about in the current paper, and which I’m particularly interested in, concerns cases of indeterminate survival. The considerations here are much more involved than in the indeterminacy we find in connection with the open future or conditionals. But I think the case against the sort of partial beliefs supervaluationism induces can be made out.</p><p class="MsoNormal">All these results turn on very local issues. None, so far as I see, generalizes to the case of paradigmatic borderline cases of baldness and the rest. I think that makes the arguments even more interesting: potentially, they can serve as a kind of diagnostic: this style of theory of indeterminacy is suitable over here; that theory over there.
That’s a useful thing to have in one’s toolkit.</p>Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com3tag:blogger.com,1999:blog-6432111.post-86155683390753391172007-12-18T17:13:00.000+00:002007-12-21T09:44:50.362+00:00Structured propositions and metasemanticsHere is the final post (for the time being) on structured propositions. As promised, this is to be an account of the truth-conditions of structured propositions, presupposing a certain reasonably contentious take on the metaphysics of linguistic representation (metasemantics). It's going to be compatible with the view that structured propositions are nothing but certain n-tuples: lists of their components. (See earlier posts if you're getting concerned about other factors, e.g. the potential arbitrariness in the choice of which n-tuples are to be identified with the structured proposition that Dummett is a philosopher.)<br /><br />Here's a very natural way of thinking of what the relation between *sentences* and truth-conditions is, on a structured propositions picture. It's that metaphysically, the relation of "S having truth-conditions C" breaks down into two more fundamental relations: "S denoting struc prop p" and "struc prop p having truth-conditions C". The thought is something like: primarily, sentences express thoughts (=struc propositions), and thoughts themselves are the sorts of things that have intrinsic/essential representational properties. Derivatively, sentences are true or false of situations, by expressing thoughts that are true or false of those situations.
As I say, it's a natural picture.<br /><br />In the previous posting, I've been talking as though this direction-of-explanation was ok, and that the truth-conditions of structured propositions should have explanatory priority over the truth-conditions of sentences, so we get the neat separation into the contingent feature of linguistic representation (which struc prop a sentence latches onto) and the necessary feature (what the TCs are, given the struc prop expressed).<br /><br />The way I want to think of things, something like the reverse holds. Here’s the way I think of the metaphysics of linguistic representation. In the beginning, there were patterns of assent and dissent. Assent to certain sentences is systematically associated with certain states of the world (coarse-grained propositions, if you like) perhaps by conventions of truthfulness and trust (cf. Lewis's "Language and Languages"). What it is for expressions E in a communal language to have semantic value V is for E to be paired with V under the optimally eligible semantic theory fitting with that association of sentences with coarse-grained propositions.<br /><br />That's a lot to take in all at one go, but it's basically the picture of linguistic representation as fixed by considerations of charity/usage and eligibility/naturalness that lots of people at the moment seem to find appealing. The most striking feature---which it shares with other members of the "radical interpretation" approach to metasemantics---is that rather than starting from the referential properties of lexical items like names and predicates, it depicts linguistic content as fixed holistically by how well it meshes with patterns of usage. 
(There's lots to say here to unpack these metaphors, and work out what sort of metaphysical story of representation is being appealed to: that's something I went into quite a bit in my thesis---my take on it is that it's something close to a fictionalist proposal).<br /><br />This metasemantics, I think, should be neutral between various semantic frameworks for generating the truth conditions. With a bit of tweaking, you can fit a Davidsonian T-theoretic semantic theory into this picture (as suggested by, um... Davidson). Someone who likes interpretational semantics but isn't a fan of structured propositions might take the semantic values of names to be objects, and the semantic values of sentences to be coarse-grained propositions, and say that it's these properties that get fixed via the best semantic theory of the patterns of assent and dissent (that's Lewis's take).<br /><br />However, if you think that to adequately account for the complexities of natural language you need a more sophisticated, structured-proposition theory, this story also allows for it. The meaning-fixing semantic theory assigns objects to names, and structured propositions to sentences, together with a clause specifying how the structured propositions are to be paired up with coarse-grained propositions. Without the second part of the story, we'd end up with an association between sentences and structured propositions, but we wouldn't make contact with the patterns of assent and dissent if these take the form of associations of sentences with *coarse grained* propositions (as on Lewis's convention-based story). So on this radical interpretation story where the targeted semantic theories take a struc prop form, we get a simultaneous fix on *both* the denotation relation between sentences and struc props, and the relation between struc props and coarse-grained truth-conditions.<br /><br />Let's indulge in a bit of "big-picture" metaphor-ing. 
It’d be misleading to think of this overall story as the analysis of sentential truth-conditions into a prior, and independently understood, notion of the truth-conditions of structured propositions, just as it's wrong on the radical interpretation picture to think of sentential content as "analyzed in terms of" a prior, and independently understood, notion of subsentential reference. Relative to the position sketched, it’s more illuminating to think of the pairing of structured and coarse-grained propositions as playing a purely instrumental role in smoothing the theory of the representational features of language. It's language which is the “genuine” representational phenomenon in the vicinity: the truth-conditional features attributed to struc propositions are a mere byproduct.<br /><br />Again speaking metaphorically, it's not that sentences get to have truth-conditions in a merely derivative sense. Rather, structured propositions have truth-conditions in a merely derivative sense: the structured proposition has truth-conditions C if it is paired with C under the optimal overall theory of linguistic representation.<br /><br />For all we've said, it may turn out that the same assignment of truth-conditions to set-theoretic expressions will always be optimal, no matter which language is in play. If so, then it might be that there's a sense in which structured propositions have "absolute" truth-conditions, not relative to this or that language. But, realistically, one would expect some indeterminacy in which struc props play the role (recall the Benacerraf point King makes, and the equal fitness of [a,F] and [F,a] to play that "that a is F" role). And it's not immediately clear why the choice to go one way for one natural language should constrain the way this element is deployed in another language. 
So it's at least prima facie open that it's not definitely the case that the same structured propositions, with the same TCs, are used in the semantics of both French and English.<br /><br />It's entirely in the spirit of the current proposal that we identify [a,F] with the structured proposition that a is F only relative to a given natural language, and that this creature only has the truth-conditions it does relative to that language. This is all of a piece with the thought that the structured proposition's role is instrumental to the theory of linguistic representation, and not self-standing.<br /><br />Ok. So with all this on the table, I'm going to return to read the book that prompted all this, and try to figure out whether there's a theoretical need for structured propositions with representational properties richer than those attributed by the view just sketched.<br /><br />[update: interestingly, it turns out that King's book doesn't give the representational properties of propositions explanatory priority over the representational properties of sentences. His view is that the proposition that Dummett thinks is (very crudely, and suppressing details) the fact that in some actual language there is a sentence of (thus-and-such a structure) of which the first element is a word referring to Dummett and the second element is a predicate expressing thinking. So clearly semantic properties of words are going to be prior to the representational properties of propositions, since those semantic properties are components of the proposition. But more than this, from what I can make out, King's thought is that if there were a time when humans spoke a language without attitude-ascriptions and the like, then sentences would have truth-conditions, and the proposition-like facts would be "hanging around" them, but the proposition-like facts wouldn't have any representational role. 
Once we start making attitude ascriptions, we implicitly treat the proposition-like structure as if it had the same TCs as sentences, and (by something like a charity/eligibility story) the "propositional relation" element acquires semantic significance and the proposition-like structure gets to have truth-conditions for the first time.<br /><br />That's very close to the overall package I'm sketching above. What's significant dialectically, perhaps, is that this story can explain TCs for all sorts of apparently non-semantic entities, like sets. So I'm thinking it really might be the Benacerraf point that's bearing the weight in ruling out set-theoretic entities as struc propns---as explained previously, I don't go along with *that*.]Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com4tag:blogger.com,1999:blog-6432111.post-13542344591236462042007-12-18T13:15:00.000+00:002007-12-18T14:32:01.446+00:00Structured propositions and truth conditions.In the previous post, I talked about the view of structured propositions as lists, or n-tuples, and the Benacerraf objections against it. So now I'm moving on to a different sort of worry. Here's King expressing it:<br /><br />“A final difficulty for the view that propositions are ordered n-tuples concerns the mystery of how or why on that view they have truth conditions. On any definition of ordered n-tuples we are considering, they are just sets. Presumably, many sets have no truth conditions (eg. The set of natural numbers). But then why do certain sets, certain ordered n-tuples, have truth-conditions? Since not all sets have them, there should be some explanation of why certain sets do have them. It is very hard to see what this explanation could be.”<br /><br />I feel the force of something in this vicinity, but I'm not sure how to capture the worry. 
In particular, I'm not sure whether it's right to think of structured propositions' having truth-conditions as a particularly "deep" fact over which there is mystery in the way King suggests. To get what I'm after here, it's probably best simply to lay out a putative account of the truth-conditions of structured propositions, and just to think about how we'd formulate the explanatory challenge.<br /><br />Suppose, for example, one put forward the following sort of theory:<br /><br />(i) The structured proposition that Dummett is a philosopher = [Dummett, being a philosopher].<br />(ii) [Dummett, being a philosopher] stands in the T relation to w iff Dummett is a philosopher according to w.<br />(iii) bearing the T-relation to w=being true at w<br /><br />Generalizing,<br /><br />(i) For all a, F, the structured proposition that a is F = [a, F]<br />(ii) For all individuals a, and properties F, [a, F] stands in the T relation to w iff a instantiates F according to w.<br />(iii) bearing the T-relation to w=being true at w<br /><br />In full generality, I guess we’d semantically ascend for an analogue of (i), and give a systematic account of which structured propositions are associated with which English sentences (presumably a contingent matter). For (ii), we’d give a specification (which there’s no reason to make relative to any contingent facts) about which ordered n-tuples stand in the T-relation to which worlds. (iii) can stay as it is.<br /><br />The naïve theorist may then claim that (ii) and (iii) amount to a <i style="">reductive account</i> of what it is for a structured proposition to have truth-conditions. Why does [1,2] not have any truth-conditions, but [Dummett, being a philosopher] does? Because the story about <i style="">what it is</i> for an ordered pair to stand in the T-relation to a given world just doesn’t return an answer where the second component isn’t a property. This seems like a totally cheap and nasty response, I’ll admit. 
But what’s wrong with it? If that’s what truth-conditions for structured propositions <i style="">are</i>, then what’s left to explain? It doesn't seem that there is any mystery over (ii): this can be treated as a reductive definition of the new term "bearing the T-relation". Are there somehow <span style="font-style: italic;">explanatory</span> challenges facing someone who endorses the property-identity (iii)? Quite generally, I don't see how <span style="font-style: italic;">identities </span>could be the sort of thing that need explaining.<br /><br />(Of course, you might semantically ascend and get a decent explanatory challenge: why should "having truth conditions" refer to the T-relation? But I don't really see any in-principle problem with addressing this sort of challenge in the usual ways: just by pointing to the fact that the T-relation is a reasonably natural candidate satisfying platitudes associated with truth-condition talk.)<br /><br />I'm not being willfully obstructive here: I'm genuinely interested in what the dialectic should be at this point. I've got a few ideas about things one might say to bring out what's wrong with the flat-footed response to King's challenge. But none of them persuades me.<br /><br />Some options:<br /><br />(a) Earlier, we ended up claiming that it was indefinite what sets structured propositions were identical with. But now, we’ve given a definition of truth-conditions that is <i style="">committal</i> on this front. For example, [F,a] was supposed to be a candidate precisification of the proposition that a is F. But (ii) won’t assign it truth conditions, since the second component isn’t a property but an individual.<br /><br />Reply: just as it was indefinite what the structured propositions were, it is indefinite what sets have truth-conditions, and what the specification of those truth-conditions is. The two kinds of indefiniteness are “penumbrally connected”. 
On a precisification on which the prop that a is F=[a,F], the clause holds as above; but on a precisification on which the prop that a is F=[F,a], a slightly twisted version of the clause will hold. But no matter how we precisify structured proposition-talk, there will be <i style="">a </i>clause defining the truth-conditions for the entities that we end up identifying with structured propositions.<br /><br />(b) You can’t just offer definitional clauses or “what it is” claims and think you’ve evaded all explanatory duties! What would we think of a philosopher of mind who put forward a reductive account whereby pain-qualia were <i style="">by definition</i> just some characteristics of C-fibre firing, and then smugly claimed to have no explanatory obligations left?<br /><br />Reply: one presupposition of the above is that clauses like (ii) “do the job” of truth-conditions for structured propositions, i.e. there won’t be a structured proposition (by the lights of (i)) whose assigned “truth-conditions” (by the lights of (ii)) go wrong. So whatever else happens, the T-relation (defined via (ii)) and the truth-at relation we’re interested in have a sort of constant covariation (and, unlike the attempt to use a clause like (ii) to define truth-conditions for sentences, we won’t get into trouble when we vary the language use and the like across worlds, so the constant covariation is modally robust). The equivalent assumption in the mind case is that pain qualia and the candidate aspect of C-fibre firing are necessarily constantly correlated. Under those circumstances, many would think we <i style="">would be</i> entitled to identify pain qualia and the physicalistic underpinning. Another way of putting this: worries about the putative “explanatory gap” between pain-qualia and physical states are often argued to manifest themselves in a merely contingent correlation between the former and the latter. 
And that’d mean that any attempt to claim that pain qualia <i style="">just are </i>thus-and-such physical state would be objectionable on the grounds that pain qualia and the physical state come apart in other possible worlds.<br />In the case of the truth-conditions of structured propositions, nothing like this seems in the offing. So I don’t see a parody of the methodology recommended here. Maybe there is some residual objection lurking: but if so, I want to hear it spelled out.<br /><br />(c) Truth-conditions aren’t the sort of thing that you can just define up as you please for the special case of structured propositions. Representational properties are the sort of things possessed by structured propositions, token sentences (spoken or written) of natural language, tokens of mentalese, pictures and the rest. If truth-conditions <i style="">were</i> just the T-relation defined by clause (ii), then sentences of mentalese and English, pictures etc. couldn’t have truth-conditions. Reductio.<br /><br />Reply: it’s not clear at all that sentences and pictures “have truth-conditions” in the same sense as do structured propositions. It fits very naturally with the structured-proposition picture to think of sentences standing in some “denotation” relation to a structured proposition, through which they may be said to <i style="">derivatively</i> have truth-conditions. What we mean when we say that ‘S has truth conditions C’ is that S denotes some structured proposition p and p has truth-conditions C, in the sense defined above. For linguistic representation, at least, it’s fairly plausible that structured propositions can act as a one-stop-shop for truth-conditions.<br /><br />Pictures are a trickier case. Presumably they can represent situations accurately or non-accurately, and so it might be worth theorizing about them by associating them with a coarse-grained proposition (the set of worlds in which they represent accurately). 
But presumably, in a painting that represents Napoleon’s defeat at Waterloo, there don’t need to be separable elements corresponding to Napoleon, Waterloo, and <i style="">being defeated at</i>, which’d make for a neat association of the picture with a structured proposition, in the way that sentences are neatly associated with such things. Absent some kind of denotation relation between pictures and structured propositions, it’s not so clear whether we can derivatively define truth-conditions for pictures as the compound of the denotation relation and the truth-condition relation for structured propositions.<br /><br />None of this does anything to suggest that we can’t give an ok story about pairing pictures with (e.g.) coarse-grained propositions. It’s just that the relation between structured propositions and coarse-grained propositions (=truth conditions) and the relation between pictures and coarse-grained propositions can’t be <i style="">the same one</i>, on this account, and nor is it even obvious how the two are <i style="">related</i> (unlike e.g. the sentence/structured proposition case).<br />So one thing that may cause trouble for the view I’m sketching is if we have both the following: (A) there is a <i style="">unified</i> representation relation, such that pictures/sentences/structured propositions stand in the same (or at least, intimately related) representation relations to C. (B) there’s no story about pictorial (and other) representations that routes via structured propositions, and so no hope of a unified account of representation given (ii)+(iii).<br /><br />The problem here is that I don’t feel terribly uncomfortable denying (A) and (B). 
But I can imagine debate on this point, so at least here I see some hope of making progress.<br /><br />Having said all this in defence of (ii), I think there are other ways for the naïve, simple set-theoretic account of structured propositions to defend itself that don't look quite so flat-footed. But the ways I’m thinking of depend on some rather more controversial metasemantic theses, so I’ll split that off into a separate post. It’d be nice to find out what’s wrong with this, the most basic and flat-footed response I can think of.<p></p>Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com7tag:blogger.com,1999:blog-6432111.post-10587069207779623182007-12-18T12:59:00.000+00:002007-12-18T14:43:52.088+00:00Structured propositions and BenacerrafI’ve recently been reading Jeff King’s book on structured propositions. It’s really good, as you would expect. There’s one thing that’s bothering me though: I can’t quite get my head around what’s wrong with the simplest, most naïve account of the nature of propositions. (Disclaimer: this might all turn out to be very simple-minded to those in the know. I'd be happy to get pointers to the literature (hey, maybe it'll be to bits of Jeff's book I haven't got to yet...)<br /><br />The first thing you encounter when people start talking about structured propositions is notation like [Dummett, being a philosopher]. This is supposed to stand for the proposition that Dummett is a philosopher, and highlights the fact that (on the Russellian view) Dummett and the property of being a philosopher are components of the proposition. The big question is supposed to be: what do the brackets and comma represent? What sort of compound object is the proposition? In what sense does it have Dummett and being a philosopher as components? 
(If you prefer a structured intension view, so be it: then you’ll have a similar beast with the individual concept of Dummett and the worlds-intension associated with “is a philosopher” as ‘constituents’. I’ll stick with the Russellian view for illustrative purposes.)<br /><br />For purposes of modelling propositions, people often interpret the commas and brackets as the ordered n-tuples of standard set theory. The simplest, most naïve interpretation of what structured propositions are is simply to identify them with n-tuples. What’s the structured proposition itself? It’s a certain kind of set. In what sense are Dummett and the property of being a philosopher constituents of the structured proposition that Dummett is a philosopher? They’re elements of the transitive closure of the relevant set.<br /><br />So all that is nice and familiar. So what’s the problem? In his ch. 1 (and, in passing, in the SEP article <a href="http://plato.stanford.edu/entries/propositions-structured/">here</a>) King mentions two concerns. In this post, I’ll just set the scene by talking about the first. It's a version of a famous Benacerraf worry, which anyone with some familiarity with the philosophy of maths will have come across (King explicitly makes the comparison). The original Benacerraf puzzle is something like this: suppose that the only abstract things are set-like, and whatever else they may be, the referents of arithmetical terms should be abstract. Then numerals will stand for some set or other. But there are all sorts of things that behave like the natural numbers within set theory: the constructions known as the (finite) Zermelo ordinals (null, {null}, {{null}}, {{{null}}}...) and the (finite) von Neumann ordinals (null, {null}, {null,{null}}…) are just two. So there’s no non-arbitrary theory of which sets the natural numbers are.<br /><br />The phenomenon crops up all over the place. Think of ordered n-tuples themselves. 
Famously, within an ontology of unordered sets, you can define up things that behave like ordered pairs: either [a,b]={{a},{a,b}} or [a,b]={{{a},null},{{b}}}. (For details see <a href="http://en.wikipedia.org/wiki/Ordered_pair">http://en.wikipedia.org/wiki/Ordered_pair</a>). It appears there’s no non-arbitrary reason to prefer a theory that ‘reduces’ ordered to unordered pairs one way or the other.<br /><br />Likewise, says King, there looks to be no non-arbitrary choice of set-theoretic representation of structured propositions (not even if we spot ourselves ordered sets as primitive to avoid the familiar ordered-pair worries). Sure, we *could* associate the words “the proposition that Dummett is a philosopher” with the ordered pair [Dummett, being a philosopher]. But we could also associate it with the set [being a philosopher, Dummett] (and choices multiply when we get to more complex structured propositions). <br /><br />One reaction to the Benacerrafian challenge is to take it to be a decisive objection to an ontological story about numbers, ordered pairs or whatever that allows only unordered sets as a basic mathematical ontology. My own feeling is (and this is not uncommon, I think) that this would be an overreaction. More strongly: no argument that I've seen from the Benacerraf phenomenon to this ontological conclusion seems to me to be terribly persuasive.<br /><br />What we should admit, rather, is that if natural numbers or ordered pairs are sets, it’ll be indefinite which sets they are. So, for example, [a,b]={{a},{a,b}} will be neither definitely true nor definitely false (unless we simply <span style="font-style: italic;">stipulatively define</span> the [,] notation one way or another rather than treating it as pre-theoretically understood). Indefiniteness is pervasive in natural language---everyone needs a story about how it works. And the idea is that whatever that story should be, it should be applied here. 
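As an aside, the interchangeability of the two codings is easy to check mechanically. Here's a minimal sketch (my own illustration, not King's; it just renders the Kuratowski and Wiener definitions above as Python frozensets): both codings validate the law that pairs are identical iff they agree component-wise, yet they deliver distinct sets.

```python
# Two set-theoretic codings of the ordered pair, modelled with frozensets.
# Kuratowski: [a,b] = {{a},{a,b}}    Wiener: [a,b] = {{{a},null},{{b}}}

def kuratowski(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

def wiener(a, b):
    null = frozenset()
    return frozenset({frozenset({frozenset({a}), null}),
                      frozenset({frozenset({b})})})

# Both codings satisfy the characteristic behaviour of ordered pairs:
# same components in the same order give the same pair, and order matters.
for pair in (kuratowski, wiener):
    assert pair(1, 2) == pair(1, 2)
    assert pair(1, 2) != pair(2, 1)

# ...but the two codings disagree about which set the pair "is":
assert kuratowski(1, 2) != wiener(1, 2)
```

Since everything that makes something "behave like" an ordered pair holds of both codings, nothing in that behaviour settles which set the pair is identical to: that is just the Benacerraf point in miniature.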
Maybe some theories of indefiniteness will make these sorts of identifications problematic. But prominent theories like Supervaluationism and Epistemicism have neat and apparently smooth theories of what it is we’re saying when we call that identity indefinite: for the supervaluationist, it (may) mean that “[a,b]” refers to {{a},{a,b}} on one but not all precisifications of our set-theoretic language. For the epistemicist, it means that (for certain specific principled reasons) we can’t know that the identity claim is false. The epistemicist will also maintain there’s a fact of the matter about which identity statement connecting ordered and unordered sets is true. And there’ll be some residual arbitrariness here (though we’ll probably have to semantically ascend to find it)---but if there is arbitrariness, it’s the sort of thing we’re independently committed to in order to deal with the indefiniteness rife throughout our language. If you’re a supervaluationist, then you won’t admit there’s any arbitrariness: (standardly) the identity statement is neither true nor false, so our theory won’t be committed to “making the choice”. <br /><br />If that’s the right way to respond to the general Benacerraf challenge, it’s the obvious thing to say in response to the version of that puzzle that arises for the structured-propositions case. And this sort of generalization of the indefiniteness maneuver to philosophical analysis is pretty familiar: it’s part of the standard machinery of the Lewisian hordes. Very roughly, the programme goes: figure out what you want the Fs to do, Ramsify away terms for Fs and you get a way to fix where the Fs are amidst the things you believe in: they are whatever satisfy the open sentence that you’re left with. Where there are multiple, equally good satisfiers, then deploy the indefiniteness maneuver.<br /><br />I’m not so worried on this front, for what I take to be pretty routine reasons. 
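The supervaluationist line just sketched can be given a toy rendering (again my own illustration, with hypothetical names: `precisifications` lists the admissible sharpenings of the "[,]" notation): a claim is supertrue if it holds on every precisification, superfalse if it holds on none, and otherwise indefinite. The identity "[1,2]={{1},{1,2}}" then comes out indefinite, while the trivial "[1,2]=[1,2]" is supertrue.

```python
# Toy supervaluationist semantics for the indefinite identity
# "[a,b] = {{a},{a,b}}". Each precisification sharpens the pair
# notation to one particular set-theoretic coding.

def kuratowski(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

def wiener(a, b):
    return frozenset({frozenset({frozenset({a}), frozenset()}),
                      frozenset({frozenset({b})})})

precisifications = [kuratowski, wiener]

def status(claim):
    """Classify a claim by quantifying over all precisifications."""
    verdicts = [claim(p) for p in precisifications]
    if all(verdicts):
        return "supertrue"
    if not any(verdicts):
        return "superfalse"
    return "indefinite"

# "[1,2] = {{1},{1,2}}": true on the Kuratowski sharpening, false on Wiener,
# so neither supertrue nor superfalse.
identity = lambda pair: pair(1, 2) == frozenset({frozenset({1}), frozenset({1, 2})})
print(status(identity))                                 # indefinite

# "[1,2] = [1,2]" holds however we sharpen the notation.
print(status(lambda pair: pair(1, 2) == pair(1, 2)))    # supertrue
```

On this rendering the theory itself never "makes the choice" between codings, which is the sense in which the supervaluationist avoids residual arbitrariness at the object level.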
But there’s a second challenge King raises for the simple, naïve theory of structured propositions, which I think is trickier. More on this anon.Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com0tag:blogger.com,1999:blog-6432111.post-74051185881390040222007-12-12T02:47:00.000+00:002007-12-12T15:48:44.452+00:00Public service announcements (updated)<span style="font-size:85%;">There are some interesting conferences being announced these days. A couple have caught my eye/been brought to my attention.<br /><br />First is the <a href="http://www.st-andrews.ac.uk/%7Earche/spe/call.shtml">Semantics and Philosophy in Europe</a> CFP. This looks like a really excellent event... one of those events where I think: If I'm not there, I'll be regretting not being there...<br /><br />The second event is the 2008 Wittgenstein Symposium. Its remit seems far wider than the name might suggest... looks like a funky set of topics. I reproduce the CFP below...<br /><br />[Update: a third is a one-day conference on the philosophy of mathematics in Manchester. Announcement at the bottom of the post.]</span><span style="font-size:85%;"></span><br /><span style="font-size:85%;"><br /><br />CALL FOR PAPERS:<br />31st International Wittgenstein Symposium 2008 on<br /><br /> Reduction and Elimination in Philosophy and the Sciences<br /><br />Kirchberg am Wechsel, Austria, 10-16 August 2008<br /><<a href="https://outlook.leeds.ac.uk/exchweb/bin/redir.asp?URL=http://www.alws.at/" target="_blank">http://www.alws.at/</a>><br /><br /><br />INVITED SPEAKERS:<br />William Bechtel, Ansgar Beckermann, Johan van Benthem, Alexander Bird, Elke<br />Brendel, Otavio Bueno, John P. 
Burgess, David Chalmers, Igor Douven, Hartry<br />Field, Jerry Fodor, Kenneth Gemes, Volker Halbach, Stephan Hartmann, Alison<br />Hills, Leon Horsten, Jaegwon Kim, James Ladyman, Oystein Linnebo, Bernard<br />Linsky, Thomas Mormann, Carlos Moulines, Thomas Mueller, Karl-Georg<br />Niebergall, Joelle Proust, Stathis Psillos, Sahotra Sarkar, Gerhard Schurz,<br />Patrick Suppes, Crispin Wright, Edward N. Zalta, Albert Anglberger, Elena<br />Castellani, Philip Ebert, Paul Egre, Ludwig Fahrbach, Simon Huttegger,<br />Christian Kanzian, Jeff Ketland, Marcus Rossberg, Holger Sturm, Charlotte<br />Werndl.<br /><br /><br />ORGANISERS:<br />Alexander Hieke (Salzburg) & Hannes Leitgeb (Bristol),<br />on behalf of the Austrian Ludwig Wittgenstein Society.<br /><br /><br />SECTIONS OF THE SYMPOSIUM:<br />Sections:<br />1. Wittgenstein<br />2. Logical Analysis<br />3. Theory Reduction<br />4. Nominalism<br />5. Naturalism & Physicalism<br />6. Supervenience<br />Workshops:<br />- Ontological Reduction & Dependence<br />- Neologicism<br /><br />More detailed information on the contents of the sections and workshops can<br />be found in the "BACKGROUND" part further down.<br /><br /><br />DEADLINE FOR SUBMITTING PAPERS: 30th April 2008<br />Instructions for authors will soon be available at <<a href="https://outlook.leeds.ac.uk/exchweb/bin/redir.asp?URL=http://www.alws.at/" target="_blank">http://www.alws.at/</a>>.<br />All contributions will be peer-reviewed. All submitted papers accepted for<br />presentation at the symposium will appear in the Contributions of the ALWS<br />Series. 
Since 1993, successive volumes in this series have appeared each<br />August immediately prior to the symposium.<br /><br /><br />FINAL DATE FOR REGISTRATION: 20th July 2008<br />Further information on registration forms and information on travel and<br />accommodation will be posted at <<a href="https://outlook.leeds.ac.uk/exchweb/bin/redir.asp?URL=http://www.alws.at/" target="_blank">http://www.alws.at/</a>>.<br /><br /><br />SCHEDULE OF THE SYMPOSIUM:<br />The symposium will take place in Kirchberg am Wechsel (Austria) from 10-16<br />August 2008. Sunday, 10th of August 2008 is supposed to be the day on which<br />speakers and conference participants are going to arrive and when they<br />register in the conference office. In the evening, we plan on having an<br />informal get together. On the next day (11 August, 10:00am) the first<br />official session of presentations will start with Professor Jerry Fodor's<br />opening lecture of the symposium. The symposium will end officially in the<br />afternoon of 16 August 2008.<br /><br /><br />BACKGROUND:<br />Philosophers often have tried to either reduce "disagreeable" entities or<br />concepts to (more) acceptable entities or concepts, or to eliminate the<br />former altogether. Reduction and elimination, of course, very often have to<br />do with the question of "What is really there?", and thus these notions<br />belong to the most fundamental ones in philosophy. But the topic is not<br />merely restricted to metaphysics or ontology. 
Indeed, there are a variety<br />of attempts at reduction and elimination to be found in all areas (and<br />periods) of philosophy and science.<br /><br />The symposium is intended to deal with the following topics (among others):<br /><br />- Logical Analysis: The logical analysis of language has long been regarded<br />as the dominating paradigm for philosophy in the modern analytic tradition.<br />Although the importance of projects such as Frege's logicist construction<br />of mathematics, Russell's paraphrasis of definite descriptions, and<br />Carnap's logical reconstruction and explicatory definition of empirical<br />concepts is still acknowledged, many philosophers now doubt the viability<br />of the programme of logical analysis as it was originally conceived.<br />Notorious problems such as those affecting the definitions of knowledge or<br />truth have led to the revival of "non-analysing" approaches to<br />philosophical concepts and problems (see e.g. Williamson's account of<br />knowledge as a primitive notion and the deflationary criticism of Tarski's<br />definition of truth). What role will -- and should -- logical analysis play<br />in philosophy in the future?<br /><br />- Theory Reduction: Paradigm cases of theory reduction, such as the<br />reduction of Kepler's laws of planetary motion to Newtonian mechanics or<br />the reduction of thermodynamics to the kinetic theory of gases, prompted<br />philosophers of science to study the notions of reduction and reducibility<br />in science. Nagel's analysis of reduction in terms of bridge laws is the<br />classical example of such an attempt. However, those early accounts of<br />theory reduction were soon found to be too naive and their underlying<br />treatment of scientific theories unrealistic. What are the state-of-the-art<br />proposals on how to understand the reduction of a scientific theory to<br />another? What is the purpose of such a reduction? 
In which cases should we<br />NOT attempt to reduce a theory to another one?<br /><br />- Nominalism: Traditionally, nominalism is concerned with denying the<br />existence of universals. Modern versions of nominalism object to abstract<br />entities altogether; in particular they attack the assumption that the<br />success of scientific theories, especially their mathematical components,<br />commit us to the existence of abstract objects. As a consequence,<br />nominalists have to show how the alleged reference to abstract entities can<br />be eliminated or is merely apparent (Field's Science without Numbers is<br />prototypical in this respect). What types of "Constructive Nominalism" (a<br />la Goodman & Quine) are there? Are there any principal obstacles for<br />nominalistic programmes in general? What could nominalistic accounts of<br />quantum theory or of set theory look like?<br /><br />- Naturalism & Physicalism: Naturalism and physicalism both want to<br />eliminate the part of language that does not refer to the "natural facts"<br />that science -- or indeed physics -- describes. Metaphysical Naturalism<br />often goes hand in hand with (or even entails) an epistemological<br />naturalism (Quine) as well as an ethical naturalism (mainly defined by its<br />critics), so that also these two main disciplines of philosophy should<br />restrict their investigations to the world of natural facts. Physicalist<br />theses, of course, play a particularly important role in the philosophy of<br />mind, since neuroscientific findings seem to support the view that,<br />ultimately, the realm of the mental is but a part of the physical world.<br />Which forms of naturalism and physicalism can be maintained within<br />metaphysics, philosophy of science, epistemology and ethics? What are the<br />consequences for philosophy when such views are accepted? Is philosophy a<br />scientific discipline? 
If naturalism or physicalism is right, can we still<br />see ourselves as autonomous beings with morality and a free will?<br /><br />- Supervenience: Mental, moral, aesthetic, and even "epistemological"<br />properties have been said to supervene on properties of a particular kind,<br />e.g., physical properties. Supervenience is claimed to be neither reduction<br />nor elimination but rather something different, yet all these notions still<br />belong to the same family, and sometimes it is even assumed that reduction<br />is a borderline case of supervenience. What are the most abstract laws that<br />govern supervenience relations? Which contemporary applications of the<br />notion of supervenience are philosophically successful in the sense that<br />they have more explanatory power than "reductive theories" without leading<br />to unwanted semantical or ontological commitments? What are the logical<br />relations between the concepts of supervenience, reduction, elimination,<br />and ontological dependence?<br /><br />The symposium will also include two workshops on:<br /><br />- Ontological Reduction & Dependence: Reducing a class of entities to<br />another one has always been regarded as attractive by those who subscribe to<br />an ideal of ontological parsimony. On the other hand, what it is that gets<br />reduced ontologically (objects or linguistic items?), what it means to be<br />reduced ontologically, and which methods of reduction there are, is<br />controversial (to say the least). Apart from reducing entities to further<br />entities, metaphysicians sometimes aim to show that entities depend<br />ontologically on other entities; e.g., a colour sensation instance would<br />not exist if the person having the sensation did not exist. 
In other<br />philosophical contexts, entities are rather said to depend ontologically on<br />other entities if the individuation of the former involves the latter; in<br />this sense, sets might be regarded as depending on their members, and<br />mathematical objects would depend on the mathematical structures they are<br />part of. Is there a general formal framework in which such notions of<br />ontological reduction and dependency can be studied more systematically? Is<br />ontological reduction really theory reduction in disguise? How shall we<br />understand the ontological dependency of objects which exist necessarily? How<br />do reduction and dependence relate to Quine's notion of ontological<br />commitment?<br /><br />- Neologicism: Classical Logicism aimed at deriving every true mathematical<br />statement from purely logical truths by reducing all mathematical concepts<br />to logical ones. As Frege's formal system proved to be inconsistent, and<br />modern set theory seemed to require strong principles of a genuinely<br />mathematical character, the programme of Logicism was long regarded as<br />dead. However, in the last twenty years neologicist and neo-Fregean<br />approaches in the philosophy of mathematics have experienced an amazing<br />revival (Wright, Boolos, Hale). Abstraction principles, such as Hume's<br />principle, have been suggested to support a logicist reconstruction of<br />mathematics in view of their quasi-analytical status. Do we have to<br />reconceive the notion of reducibility in order to understand in what sense<br />Neologicism is able to reduce mathematics to logic (as Linsky & Zalta have<br />suggested recently)? What are the abstraction principles that govern<br />mathematical theories apart from arithmetic (in particular: calculus and<br />set theory)? How can Neo-Fregeanism avoid the logical and philosophical<br />problems that affected Frege's original system -- cf. 
the problems of<br />impredicativity and Bad Company?<br /><br /><br />If you know philosophers or scientists, especially excellent graduate<br />students, who might be interested in the topic of Reduction and Elimination<br />in Philosophy and the Sciences, we would be very grateful if you could<br />point them to the symposium.<br /><br />With best wishes,<br /><br />Alexander Hieke and Hannes Leitgeb<br /><br /><br />********************************************************************************************<br /><br /></span><span style="font-size:85%;">Announcing a one-day conference....<br /><br />Metaphysics and Epistemology: Issues in the Philosophy of Mathematics<br />Saturday 15 March 2008<br /><br />Chancellors Hotel and Conference Centre, University of Manchester<br /><br />Speakers to include:<br /><br />Joseph Melia (University of Leeds)<br />Alexander Paseau (University of Oxford)<br />Philip Ebert (University of Stirling)<br /><br />For registration details, see<br /><a href="https://outlook.leeds.ac.uk/exchweb/bin/redir.asp?URL=http://www.socialsciences.manchester.ac.uk/disciplines/philosophy/events/conference/index.html" target="_blank">http://www.socialsciences.manchester.ac.uk/disciplines/philosophy/events/conference/index.html</a><br /><br />This conference is organised with financial support from the Royal Institute of<br />Philosophy.</span>Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com0tag:blogger.com,1999:blog-6432111.post-49131460948702352802007-12-04T12:48:00.000+00:002007-12-18T14:33:37.138+00:00Two problems of the many.Here's a paradigmatic problem of the many (Geach and Unger are the usual sources cited, but I'm not claiming this to be exactly the version they use.) Let's take a moulting cat. There are many hairs that are neither clearly attached, nor clearly unattached to the main body of the cat. Let's enumerate them 1---1000. 
Then we might consider the material objects which are the masses of cat-arranged matter that include half of the thousand hairs, and exclude the other half. There are many ways to choose the half that's included. So by this recipe we get many, many distinct masses of cat-arranged matter, differing only over hairs. The various pieces of cat-arranged matter change their properties over time in very much the way that cats do: they are now in a sitting-shape, now in a standing-shape, now in a lapping-milk shape, now in an emitting-meows configuration. They each seem to have everything intrinsically required for being a cat.<br /><br />If you're inclined to think (and I am) that a cat is a material object identical to some piece of cat-arranged matter, then the problem of the many arises: which of the various distinct pieces of cat-arranged matter is the cat? Various answers have been suggested. Some of the most obvious (though not necessarily the most sensible) are: (i) nihilism: none of the cat-candidates are cats; (ii) brutalism: exactly one is a cat, and there is a brute fact of the matter which it is; (iii) vague cat: exactly one is a cat, and it's a vague matter which it is; (iv) manyism: lots of the cat-candidates are cats.<br /><br />(By the way, (ii) and (iii) may not be incompatible, if you're an epistemicist about vagueness. And those who are fans of many-valued logics for vagueness should have a think about whether they can really support (iii). Consider the best candidates to be a cat, c1...c1000. Suppose these are each cats to an equal degree. Then "one of c1...c1000 is a cat" will standardly have a degree of truth equal to that of the disjunction, i.e. the maximum of the disjuncts, i.e. the degree of truth of "c1 is a cat". And the conjunction "all of c1...c1000 are cats" will standardly have a degree of truth equal to the minimum of the conjuncts, i.e. again the degree of truth of "c1 is a cat". 
So to the extent that the (determinately distinct) best candidates aren't all cats, to exactly that extent there's no cat among them (and since we chose the best candidates, we won't get a higher degree of truth for "the cat is present" by including extra disjuncts.) Conclusion: if you're tempted by response (iii) to the problem of the many, you've got strong reason not to go for many-valued logic. [Edit (see comments): this needs qualification. I think you've reason not to go for many-valued logics that endorse the (fairly standard, but not undeniable) max/min treatment of disjunction/conjunction; and in which the many values are linearly arranged].)<br /><br />What I'd really like to emphasize is that the above leaves open the following question: Is there a super-cat-candidate, i.e. a piece of cat-arranged matter of which every other cat-candidate is a proper part? Take the moulting-cat case above (call the cat Tibbles), and suppose that the candidates only differ over hairs. Then a potential super-cat-candidate would be the piece of matter that's maximally generous: the one that includes all 1000 not-clearly-unattached hairs. If this particular fusion isn't genuinely a cat-candidate, then it's open that if you arrange the cat-candidates by which is a part of which, you'll end up with multiple maximal cat-candidates, none of which is a part of another. Perhaps they each contain 999 hairs, but differ amongst themselves over which hair they don't include.<br /><br />If there is a super-cat-candidate, let's say the problem of the many is of type-1, and if there's no super-cat-candidate, let's say that the problem of the many is of type-2.<br /><br />My guess is that our description of cases like Tibbles simply leaves it underspecified whether the case is of type-1 or type-2. But I certainly don't see any principled reason to think that the actual cases of the POM we find around us are always of type-1. 
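For concreteness, the min/max calculation in the parenthetical above can be sketched in a few lines of Python (a toy model: the 1000 candidates and the 0.5 degrees are illustrative assumptions of mine, not part of the original case):

```python
# Toy many-valued semantics: disjunction = max of the disjuncts' degrees,
# conjunction = min, as on the standard treatment discussed above.
def disj(degrees):
    return max(degrees)

def conj(degrees):
    return min(degrees)

# Illustrative assumption: each of the 1000 best candidates c1...c1000
# is a cat to degree 0.5.
candidate_degrees = [0.5] * 1000

some_cat = disj(candidate_degrees)  # "one of c1...c1000 is a cat"
each_cat = conj(candidate_degrees)  # "all of c1...c1000 are cats"

print(some_cat, each_cat)  # 0.5 0.5: extra disjuncts don't raise the degree
```

The calculation makes the point vivid: on the max rule, adding further half-true disjuncts never pushes "some candidate is a cat" above 0.5, which is just the trouble for response (iii).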
There's certainly no a priori guarantee that the sort of criterion that rules in some things as parts of a cat won't also dismiss other things as non-parts. So for example, perhaps we can rank candidates by their degree of integration: some unintegrated parts are ok, but there's some cut-off where an object is just too unintegrated to count as a candidate. One cat-candidate includes some borderline-attached skin cells, and is to that extent unintegrated. Another cat-candidate includes some borderline-attached teeth, and is to that extent unintegrated. But plausibly the fusion that includes both skin cells and teeth is less integrated still: enough to disqualify it from being a cat-candidate. It's hard to know how to argue the case further without going deeply into feline biology, but I hope you get the sense of why type-2 POMs need to be dealt with.<br /><br />Now, one response to the standard POM is to appeal to the "maximality" allegedly built into various predicates (like "rock", "cat", "conscious" etc): things that are duplicates of rocks, but which are surrounded by extra rocky stuff, become merely parts of rocks (and so forth). There are presumably intrinsic duplicates of rocks embedded as tiny parts at the centre of large boulders: but there's no intuitive pressure to count them as rocks. Likewise a cat might survive after its limbs are destroyed by a vengeful deity, but it's unintuitive to think of the duplicate head-and-torso part of Tibbles as itself a cat-candidate. So there are some reasons, independent of paradigmatic problem of the many scenarios, to think of "cat" and "rock" etc as maximal. (For more discussion of maximality, see Ted Sider's various papers on the topic).
For we've then got an explanation for why all the others, though intrinsically qualified just like cats, aren't cats: being a cat is a maximal property, and all the rival cat-candidates are parts of the one true cat in the vicinity.<br /><br />But the type-2 problem of the many really isn't addressed by maximality as such. There's no unique super-cat-candidate in this setup, rather a range of co-maximal ones. So maximality won't save our bacon here.<br /><br />The difference between the two cases is important when we consider other things. For example, in the light of the (fairly widely accepted) maximality of "house" and "cat" and "rock" and the like, few would say that any duplicate of a house must be a house (even setting aside extrinsicality due to social setting). But there's an obvious fall back position, which is floating around the literature: that any duplicate of a house must be a (proper or improper) part of a house (holding fixed social setting etc). That is, any house possesses the property of being part of a house intrinsically (so long as we hold fixed social setting etc). And the same goes for cat: at least holding fixed biological origin, it's plausible that any cat is intrinsically at least part of a cat, and any rock is intrinsically at least part of a rock.<br /><br />These claims aren't threatened by maximality. But appealing to them in a type-2 problem of the many gets us an argument directly for response (iv): manyism. For plausibly if you took a duplicate of one of the co-maximal cat candidates, T, while eliminating from the scene those bits of matter that are not part of T but are part of one of the other co-maximal cat candidates, then you get something T* that's (determinately) a cat. And so, any duplicate of T* must be at least part of a cat. And since T is a duplicate of T*, T must be at least part of a cat. But T isn't proper part of anything that's even a cat-candidate. 
So T must itself be a cat.<br /><br />So the type-2 POM is harder to resolve than the type-1 kind. Maybe some extra weakening of the properties a cat-candidate has intrinsically is called for. Or maybe (very surprisingly) type-2 POMs never arise. But either way, more work is needed.Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com6tag:blogger.com,1999:blog-6432111.post-29850893156808569072007-11-27T13:15:00.000+00:002007-11-30T14:56:50.861+00:00Nihilism, maximality, problem of the manyDoes nihilism about ordinary things help us out with puzzles surrounding maximal properties and the problem of the many? It's hard to see how.<br /><br />First, maximal properties. Suppose that I have a rock. Surprisingly, there seem to be microphysical duplicates of the rock that are not themselves rocks. For suppose we have a microphysical duplicate of the rock (call it Rocky) that is surrounded by extra rocky stuff. Then, plausibly, the fusion of Rocky and the extra rocky stuff is the rock, and Rocky himself isn't, being out-competed for rock-status by his more extensive rival. Not being shared among duplicates, being a rock isn't intrinsic. And cases meeting this recipe can be plausibly constructed for chairs, tables, rivers, nations, human bodies, human animals and (perhaps) even human persons. Most kind-terms, in fact, look maximal and (hence) extrinsic. Sider has argued that non-sortal properties such as consciousness are likewise maximal and extrinsic.<br /><br />Second, the problem of the many. In its strongest version, suppose that we have a plenitude of candidates (sums of atoms, say) more or less equally qualified to be a table, cloud, human body or whatever. Suppose further that neither the sum nor the intersection of all these candidates is itself a candidate for being the object. 
(This is often left out of the description of the case, but (1) there seems no reason to think that the set of candidates will always be closed under summing or intersection, and (2) life is more difficult--and more interesting--if these candidates aren't around.) Which of these candidates is the table, cloud, human body or whatnot?<br /><br />What puzzles me is why nihilism---rejecting the existence of tables, clouds, human bodies or whatever---should be thought to avoid any puzzles around here. It's true that the nihilist rejects a premise in terms of which these puzzles would normally be stated. So you might imagine that the puzzles give you reason to modus tollens and reject that premise, ending up with nihilism (that's how Unger's original presentation of the POM went, if I recall). But that's no good if we can state equally compelling puzzles in the nihilist's preferred vocabulary.<br /><br />Take our maximality scenario. Nihilists allow that we have, not a rock, but some things arranged rockwise. And we now conceive of a situation where those things, arranged just as they actually are, still exist (let "Rocky" be a plural term that picks them out). But in this situation, they are surrounded by more things of a qualitatively similar arrangement. Now are the things in Rocky arranged rockwise? Don't consult intuitions at this point---"rockwise" is a term of art. The theoretical role of "rockwise" is to explain how ordinary talk is ok. If some things are in fact arranged rockwise, then ordinary talk should count them as forming a rock. So, for example, van Inwagen's paraphrase of "that's a rock" would be "those things are arranged rockwise". If we point to Rocky and say "that's a rock", intuitively we speak falsely (that underpins the original puzzle). But if the things that are Rocky are in fact arranged rockwise, then this would be paraphrased to something true. What we get is that "are arranged rockwise" expresses a maximal, extrinsic plural property. 
For a contrast case, consider "is a circle". What replaces this by nihilist lights are plural predicates like "being arranged circularly". But this seems to express a non-maximal, intrinsic plural property. I can't see any very philosophically significant difference between the puzzle as transcribed into the nihilist's favoured setting and the original.<br /><br />Similarly, consider a bunch of (what we hitherto thought were) cloud-candidates. The nihilist says that none of these exist. Still, there are things which are arranged candidate-cloudwise. Call them the As. And there are other things---differing from the first lot---which are also arranged candidate-cloudwise. Call them the Bs. Are the As or the Bs arranged cloudwise? Are there some other objects, including many but not all of the As and the Bs, that *are* arranged cloudwise? Again, the puzzle translates straight through: originally we had to talk about the relation between the many cloud-candidates and the single cloud; now we talk about the many pluralities which are arranged candidate-cloudwise, and how they relate to the plurality that is cloudwise arranged. The puzzle is harder to write down. But so far as I can see, it's still there.<br /><br />Pursuing the idea for a bit, suppose we decided to say that there were many distinct pluralities that are arranged cloudwise. Then "there are at least two distinct clouds" would be paraphrased to a truth (that there are some xx and some yy, such that not all the xx are among the yy and vice versa, such that the xx are arranged cloudwise and the yy are arranged cloudwise). But of course it's the unassertibility of this sort of sentence (staring at what looks to be a single fluffy body in the sky) that leads many to reject Lewis's "many but almost one" response to the problem of the many.<br /><br />I don't think that nihilism leaves everything dialectically unchanged. 
It's not so clear how many of the solutions people propose to the problem of the many can be translated into the nihilist's setting. And more positively, some options may seem more attractive once one is a nihilist than they did taken cold. Example: once you're going in for a mismatch between common sense ontology and what there really is, then maybe you're more prepared for the sort of linguistic-trick reconstructions of common sense that Lewis suggests in support of his "many but almost one". Going back to the case we considered above, let's suppose you think that there are many extensionally distinct pluralities that are all arranged cloudwise. Then perhaps "there are two distinct clouds" should be paraphrased, not as suggested above, but as:<br /><br />there are some xx and some yy, such that it's not the case that almost all the xx are among the yy and vice versa, and such that the xx are arranged cloudwise and the yy are arranged cloudwise.<br /><br />The thought here is that, given one is already buying into unobvious paraphrase to capture the real content of what's said, maybe the costs of putting a few extra tweaks into that paraphrase are minimal.<br /><br />Caveats: notice that this isn't to say that nihilism solves your problems; it's to say that nihilism may make it easier to accept a response that was already on the table (Lewis's "many but almost one" idea). And even this is sensitive to the details of how nihilists want to relate ordinary thought and talk to metaphysics: van Inwagen's paraphrase strategy is one such proposal, and meshes quite neatly with the Lewis idea, but it's not clear that alternatives (such as Dorr's counterfactual version) have the same benefits. So it's not the metaphysical component of nihilism that's doing the work in helping accommodate the problem of the many: it's whatever machinery the nihilist uses to justify ordinary thought and talk.<br /><br />There's one style of nihilist who might stand their ground. 
Call nihilists friendly if they attempt to say what's good about ordinary thought and talk (making use of things like "rockwise", or counterfactual paraphrases, or whatever). I'm suggesting that friendly nihilists face transcribed versions of the puzzles that everyone faces. Nihilists might, though, be unfriendly: prepared to say that ordinary thought and talk is largely false, but not to reconstruct some subsidiary norm which ordinary thought and talk meets. Friendly nihilism is an interesting position, I think. Unfriendly nihilists push the nuclear button on all attempts to sort out paradoxes statable in ordinary language. But they have at least this virtue: the puzzles they react against don't come back to bite them.<br /><br />[Update: I've been sent a couple of good references for discussions of nihilism in a similar spirit. First, Matt McGrath's paper "<a href="http://www.informaworld.com/smpp/content%7Econtent=a727404721%7Edb=all%7Eorder=page">No objects, no problem?</a>" argues that the nihilist doesn't escape statue/lump puzzles. Second, Karen Bennett has a forthcoming paper called "<span style="font-size:-1;">Composition, Colocation, and Metaontology" that resurrects problems for nihilists, including the problem of the many (though it doesn't now appear to be available online).]<br /></span>Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com2tag:blogger.com,1999:blog-6432111.post-51115225430894088852007-11-20T13:43:00.000+00:002007-11-20T16:13:57.995+00:00Logically good inference and the restFrom time to time in my papers, the putative epistemological significance of logically good inference has been cropping up. I've recently been trying to think a bit more systematically about the issues involved.<br /><br />Some terminology. Suppose that the argument "A therefore B" is logically valid. Then I'll say that reasoning from "A" is true to "B" is true is logically good. 
Two caveats: (1) the logical goodness of a piece of reasoning from X to Y doesn't mean that, all things considered, it's ok to infer Y from X. At best, the case is pro tanto: if Y were a contradiction, for example, all things considered you should give up X rather than come to believe Y; (2) I think the validity of an argument-type won't in the end be sufficient for the logical goodness of a token inference of that type---partly because we probably need to tie it much closer to deductive moves, partly because of worries about the different contexts in play with any given token inference. But let me just ignore those issues for now.<br /><br />I'm going to blur use-mention a bit by classifying material-mode inferences from A to B (rather than from "A" is true to "B" is true) as logically good in the same circumstances. I'll also call a piece of reasoning from A to B "modally good" if A entails B, and "a priori good" if it's a priori that if A then B (nb: material conditional). If it's a priori that A entails B, I'll call it "a priori modally good".<br /><br />Suppose now we perform a competent deduction of B from A. What I'm interested in is whether the fact that the inference is logically good is something that we should pay attention to in our epistemological story about what's going on. You might think this isn't forced on us. For (arguably: see below) whenever an inference is logically good, it's also modally and a priori good. So---the thought runs---for all we've said, we could have an epistemology that just looks at whether inferences are modally/a priori good, and simply sets aside questions of logical goodness. 
If so, logical goodness may not be epistemically interesting as such.<br /><br />(That's obviously a bit quick: it might be that you can't just stop with declaring something a priori good; rather, any a priori good inference falls into one of a number of subcases, one of which is the class of logically good inferences, and the real epistemic story proceeds at the level of the "thickly" described subcases. But let's set that sort of issue aside.)<br /><br />Are there reasons to think competent deduction/logically good inference is an especially interesting epistemological category of inference?<br /><br />One obvious reason to refuse to subsume logically good inference within modally good inferences (for example) is if you thought that some logically good inferences aren't necessarily truth-preserving. There's a precedent for that thought: Kaplan argues in "Demonstratives" that "I am here now" is a logical validity, but isn't necessarily true. If that's the case, then logically good inferences won't be a subclass of the modally good ones, and so the attempt to talk only about the modally good inferences would just miss some of the cases.<br /><br />I'm not aware of persuasive examples of logically good inferences that aren't a priori good. And I'm not persuaded that the Kaplanian classification is the right one. So let's suppose pro tem that the logically good inferences are always modally, a priori, and a priori modally, good.<br /><br />We're left with the following situation: the logically good inferences are a subclass of inferences that also fall under other "good" categories. In a particular case where we come to believe B on the basis of A, where is the particular need to talk about its logical "goodness", rather than simply about its modal, a priori or whatever goodness?<br /><br />To make things a little more concrete: suppose that our story about what makes a modally good inference good is that it's ultra-reliable. 
Then, since we're supposing all logically good inferences are modally good, just from their modal goodness we're going to get that they're ultra-reliable. It's not so clear that, epistemologically, we need say any more. (Of course, their logical goodness might explain *why* they're reliable: but that's not clearly an *epistemic* explanation, any more than is the biophysical story about perception's reliability.)<br /><br />So long as we're focusing on cases where we deploy reasoning directly, to move from something we believe to something else we believe, I'm not sure how to get traction on this issue (at least, not in such an abstract setting: I'm sure we could fight over the details if they were filled out). But even in this abstract setting, I do think we can see that the idea just sketched ignores one vital role that logically good reasoning plays: namely, reasoning under a supposition in the course of an indirect proof.<br /><br />Familiar cases: If reasoning from A to B is logically good, then it's ok to believe (various) conditional(s) "if A, B". If reasoning from A1 to B is logically good, and reasoning from A2 to B is logically good, then inferring B from the disjunction A1vA2 is ok. If reasoning from A to a contradiction is logically good, then inferring not-A is good. If reasoning from A to B is logically good, then reasoning from A&C to B is good.<br /><br />What's important about these sorts of deployments is that if you replace "logically good" by some wider epistemological category of ok reasoning, you'll be in danger of losing these patterns.<br /><br />Suppose, for example, that there are "deeply contingent a priori truths". One schematic example that John Hawthorne offers is the material conditional "My experiences are of kind H > theory T of the world is true". The idea here is that the experiences specified should be the kind that lead to T via inference to the best explanation. 
Of course, this'll be a case where the a priori goodness doesn't give us modal goodness: it could be that my experiences are H but the world is such that ~T. Nevertheless, I think there's a pretty strong case that in suitable settings inferring T from H will be (defeasibly but still) a priori good.<br /><br />Now suppose that the correct theory of the world isn't T, and I don't undergo experiences H. Consider the counterfactual "were my experiences to be H, theory T would be true". There's no reason at all to think this counterfactual would be true in the specified circumstances: it may well be that, given the actual world meets description T*, the closest world where my experiences are H is still an approximately T*-world rather than a T-world. E.g. the nearest world where various tests for general relativity come back negative may well be a world where general relativity is still the right theory, but its effects aren't detectable on the kind of scales initially tested (that's just a for-instance: I'm sure better cases could be constructed).<br /><br />Here's another illustration of the worry. Granted, reasoning from H to T seems a priori. But reasoning from H+X to T seems terrible, for a variety of X. (So: <span style="font-style: italic;">My experiences are of H + my experiences are misleading in way W</span> will plausibly a priori support some T' incompatible with T.) But if we were allowed to use a priori good reasoning in indirect proofs, then we could simply argue from H+X to H, and thence (a priori) to T, overall getting an a priori route from H+X to T. The moral is that we can't treat a priori good pieces of reasoning as "lemmas" that we can rely on under the scope of whatever suppositions we like. 
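The contrast with logically good reasoning can be made concrete with a brute-force check (a sketch using my own toy encoding of sentences as truth-functions over valuations, not anything from the post): classical consequence is monotonic, and that is exactly what licenses re-using a logically good inference under extra suppositions.

```python
from itertools import product

ATOMS = ["A", "B", "C"]

def valuations(atoms=ATOMS):
    # Every assignment of truth-values to the atoms.
    for bits in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def entails(premise, conclusion):
    # Classical consequence: no valuation makes the premise true
    # and the conclusion false.
    return all(conclusion(v) for v in valuations() if premise(v))

a = lambda v: v["A"]
a_or_b = lambda v: v["A"] or v["B"]

# A entails (A or B) ...
assert entails(a, a_or_b)

# ... and monotonicity holds: strengthening the premise with any extra
# supposition C preserves the entailment.
a_and_c = lambda v: v["A"] and v["C"]
assert entails(a_and_c, a_or_b)
```

A defeasible a-priori-good relation, by contrast, offers no analogue of the second assertion: adding the supposition X to H can overturn the support for T, which is just the non-monotonicity the post describes.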
A priori goodness threatens to be "non-monotonic", which is fine on its own, but I think does show quite clearly that it can completely crash when we try to make it play a role designed for logical goodness.<br /><br />This sort of problem isn't a surprise: the reliability of indirect proofs is going to get *more problematic* the more inclusive the reasoning in play is. Suppose the indirect reasoning says that whenever reasoning of type R is good, one can infer C. The more pieces of reasoning count as "good", the more potential there is to come into conflict with the rule, because there are simply more cases of reasoning that are potential counterexamples.<br /><br />Of course, a priori goodness is just one of the inferential virtues mentioned earlier: modal goodness is another; and a priori modal goodness a third. Modal goodness already looks a bit implausible as an attempt to capture the epistemic status of deduction: it doesn't seem all that plausible to classify the inferential move from <span style="font-style: italic;">A and B </span>to <span style="font-style: italic;">B</span> as in the same category as the move from <span style="font-style: italic;">this is water</span> to <span style="font-style: italic;">this is H2O.</span> Moreover, we'll again have trouble with conditional proof: this time for indicative conditionals. Intuitively, and (I'm independently convinced) actually, the indicative conditional "if the watery stuff around here is XYZ, then water is H2O" is false. But the inferential move from the antecedent to the consequent is modally good.<br /><br />Of the options mentioned, this leaves a priori modal goodness. The hope would be that this'll cut out the cases of modally good inference that cause trouble (those based around a posteriori necessities). Will this help?<br /><br />I don't think so: I think the problems for a priori goodness resurface here. 
If the move from H to T is a priori good, then it seems that the move from Actually(H) to Actually(T) should equally be a priori good. But in a wide variety of cases, this inference will also be modally good (all cases except H&~T ones). But just as before, thinking that this piece of reasoning preserves its status in indirect proofs gives us very bad results: e.g. that there's an a priori route from Actually(H) and Actually(X) to Actually(T), which for suitably chosen X looks really bad.<br /><br />Anyway, of course there's wriggle room here, and I'm sure a suitably ingenious defender of one of these positions could spin a story (and I'd be genuinely interested in hearing it). But my main interest is just to block the dialectical maneuver that says: well, all logically good inferences are X-good ones, so we can get everything we want from a decent epistemology of X-good inferences. The cases of indirect reasoning I think show that the *limitations* on what inferences are logically good can be epistemologically central: and anyone wanting to ignore logic had better have a story to tell about how their story plays in these cases.<br /><br />[NB: one kind of good inference I haven't talked about is that backed by what 2-dimensionalists might call "1-necessary truth preservation": i.e. truth preservation at every centred world considered as actual. I've got no guarantees to offer that this notion won't run into problems, but I haven't as yet constructed cases against it.
Happily, for my purposes, logically good inference and this sort of 1-modally good inference give rise to the same issues, so if I had to concede that this was a viable epistemological category for subsuming logically good inference, it wouldn't substantially affect my wider project.]Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com5tag:blogger.com,1999:blog-6432111.post-16752148631294318302007-11-05T13:45:00.000+00:002007-11-09T17:23:17.521+00:00CEM journalismThe literature on the linguistics/philosophy interface on conditionals is full of excellent stuff. Here's just one nice thing we get. (Directly drawn from a paper by <a href="http://web.mit.edu/fintel/www/lpw.mich.pdf">von Fintel and Iatridou</a>). Nothing here is due to me: but it's something I want to put down so I don't forget it, since it looks like it'll be useful all over the place. Think of what follows as a bit of journalism.<br /><br />Here's a general puzzle for people who like "iffy" analyses of conditionals.<br /><ul><li>No student passes if they goof off. </li></ul>The obvious first-pass regimentation is:<br /><ul><li>[No x: x is a student](if x goofs off, x passes)</li></ul>But for a wide variety of accounts, this'll give you the wrong truth-conditions. E.g. if you read "if" as a material conditional, you'll get it coming out false whenever a single student refrains from goofing off (and true only when every student goofs off and fails)! What is wanted, as Higginbotham urges, is something with the effect:<br /><ul><li>[No x: x is a student](x goofs off and x passes)</li></ul>This seems to suggest that under some embeddings "if" expresses conjunction! But that's hardly what a believer in the iffness of if wants.<br /><br />What the paper cited above notes is that so long as we've got CEM, we won't go wrong. For [No x:Fx]Gx is equivalent to [All x:Fx]~Gx.
And where G is the conditional "if x goofs off, x passes", the negated conditional "not: if x goofs off, x passes" is equivalent to "if x goofs off, x doesn't pass" <span style="font-weight: bold;">if we have the relevant instance of conditional excluded middle. </span>What we wind up with is an equivalence between the obvious first-pass regimentation and:<br /><ul><li>[All x: x is a student](if x goofs off, x won't pass).<br /></li></ul>And this seems to get the right results. What it *doesn't* automatically get us is an equivalence to the Higginbotham regimentation in terms of a conjunction (nor to the Kratzer restrictor analysis). And maybe when we look at the data more generally, we can get some traction on which of these theories best fits with usage.<br /><br />Suppose we're convinced by this that we need the relevant instances of CEM. There remains a question of *how* to secure these instances. The suggestion in the paper is that rules governing legitimate contexts for conditionals give us the result (paired with a contextually shifty strict conditional account of conditionals). An obvious alternative is to hard-wire CEM into the semantics, as Stalnaker does. So unless you're prepared (with von Fintel, Gillies et al) to defend in detail the fine-tuned shiftiness of the contexts in which conditionals can be uttered, then it looks like you should smile upon the Stalnaker analysis. <span style="font-weight: bold;"><br /><br /><br /></span><span>[Update: It's interesting to think how this would look as an argument for (instances of) CEM.<br /><br /></span><span><span style="font-weight: bold;">Premise 1:</span> The following are equivalent:<br /></span><span>A. No student will pass if she goofs off<br />B. Every student will fail to pass if she goofs off<br /></span><span><br /><span style="font-weight: bold;">Premise 2</span>: A and B can be regimented respectively as follows:<br />A*. [No x: student x](if x goofs off, x passes)<br />B*.
[Every x: student x](if x goofs off, ~x passes)<br /><br /><span style="font-weight: bold;">Premise 3:</span> [No x: Fx]Gx is equivalent to [Every x: Fx]~Gx<br /><br /></span><span><span style="font-weight: bold;">Premise 4:</span> if [Every x: Fx]Hx is equivalent to [Every x: Fx]Ix, then Hx is equivalent to Ix.<br /></span><span><br />We argue as follows. By an instance of premise 3, A* is equivalent to:<br /><br />C*. [Every x: student x] not(if x goofs off, x passes)<br /><br />But C* is equivalent to A*, which is equivalent to A (premise 2) which is equivalent to B (premise 1) which is equivalent to B* (premise 2). So C* is equivalent to B*.<br /><br />But this equivalence is of the form of the antecedent of premise 4, so we get:<br /><br /><span style="font-weight: bold;">(Neg/Cond instances)</span> ~(if x goofs off, x passes) iff if x goofs off, ~x passes.<br /><br />And we quickly get from the law of excluded middle and a bit of logic:<br /><br /><span style="font-weight: bold;">(CEM instances) </span>(if x goofs off, x passes) or (if x goofs off, ~ x passes). QED.<br /><br /><br /><br />The present version is phrased in terms of indicative conditionals. But it looks like parallel arguments can be run for CEM for counterfactuals (Thanks to Richard Woodward for asking about this).</span><span> For one of the controversial cases, for example, the basic premise will be that the following are equivalent</span>:<br /><span><br /></span><span>D. No coin would have landed heads, if it had been flipped.<br />E. 
Every coin would have landed tails, if it had been flipped.<br /><br />This looks pretty good, so the argument can run just as before.</span><span>]<br /><br /><br /></span>Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com0tag:blogger.com,1999:blog-6432111.post-52170502400132624902007-11-05T12:02:00.000+00:002007-11-05T13:41:44.133+00:00Must, Might and Moore.I've just been enjoying reading a paper by <a href="http://semanticsarchive.net/Archive/TI1OGVlY/iffiness.pdf">Thony Gillies</a>. One thing that's very striking is the dilemma he poses---quite generally---for "iffy" accounts of "if" (i.e. accounts that see English "if" as expressing a sentential connective, pace Kratzer's restrictor account).<br /><br />The dilemma is constructed around finding a story that handles the interaction between modals and conditionals. The prima facie data is that the following pairs are equivalent:<br /><br /><ul><li>If p, must be q </li><li>If p, q</li></ul>and<br /><ul><li>If p, might be q</li><li>Might be (p&q)</li></ul>The dilemma proceeds by first looking at whether you want to say that the modals scope over the conditional or vice versa, and then (on the view where the modal is wide-scoped) looking into the details of how the "if" is supposed to work and showing that one or other of the pairs comes out inequivalent. The suggestion in the paper is that if we have the right theory of context-shiftiness, and narrow-scope the modals, then we can be faithful to the data. I don't want to take issue with that positive proposal. I'm just a bit worried about the alleged data itself.<br /><br />It's a really familiar tactic, when presented with a putative equivalence that causes trouble for your favourite theory, to say that the pairs aren't equivalent at all, but can be "reasonably inferred" from each other (think of various ways of explaining away "or-to-if" inferences).
But taken cold, such pragmatic explanations can look a bit ad hoc.<br /><br />So it'd be nice if we could find independent motivation for the inequivalence we need. In a related setting, Bob Stalnaker uses the acceptability of Moorean patterns to do this job. To me, the Stalnaker point seems to bear directly on the Gillies dilemma above.<br /><br />Before we even consider conditionals, notice that "p but it might be that not p" sounds terrible. Attractive story: this is because you shouldn't assert something unless you know it to be true; and to say that p might not be the case is (inter alia) to deny you know it. One way of bringing out the pretty obviously pragmatic nature of the tension in uttering the conjunction here is to note that asserting the following sort of thing looks much much better:<br /><ul><li>it might be that not p; but I believe that p</li></ul>("I might miss the train; but I believe I'll just make it"). The point is that whereas asserting "p" is appropriate only if you know that p, asserting "I believe that p" (arguably) is appropriate even if you know you don't know it. So looking at these conjunctions and figuring out whether they sound "Moorean" seems like a nice way of filtering out some of the noise generated by knowledge-rules for assertion.<br /><br />(I can sometimes still hear a little tension in the example: what are you doing believing that you'll catch the train if you know you might not? But for me this goes away if we replace "I believe that" with "I'm confident that" (which still, in vanilla cases, gives you Moorean phenomena). I think in the examples to be given below, residual tension can be eliminated in the same way. The folks who work on norms of assertion I'm sure have explored this sort of territory lots.)<br /><br />That's the prototypical case. Let's move on to examples where there are more moving parts.
David Lewis famously alleged that the following pair are equivalent:<br /><ul><li>it's not the case that: if it were the case that p, it would have been that q</li><li>if it were that p, it might have been that ~q</li></ul>Stalnaker thinks that this is wrong, since instances of the following sound ok:<br /><ul><li>if it were that p, it might have been that not q; but I believe if it were that p it would have been that q.</li></ul>Consider for example: "if I'd left only 5 mins to walk down the hill, (of course!) I might have missed the train; but I believe that, even if I'd only left 5 mins, I'd have caught it." That sounds totally fine to me. There's a few decorations to that speech ("even", "of course", "only"). But I think the general pattern here is robust, once we fill in the background context. Stalnaker thinks this cuts against Lewis, since if mights and woulds were obvious contradictories, then the latter speech would be straightforwardly equivalent to something of the form "A and I don't believe that A". But things like that sound terrible, in a way that the speech above doesn't.<br /><br />We find pretty much the same cases for "must" and indicative "if".<br /><ul><li>It's not true that if p, then it must be that q; but I believe that if p, q.<br /></li></ul>("it's not true that if Gerry is at the party, Jill must be too---Jill sometimes gets called away unexpectedly by her work. But nevertheless I believe that if Gerry's there, Jill's there."). Again, this sounds ok to me; but if the bare conditional and the must-conditional were straightforwardly equivalent, surely this should sound terrible.<br /><br />These sorts of patterns make me very suspicious of claims that "if p, must q" and "if p, q" are equivalent, just as the analogous patterns make me suspicious of the Lewis idea that "if p, might ~q" and "if p, q" are contradictories when the "if" is subjunctive.
So I'm thinking the horns of Gillies' dilemma aren't equal: denying the must conditional/bare conditional equivalence is independently motivated.<br /><br />None of this is meant to undermine the positive theory that Thony Gillies is presenting in the paper: his way of accounting for lots of the data looks super-interesting, and I've got no reason to suppose his positive story won't have something to say about everything I've said here. I'm just wondering whether the dilemma that frames the debate should suck us in.Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com0tag:blogger.com,1999:blog-6432111.post-8364095751658949542007-11-02T22:35:00.000+00:002007-11-03T01:59:30.942+00:00Degrees of belief and supervaluationsSuppose you've got an argument with one premise and one conclusion, and you think it's valid. Call the premise p and the conclusion q. Plausibly, constraints on rational belief follow: in particular, you can't rationally have a lesser degree of belief in q than you have in p.<br /><br />The natural generalization of this to multi-premise cases is that if p1...pn|-q, then your degree of disbelief in q can't rationally exceed the sum of your degrees of disbelief in the premises.<br /><br />FWIW, there's a natural generalization to the multi-conclusion case too (a multi-conclusion argument is valid, roughly, if the truth of all the premises secures the truth of at least one conclusion). If p1...pn|-q1...qm, then the sum of your degrees of disbelief in the conclusions can't rationally exceed the sum of your degrees of disbelief in the premises.<br /><br />What I'm interested in at the moment is to what extent this sort of connection can be extended to non-classical settings. In particular (and connected with the last post) I'm interested in what the supervaluationist should think about all this.<br /><br />There's a fundamental choice to be made at the get-go.
Do we think that "degrees of belief" in sentences of a vague language can be represented by a standard classical probability function? Or do we need to be a bit more devious?<br /><br />Let's take a simple case. Construct the artificial predicate B(x), so that numbers less than 5 satisfy B, and numbers greater than 5 fail to satisfy it. We'll suppose that it is indeterminate whether 5 itself is B, and that supervaluationism gives the right way to model this.<br /><br />First observation. It's generally accepted that for the standard supervaluationist:<br /><br />p&~Det(p)|-absurdity.<br /><br />Given this and the constraints on rational credence mentioned earlier, we'd have to conclude that my credence in B(5)&~Det(B(5)) must be 0. I have credence 0 in absurdity; and the degree of disbelief in the conclusion of this valid argument (namely 1) must not exceed the degree of disbelief in its premise.<br /><br />Let's think that through. Notice that in this case, my credence in ~Det(B(5)) can be taken to be 1. So given minimal assumptions about the logic of credences, my credence in B(5) must be 0.<br /><br />A parallel argument running from ~B(5)&~Det(~B(5))|-absurdity gives us that my credence in ~B(5) must be 0.<br /><br />Moreover, supervaluationism entails all classical tautologies. So in particular we have the validity: |-B(5)v~B(5). The standard constraint in this case tells us that rational credence in this disjunction must be 1. And so, we have a disjunction in which we have credence 1, each disjunct of which we have credence 0 in. (Compare the standard observation that supervaluational disjunctions can be non-prime: the disjunction can be true when neither disjunct is).<br /><br />This is a fairly direct argument that something non-classical has to be going on with the probability calculus. One move at this point is to consider Shafer functions (which I know little about: but see <a href="http://brian.weatherson.org/Ch_5.pdf">here</a>).
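To make the non-additive pattern vivid, here's a toy calculation in the Dempster-Shafer style. This is only an illustrative sketch of my own (the frame, masses, and function names are stipulated, not drawn from the literature): a Shafer-style belief function assigns mass to *sets* of possibilities, and belief in a proposition sums the masses of the focal sets that entail it. Putting all the mass on the whole frame models the indeterminacy of B(5).

```python
# Frame of discernment for the borderline case B(5):
# 'b' = B(5) is true, 'nb' = ~B(5) is true.
frame = frozenset({'b', 'nb'})

# A Shafer mass function assigns weight to *subsets* of the frame.
# Putting all mass on the whole frame models total indeterminacy.
mass = {frame: 1.0}

def belief(prop):
    """Bel(prop) = sum of the masses of focal sets that entail prop."""
    return sum(m for focal, m in mass.items() if focal <= prop)

A = frozenset({'b'})           # B(5)
not_A = frozenset({'nb'})      # ~B(5)
A_or_not_A = A | not_A         # B(5) v ~B(5)

print(belief(A))               # 0   -- full rejection of B(5)
print(belief(not_A))           # 0   -- full rejection of ~B(5)
print(belief(A_or_not_A))      # 1.0 -- full acceptance of the disjunction
```

The point is just that, unlike a probability function, Bel here is non-additive: the disjunction gets full belief while each disjunct gets none, which is exactly the pattern the supervaluational argument above demands.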
Now maybe that works out nicely, maybe it doesn't. But I find it kinda interesting that the little constraint on validity and credences gets us so quickly into a position where something like this is needed if the constraint is to work. It also gives us a recipe for arguing against standard supervaluationism: argue against the Shafer-function-like behaviour in our degrees of belief, and you'll ipso facto have an argument against supervaluationism. For this, the probabilistic constraint on validity is needed (as far as I can see): for it's this that makes the distinctive features mandatory.<br /><br />I'd like to connect this to two other issues I've been working on. One is the paper on the logic of supervaluationism cited below. The key thing here is that it raises the prospect of p&~Dp|-absurdity not holding, even for your standard "truth=supertruth" supervaluationist. If that works, the key premise of the argument that forces you to have degree of belief 0 in both an indeterminate sentence 'p' and its negation goes missing.<br /><br />Maybe we can replace it by some other argument. If you read "D" as "it is true that..." as the standard supervaluationist encourages you to, then "p&~Dp" should be read "p&it is not true that p". And perhaps that sounds to you just like an analytic falsity (it sure sounded to me that way); and analytic falsities are the sorts of things one should paradigmatically have degree of belief 0 in.<br /><br />But here's another observation that might give you pause (I owe this point to discussions with Peter Simons and John Hawthorne). Suppose p is indeterminate. Then we have ~Dp&~D~p. And given supervaluationism's conservatism, we also have pv~p. So by a bit of jiggery-pokery, we'll get (p&~Dp v ~p&~D~p). But in moods where I'm hyped up thinking that "p&~Dp" is analytically false and terrible, I'm equally worried by this disjunction.
But that suggests that the source of my intuitive repulsion here isn't the sort of thing that the standard supervaluationist should be buying. Of course, the friend of Shafer functions could just say that this is another case where our credence in the disjunction is 1 while our credence in each disjunct is 0. That seems dialectically stable to me: after all, they'll have *independent* reason for thinking that p&~Dp should have credence 0. All I want to insist is that the "it sounds really terrible" reason for assigning p&~Dp credence 0 looks like it overgeneralizes, and so should be distrusted.<br /><br />I also think that if we set aside truth-talk, there's some plausibility in the claim that "p&~Dp" should get non-zero credence. Suppose you're initially in a mindset where you should be about half-confident of a borderline case. Well, one thing that you absolutely want to say about borderline cases is that they're neither true nor false. So why shouldn't you be at least half-confident in the combination of these?<br /><br />And yet, and yet... there's the fundamental implausibility of "p&it's not true that p" (the standard supervaluationist's reading of "p&~Dp") having anything other than credence 0. But ex hypothesi, we've lost the standard positive argument for that claim. So we're left, I think, with the bare intuition. But it's a powerful one, and something needs to be said about it.<br /><br />Two defensive maneuvers for the standard supervaluationist:<br /><br />(1) Say that what you're committed to is just "p& it's not supertrue that p". Deny that the ordinary concept of truth can be identified with supertruth (something that, as many have emphasized, is anyway quite plausible given the non-disquotational nature of supertruth).
But crucially, don't seek to replace this with some other gloss on supertruth: just say that supertruth, superfalsity and the gap between them are appropriate successor concepts, and that ordinary truth-talk is appropriate only when we're ignoring the possibility of the third case. If we disclaim conceptual analysis in this way, then it won't be appropriate to appeal to intuitions about the English word "true" to kick away independently motivated theoretical claims about supertruth. In particular, we can't appeal to intuitions to argue that "p&~supertrue that p" should be assigned credence 0. (There's a question of whether this should be seen as an error-theory about English "truth"-ascriptions. I don't see it needs to be. It might be that the English word "true" latches on to supertruth because supertruth is what best fits the truth-role. On this model, "true" stands to supertruth as "de-phlogistonated air", according to some, stands to oxygen. And so this is still a "truth=supertruth" standard supervaluationism.)<br /><br />(2) The second maneuver is to appeal to supervaluational degrees of truth. Let the degree of supertruth of S be, roughly, the measure of the precisifications on which S is true. S is supertrue simpliciter when it is true on all the precisifications, i.e. on measure 1 of the precisifications. If we then identify degrees of supertruth with degrees of truth, the contention that truth is supertruth becomes something that many find independently attractive: that in the context of a degree theory, truth simpliciter should be identified with truth to degree 1. (I think that this tendency has something deeply in common with the temptation (following Unger) to think that nothing can be flatter than a flat thing: nothing can be truer than a true thing.
I've heard people claim that Unger was right to think that a certain class of adjectives in English work this way).<br /><br />I think when we understand the supertruth=truth claim in that way, the idea that "p&~true that p" should be something in which we should always have degree of belief 0 loses much of its appeal. After all, compatibly with "p" not being absolutely perfectly true (=true), it might be something that's almost absolutely perfectly true. And it doesn't sound bad or uncomfortable to me to think that one should conform one's credences to the known degree of truth: indeed, that seems to be a natural generalization of the sort of thing that originally motivated our worries.<br /><br />In summary. If you're a supervaluationist who takes the orthodox line on supervaluational logic, then it looks like there's a strong case for a non-classical take on what degrees of belief look like. That's a potentially vulnerable point for the theory. If you're a (standard, global, truth=supertruth) supervaluationist who's open to the sort of position I sketch in the paper below, prima facie we can run with a classical take on degrees of belief.<br /><br />Let me finish off by mentioning a connection between all this and some material on probability and conditionals I've been working on recently. I think a pretty strong case can be constructed for thinking that for some conditional sentences S, we should be all-but-certain that S&~DS. But that's exactly of the form that we've been talking about throughout: and here we've got *independent* motivation to think that this should be high-probability, not probability zero.<br /><br />Now, one reaction is to take this as evidence that "D" shouldn't be understood along standard supervaluationist lines. That was my first reaction too (in fact, I couldn't see how anyone but the epistemicist could deal with such cases). But now I'm thinking that this may be too hasty. 
What seems right is that (a) the standard supervaluationist with the Shafer-esque treatment of credences can't deal with this case. But (b) the standard supervaluationist articulated in one of the ways just sketched shouldn't think there's an incompatibility here.<br /><br />My own preference is to go for the degrees-of-truth explication of all this. Perhaps, once we've bought into that, the "truth=degree 1 supertruth" element starts to look less important, and we'll find other useful things to do with supervaluational degrees of truth (a la Kamp, Lewis, Edgington). But I think the "phlogiston" model of supertruth is just about stable too.<br /><br />[P.S. Thanks to Daniel Elstein, for a paper today at the CMM seminar which started me thinking again about all this.]Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com9tag:blogger.com,1999:blog-6432111.post-68536001578945855392007-11-02T22:13:00.000+00:002007-11-03T01:09:18.706+00:00Supervaluations and logical revisionism paperHappy news today: the Journal of Philosophy is going to publish my <a href="http://www.personal.leeds.ac.uk/%7Ephljrgw/wip/supervaluationalconsequence.pdf">paper</a> on the logic of supervaluationism. Swift moral. 
It ain't logically revisionary; and if it is, it doesn't matter.<br /><br />This <a href="http://theoriesnthings.blogspot.com/2007/03/ive-just-finished-new-version-of-my.html">previous post</a> gives an overview, if anyone's interested...<br /><br />Now I've just got to figure out how to transmute my beautiful LaTeX symbols into Word...Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com1tag:blogger.com,1999:blog-6432111.post-8357934832477200062007-10-24T14:41:00.001+01:002007-10-24T14:41:24.303+01:00London Logic and Metaphysics Forum (x-posted from MV)If you're in London on a Tuesday evening, what better to do than to take in a talk by a young philosopher on logic or metaphysics?<br /><br />Spotting this gap in the tourist offerings, the clever folks in the capital have set up the London Logic and Metaphysics forum. Looks an exciting programme, though I have my doubts about the joker on the 11th Dec...<br /><br />Tues 30 Oct: David Liggins (Manchester)<br />Quantities<br /><br />Tues 13 Nov: Oystein Linnebo (Bristol & IP)<br />Compositionality and Frege's Context Principle<br /><br />Tues 27 Nov: Ofra Magidor (Oxford)<br />Epistemicism about vagueness and meta-linguistic safety<br /><br />Tues 11 Dec: Robbie Williams (Leeds)<br />Is survival intrinsic?<span style="color:#ffcc00;"><br /><br /></span>8 Jan: Stephan Leuenberger (Leeds)<br /><br />22 Jan: Antony Eagle (Oxford)<br /><br />5 Feb: Owen Greenhall (Oslo & IP)<br /><br />4 Mar: Guy Longworth (Warwick)<br /><br /><br />Full details can be found <a href="http://www.philosophy.sas.ac.uk/content.php?id=41&pid=12">here</a>.Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com2tag:blogger.com,1999:blog-6432111.post-65033415191296330112007-10-24T13:18:00.001+01:002007-10-25T12:45:54.828+01:00In RutgersAs Brian Weatherson reports <a href="http://tar.weatherson.org/2007/10/14/a-conference-at-rutgers/">here</a>, there's a metaphysics/phil physics
conference at Rutgers this weekend (26-28th). I'm in Rutgers for the week, and am responding to one of the papers at the event. I'm looking forward to what looks like a really interesting conference.<br /><br />Tonight (24th) I'm giving a talk to a phil language group at Rutgers. I'm going to be presenting some material on modal accounts of indicative conditionals (a la Stalnaker, Weatherson, Nolan). This piece has evolved quite a bit during the last few weeks as I've been working on it. A bit unexpectedly, I've ended up with an argument for <a href="http://www.google.com/url?sa=t&ct=res&cd=2&url=http%3A%2F%2Fwww.blackwell-synergy.com%2Fdoi%2Fpdf%2F10.1111%2Fj.0031-8094.2001.00224.x&ei=lTgfR7WzL6KGepOcjK0N&usg=AFQjCNHoBKe2J8PW1siNdyVmclvpV6A_SA&sig2=cGJVb9BN89J1dGV6KJrdxw">Weatherson's views</a>.<br /><br />Briefly, the idea is to look at what mileage we can get out of paradigmatic instances of the identification of the probability of a conditional "If A, B" with the conditional probability of B on A (CCCP). We know that in general that identification is highly problematic, thanks to notorious impossibility results due to David Lewis and more recently Ned Hall and Al Hajek. But I think it's interesting to divide the issue into two halves:<br /><br />First, what would a modal account of indicative conditionals that obeys (a handful of paradigmatic) instances of CCCP have to look like? I think there's a lot we can say about this: of the salient options, it'll look a lot like Weatherson's theory; it'll have to have a particular take on what kind of vagueness can affect the conditional; it'll have to say that any proposition you know should have probability 1.<br /><br />Second, is this package sustainable in the face of impossibility results? Al Hajek (in his papers in the Eells/Skyrms probability and conditionals volume) does a really nice job of formulating the challenges here.
If we're prepared to give up some instances of CCCP in recherche cases (like left-embedded conditionals, things of the form "if (if A, B), C"), then many of the general impossibility results won't apply. But nevertheless, there are a bunch of puzzles that remain: in particular, concerning how even the paradigmatic instances can survive when we receive new information.<br /><br />I'll mostly be talking about the first part of the talk this evening.Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com2tag:blogger.com,1999:blog-6432111.post-92137226100540358382007-10-12T17:08:00.000+01:002007-10-12T22:11:56.608+01:00Edgington vs. StalnakerOne of the things I'm thinking about at the moment is Stalnaker-esque treatments of indicative conditionals. Stalnaker's story, roughly, is that indicative conditionals have almost exactly the same truth conditions as (on his theory) counterfactuals do. That is, A>B is true at w iff B is true at the nearest A-world to w. The difference comes only in the fine details about which worlds count as nearest. For counterfactuals, Stalnaker, like Lewis, thinks that some sort of similarity does the job. For indicatives, Stalnaker thinks that the nearness ordering is rooted in the same similarity metric, but distorted by the following overriding principle: if A and w are consistent with what we collectively presuppose, then the nearest A-worlds will also be consistent with what we collectively presuppose. In the jargon, all worlds outside the "context set" are pushed further out than they would be on the counterfactual ordering.<br /><br />I'm interested in this sort of "push worlds" modal account of indicatives.
(Others in a similar family include <a href="http://www.google.co.uk/url?sa=t&ct=res&cd=5&url=http%3A%2F%2Fwww.ingentaconnect.com%2Fcontent%2Fklu%2Fphil%2F2003%2F00000116%2F00000003%2F05103410&ei=mp0PR7vFJY76wQHJ2pDcCQ&usg=AFQjCNGdiCJW7XStpGmjEauSjF0wIFnrdA&sig2=HWLDQaPEYwg2Qfhh8w_u0Q">Daniel Nolan's theory</a>, whereby it's knowledge that does the pushing rather than collective presuppositions). Lots of criticisms of Stalnaker's theory don't engage with the fine details of what he says about the closeness ordering, but with more general aspects (e.g. its inability to sustain Adams' thesis that the conditional probability is the probability of the conditional; its handling of Gibbard cases; its sensitivity to fine factors of conversational context). An exception, however, is an argument that Dorothy Edgington puts forward in her <a href="http://plato.stanford.edu/entries/conditionals/">SEP survey article</a> (which, by the way, I very much recommend!)<br /><br /><br />Here's the case. Let's suppose that Jill is uncertain how much fuel is in Jane's car. The tank has a capacity for 100-miles'-worth, but Jill has no knowledge of what level it is at. Jane is<br />going to drive it until it runs out of fuel. For Jill, the probability of the car being driven for n miles, given that it's driven for no more than fifty, is 1/50 (for n<51).<br /><br />Suppose that in fact the tank is full. The most similar worlds to actuality in which the car goes no more than 50 miles, arguably, are those where the tank is 50 per cent full, and so where Jane drives exactly 50 miles. The same goes for any world where the tank is more than 50 per cent full. So, if nearness of worlds is determined by similarity, the conditional is true as uttered at each of the worlds where the tank is more than 50 per cent full. So without knowing the details of the level of the tank, we should be at least 50 per cent confident that if it goes for no more than 50 miles, it'll go for exactly 50 miles. But this seems all wrong.
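The arithmetic behind the worry can be checked directly. Here's a toy model of my own devising, just to make the numbers vivid (the uniform distribution and the selection rule are stipulated, following the case as described): fuel levels of 1-100 miles' worth are equiprobable, the similarity-based selection sends any over-50 world to the exactly-50 world, and we compare the probability of the Stalnaker conditional with the conditional probability.

```python
# Toy model of Edgington's fuel-tank case (numbers stipulated for illustration).
# Worlds: the tank holds n miles' worth of fuel, n = 1..100, equiprobable.
worlds = range(1, 101)

def conditional_true_at(n):
    """'If the car goes at most 50 miles, it goes exactly 50' at world n,
    evaluated by bare similarity: the closest at-most-50 world to an
    over-50 world is the exactly-50 world."""
    if n <= 50:
        return n == 50      # antecedent true here: consequent must hold here
    return True             # closest antecedent-world is the 50-mile world

p_conditional = sum(conditional_true_at(n) for n in worlds) / 100
p_conditional_prob = 1 / 50   # P(exactly 50 | at most 50), uniform over 1..50

print(p_conditional)          # 0.51 -- over half the worlds verify the conditional
print(p_conditional_prob)     # 0.02 -- the intuitively right level of confidence
```

So on the bare similarity ordering one ends up more than half confident in a conditional whose conditional probability is 1/50, which is exactly the complaint.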
Varying the numbers, we can make the case even worse: we should be almost sure of "If it goes for no more than 3 miles, it'll go for exactly 3 miles", even though we regard 3, 2 and 1 as equiprobable fuel levels.<br /><br />Of course, that's only to take into account the comparative similarity of worlds in determining the ordering, and Stalnaker and Nolan have the distorting factor to appeal to: worlds that are incompatible with something we presuppose/know to be true can be pushed further out. But it doesn't seem in this case that anything relevant is being presupposed/known.<br /><br />I don't think this objection works. To see that something is going wrong, notice that the argument, if successful, would work against other theories too. Consider, for example, Stalnaker's theory of the counterfactual conditional. Take the case as before, but suppose we're a day later and Jill doesn't know how far Jane drove. Consider the counterfactual "Had it stopped after no more than 50 miles, it'd have gone for exactly 50 miles". By the previous reasoning, the most similar antecedent-worlds to over-50 worlds are exactly-50 worlds; so we should be at least half confident of the truth of that conditional. Varying the numbers, we should be almost sure that "If it had gone no more than 3, it'd have gone exactly 3", despite regarding fuel levels of 3, 2 and 1 as equally likely. But these all seem like bizarre results.<br /><br />Moral: the counterfactual ordering of worlds isn't fixed by the kind of similarity that Edgington appeals to: the sort of similarity whereby a world in which the car stops after 53 miles is more similar to one in which the car stops after 50 miles than to one in which the car stops after 3 miles. Of course, in some sense (perhaps an "overall" sense) those similarity judgements are just right. 
But we know from the Fine/Bennett cases that the sense of similarity that supports the right counterfactual verdicts can't be all-in similarity (those cases concern counterfactuals starting "if Nixon had pushed the nuclear button in the 70's...". All-in similarity arguably says that the closest such worlds are ones where no missiles are released, leading to the wrong results).<br /><br />Spelling out what the right notion of similarity is is tricky. Lewis gave us one recipe. In effect, we look for a little miracle that'll suffice to let the counterfactual world diverge from actual history to bring about the antecedent. Then we let events run on according to the actual laws, and see what happens. So in worlds where the tank is full, say, let's look for the little miracle required to make it run for no more than 50 miles, and run things on. What are the plausible candidates? Perhaps Jane decides to take an extra journey yesterday, or forgets to fill up her car two days ago. Small miracles could suffice to get us into those sorts of worlds. But those sorts of divergences don't really suggest that she'll end up with exactly 50 miles' worth of fuel in the tank, and so this approach undermines the case for "If it had gone at most 50, it'd have gone exactly 50" being true in antecedent-false worlds. (Which is a good thing!)<br /><br />If that's the right thing to say in the counterfactual case, the indicative case too will be sorted. For it's designed to be a case where presuppositions/knowledge don't have a relevant distorting effect. And so, once more, the case for "If the car goes for at most 50, then it'll go for exactly 50" doesn't work.<br /><br />I think that the basic interest of push-worlds theories of indicatives like Stalnaker's and Nolan's is to connect up the counterfactual and indicative orderings: whether there's anything informative to say about the counterfactual ordering of worlds itself is an entirely different matter. 
So if the glosses of the position lead to problems, it's best to figure out whether the problems lie with the gloss of the counterfactual ordering (which should then be assessed in connection with that familiar and worked-through literature) or with the push-worlds maneuver itself (which has, I think, been less fully examined). I think Edgington's objection is really connected with the first facet, and I've tried to say why I think a more detailed theory will make the problem dissolve. But even if it did turn out to be a problem, the push-worlds thesis itself would still be standing.<br /><br />(Incidentally, I do think Edgington's setup (which she attributes to a student, James Studd) has wider interest. It looks to me like Jackson's modal theory of counterfactuals, and Davis' modal theory of indicatives, both deliver the wrong results in this case.)<br /><br />[Actually, now I've written this out, it strikes me that maybe the anti-Stalnaker argument is fixable. The trick would be to specify the background state of the world so as to make the result for counterfactual probabilities seem plausible, but such that (given Jill's ignorance of the background conditions) the indicative probabilities still seem wrong. So maybe the example is at least a recipe for a counterexample to Stalnaker, even if the original case is resistible as described.]Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com2tag:blogger.com,1999:blog-6432111.post-8678517224892706652007-09-18T09:26:00.001+01:002007-09-18T15:23:37.852+01:00UK job marketAs the next crop of <a href="http://aidan.mcglynn.googlepages.com/adviceforwannabephilosophers">PhDers gear up for the job market</a>, I thought I’d try to systematize some info about the UK system that might not be transparent to outsiders. 
It's not always totally transparent to insiders either, but I’m hoping that everything I say below is accurate, at least as a rule of thumb. I’d very much welcome queries, corrections and supplements.<br /><p class="MsoNormal"><b style="">Basics:</b></p> <ol style="margin-top: 0cm;" start="1" type="1"><li class="MsoNormal" style="">The UK job market has no unified system for applications. Jobs come out in dribs and drabs, and you apply individually to each one you fancy going for at the appropriate time. </li><li class="MsoNormal" style="">Three main categories of job that PhDers look for: </li><ol style="margin-top: 0cm;" start="1" type="a"><li class="MsoNormal" style="">“Lectureships”. Usually these jobs come with responsibilities to teach, to do research, and to do certain amounts of administrative work. These come with the equivalent of tenure. They are sometimes called “permanent” or “continuing” lectureships to distinguish them from (c) below.<br /> People sometimes think of these---very roughly---as US assistant professorships (though coming with the equivalent of tenure). </li><li class="MsoNormal" style="">“Postdocs”: (normally) full-time research positions, often held for two or three years. </li><li class="MsoNormal" style="">“Fixed term lectureships” (including “teaching fellowships” and “replacement lectureships”). These are usually positions covering teaching needs within a department. Pay and conditions vary wildly: some of them are de facto full-time teaching positions, some of them will have the same conditions as the non-fixed term lectureships. 
</li></ol><li class="MsoNormal" style="">It’s far more common in the UK than in the US for PhD-ers aiming for a research career to try for a postdoc position for a few years. Postdocs are pretty prestigious things to get. However, over the last few years quite a few PhD leavers have moved straight into continuing lectureships, which offer the extremely nice feature of instant job security, if less upfront time to spend on research. </li><li class="MsoNormal" style="">Lectureships come in grades: Lecturer A and Lecturer B are the entry-level grades (Lecturer Bs getting a bit more money than Lecturer As). Then come senior lectureships and “readerships”; then professorships. You can roughly map the UK lecturer/senior lecturer/prof divisions onto US assistant/associate/full prof. But I don’t have enough familiarity with the US system to know how closely it matches---and of course there isn’t a tenured/untenured line to be drawn as there is in the US. <br /> Fixed term lectureships are the equivalent, I guess, of US visiting professorships.</li><li class="MsoNormal" style="">Sometimes, but not always, UK jobs will be advertised according to US norms (e.g. specified with AOS/AOC, advertised in JFP). 
But to get the full whack, the best thing to do is to sign up for the most popular UK academic job website, <a href="http://www.jobs.ac.uk/">www.jobs.ac.uk</a>.</li></ol> <p class="MsoNormal"><b style="">Appointments procedure</b></p> <ol style="margin-top: 0cm;" start="6" type="1"><li class="MsoNormal" style="">What’s asked for in a UK application will vary. There’s often an application form to fill in, writing samples will be asked for (perhaps with a specified word limit), and of course you need to include a CV. References aren’t typically required at the initial application stage. But often US applicants find it easiest to send their standard application pack, including references, teaching reports and whatever. This’ll probably mean that US applicants end up providing a *lot* more info than their rivals at the initial stages. Obviously you can contact the department for guidance if you’re worried your application pack will be out of sync with what’s officially requested. </li><li class="MsoNormal" style="">If you’re invited for interview, you should be aware that the setup will differ markedly from US norms. Often there’ll be a shortlist of 4 or 5 for a continuing lectureship, and often all candidates will be interviewed on the same day, and even taken out for dinner together. Some people find this totally awkward, and hate it. I sort of enjoyed the camaraderie. But it is standard practice, so don’t be surprised by having to socialize with your competitors. (Remember: there's nothing like the APA smoker to go through in the UK system: the interview days are a one-stop shop.) The institution will tell you what to expect, but often the formal proceedings will include a presentation and an interview, carried out over one or two days. 
</li><li class="MsoNormal" style="">Any presentation will typically be to the whole department, who’ll give feedback to the appointments committee who run the interview and who actually make the hiring decisions. Presentations can be of various formats: from 20-minute presentations with 10 minutes for questions, to hour-long presentations with substantial discussion time. For fixed term lectureships, you might be asked for a teaching-based presentation (“give a presentation suitable for a first-year course”). For postdocs, obviously a research presentation is appropriate. For continuing lectureships, it’ll probably veer towards the research. The institution will give guidance, and don’t be afraid to ask for clarification/advice if you’re unsure what they’re wanting (particularly if they ask for something totally impossible, e.g. “a twenty minute presentation accessible by second year undergraduates that gives an overview of your research programme”). </li><li class="MsoNormal" style="">As mentioned, UK appointments are made by appointments committees, not by individual departments. The makeup of appointments committees can vary, but it isn’t atypical for there to be just two philosophers on a committee of five or more for a philosophy-only post. Obviously, these philosophers have an influential voice, both in the interview itself (you can expect them to ask the majority of the questions) and in the hiring decisions. But for continuing positions you’ll probably also be asked questions by non-philosophers. For obvious reasons, “are there interdisciplinary aspects to your research?” is a question often heard at that stage of the interview. 
</li></ol> <p class="MsoNormal">Finally, the Oxbridge section:</p> <ol style="margin-top: 0cm;" start="10" type="1"><li class="MsoNormal" style="">Oxford and Cambridge are exceptions to almost every UK rule. Their unique college-based setup means that their jobs are titled differently and graded differently: [for example, in Oxford] CUFs and University Lectureships are the main continuing jobs they offer. Those are, again, both tenured positions. See <a href="http://leiterreports.typepad.com/blog/2007/02/jobs_at_oxford_.html">this discussion on Leiter</a> (by Michael Rosen) for the lowdown. Oxford and Cambridge also have a vast stock of postdoc positions (called in Oxbridge “junior research fellowships” or JRFs) and a vast stock of fixed-term “lectureships”. All terribly confusing even to UK folk. </li><li class="MsoNormal" style="">Not all Oxford and Cambridge JRFs are advertised on jobs.ac.uk, though I believe all continuing positions will be. I found that the only way to get comprehensive listings for JRFs is by looking at the Oxford Gazette: <a href="http://www.ox.ac.uk/gazette/">http://www.ox.ac.uk/gazette/</a> (this comes out weekly, and you can sign up for email notification). From the homepage, click on “weekly issues”, then an issue, then “appointments”. Amusingly, under “positions outside Oxford”, it lists all and only positions available at Cambridge. That’s quantifier restriction for you. 
Two warnings: these positions are often (though not always) advertised across disciplines, so that a philosopher will be competing with biologists and mathematicians and whatever. Also, at least when I applied, each JRF position that came up (and there are lots) seemed to require its own research statement, of varying lengths with varying requirements. That’s hugely time-consuming for the applicant (Oxbridge: please introduce some uniformity!)</li></ol>[updated in the light of Brian's queries in the comments about whether the Cambridge continuing positions are relevantly like those in Oxford or relevantly like those in the rest of the UK. I don't know about this. I'd be very pleased to receive information.]Robbie Williamshttp://www.blogger.com/profile/02081389310232077607noreply@blogger.com18