Here is the final post (for the time being) on structured propositions. As promised, this is to be an account of the truth-conditions of structured propositions, presupposing a certain reasonably contentious take on the metaphysics of linguistic representation (metasemantics). It's going to be compatible with the view that structured propositions are nothing but certain n-tuples: lists of their components. (See earlier posts if you're getting concerned about other factors, e.g. the potential arbitrariness in the choice of which n-tuples are to be identified with the structured proposition that Dummett is a philosopher.)
Here's a very natural way of thinking of what the relation between *sentences* and truth-conditions is, on a structured propositions picture. It's that metaphysically, the relation of "S having truth-conditions C" breaks down into two more fundamental relations: "S denoting struc prop p" and "struc prop p having truth-conditions C". The thought is something like: primarily, sentences express thoughts (=struc propositions), and thoughts themselves are the sorts of things that have intrinsic/essential representational properties. Derivatively, sentences are true or false of situations, by expressing thoughts that are true or false of those situations. As I say, it's a natural picture.
In the previous posting, I've been talking as though this direction of explanation were fine, and as though the truth-conditions of structured propositions should have explanatory priority over the truth-conditions of sentences, so that we get the neat separation into the contingent feature of linguistic representation (which struc prop a sentence latches onto) and the necessary feature (what the TCs are, given the struc prop expressed).
The way I want to think of things, something like the reverse holds. Here’s the way I think of the metaphysics of linguistic representation. In the beginning, there were patterns of assent and dissent. Assent to certain sentences is systematically associated with certain states of the world (coarse-grained propositions, if you like) perhaps by conventions of truthfulness and trust (cf. Lewis's "Language and Languages"). What it is for expressions E in a communal language to have semantic value V is for E to be paired with V under the optimally eligible semantic theory fitting with that association of sentences with coarse-grained propositions.
That's a lot to take in all at one go, but it's basically the picture of linguistic representation as fixed by considerations of charity/usage and eligibility/naturalness that lots of people at the moment seem to find appealing. The most striking feature---which it shares with other members of the "radical interpretation" approach to metasemantics---is that rather than starting from the referential properties of lexical items like names and predicates, it depicts linguistic content as fixed holistically by how well it meshes with patterns of usage. (There's lots to say here to unpack these metaphors, and work out what sort of metaphysical story of representation is being appealed to: that's something I went into quite a bit in my thesis---my take on it is that it's something close to a fictionalist proposal).
This metasemantics, I think, should be neutral between various semantic frameworks for generating the truth conditions. With a bit of tweaking, you can fit a Davidsonian T-theoretic semantic theory into this picture (as suggested by, um... Davidson). Someone who likes interpretational semantics but isn't a fan of structured propositions might take the semantic values of names to be objects, and the semantic values of sentences to be coarse-grained propositions, and say that it's these properties that get fixed via the best semantic theory of the patterns of assent and dissent (that's Lewis's take).
However, if you think that to adequately account for the complexities of natural language you need a more sophisticated, structured-proposition theory, this story also allows for it. The meaning-fixing semantic theory assigns objects to names, and structured propositions to sentences, together with a clause specifying how the structured propositions are to be paired up with coarse-grained propositions. Without the second part of the story, we'd end up with an association between sentences and structured propositions, but we wouldn't make contact with the patterns of assent and dissent if these take the form of associations of sentences with *coarse grained* propositions (as on Lewis's convention-based story). So on this radical interpretation story where the targeted semantic theories take a struc prop form, we get a simultaneous fix on *both* the denotation relation between sentences and struc props, and the relation between struc props and coarse-grained truth-conditions.
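Just to make the shape of that two-part story vivid, here's a minimal toy sketch (entirely my own illustration, with made-up names and a crude model of worlds and properties, not anything from Lewis or King): one clause pairs sentences with structured propositions modelled as bare tuples, and a second clause pairs those tuples with coarse-grained propositions, i.e. sets of worlds.

```python
# Toy sketch only: all names and modelling choices here are my own
# illustrative assumptions. Worlds are frozensets of atomic facts;
# a "property" is a function from worlds to extensions.

def philosopher(world):
    """Hypothetical property: the set of things that are philosophers at w."""
    return {x for (pred, x) in world if pred == "philosopher"}

# Part 1: denotation clauses -- sentences paired with structured
# propositions, modelled as bare (object, property) tuples.
denotes = {
    "Dummett is a philosopher": ("Dummett", philosopher),
}

# Part 2: the clause pairing structured propositions with coarse-grained
# propositions (sets of worlds) -- this is what makes contact with the
# patterns of assent and dissent.
def coarse_grained(struc_prop, worlds):
    obj, prop = struc_prop
    return {w for w in worlds if obj in prop(w)}

# Composing the two clauses yields sentence-level truth-conditions.
worlds = [frozenset({("philosopher", "Dummett")}), frozenset()]
print(coarse_grained(denotes["Dummett is a philosopher"], worlds))
# -> the set containing just the first world
```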
Let's indulge in a bit of "big-picture" metaphor-ing. It’d be misleading to think of this overall story as the analysis of sentential truth-conditions into a prior, and independently understood, notion of the truth-conditions of structured propositions, just as it's wrong on the radical interpretation picture to think of sentential content as "analyzed in terms of" a prior, and independently understood, notion of subsentential reference. Relative to the position sketched, it’s more illuminating to think of the pairing of structured and coarse-grained propositions as playing a purely instrumental role in smoothing the theory of the representational features of language. It's language which is the “genuine” representational phenomenon in the vicinity: the truth-conditional features attributed to struc propositions are a mere byproduct.
Again speaking metaphorically, it's not that sentences get to have truth-conditions in a merely derivative sense. Rather, structured propositions have truth-conditions in a merely derivative sense: the structured proposition has truth-conditions C if it is paired with C under the optimal overall theory of linguistic representation.
For all we've said, it may turn out that the same assignment of truth-conditions to set-theoretic entities will always be optimal, no matter which language is in play. If so, then it might be that there's a sense in which structured propositions have "absolute" truth-conditions, not relative to this or that language. But, realistically, one'd expect some indeterminacy in which struc props play the role (recall the Benacerraf point King makes, and the equal fitness of [a,F] and [F,a] to play the "that a is F" role). And it's not immediately clear why the choice to go one way for one natural language should constrain the way this element is deployed in another language. So it's at least prima facie open that it's not definitely the case that the same structured propositions, with the same TCs, are used in the semantics of both French and English.
It's entirely in the spirit of the current proposal to think that we identify [a,F] with the structured proposition that a is F only relative to a given natural language, and that this creature only has the truth-conditions it does relative to that language. This is all of a piece with the thought that the structured proposition's role is instrumental to the theory of linguistic representation, and not self-standing.
Ok. So with all this on the table, I'm going to return to read the book that prompted all this, and try to figure out whether there's a theoretical need for structured propositions with representational properties richer than those attributed by the view just sketched.
[update: interestingly, it turns out that King's book doesn't give the representational properties of propositions explanatory priority over the representational properties of sentences. His view is that the proposition that Dummett thinks is (very crudely, and suppressing details) the fact that in some actual language there is a sentence of (thus-and-such a structure) of which the first element is a word referring to Dummett and the second element is a predicate expressing thinking. So clearly semantic properties of words are going to be prior to the representational properties of propositions, since those semantic properties are components of the proposition. But more than this, from what I can make out, King's thought is that if there were a time when humans spoke a language without attitude-ascriptions and the like, then sentences would have truth-conditions, and the proposition-like facts would be "hanging around" them, but the proposition-like facts wouldn't have any representational role. Once we start making attitude ascriptions, we implicitly treat the proposition-like structure as if it had the same TCs as sentences, and (by something like a charity/eligibility story) the "propositional relation" element acquires semantic significance and the proposition-like structure gets to have truth-conditions for the first time.
That's very close to the overall package I'm sketching above. What's significant dialectically, perhaps, is that this story can explain TCs for all sorts of apparently non-semantic entities, like sets. So I'm thinking it really might be the Benacerraf point that's bearing the weight in ruling out set-theoretic entities as struc propns---as explained previously, I don't go along with *that*.]
Tuesday, December 18, 2007
Structured propositions and truth conditions.
In the previous post, I talked about the view of structured propositions as lists, or n-tuples, and the Benacerraf objections against it. So now I'm moving on to a different sort of worry. Here's King expressing it:
“A final difficulty for the view that propositions are ordered n-tuples concerns the mystery of how or why on that view they have truth conditions. On any definition of ordered n-tuples we are considering, they are just sets. Presumably, many sets have no truth conditions (eg. The set of natural numbers). But then why do certain sets, certain ordered n-tuples, have truth-conditions? Since not all sets have them, there should be some explanation of why certain sets do have them. It is very hard to see what this explanation could be.”
I feel the force of something in this vicinity, but I'm not sure how to capture the worry. In particular, I'm not sure whether it's right to think of structured propositions' having truth-conditions as a particularly "deep" fact over which there is mystery in the way King suggests. To get what I'm after here, it's probably best simply to lay out a putative account of the truth-conditions of structured propositions, and just to think about how we'd formulate the explanatory challenge.
Suppose, for example, one put forward the following sort of theory:
(i) The structured proposition that Dummett is a philosopher = [Dummett, being a philosopher].
(ii) [Dummett, being a philosopher] stands in the T-relation to w iff Dummett is a philosopher according to w.
(iii) bearing the T-relation to w = being true at w
Generalizing,
(i) For all a, F, the structured proposition that a is F = [a, F]
(ii) For all individuals a and properties F, [a, F] stands in the T-relation to w iff a instantiates F according to w.
(iii) bearing the T-relation to w = being true at w
In full generality, I guess we’d semantically ascend for an analogue of (i), and give a systematic account of which structured propositions are associated with which English sentences (presumably a contingent matter). For (ii), we’d give a specification (which there’s no reason to make relative to any contingent facts) of which ordered n-tuples stand in the T-relation to which worlds. (iii) can stay as it is.
The naïve theorist may then claim that (ii) and (iii) amount to a reductive account of what it is for a structured proposition to have truth-conditions. Why does [1,2] not have any truth-conditions, while [Dummett, being a philosopher] does? Because the story about what it is for an ordered pair to stand in the T-relation to a given world just doesn’t return an answer where the second component isn’t a property. This seems like a totally cheap and nasty response, I’ll admit. But what’s wrong with it? If that’s what truth-conditions for structured propositions are, then what’s left to explain? It doesn't seem that there is any mystery over (ii): this can be treated as a reductive definition of the new term "bearing the T-relation". Are there somehow explanatory challenges facing someone who endorses the property-identity (iii)? Quite generally, I don't see how identities could be the sort of thing that needs explaining.
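To see just how flat-footed the flat-footed response is, here's a minimal sketch of the naïve theorist's clause (ii) (my own toy modelling, with hypothetical names; structured propositions are bare pairs, and "being a property" is crudely proxied by "being callable"). The point is only that the defining clause simply has nothing to say about pairs like [1,2], and that silence is the whole story about why such sets lack truth-conditions.

```python
# Toy sketch of clause (ii) only -- the modelling choices are my own
# assumptions, not King's or any official machinery from the post.

def is_property(x):
    # Crude stand-in for "being a property".
    return callable(x)

def t_relation(pair, world):
    """Clause (ii): [a, F] bears T to w iff a instantiates F according to w.
    Returns True/False where the clause speaks, None where it is silent."""
    a, f = pair
    if not is_property(f):
        return None  # the definition just doesn't return an answer here
    return a in f(world)

def philosopher(world):
    """Hypothetical property: who counts as a philosopher according to w."""
    return {x for (pred, x) in world if pred == "philosopher"}

w = frozenset({("philosopher", "Dummett")})
print(t_relation(("Dummett", philosopher), w))  # True -- has truth-conditions
print(t_relation((1, 2), w))                    # None -- no truth-conditions
```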
(Of course, you might semantically ascend and get a decent explanatory challenge: why should "having truth conditions" refer to the T-relation? But I don't really see any in-principle problem with addressing this sort of challenge in the usual ways: just by pointing to the fact that the T-relation is a reasonably natural candidate satisfying platitudes associated with truth-condition talk.)
I'm not being willfully obstructive here: I'm genuinely interested in what the dialectic should be at this point. I've got a few ideas about things one might say to bring out what's wrong with the flat-footed response to King's challenge. But none of them persuades me.
Some options:
(a) Earlier, we ended up claiming that it was indefinite which sets structured propositions were identical with. But now, we’ve given a definition of truth-conditions that is committal on this front. For example, [F,a] was supposed to be a candidate precisification of the proposition that a is F. But (ii) won’t assign it truth conditions, since its second component isn’t a property but an individual.
Reply: just as it was indefinite what the structured propositions were, it is indefinite which sets have truth-conditions, and what the specification of those truth-conditions is. The two kinds of indefiniteness are “penumbrally connected”. On a precisification on which the prop that a is F = [a,F], the clause holds as above; on a precisification on which that a is F = [F,a], a slightly twisted version of the clause will hold. But no matter how we precisify structured proposition-talk, there will be a clause defining the truth-conditions for the entities that we end up identifying with structured propositions.
(b) You can’t just offer definitional clauses or “what it is” claims and think you’ve evaded all explanatory duties! What would we think of a philosopher of mind who put forward a reductive account whereby pain-qualia were by definition just some characteristics of C-fibre firing, and then smugly claimed to have no explanatory obligations left?
Reply: one presupposition of the above is that clauses like (ii) “do the job” of truth-conditions for structured propositions, i.e. there won’t be a structured proposition (by the lights of (i)) whose assigned “truth-conditions” (by the lights of (ii)) go wrong. So whatever else happens, the T-relation (defined via (ii)) and the truth-at relation we’re interested in have a sort of constant covariation (and, unlike the attempt to use a clause like (ii) to define truth-conditions for sentences, we won’t get into trouble when we vary the language use and the like across worlds, so the constant covariation is modally robust). The equivalent assumption in the mind case is that pain qualia and the candidate aspect of C-fibre firing are necessarily constantly correlated. Under those circumstances, many would think we would be entitled to identify pain qualia and the physicalistic underpinning. Another way of putting this: worries about the putative “explanatory gap” between pain-qualia and physical states are often argued to manifest themselves in a merely contingent correlation between the former and the latter. And that’d mean that any attempt to claim that pain qualia just are thus-and-such physical state would be objectionable on the grounds that pain qualia and the physical state come apart in other possible worlds.
In the case of the truth-conditions of structured propositions, nothing like this seems in the offing. So I don’t see a parody of the methodology recommended here. Maybe there is some residual objection lurking: but if so, I want to hear it spelled out.
(c) Truth-conditions aren’t the sort of thing that you can just define up as you please for the special case of structured propositions. Representational properties are the sort of things possessed by structured propositions, token sentences (spoken or written) of natural language, tokens of mentalese, pictures and the rest. If truth-conditions were just the T-relation defined by clause (ii), then sentences of mentalese and English, pictures etc. couldn’t have truth-conditions. Reductio.
Reply: it’s not clear at all that sentences and pictures “have truth-conditions” in the same sense as do structured propositions. It fits very naturally with the structured-proposition picture to think of sentences standing in some “denotation” relation to a structured proposition, through which they may be said derivatively to have truth-conditions. What we mean when we say that ‘S has truth conditions C’ is that S denotes some structured proposition p and p has truth-conditions C, in the sense defined above. For linguistic representation, at least, it’s fairly plausible that structured propositions can act as a one-stop shop for truth-conditions.
Pictures are a trickier case. Presumably they can represent situations accurately or inaccurately, and so it might be worth theorizing about them by associating them with a coarse-grained proposition (the set of worlds in which they represent accurately). But presumably, in a painting that represents Napoleon’s defeat at Waterloo, there need be no separable elements corresponding to Napoleon, Waterloo, and being defeated at, of the kind that would make for a neat association of the picture with a structured proposition, in the way that sentences are neatly associated with such things. Absent some kind of denotation relation between pictures and structured propositions, it’s not so clear whether we can derivatively define truth-conditions for pictures as the compound of the denotation relation and the truth-condition relation for structured propositions.
None of this does anything to suggest that we can’t give an ok story about pairing pictures with (e.g.) coarse-grained propositions. It’s just that the relation between structured propositions and coarse-grained propositions (=truth conditions) and the relation between pictures and coarse-grained propositions can’t be the same one, on this account, and nor is it even obvious how the two are related (unlike e.g. the sentence/structured proposition case).
So one thing that may cause trouble for the view I’m sketching is if we have both of the following: (A) there is a unified representation relation, such that pictures, sentences, and structured propositions stand in the same (or at least intimately related) representation relations to truth-conditions C; (B) there’s no story about pictorial (and other) representations that routes via structured propositions, and so no hope of a unified account of representation given (ii)+(iii).
The problem here is that I don’t feel terribly uncomfortable denying (A) or (B). But I can imagine debate on this point, so at least here I see some hope of making progress.
Having said all this in defence of (ii), I think there are other ways for the naïve, simple set-theoretic account of structured propositions to defend itself that don't look quite so flat-footed. But the ways I’m thinking of depend on some rather more controversial metasemantic theses, so I’ll split that off into a separate post. It’d be nice to find out what’s wrong with this, the most basic and flat-footed response I can think of.
Structured propositions and Benacerraf
I’ve recently been reading Jeff King’s book on structured propositions. It’s really good, as you would expect. There’s one thing that’s bothering me though: I can’t quite get my head around what’s wrong with the simplest, most naïve account of the nature of propositions. (Disclaimer: this might all turn out to be very simple-minded to those in the know. I'd be happy to get pointers to the literature: hey, maybe it'll be to bits of Jeff's book I haven't got to yet...)
The first thing you encounter when people start talking about structured propositions is notation like [Dummett, being a philosopher]. This is supposed to stand for the proposition that Dummett is a philosopher, and highlights the fact that (on the Russellian view) Dummett and the property of being a philosopher are components of the proposition. The big question is supposed to be: what do the brackets and comma represent? What sort of compound object is the proposition? In what sense does it have Dummett and being a philosopher as components? (If you prefer a structured intension view, so be it: then you’ll have a similar beast with the individual concept of Dummett and the worlds-intension associated with “is a philosopher” as ‘constituents’. I’ll stick with the Russellian view for illustrative purposes.)
For purposes of modelling propositions, people often interpret the commas and brackets as the ordered n-tuples of standard set theory. The simplest, most naïve interpretation of what structured propositions are is simply to identify them with n-tuples. What’s the structured proposition itself? It’s a certain kind of set. In what sense are Dummett and the property of being a philosopher constituents of the structured proposition that Dummett is a philosopher? They’re elements of the transitive closure of the relevant set.
So all that is nice and familiar. So what’s the problem? In his ch. 1 (and, in passing, in the SEP article here) King mentions two concerns. In this post, I’ll just set the scene by talking about the first. It's a version of a famous Benacerraf worry, which anyone with some familiarity with the philosophy of maths will have come across (King explicitly makes the comparison). The original Benacerraf puzzle is something like this: suppose that the only abstract things are set-like, and whatever else they may be, the referents of arithmetical terms should be abstract. Then numerals will stand for some set or other. But there are all sorts of things that behave like the natural numbers within set theory: the constructions known as the (finite) Zermelo ordinals (null, {null}, {{null}}, {{{null}}}...) and the (finite) von Neumann ordinals (null, {null}, {null,{null}}…) are just two. So there’s no non-arbitrary theory of which sets the natural numbers are.
The phenomenon crops up all over the place. Think of ordered n-tuples themselves. Famously, within an ontology of unordered sets, you can define up things that behave like ordered pairs: either [a,b]={{a},{a,b}} or {{{a},null},{{b}}}. (For details see http://en.wikipedia.org/wiki/Ordered_pair). It appears there’s no non-arbitrary reason to prefer a theory that ‘reduces’ ordered to unordered pairs one way or the other.
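Just to have the two multiplicity examples explicitly on the table, here are the standard constructions written out (a worked-out aside of mine; the attributions to Zermelo, von Neumann, Kuratowski and Wiener are the usual textbook labels, and I write the empty set as ∅ rather than "null"):

```latex
% The standard constructions behind the two Benacerraf-style examples above.
\begin{align*}
  \text{Zermelo ordinals:}     &\quad 0=\varnothing,\; 1=\{\varnothing\},\;
       2=\{\{\varnothing\}\},\; 3=\{\{\{\varnothing\}\}\},\;\dots \\
  \text{von Neumann ordinals:} &\quad 0=\varnothing,\; 1=\{\varnothing\},\;
       2=\{\varnothing,\{\varnothing\}\},\;\dots,\; n+1=n\cup\{n\} \\[4pt]
  \text{Kuratowski pairs:}     &\quad [a,b]=\{\{a\},\{a,b\}\} \\
  \text{Wiener pairs:}         &\quad [a,b]=\{\{\{a\},\varnothing\},\{\{b\}\}\}
\end{align*}
```

Either member of each pair does the job equally well: both ordinal constructions (with the obvious successor operations) satisfy the arithmetical axioms, and both pair definitions satisfy [a,b]=[c,d] iff a=c and b=d, which is all we ever demanded of ordered pairs.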
Likewise, says King, there looks to be no non-arbitrary choice of set-theoretic representation of structured propositions (not even if we spot ourselves ordered sets as primitive to avoid the familiar ordered-pair worries). Sure, we *could* associate the words “the proposition that Dummett is a philosopher” with the ordered pair [Dummett, being a philosopher]. But we could also associate it with the set [being a philosopher, Dummett] (and choices multiply when we get to more complex structured propositions).
One reaction to the Benacerrafian challenge is to take it to be a decisive objection to an ontological story about numbers, ordered pairs or whatever that allows only unordered sets as a basic mathematical ontology. My own feeling is (and this is not uncommon, I think) that this would be an overreaction. More strongly: no argument that I've seen from the Benacerraf phenomenon to this ontological conclusion seems to me to be terribly persuasive.
What we should admit, rather, is that if natural numbers or ordered pairs are sets, it’ll be indefinite which sets they are. So, for example, [a,b]={{a},{a,b}} will be neither definitely true nor definitely false (unless we simply stipulatively define the [,] notation one way or another rather than treating it as pre-theoretically understood). Indefiniteness is pervasive in natural language---everyone needs a story about how it works. And the idea is that whatever that story should be, it should be applied here. Maybe some theories of indefiniteness will make these sorts of identifications problematic. But prominent theories like Supervaluationism and Epistemicism have neat and apparently smooth theories of what it is we’re saying when we call that identity indefinite: for the supervaluationist, it (may) mean that “[a,b]” refers to {{a},{a,b}} on one but not all precisifications of our set-theoretic language. For the epistemicist, it means that (for certain specific principled reasons) we can’t know that the identity claim is false. The epistemicist will also maintain there’s a fact of the matter about which identity statement connecting ordered and unordered sets is true. And there’ll be some residual arbitrariness here (though we’ll probably have to semantically ascend to find it)---but if there is arbitrariness, it’s the sort of thing we’re independently committed to in order to deal with the indefiniteness rife throughout our language. If you’re a supervaluationist, then you won’t admit there’s any arbitrariness: (standardly) the identity statement is neither true nor false, so our theory won’t be committed to “making the choice”.
If that’s the right way to respond to the general Benacerraf challenge, it’s the obvious thing to say in response to the version of that puzzle that arises for structured propositions. And this sort of generalization of the indefiniteness maneuver to philosophical analysis is pretty familiar; it’s part of the standard machinery of the Lewisian hordes. Very roughly, the programme goes: figure out what you want the Fs to do, Ramsify away terms for Fs, and you get a way to fix where the Fs are amidst the things you believe in: they are whatever satisfies the open sentence that you’re left with. Where there are multiple, equally good satisfiers, deploy the indefiniteness maneuver.
I’m not so worried on this front, for what I take to be pretty routine reasons. But there’s a second challenge King raises for the simple, naïve theory of structured propositions, which I think is trickier. More on this anon.
Wednesday, December 12, 2007
Public service announcements (updated)
There are some interesting conferences being announced these days. A couple have caught my eye/been brought to my attention.
First is the Semantics and Philosophy in Europe CFP. This looks like a really excellent event... one of those events where I think: if I'm not there, I'll be regretting not being there...
The second event is the 2008 Wittgenstein Symposium. Its remit seems far wider than the name might suggest... looks like a funky set of topics. I reproduce the CFP below...
[Update: a third is a one-day conference on the philosophy of mathematics in Manchester. Announcement at the bottom of the post.]
CALL FOR PAPERS:
31st International Wittgenstein Symposium 2008 on
Reduction and Elimination in Philosophy and the Sciences
Kirchberg am Wechsel, Austria, 10-16 August 2008
<http://www.alws.at/>
INVITED SPEAKERS:
William Bechtel, Ansgar Beckermann, Johan van Benthem, Alexander Bird, Elke
Brendel, Otavio Bueno, John P. Burgess, David Chalmers, Igor Douven, Hartry
Field, Jerry Fodor, Kenneth Gemes, Volker Halbach, Stephan Hartmann, Alison
Hills, Leon Horsten, Jaegwon Kim, James Ladyman, Oystein Linnebo, Bernard
Linsky, Thomas Mormann, Carlos Moulines, Thomas Mueller, Karl-Georg
Niebergall, Joelle Proust, Stathis Psillos, Sahotra Sarkar, Gerhard Schurz,
Patrick Suppes, Crispin Wright, Edward N. Zalta, Albert Anglberger, Elena
Castellani, Philip Ebert, Paul Egre, Ludwig Fahrbach, Simon Huttegger,
Christian Kanzian, Jeff Ketland, Marcus Rossberg, Holger Sturm, Charlotte
Werndl.
ORGANISERS:
Alexander Hieke (Salzburg) & Hannes Leitgeb (Bristol),
on behalf of the Austrian Ludwig Wittgenstein Society.
SECTIONS OF THE SYMPOSIUM:
Sections:
1. Wittgenstein
2. Logical Analysis
3. Theory Reduction
4. Nominalism
5. Naturalism & Physicalism
6. Supervenience
Workshops:
- Ontological Reduction & Dependence
- Neologicism
More detailed information on the contents of the sections and workshops can
be found in the "BACKGROUND" part further down.
DEADLINE FOR SUBMITTING PAPERS: 30th April 2008
Instructions for authors will soon be available at <http://www.alws.at/>.
All contributions will be peer-reviewed. All submitted papers accepted for
presentation at the symposium will appear in the Contributions of the ALWS
Series. Since 1993, successive volumes in this series have appeared each
August immediately prior to the symposium.
FINAL DATE FOR REGISTRATION: 20th July 2008
Further information on registration forms and information on travel and
accommodation will be posted at <http://www.alws.at/>.
SCHEDULE OF THE SYMPOSIUM:
The symposium will take place in Kirchberg am Wechsel (Austria) from 10-16
August 2008. Sunday, 10th of August 2008 is supposed to be the day on which
speakers and conference participants are going to arrive and when they
register in the conference office. In the evening, we plan on having an
informal get together. On the next day (11 August, 10:00am) the first
official session of presentations will start with Professor Jerry Fodor's
opening lecture of the symposium. The symposium will end officially in the
afternoon of 16 August 2008.
BACKGROUND:
Philosophers often have tried to either reduce "disagreeable" entities or
concepts to (more) acceptable entities or concepts, or to eliminate the
former altogether. Reduction and elimination, of course, very often have to
do with the question of "What is really there?", and thus these notions
belong to the most fundamental ones in philosophy. But the topic is not
merely restricted to metaphysics or ontology. Indeed, there are a variety
of attempts at reduction and elimination to be found in all areas (and
periods) of philosophy and science.
The symposium is intended to deal with the following topics (among others):
- Logical Analysis: The logical analysis of language has long been regarded
as the dominating paradigm for philosophy in the modern analytic tradition.
Although the importance of projects such as Frege's logicist construction
of mathematics, Russell's paraphrasis of definite descriptions, and
Carnap's logical reconstruction and explicatory definition of empirical
concepts is still acknowledged, many philosophers now doubt the viability
of the programme of logical analysis as it was originally conceived.
Notorious problems such as those affecting the definitions of knowledge or
truth have led to the revival of "non-analysing" approaches to
philosophical concepts and problems (see e.g. Williamson's account of
knowledge as a primitive notion and the deflationary criticism of Tarski's
definition of truth). What role will -- and should -- logical analysis play
in philosophy in the future?
- Theory Reduction: Paradigm cases of theory reduction, such as the
reduction of Kepler's laws of planetary motion to Newtonian mechanics or
the reduction of thermodynamics to the kinetic theory of gases, prompted
philosophers of science to study the notions of reduction and reducibility
in science. Nagel's analysis of reduction in terms of bridge laws is the
classical example of such an attempt. However, those early accounts of
theory reduction were soon found to be too naive and their underlying
treatment of scientific theories unrealistic. What are the state-of-the-art
proposals on how to understand the reduction of a scientific theory to
another? What is the purpose of such a reduction? In which cases should we
NOT attempt to reduce a theory to another one?
- Nominalism: Traditionally, nominalism is concerned with denying the
existence of universals. Modern versions of nominalism object to abstract
entities altogether; in particular they attack the assumption that the
success of scientific theories, especially their mathematical components,
commit us to the existence of abstract objects. As a consequence,
nominalists have to show how the alleged reference to abstract entities can
be eliminated or is merely apparent (Field's Science without Numbers is
prototypical in this respect). What types of "Constructive Nominalism" (a
la Goodman & Quine) are there? Are there any principal obstacles for
nominalistic programmes in general? What could nominalistic accounts of
quantum theory or of set theory look like?
- Naturalism & Physicalism: Naturalism and physicalism both want to
eliminate the part of language that does not refer to the "natural facts"
that science -- or indeed physics -- describes. Metaphysical Naturalism
often goes hand in hand with (or even entails) an epistemological
naturalism (Quine) as well as an ethical naturalism (mainly defined by its
critics), so that also these two main disciplines of philosophy should
restrict their investigations to the world of natural facts. Physicalist
theses, of course, play a particularly important role in the philosophy of
mind, since neuroscientific findings seem to support the view that,
ultimately, the realm of the mental is but a part of the physical world.
Which forms of naturalism and physicalism can be maintained within
metaphysics, philosophy of science, epistemology and ethics? What are the
consequences for philosophy when such views are accepted? Is philosophy a
scientific discipline? If naturalism or physicalism is right, can we still
see ourselves as autonomous beings with morality and a free will?
- Supervenience: Mental, moral, aesthetical, and even "epistemological"
properties have been said to supervene on properties of particular kind,
e.g., physical properties. Supervenience is claimed to be neither reduction
nor elimination but rather something different, but all these notions still
belong to the same family, and sometimes it is even assumed that reduction
is a borderline case of supervenience. What are the most abstract laws that
govern supervenience relations? Which contemporary applications of the
notion of supervenience are philosophically successful in the sense that
they have more explanatory power than "reductive theories" without leading
to unwanted semantical or ontological commitments? What are the logical
relations between the concepts of supervenience, reduction, elimination,
and ontological dependence?
The symposium will also include two workshops on:
- Ontological Reduction & Dependence: Reducing a class of entities to
another one has always been regarded attractive by those who subscribe to
an ideal of ontological parsimony. On the other hand, what it is that gets
reduced ontologically (objects or linguistic items?), what it means to be
reduced ontologically, and which methods of reduction there are, is
controversial (to say the least). Apart from reducing entities to further
entities, metaphysicians sometimes aim to show that entities depend
ontologically on other entities; e.g., a colour sensation instance would
not exist if the person having the sensation did not exist. In other
philosophical contexts, entities are rather said to depend ontologically on
other entities if the individuation of the former involves the latter; in
this sense, sets might be regarded to depend on their members, and
mathematical objects would depend on the mathematical structures they are
part of. Is there a general formal framework in which such notions of
ontological reduction and dependency can be studied more systematically? Is
ontological reduction really theory reduction in disguise? How shall we
understand ontological dependency of objects which exist necessarily? How
do reduction and dependence relate to Quine's notion of ontological
commitment?
- Neologicism: Classical Logicism aimed at deriving every true mathematical
statement from purely logical truths by reducing all mathematical concepts
to logical ones. As Frege's formal system proved to be inconsistent, and
modern set theory seemed to require strong principles of a genuinely
mathematical character, the programme of Logicism was long regarded as
dead. However, in the last twenty years neologicist and neo-Fregean
approaches in the philosophy of mathematics have experienced an amazing
revival (Wright, Boolos, Hale). Abstraction principles, such as Hume's
principle, have been suggested to support a logicist reconstruction of
mathematics in view of their quasi-analytical status. Do we have to
reconceive the notion of reducibility in order to understand in what sense
Neologicism is able to reduce mathematics to logic (as Linsky & Zalta have
suggested recently)? What are the abstraction principles that govern
mathematical theories apart from arithmetic (in particular: calculus and
set theory)? How can Neo-Fregeanism avoid the logical and philosophical
problems that affected Frege's original system -- cf. the problems of
impredicativity and Bad Company?
If you know philosophers or scientists, especially excellent graduate
students, who might be interested in the topic of Reduction and Elimination
in Philosophy and the Sciences, we would be very grateful if you could
point them to the symposium.
With best wishes,
Alexander Hieke and Hannes Leitgeb
********************************************************************************************
Announcing a one-day conference....
Metaphysics and Epistemology: Issues in the Philosophy of Mathematics
Saturday 15 March 2008
Chancellors Hotel and Conference Centre, University of Manchester
Speakers to include:
Joseph Melia (University of Leeds)
Alexander Paseau (University of Oxford)
Philip Ebert (University of Stirling)
For registration details, see
http://www.socialsciences.manchester.ac.uk/disciplines/philosophy/events/conference/index.html
This conference is organised with financial support from the Royal Institute of
Philosophy.
First is the Semantics and Philosophy in Europe CFP. This looks really like a really excellent event... one of those events where I think: If I'm not there, I'll be regretting not being there...
The second event is the 2008 Wittgenstein Symposium. It's remit seems far wider than the name might suggest... looks like a funky set of topics. I reproduce the CFP below...
[Update: a third is a one-day conference on the philosophy of mathematics in Manchester. Announcement at the bottom of the post.]
CALL FOR PAPERS:
31st International Wittgenstein Symposium 2008 on
Reduction and Elimination in Philosophy and the Sciences
Kirchberg am Wechsel, Austria, 10-16 August 2008
<http://www.alws.at/>
INVITED SPEAKERS:
William Bechtel, Ansgar Beckermann, Johan van Benthem, Alexander Bird, Elke
Brendel, Otavio Bueno, John P. Burgess, David Chalmers, Igor Douven, Hartry
Field, Jerry Fodor, Kenneth Gemes, Volker Halbach, Stephan Hartmann, Alison
Hills, Leon Horsten, Jaegwon Kim, James Ladyman, Oystein Linnebo, Bernard
Linsky, Thomas Mormann, Carlos Moulines, Thomas Mueller, Karl-Georg
Niebergall, Joelle Proust, Stathis Psillos, Sahotra Sarkar, Gerhard Schurz,
Patrick Suppes, Crispin Wright, Edward N. Zalta, Albert Anglberger, Elena
Castellani, Philip Ebert, Paul Egre, Ludwig Fahrbach, Simon Huttegger,
Christian Kanzian, Jeff Ketland, Marcus Rossberg, Holger Sturm, Charlotte
Werndl.
ORGANISERS:
Alexander Hieke (Salzburg) & Hannes Leitgeb (Bristol),
on behalf of the Austrian Ludwig Wittgenstein Society.
SECTIONS OF THE SYMPOSIUM:
Sections:
1. Wittgenstein
2. Logical Analysis
3. Theory Reduction
4. Nominalism
5. Naturalism &Physicalism
6. Supervenience
Workshops:
- Ontological Reduction & Dependence
- Neologicism
More detailed information on the contents of the sections and workshops can
be found in the "BACKGROUND" part further down.
DEADLINE FOR SUBMITTING PAPERS: 30th April 2008
Instructions for authors will soon be available at <http://www.alws.at/>.
All contributions will be peer-reviewed. All submitted papers accepted for
presentation at the symposium will appear in the Contributions of the ALWS
Series. Since 1993, successive volumes in this series have appeared each
August immediately prior to the symposium.
FINAL DATE FOR REGISTRATION: 20th July 2008
Further information on registration forms and information on travel and
accommodation will be posted at <http://www.alws.at/>.
SCHEDULE OF THE SYMPOSIUM:
The symposium will take place in Kirchberg am Wechsel (Austria) from 10-16 August 2008. Speakers and conference participants are expected to arrive and register at the conference office on Sunday, 10 August 2008; an informal get-together is planned for that evening. The first official session of presentations will start on the next day (11 August, 10:00am) with Professor Jerry Fodor's opening lecture. The symposium will officially end on the afternoon of 16 August 2008.
BACKGROUND:
Philosophers have often tried either to reduce "disagreeable" entities or concepts to (more) acceptable entities or concepts, or to eliminate the former altogether. Reduction and elimination, of course, very often have to do with the question "What is really there?", and these notions are thus among the most fundamental in philosophy. But the topic is not restricted to metaphysics or ontology. Indeed, attempts at reduction and elimination are to be found in all areas (and periods) of philosophy and science.
The symposium is intended to deal with the following topics (among others):
- Logical Analysis: The logical analysis of language has long been regarded
as the dominating paradigm for philosophy in the modern analytic tradition.
Although the importance of projects such as Frege's logicist construction
of mathematics, Russell's paraphrasis of definite descriptions, and
Carnap's logical reconstruction and explicatory definition of empirical
concepts is still acknowledged, many philosophers now doubt the viability
of the programme of logical analysis as it was originally conceived.
Notorious problems such as those affecting the definitions of knowledge or
truth have led to the revival of "non-analysing" approaches to
philosophical concepts and problems (see e.g. Williamson's account of
knowledge as a primitive notion and the deflationary criticism of Tarski's
definition of truth). What role will -- and should -- logical analysis play
in philosophy in the future?
- Theory Reduction: Paradigm cases of theory reduction, such as the
reduction of Kepler's laws of planetary motion to Newtonian mechanics or
the reduction of thermodynamics to the kinetic theory of gases, prompted
philosophers of science to study the notions of reduction and reducibility
in science. Nagel's analysis of reduction in terms of bridge laws is the
classical example of such an attempt. However, those early accounts of
theory reduction were soon found to be too naive and their underlying
treatment of scientific theories unrealistic. What are the state-of-the-art
proposals on how to understand the reduction of a scientific theory to
another? What is the purpose of such a reduction? In which cases should we
NOT attempt to reduce a theory to another one?
- Nominalism: Traditionally, nominalism is concerned with denying the existence of universals. Modern versions of nominalism object to abstract entities altogether; in particular they attack the assumption that the success of scientific theories, especially their mathematical components, commits us to the existence of abstract objects. As a consequence, nominalists have to show how the alleged reference to abstract entities can be eliminated or is merely apparent (Field's Science without Numbers is prototypical in this respect). What types of "Constructive Nominalism" (à la Goodman & Quine) are there? Are there any obstacles in principle to nominalistic programmes in general? What could nominalistic accounts of quantum theory or of set theory look like?
- Naturalism & Physicalism: Naturalism and physicalism both want to
eliminate the part of language that does not refer to the "natural facts"
that science -- or indeed physics -- describes. Metaphysical Naturalism often goes hand in hand with (or even entails) an epistemological naturalism (Quine) as well as an ethical naturalism (mainly defined by its critics), so that these two main disciplines of philosophy, too, should restrict their investigations to the world of natural facts. Physicalist
theses, of course, play a particularly important role in the philosophy of
mind, since neuroscientific findings seem to support the view that,
ultimately, the realm of the mental is but a part of the physical world.
Which forms of naturalism and physicalism can be maintained within
metaphysics, philosophy of science, epistemology and ethics? What are the
consequences for philosophy when such views are accepted? Is philosophy a
scientific discipline? If naturalism or physicalism is right, can we still
see ourselves as autonomous beings with morality and a free will?
- Supervenience: Mental, moral, aesthetic, and even "epistemological" properties have been said to supervene on properties of a particular kind, e.g., physical properties. Supervenience is claimed to be neither reduction
nor elimination but rather something different, but all these notions still
belong to the same family, and sometimes it is even assumed that reduction
is a borderline case of supervenience. What are the most abstract laws that
govern supervenience relations? Which contemporary applications of the
notion of supervenience are philosophically successful in the sense that
they have more explanatory power than "reductive theories" without leading
to unwanted semantical or ontological commitments? What are the logical
relations between the concepts of supervenience, reduction, elimination,
and ontological dependence?
The symposium will also include two workshops on:
- Ontological Reduction & Dependence: Reducing a class of entities to another one has always been regarded as attractive by those who subscribe to an ideal of ontological parsimony. On the other hand, what it is that gets
reduced ontologically (objects or linguistic items?), what it means to be
reduced ontologically, and which methods of reduction there are, is
controversial (to say the least). Apart from reducing entities to further
entities, metaphysicians sometimes aim to show that entities depend
ontologically on other entities; e.g., a colour sensation instance would
not exist if the person having the sensation did not exist. In other
philosophical contexts, entities are rather said to depend ontologically on
other entities if the individuation of the former involves the latter; in
this sense, sets might be regarded as depending on their members, and
mathematical objects would depend on the mathematical structures they are
part of. Is there a general formal framework in which such notions of
ontological reduction and dependency can be studied more systematically? Is
ontological reduction really theory reduction in disguise? How shall we
understand ontological dependency of objects which exist necessarily? How
do reduction and dependence relate to Quine's notion of ontological
commitment?
- Neologicism: Classical Logicism aimed at deriving every true mathematical
statement from purely logical truths by reducing all mathematical concepts
to logical ones. As Frege's formal system proved to be inconsistent, and
modern set theory seemed to require strong principles of a genuinely
mathematical character, the programme of Logicism was long regarded as
dead. However, in the last twenty years neologicist and neo-Fregean
approaches in the philosophy of mathematics have experienced an amazing
revival (Wright, Boolos, Hale). Abstraction principles, such as Hume's
principle, have been suggested to support a logicist reconstruction of
mathematics in view of their quasi-analytical status. Do we have to
reconceive the notion of reducibility in order to understand in what sense
Neologicism is able to reduce mathematics to logic (as Linsky & Zalta have
suggested recently)? What are the abstraction principles that govern
mathematical theories apart from arithmetic (in particular: calculus and
set theory)? How can Neo-Fregeanism avoid the logical and philosophical
problems that affected Frege's original system -- cf. the problems of
impredicativity and Bad Company?
If you know philosophers or scientists, especially excellent graduate
students, who might be interested in the topic of Reduction and Elimination
in Philosophy and the Sciences, we would be very grateful if you could
point them to the symposium.
With best wishes,
Alexander Hieke and Hannes Leitgeb
********************************************************************************************
Announcing a one-day conference....
Metaphysics and Epistemology: Issues in the Philosophy of Mathematics
Saturday 15 March 2008
Chancellors Hotel and Conference Centre, University of Manchester
Speakers to include:
Joseph Melia (University of Leeds)
Alexander Paseau (University of Oxford)
Philip Ebert (University of Stirling)
For registration details, see
http://www.socialsciences.manchester.ac.uk/disciplines/philosophy/events/conference/index.html
This conference is organised with financial support from the Royal Institute of
Philosophy.
Tuesday, December 04, 2007
Two problems of the many.
Here's a paradigmatic problem of the many (Geach and Unger are the usual sources cited, but I'm not claiming this to be exactly the version they use). Let's take a moulting cat, Tibbles. There are many hairs that are neither clearly attached, nor clearly unattached, to the main body of the cat. Let's enumerate them 1---1000. Then we might consider the material objects which are the masses of cat-arranged matter that include half of the thousand hairs and exclude the other half. There are many ways to choose the half that's included. So by this recipe we get many, many distinct masses of cat-arranged matter, differing only over hairs. The various pieces of cat-arranged matter change their properties over time in very much the way that cats do: they are now in a sitting-shape, now in a standing-shape, now in a lapping-milk shape, now in an emitting-meows configuration. They each seem to have everything intrinsically required for being a cat.
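Just to put a number on "many": here's a quick back-of-the-envelope count (a minimal Python sketch of my own, assuming, as in the recipe above, that each candidate includes exactly 500 of the 1000 borderline hairs; the figures are purely illustrative):

```python
# Count the cat-candidates generated by the recipe above: each candidate
# includes exactly half of the 1000 borderline hairs and excludes the rest.
import math

borderline_hairs = 1000
candidates = math.comb(borderline_hairs, borderline_hairs // 2)

print(candidates)  # roughly 2.7 * 10**299 distinct masses of cat-arranged matter
```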
If you're inclined to think (and I am) that a cat is a material object identical to some piece of cat-arranged matter, then the problem of the many arises: which of the various distinct pieces of cat-arranged matter is the cat? Various answers have been suggested. Some of the most obvious (though not necessarily the most sensible) are: (i) nihilism: none of the cat-candidates are cats; (ii) brutalism: exactly one is a cat, and there is a brute fact of the matter which it is; (iii) vague cat: exactly one is a cat, and it's a vague matter which it is; (iv) manyism: lots of the cat-candidates are cats.
(By the way, (ii) and (iii) may not be incompatible, if you're an epistemicist about vagueness. And those who are fans of many-valued logics for vagueness should have a think about whether they can really support (iii). Consider the best candidates to be a cat, c1...c1000. Suppose these are each cats to an equal degree. Then "one of c1...c1000 is a cat" will standardly have a degree of truth equal to that of the disjunction = the maximum of the disjuncts = the degree of truth of "c1 is a cat". And "all of c1...c1000 are cats" will standardly have a degree of truth equal to that of the conjunction = the minimum of the conjuncts = the degree of truth of "c1 is a cat". So to the extent that the (determinately distinct) best candidates aren't all cats, to exactly that extent there's no cat among them (and since we chose the best candidates, we won't get a higher degree of truth for "the cat is present" by including extra disjuncts). Conclusion: if you're tempted by response (iii) to the problem of the many, you've got strong reason not to go for many-valued logic. [Edit (see comments): this needs qualification. I think you've reason not to go for many-valued logics that endorse the (fairly standard, but not undeniable) max/min treatment of disjunction/conjunction, and in which the many values are linearly arranged].)
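Here's a minimal sketch of that calculation, assuming the standard max/min clauses for disjunction and conjunction, and a made-up uniform degree of 0.6 for each candidate (both the degree and the representation are mine, purely for illustration):

```python
# With the max/min clauses, "some candidate is a cat" and "every candidate
# is a cat" get exactly the same degree of truth as "c1 is a cat".
degrees = {f"c{i}": 0.6 for i in range(1, 1001)}   # each candidate a cat to degree 0.6

disjunction = max(degrees.values())   # degree of "one of c1...c1000 is a cat"
conjunction = min(degrees.values())   # degree of "all of c1...c1000 are cats"

assert disjunction == conjunction == degrees["c1"]
print(disjunction, conjunction)       # 0.6 0.6
```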
What I'd really like to emphasize is that the above leaves open the following question: Is there a super-cat-candidate, i.e. a piece of cat-arranged matter of which every other cat-candidate is a proper part? Take the Tibbles case above, and suppose that the candidates only differ over hairs. Then a potential super-cat-candidate would be the piece of matter that's maximally generous: the one that includes all 1000 not-clearly-unattached hairs. If this particular fusion isn't genuinely a cat-candidate, then it's open that if you arrange the cat-candidates by which is a part of which, you'll end up with multiple maximal cat-candidates, none of which is a part of any other. Perhaps they each contain 999 hairs, but differ amongst themselves over which hair they don't include.
If there is a super-cat-candidate, let's say the problem of the many is of type-1, and if there's no super-cat-candidate, let's say that the problem of the many is of type-2.
My guess is that our description of cases like Tibbles is simply underspecified as to whether it's of type-1 or type-2. But I certainly don't see any principled reason to think that the actual cases of the POM we find around us are always of type-1. There's certainly no a priori guarantee that the sort of criterion that rules in some things as parts of a cat won't also dismiss other things as non-parts. So for example, perhaps we can rank candidates by degree of integration: some unintegrated parts are ok, but there's some cut-off where an object is just too unintegrated to count as a candidate. One cat-candidate includes some borderline-attached skin cells, and is to that extent unintegrated. Another cat-candidate includes some borderline-attached teeth, and is to that extent unintegrated. But plausibly the fusion that includes both the skin cells and the teeth is less integrated still: enough to disqualify it from being a cat-candidate. It's hard to know how to argue the case further without going deeply into feline biology, but I hope you get a sense of why type-2 POMs need to be dealt with.
Now, one response to the standard POM is to appeal to the "maximality" allegedly built into various predicates (like "rock", "cat", "conscious" etc.): things that are duplicates of rocks, but which are surrounded by extra rocky stuff, become merely parts of rocks (and so forth). There are presumably intrinsic duplicates of rocks embedded as tiny parts at the centre of large boulders: but there's no intuitive pressure to count them as rocks. Likewise a cat might survive after its limbs are destroyed by a vengeful deity, but it's unintuitive to think of the duplicate head-and-torso part of Tibbles as itself a cat-candidate. So there are some reasons, independently of paradigmatic problem-of-the-many scenarios, to think of "cat" and "rock" etc. as maximal. (For more discussion of maximality, see Ted Sider's various papers on the topic.)
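For reference, the rough gloss of maximality at work here can be put as follows (the formulation is mine, and "large proper part" is left as an unanalysed placeholder):

```latex
% Rough gloss (formulation mine): a property F is maximal just in case
% large proper parts of an F are not themselves Fs.
\[ \mathrm{Maximal}(F) \;\leftrightarrow\; \forall x\,\forall y\,
   \bigl[(Fx \wedge \mathrm{LargeProperPart}(y,x)) \rightarrow \neg Fy\bigr] \]
```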
If we've got a type-1 problem of the many, then one might think that the maximality of "cat" or "rock" or whatever gives a principled answer to our original question: the super-cat-candidate (/super-rock-candidate) is the one uniquely qualified to be the cat (/rock). For we've then got an explanation for why all the others, though intrinsically qualified just like cats, aren't cats: being a cat is a maximal property, and all the rival cat-candidates are parts of the one true cat in the vicinity.
But the type-2 problem of the many really isn't addressed by maximality as such. There's no unique super-cat-candidate in this setup, rather a range of co-maximal ones. So maximality won't save our bacon here.
The difference between the two cases is important when we consider other things. For example, in the light of the (fairly widely accepted) maximality of "house" and "cat" and "rock" and the like, few would say that any duplicate of a house must be a house (even setting aside extrinsicality due to social setting). But there's an obvious fallback position, which is floating around the literature: that any duplicate of a house must be a (proper or improper) part of a house (holding fixed social setting etc.). That is, any house possesses the property of being part of a house intrinsically (so long as we hold fixed social setting etc.). And the same goes for cats: at least holding fixed biological origin, it's plausible that any cat is intrinsically at least part of a cat, and any rock is intrinsically at least part of a rock.
These claims aren't threatened by maximality. But appealing to them in a type-2 problem of the many gets us an argument directly for response (iv): manyism. For plausibly, if you take a duplicate of one of the co-maximal cat-candidates, T, while eliminating from the scene those bits of matter that are not part of T but are part of one of the other co-maximal cat-candidates, then you get something T* that's (determinately) a cat. And so any duplicate of T* must be at least part of a cat. And since T is a duplicate of T*, T must be at least part of a cat. But T isn't a proper part of anything that's even a cat-candidate. So T must itself be a cat.
So the type-2 POM is harder to resolve than the type-1 kind. Maybe some extra weakening of the properties a cat-candidate has intrinsically is called for. Or maybe (very surprisingly) type-2 POMs never arise. But either way, more work is needed.
Tuesday, November 27, 2007
Nihilism, maximality, problem of the many
Does nihilism about ordinary things help us out with puzzles surrounding maximal properties and the problem of the many? It's hard to see how.
First, maximal properties. Suppose that I have a rock. Surprisingly, there seem to be microphysical duplicates of the rock that are not themselves rocks. For suppose we have a microphysical duplicate of the rock (call it Rocky) that is surrounded by extra rocky stuff. Then, plausibly, the fusion of Rocky and the extra rocky stuff is the rock, and Rocky himself isn't, being out-competed for rock-status by his more extensive rival. Not being shared among duplicates, being a rock isn't intrinsic. And cases meeting this recipe can be plausibly constructed for chairs, tables, rivers, nations, human bodies, human animals and (perhaps) even human persons. Most kind-terms, in fact, look maximal and (hence) extrinsic. Sider has argued that non-sortal properties such as consciousness are likewise maximal and extrinsic.
Second, the problem of the many. In its strongest version, suppose that we have a plenitude of candidates (sums of atoms, say) more or less equally qualified to be a table, cloud, human body or whatever. Suppose further that neither the sum nor the intersection of all these candidates is itself a candidate for being the object. (This is often left out of the description of the case, but (1) there seems no reason to think that the set of candidates will always be closed under summing or intersection, and (2) life is more difficult--and more interesting--if these candidates aren't around.) Which of these candidates is the table, cloud, human body or whatnot?
What puzzles me is why nihilism---rejecting the existence of tables, clouds, human bodies or whatever---should be thought to avoid any puzzles around here. It's true that the nihilist rejects a premise in terms of which these puzzles would normally be stated. So you might imagine that the puzzles give you reason to modus tollens and reject that premise, ending up with nihilism (that's how Unger's original presentation of the POM went, if I recall). But that's no good if we can state equally compelling puzzles in the nihilist's preferred vocabulary.
Take our maximality scenario. Nihilists allow that we have, not a rock, but some things arranged rockwise. And we now conceive of a situation where those things, arranged just as they actually are, still exist (let "Rocky" be a plural term that picks them out). But in this situation, they are surrounded by more things of a qualitatively similar arrangement. Now are the things in Rocky arranged rockwise? Don't consult intuitions at this point---"rockwise" is a term of art. The theoretical role of "rockwise" is to explain how ordinary talk is ok. If some things are in fact arranged rockwise, then ordinary talk should count them as forming a rock. So, for example, van Inwagen's paraphrase of "that's a rock" would be "those things are arranged rockwise". If we point to Rocky and say "that's a rock", intuitively we speak falsely (that underpins the original puzzle). But if the things that are Rocky are in fact arranged rockwise, then this would be paraphrased to something true. What we get is that "are arranged rockwise" expresses a maximal, extrinsic plural property. For a contrast case, consider "is a circle". What replaces this by nihilist lights are plural predicates like "being arranged circularly". But this seems to express a non-maximal, intrinsic plural property. I can't see any very philosophically significant difference between the puzzle as transcribed into the nihilist's favoured setting and the original.
Similarly, consider a bunch of (what we hitherto thought were) cloud-candidates. The nihilist says that none of these exist. Still, there are things which are arranged candidate-cloudwise. Call them the As. And there are other things---differing from the first lot---which are also arranged candidate-cloudwise. Call them the Bs. Are the As or the Bs arranged cloudwise? Are there some other objects, including many but not all of the As and the Bs, that *are* arranged cloudwise? Again, the puzzle translates straight through: originally we had to talk about the relation between the many cloud-candidates and the single cloud; now we talk about the many pluralities which are arranged candidate-cloudwise, and how they relate to the plurality that is cloudwise arranged. The puzzle is harder to write down. But so far as I can see, it's still there.
Pursuing the idea for a bit, suppose we decided to say that there were many distinct pluralities that are arranged cloudwise. Then "there are at least two distinct clouds" would be paraphrased to a truth (that there are some xx and some yy, such that not all the xx are among the yy and vice versa, such that the xx are arranged cloudwise and the yy are arranged cloudwise). But of course it's the unassertibility of this sort of sentence (staring at what looks to be a single fluffy body in the sky) that leads many to reject Lewis's "many but almost one" response to the problem of the many.
I don't think that nihilism leaves everything dialectically unchanged. It's not so clear how many of the solutions people propose to the problem of the many can be translated into the nihilist's setting. And more positively, some options may seem more attractive once one is a nihilist than they did taken cold. Example: once you're going in for a mismatch between common sense ontology and what there really is, then maybe you're more prepared for the sort of linguistic-trick reconstructions of common sense that Lewis suggests in support of his "many but almost one". Going back to the case we considered above, let's suppose you think that there are many extensionally distinct pluralities that are all arranged cloudwise. Then perhaps "there are two distinct clouds" should be paraphrased, not as suggested above, but as:
there are some xx and some yy, such that the xx are arranged cloudwise and the yy are arranged cloudwise, and such that it's not the case that almost all the xx are among the yy and vice versa.
The thought here is that, given one is already buying into unobvious paraphrases to capture the real content of what's said, maybe the costs of putting a few extra tweaks into those paraphrases are minimal.
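To make the contrast vivid, here's one way of formalising the two paraphrases in plural-logic notation (the symbols are my own, not the post's: "xx ⪯ yy" for "all the xx are among the yy", "xx ≈ yy" for "almost all the xx are among the yy and vice versa"):

```latex
% Strict paraphrase of ``there are at least two distinct clouds'':
\[ \exists xx\,\exists yy\,\bigl[\mathit{Cloudwise}(xx) \wedge \mathit{Cloudwise}(yy)
     \wedge \neg(xx \preccurlyeq yy) \wedge \neg(yy \preccurlyeq xx)\bigr] \]

% Tweaked, Lewis-style paraphrase: distinctness requires failure of
% almost-coincidence, so the sentence comes out false when every cloudwise
% plurality almost coincides with every other.
\[ \exists xx\,\exists yy\,\bigl[\mathit{Cloudwise}(xx) \wedge \mathit{Cloudwise}(yy)
     \wedge \neg(xx \approx yy)\bigr] \]
```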
Caveats: notice that this isn't to say that nihilism solves your problems; it's to say that nihilism may make it easier to accept a response that was already on the table (Lewis's "many but almost one" idea). And even this is sensitive to the details of how the nihilist wants to relate ordinary thought and talk to metaphysics: van Inwagen's paraphrase strategy is one such proposal, and meshes quite neatly with the Lewis idea, but it's not clear that alternatives (such as Dorr's counterfactual version) have the same benefits. So it's not the metaphysical component of nihilism that's doing the work in helping accommodate the problem of the many: it's whatever machinery the nihilist uses to justify ordinary thought and talk.
There's one style of nihilist who might stand their ground. Call nihilists friendly if they attempt to say what's good about ordinary thought and talk (making use of things like "rockwise", or counterfactual paraphrases, or whatever). I'm suggesting that friendly nihilists face transcribed versions of the puzzles that everyone faces. Nihilists might, though, be unfriendly: prepared to say that ordinary thought and talk is largely false, but not to reconstruct some subsidiary norm which ordinary thought and talk meets. Friendly nihilism is an interesting position, I think. Unfriendly nihilism is pushing the nuclear button on all attempts to sort out paradoxes statable in ordinary language. But unfriendly nihilists have at least this virtue: the puzzles they react against don't come back to bite them.
[Update: I've been sent a couple of good references for discussions of nihilism in a similar spirit. First Matt McGrath's paper "No objects, no problem?" argues that the nihilist doesn't escape statue/lump puzzles. Second, Karen Bennett has a forthcoming paper called "Composition, Colocation, and Metaontology" that resurrects problems for nihilists including the problem of the many (though it doesn't now appear to be available online).]
Tuesday, November 20, 2007
Logically good inference and the rest
From time to time in my papers, the putative epistemological significance of logically good inference has been cropping up. I've been recently trying to think a bit more systematically about the issues involved.
Some terminology. Suppose that the argument "A therefore B" is logically valid. Then I'll say that reasoning from "A" is true to "B" is true is logically good. Two caveats: (1) the logical goodness of a piece of reasoning from X to Y doesn't mean that, all things considered, it's ok to infer Y from X. At best, the case is pro tanto: if Y were a contradiction, for example, all things considered you should give up X rather than come to believe Y; (2) I think the validity of an argument-type won't in the end be sufficient for the logical goodness of a token inference of that type---partly because we probably need to tie it much closer to deductive moves, partly because of worries about the different contexts in play with any given token inference. But let me just ignore those issues for now.
I'm going to blur use-mention a bit by classifying material-mode inferences from A to B (rather than from "A" is true to "B" is true) as logically good in the same circumstances. I'll also call a piece of reasoning from A to B "modally good" if A entails B, and "a priori good" if it's a priori that if A then B (nb: material conditional). If it's a priori that A entails B, I'll call it "a priori modally good".
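Putting the taxonomy in one place (the shorthand is mine: "A \vdash B" for the logical validity of the argument from A to B, "\Box" for entailment/necessity, "\mathsf{Apriori}" for "it is a priori that"):

```latex
\begin{align*}
\text{logically good (A to B):} &\quad A \vdash B\\
\text{modally good:}            &\quad \Box(A \rightarrow B) \quad\text{(i.e.\ A entails B)}\\
\text{a priori good:}           &\quad \mathsf{Apriori}(A \rightarrow B)\\
\text{a priori modally good:}   &\quad \mathsf{Apriori}\bigl(\Box(A \rightarrow B)\bigr)
\end{align*}
```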
Suppose now we perform a competent deduction of B from A. What I'm interested in is whether the fact that the inference is logically good is something that we should pay attention to in our epistemological story about what's going on. You might think this isn't forced on us. For (arguably: see below) whenever an inference is logically good, it's also modally and a priori good. So---the thought runs---for all we've said we could have an epistemology that just looks at whether inferences are modally/a priori good, and simply sets aside questions of logical goodness. If so, logical goodness may not be epistemically interesting as such.
(That's obviously a bit quick: it might be that you can't just stop with declaring something a priori good; rather, any a priori good inference falls into one of a number of subcases, one of which is the class of logically good inferences, and that the real epistemic story proceeds at the level of the "thickly" described subcases. But let's set that sort of issue aside).
Are there reasons to think competent deduction/logically good inference is an especially interesting epistemological category of inference?
One obvious reason to refuse to subsume logically good inference within modally good inferences (for example) is if you thought that some logically good inferences aren't necessarily truth-preserving. There's a precedent for that thought: Kaplan argues in "Demonstratives" that "I am here now" is a logical validity, but isn't necessarily true. If that's the case, then logically good inferences won't be a subclass of the modally good ones, and so the attempt to talk only about the modally good inferences would just miss some of the cases.
I'm not aware of persuasive examples of logically good inferences that aren't a priori good. And I'm not persuaded that the Kaplanian classification is the right one. So let's suppose pro tem that logically good inferences are always modally, a priori, and a priori modally, good.
We're left with the following situation: the logically good inferences are a subclass of inferences that also fall under other "good" categories. In a particular case where we come to believe B on the basis of A, where is the particular need to talk about its logical "goodness", rather than simply about its modal, a priori or whatever goodness?
To make things a little more concrete: suppose that our story about what makes a modally good inference good is that it's ultra-reliable. Then, since we're supposing all logically good inferences are modally good, just from their modal goodness, we're going to get that they're ultra-reliable. It's not so clear that epistemologically, we need say any more. (Of course, their logical goodness might explain *why* they're reliable: but that's not clearly an *epistemic* explanation, any more than is the biophysical story about perception's reliability.)
So long as we're focusing on cases where we deploy reasoning directly, to move from something we believe to something else we believe, I'm not sure how to get traction on this issue (at least, not in such an abstract setting: I'm sure we could fight over the details if they were filled out). But even in this abstract setting, I do think we can see that the idea just sketched ignores one vital role that logically good reasoning plays: namely, reasoning under a supposition in the course of an indirect proof.
Familiar cases: If reasoning from A to B is logically good, then it's ok to believe (various) conditional(s) "if A, B". If reasoning from A1 to B is logically good, and reasoning from A2 to B is logically good, then inferring B from the disjunction A1vA2 is ok. If reasoning from A to a contradiction is logically good, then inferring not-A is good. If reasoning from A to B is logically good, then reasoning from A&C to B is good.
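In sequent-style shorthand (the notation is mine, not the post's; "A \vdash B" abbreviates "reasoning from A to B is logically good"), these familiar patterns are:

```latex
\begin{align*}
\text{Conditional proof:} && A \vdash B &\ \Longrightarrow\ {} \vdash A \rightarrow B\\
\text{Proof by cases:}    && A_1 \vdash B,\ A_2 \vdash B &\ \Longrightarrow\ A_1 \vee A_2 \vdash B\\
\text{Reductio:}          && A \vdash \bot &\ \Longrightarrow\ {} \vdash \neg A\\
\text{Monotonicity:}      && A \vdash B &\ \Longrightarrow\ A \wedge C \vdash B
\end{align*}
```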
What's important about these sorts of deployments is that if you replace "logically good" by some wider epistemological category of ok reasoning, you'll be in danger of losing these patterns.
Suppose, for example, that there are "deeply contingent a priori truths". One schematic example that John Hawthorne offers is the material conditional "My experiences are of kind H > theory T of the world is true". The idea here is that the experiences specified should be the kind that lead to T via inference to the best explanation. Of course, this'll be a case where the a priori goodness doesn't give us modal goodness: it could be that my experiences are H but the world is such that ~T. Nevertheless, I think there's a pretty strong case that in suitable settings inferring T from H will be (defeasibly but) a priori good.
Now suppose that the correct theory of the world isn't T, and I don't undergo experiences H. Consider the counterfactual "were my experiences to be H, theory T would be true". There's no reason at all to think this counterfactual would be true in the specified circumstances: it may well be that, given the actual world meets description T*, the closest world where my experiences are H is still an approximately T*-world rather than a T-world. E.g. the nearest world where various tests for general relativity come back negative may well be a world where general relativity is still the right theory, but its effects aren't detectable on the kind of scales initially tested (that's just a for-instance: I'm sure better cases could be constructed).
Here's another illustration of the worry. Granted, reasoning from H to T seems a priori. But reasoning from H+X to T seems terrible, for a variety of X. (So: "my experiences are of H" + "my experiences are misleading in way W" will plausibly a priori support some T' incompatible with T.) But if we were allowed to use a priori good reasoning in indirect proofs, then we could simply argue from H+X to H, and thence (a priori) to T, overall getting an a priori route from H+X to T. The moral is that we can't treat a priori good pieces of reasoning as "lemmas" that we can rely on under the scope of whatever suppositions we like. A priori goodness threatens to be "non-monotonic": which is fine on its own, but I think does show quite clearly that it can completely crash when we try to make it play a role designed for logical goodness.
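Schematically, the bad pattern looks like this (the notation is mine: "\vdash" for a logically good step, "\Rightarrow_{ap}" for a merely a priori good one; H, X, T as in the Hawthorne-style example above):

```latex
\begin{align*}
&(1)\quad H \wedge X \vdash H && \text{conjunction elimination (logically good)}\\
&(2)\quad H \Rightarrow_{ap} T && \text{the abductive step (a priori good)}\\
&(3)\quad H \wedge X \Rightarrow_{ap} T && \text{from (1), (2), assuming a priori good steps can be chained under suppositions}
\end{align*}
% With X = ``my experiences are misleading in way W'', (3) is hopeless:
% a priori goodness is non-monotonic, so it can't play this lemma-like role.
```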
This sort of problem isn't a surprise: the reliability of indirect proofs is going to get *more problematic* the more inclusive the reasoning in play is. Suppose the indirect reasoning says that whenever reasoning of type R is good, one can infer C. The more pieces of reasoning count as "good", the more potential there is to come into conflict with the rule, because there's simply more cases of reasoning that are potential counterexamples.
Of course, a priori goodness is just one of the inferential virtues mentioned earlier: modal goodness is another; and a priori modal goodness a third. Modal goodness already looks a bit implausible as an attempt to capture the epistemic status of deduction: it doesn't seem all that plausible to classify the inferential move from A and B to B as in the same category as the move from this is water to this is H2O. Moreover, we'll again have trouble with conditional proof: this time for indicative conditionals. Intuitively, and (I'm independently convinced) actually, the indicative conditional "if the watery stuff around here is XYZ, then water is H2O" is false. But the inferential move from the antecedent to the consequent is modally good.
Of the options mentioned, this leaves a priori modal goodness. The hope would be that this'll cut out the cases of modally good inference that cause trouble (those based around a posteriori necessities). Will this help?
I don't think so: I think the problems for a priori goodness resurface here. If the move from H to T is a priori good, then it seems that the move from Actually(H) to Actually(T) should equally be a priori good. But in a wide variety of cases, this inference will also be modally good (all cases except H&~T ones). But just as before, thinking that this piece of reasoning preserves its status in indirect proofs gives us very bad results: e.g. that there's an a priori route from Actually(H) and Actually(X) to Actually(T), which for suitably chosen X looks really bad.
Anyway, of course there's wriggle room here, and I'm sure a suitably ingenious defender of one of these positions could spin a story (and I'd be genuinely interested in hearing it). But my main interest is just to block the dialectical maneuver that says: well, all logically good inferences are X-good ones, so we can get everything we want from a decent epistemology of X-good inferences. The cases of indirect reasoning, I think, show that the *limitations* on which inferences are logically good can be epistemologically central: and anyone wanting to ignore logic had better have a story to tell about how their account plays out in these cases.
[NB: one kind of good inference I haven't talked about is that backed by what 2-dimensionalists might call "1-necessary truth preservation": i.e. truth preservation at every centred world considered as actual. I've got no guarantees to offer that this notion won't run into problems, but I haven't as yet constructed cases against it. Happily, for my purposes, logically good inference and this sort of 1-modally good inference give rise to the same issues, so if I had to concede that this was a viable epistemological category for subsuming logically good inference, it wouldn't substantially affect my wider project.]
Some terminology. Suppose that the argument "A therefore B" is logically valid. Then I'll say that reasoning from "A" is true, to "B" is true, is logically good. Two caveats (1) the logical goodness of a piece of reasoning from X to Y doesn't mean that, all things considered, it's ok to infer Y from X. At best, the case is pro tanto: if Y were a contradiction, for example, all things considered you should give up X rather than come to believe Y; (2) I think the validity of an argument-type won't in the end be sufficient for for the logically goodness of a token inference of that type---partly because we probably need to tie it much closer to deductive moves, partially because of worries about the different contexts in play with any given token inference. But let me just ignore those issues for now.
I'm going to blur use-mention a bit by classifying material-mode inferences from A to B (rather than: "A" is true to "B" is true") as logically good in the same circumstances. I'll also call a piece of reasoning from A to B "modally good" if A entails B, and "a priori good" if it's a priori that if A then B (nb: material conditional). If it's a priori that A entails B, I'll call it "a priori modally good".
Suppose now we perform a competent deduction of B from A. What I'm interested in is whether the fact that the inference is logically good is something that we should pay attention to in our epistemological story about what's going on. You might think this isn't forced on us. For (arguably: see below) whenever an inference is logically good, it's also modally and a priori good. So---the thought runs---for all we've said we could have an epistemology that just looks at whether inferences are modally/a priori good, and simply sets aside questions of logical goodness. If so, logical goodness may not be epistemically interesting as such.
(That's obviously a bit quick: it might be that you can't just stop with declaring something a priori good; rather, any a priori good inference falls into one of a number of subcases, one of which is the class of logically good inferences, and that the real epistemic story proceeds at the level of the "thickly" described subcases. But let's set that sort of issue aside).
Are there reasons to think competent deduction/logically good inference is an especially interesting epistemological category of inference?
One obvious reason to refuse to subsume logically good inference within modally good inferences (for example) is if you thought that some logically good inferences aren't necessarily truth-preserving. There's a precedent for that thought: Kaplan argues in "Demonstratives" that "I am here now" is a logical validity, but isn't necessarily true. If that's the case, then logically good inferences won't be a subclass of the modally good ones, and so the attempt to talk only about the modally good inferences would just miss some of the cases.
I'm not aware of persuasive examples of logically good inferences that aren't a priori good. And I'm not persuaded that the Kaplanian classification is the right one. So let's suppose pro tem that the logically good inference are always modally, a priori, and a priori modally, good.
We're left with the following situation: the logically good inferences are a subclass of inferences that are also fall under other "good" categories. In a particular case where we come to believe B on the basis of A, where is the particular need to talk about its logical "goodness", rather than simply about its modal, a priori or whatever goodness?
To make things a little more concrete: suppose that our story about what makes a modally good inference good is that it's ultra-reliable. Then, since we're supposing all logically good inferences are modally good, just from their modal goodness, we're going to get that they're ultra-reliable. It's not so clear that epistemologically, we need say any more. (Of course, their logical goodness might explain *why* they're reliable: but that's not clearly an *epistemic* explanation, any more than is the biophysical story about perception's reliability.)
So long as we're focusing on cases where we deploy reasoning directly, to move from something we believe to something else we believe, I'm not sure how to get traction on this issue (at least, not in such an abstract setting: I'm sure we could fight on the details if they are filled out.). But even in this abstract setting, I do think we can see that the idea just sketched ignores one vital role that logically good reasoning plays: namely, reasoning under a supposition in the course of an indirect proof.
Familiar cases: If reasoning from A to B is logically good, then it's ok to believe (various) conditional(s) "if A, B". If reasoning from A1 to B is logically good, and reasoning from A2 to B is logically good, then inferring B from the disjunction A1vA2 is ok. If reasoning from A to a contradiction is logically good, then inferring not-A is good. If reasoning from A to B is logically good, then reasoning from A&C to B is good.
What's important about these sort of deployments is that if you replace "logically good" by some wider epistemological category of ok reasoning, you'll be in danger of losing these patterns.
Suppose, for example, that there are "deeply contingent a priori truths". One schematic example that John Hawthorne offers is the material conditional "My experiences are of kind H > theory T of the world is true". The idea here is that the experiences specified should be the kind that lead to T via inference to the best explanation. Of course, this'll be a case where the a priori goodness doesn't give us modal goodness: it could be that my experiences are H but the world is such that ~T. Nevertheless, I think there's a pretty strong case that in suitable settings inferring T from H will be (defeasibly but) a priori good.
Now suppose that the correct theory of the world isn't T, and I don't undergo experiences H. Consider the counterfactual "were my experiences to be H, theory T would be true". There's no reason at all to think this counterfactual would be true in the specified circumstances: it may well be that, given the actual world meets description T*, the closest world where my experiences are H is still an approximately T*-world rather than a T-world. E.g. the nearest world where various tests for general relativity come back negative may well be a world where general relativity is still the right theory, but its effects aren't detectable on the kind of scales initially tested (that's just a for-instance: I'm sure better cases could be constructed).
Here's another illustration of the worry. Granted, reasoning from H to T seems a priori. But reasoning from H+X to T seems terrible, for a variety of X. (So: "my experiences are of kind H" plus "my experiences are misleading in way W" will plausibly a priori support some T' incompatible with T.) But if we were allowed to use a priori good reasoning in indirect proofs, then we could simply argue from H+X to H, and thence (a priori) to T, overall getting an a priori route from H+X to T. The moral is that we can't treat a priori good pieces of reasoning as "lemmas" that we can rely on under the scope of whatever suppositions we like. A priori goodness threatens to be "non-monotonic": which is fine on its own, but I think does show quite clearly that it can completely crash when we try to make it play a role designed for logical goodness.
This sort of problem isn't a surprise: the reliability of indirect proofs is going to get *more problematic* the more inclusive the reasoning in play is. Suppose the indirect reasoning says that whenever reasoning of type R is good, one can infer C. The more pieces of reasoning count as "good", the more potential there is to come into conflict with the rule, because there are simply more cases of reasoning that are potential counterexamples.
Of course, a priori goodness is just one of the inferential virtues mentioned earlier: modal goodness is another; and a priori modal goodness a third. Modal goodness already looks a bit implausible as an attempt to capture the epistemic status of deduction: it doesn't seem all that plausible to classify the inferential move from A and B to B as in the same category as the move from this is water to this is H2O. Moreover, we'll again have trouble with conditional proof: this time for indicative conditionals. Intuitively, and (I'm independently convinced) actually, the indicative conditional "if the watery stuff around here is XYZ, then water is H2O" is false. But the inferential move from the antecedent to the consequent is modally good.
Of the options mentioned, this leaves a priori modal goodness. The hope would be that this'll cut out the cases of modally good inference that cause trouble (those based around a posteriori necessities). Will this help?
I don't think so: I think the problems for a priori goodness resurface here. If the move from H to T is a priori good, then it seems that the move from Actually(H) to Actually(T) should equally be a priori good. But in a wide variety of cases, this inference will also be modally good (all cases except H&~T ones). But just as before, thinking that this piece of reasoning preserves its status in indirect proofs gives us very bad results: e.g. that there's an a priori route from Actually(H) and Actually(X) to Actually(T), which for suitably chosen X looks really bad.
Anyway, of course there's wriggle room here, and I'm sure a suitably ingenious defender of one of these positions could spin a story (and I'd be genuinely interested in hearing it). But my main interest is just to block the dialectical maneuver that says: well, all logically good inferences are X-good ones, so we can get everything we want from a decent epistemology of X-good inferences. The cases of indirect reasoning I think show that the *limitations* on what inferences are logically good can be epistemologically central: and anyone wanting to ignore logic had better have a story to tell about how their story plays out in these cases.
[NB: one kind of good inference I haven't talked about is that backed by what 2-dimensionalists might call "1-necessary truth preservation": i.e. truth preservation at every centred world considered as actual. I've got no guarantees to offer that this notion won't run into problems, but I haven't as yet constructed cases against it. Happily, for my purposes, logically good inference and this sort of 1-modally good inference give rise to the same issues, so if I had to concede that this was a viable epistemological category for subsuming logically good inference, it wouldn't substantially affect my wider project.]
Monday, November 05, 2007
CEM journalism
The literature on the linguistics/philosophy interface on conditionals is full of excellent stuff. Here's just one nice thing we get. (Directly drawn from a paper by von Fintel and Iatridou). Nothing here is due to me: but it's something I want to put down so I don't forget it, since it looks like it'll be useful all over the place. Think of what follows as a bit of journalism.
Here's a general puzzle for people who like "iffy" analyses of conditionals. Consider:
- No student passes if they goof off.
The obvious first-pass regimentation puts the conditional in the scope of the quantifier:
- [No x: x is a student](if x goofs off, x passes)
But intuitively the English sentence has the truth-conditions of:
- [No x: x is a student](x goofs off and x passes)
and it's not obvious that an "iffy" reading of the embedded conditional delivers that.
What the paper cited above notes is that so long as we've got CEM, we won't go wrong. For [No x:Fx]Gx is equivalent to [All x:Fx]~Gx. And where G is the conditional "if x goofs off, x passes", the negated conditional "not: if x goofs off, x passes" is equivalent to "if x goofs off, x doesn't pass" if we have the relevant instance of conditional excluded middle. What we wind up with is an equivalence between the obvious first-pass regimentation and:
- [All x: x is a student](if x goofs off, x won't pass).
Suppose we're convinced by this that we need the relevant instances of CEM. There remains a question of *how* to secure these instances. The suggestion in the paper is that rules governing legitimate contexts for conditionals give us the result (paired with a contextually shifty strict conditional account of conditionals). An obvious alternative is to hard-wire CEM into the semantics, as Stalnaker does. So unless you're prepared (with von Fintel, Gillies et al) to defend in detail fine-tuned shiftiness of the contexts in which conditionals can be uttered, it looks like you should smile upon the Stalnaker analysis.
[Update: It's interesting to think how this would look as an argument for (instances of) CEM.
Premise 1: The following are equivalent:
A. No student will pass if she goofs off
B. Every student will fail to pass if she goofs off
Premise 2: A and B can be regimented respectively as follows:
A*. [No x: student x](if x goofs off, x passes)
B*. [Every x: student x](if x goofs off, ~x passes)
Premise 3: [No x: Fx]Gx is equivalent to [Every x: Fx]~Gx
Premise 4: if [Every x: Fx]Hx is equivalent to [Every x: Fx]Ix, then Hx is equivalent to Ix.
We argue as follows. By an instance of premise 3, A* is equivalent to:
C*. [Every x: student x] not(if x goofs off, x passes)
But C* is equivalent to A*, which is equivalent to A (premise 2) which is equivalent to B (premise 1) which is equivalent to B* (premise 2). So C* is equivalent to B*.
But this equivalence is of the form of the antecedent of premise 4, so we get:
(Neg/Cond instances) ~(if x goofs off, x passes) iff if x goofs off, ~x passes.
And we quickly get from the law of excluded middle and a bit of logic:
(CEM instances) (if x goofs off, x passes) or (if x goofs off, ~ x passes). QED.
The present version is phrased in terms of indicative conditionals. But it looks like parallel arguments can be run for CEM for counterfactuals (Thanks to Richard Woodward for asking about this). For one of the controversial cases, for example, the basic premise will be that the following are equivalent:
D. No coin would have landed heads, if it had been flipped.
E. Every coin would have landed tails, if it had been flipped.
This looks pretty good, so the argument can run just as before.]
Must, Might and Moore.
I've just been enjoying reading a paper by Thony Gillies. One thing that's very striking is the dilemma he poses---quite generally---for "iffy" accounts of "if" (i.e. accounts that see English "if" as expressing a sentential connective, pace Kratzer's restrictor account).
The dilemma is constructed around finding a story that handles the interaction between modals and conditionals. The prima facie data is that the following pairs are equivalent:
- If p, then it must be that q
- If p, q
- If p, then it might be that q
- It might be that (p & q)
It's a really familiar tactic, when presented with a putative equivalence that causes trouble for your favourite theory, to say that the pairs aren't equivalent at all, but can be "reasonably inferred" from each other (think of various ways of explaining away "or-to-if" inferences). But taken cold such pragmatic explanations can look a bit ad hoc.
So it'd be nice if we could find independent motivation for the inequivalence we need. In a related setting, Bob Stalnaker uses the acceptability of Moorean-patterns to do this job. To me, the Stalnaker point seems to bear directly on the Gillies dilemma above.
Before we even consider conditionals, notice that "p but it might be that not p" sounds terrible. Attractive story: this is because you shouldn't assert something unless you know it to be true; and to say that p might not be the case is (inter alia) to deny you know it. One way of bringing out the pretty obviously pragmatic nature of the tension in uttering the conjunction here is to note that asserting the following sort of thing looks much much better:
- it might be that not p; but I believe that p
(I can sometimes still hear a little tension in the example: what are you doing believing that you'll catch the train if you know you might not? But for me this goes away if we replace "I believe that" with "I'm confident that" (which still, in vanilla cases, gives you Moorean phenomena). I think in the examples to be given below, residual tension can be eliminated in the same way. The folks who work on norms of assertion I'm sure have explored this sort of territory lots.)
That's the prototypical case. Let's move on to examples where there are more moving parts. David Lewis famously alleged that the following pair are equivalent:
- it's not the case that: if it were the case that p, it would have been that q
- if it were that p, it might have been that ~q
But the following sounds fine:
- if it were that p, it might have been that not q; but I believe that if it were that p it would have been that q.
We find pretty much the same cases for "must" and indicative "if".
- It's not true that if p, then it must be that q; but I believe that if p, q.
These sorts of patterns make me very suspicious of claims that "if p, must q" and "if p, q" are equivalent, just as the analogous patterns make me suspicious of the Lewis idea that "if p, might ~q" and "if p, q" are contradictories when the "if" is subjunctive. So I'm thinking the horns of Gillies' dilemma aren't equal: denying the must conditional/bare conditional equivalence is independently motivated.
None of this is meant to undermine the positive theory that Thony Gillies is presenting in the paper: his way of accounting for lots of the data looks super-interesting, and I've got no reason to suppose his positive story won't have a story about everything I've said here. I'm just wondering whether the dilemma that frames the debate should suck us in.
Friday, November 02, 2007
Degrees of belief and supervaluations
Suppose you've got an argument with one premise and one conclusion, and you think it's valid. Call the premise p and the conclusion q. Plausibly, constraints on rational belief follow: in particular, you can't rationally have a lesser degree of belief in q than you have in p.
The natural generalization of this to multi-premise cases is that if p1...pn|-q, then your degree of disbelief in q can't rationally exceed the sum of your degrees of disbelief in the premises.
FWIW, there's a natural generalization to the multi-conclusion case too (a multi-conclusion argument is valid, roughly, if the truth of all the premises secures the truth of at least one conclusion). If p1...pn|-q1...qm, then the sum of your degrees of disbelief in the conclusions can't rationally exceed the sum of your degrees of disbelief in the premises.
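As a formula (my own compressed statement of the constraint, writing $d(p)$ for the degree of disbelief $1 - Cr(p)$):

\[
p_1, \dots, p_n \vdash q_1, \dots, q_m \;\Rightarrow\; \sum_{j=1}^{m} d(q_j) \;\le\; \sum_{i=1}^{n} d(p_i).
\]

The single-premise, single-conclusion case is just $d(q) \le d(p)$: your credence in the conclusion must be at least your credence in the premise.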
What I'm interested in at the moment is to what extent this sort of connection can be extended to non-classical settings. In particular (and connected with the last post) I'm interested in what the supervaluationist should think about all this.
There's a fundamental choice to be made at the get-go. Do we think that "degrees of belief" in sentences of a vague language can be represented by a standard classical probability function? Or do we need to be a bit more devious?
Let's take a simple case. Construct the artificial predicate B(x), so that numbers less than 5 satisfy B, and numbers greater than 5 fail to satisfy it. We'll suppose that it is indeterminate whether 5 itself is B, and that supervaluationism gives the right way to model this.
First observation. It's generally accepted that for the standard supervaluationist
p & ~Det(p) |- absurdity
Given this and the constraints on rational credence mentioned earlier, we'd have to conclude that my credence in B(5)&~Det(B(5)) must be 0. I have credence 0 in absurdity; and the degree of disbelief in the conclusion of this valid argument (namely, 1) must not exceed the degree of disbelief in its premise.
Let's think that through. Notice that in this case, my credence in ~Det(B(5)) can be taken to be 1. So given minimal assumptions about the logic of credences, my credence in B(5) must be 0.
A parallel argument running from ~B(5)&~Det(~B(5))|-absurdity gives us that my credence in ~B(5) must be 0.
Moreover, supervaluational logic delivers all classical tautologies. So in particular we have the validity: |- B(5)v~B(5). The standard constraint in this case tells us that rational credence in this disjunction must be 1. And so, we have a disjunction in which we have credence 1, each disjunct of which we have credence 0 in. (Compare the standard observation that supervaluational disjunctions can be non-prime: the disjunction can be true when neither disjunct is.)
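To put the clash in one line (just a reconstruction of the point): any classical probability function satisfies

\[
Cr(B(5) \lor \neg B(5)) \;\le\; Cr(B(5)) + Cr(\neg B(5)),
\]

and the constraints above force the left-hand side to 1 and the right-hand side to 0.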
This is a fairly direct argument that something non-classical has to be going on with the probability calculus. One move at this point is to consider Shafer functions (which I know little about: but see here). Now maybe that works out nicely, maybe it doesn't. But I find it kinda interesting that the little constraint on validity and credences gets us so quickly into a position where something like this is needed if the constraint is to work. It also gives us a recipe for arguing against standard supervaluationism: argue against the Shafer-function-like behaviour in our degrees of belief, and you'll ipso facto have an argument against supervaluationism. For this, the probabilistic constraint on validity is needed (as far as I can see): for it's this that makes the distinctive features mandatory.
I'd like to connect this to two other issues I've been working on. One is the paper on the logic of supervaluationism cited below. The key thing here is that it raises the prospect of p&~Dp|-absurdity not holding, even for your standard "truth=supertruth" supervaluationist. If that works, the key premise of the argument that forces you to have degree of belief 0 in both an indeterminate sentence 'p' and its negation goes missing.
Maybe we can replace it by some other argument. If you read "D" as "it is true that..." as the standard supervaluationist encourages you to, then "p&~Dp" should be read "p&it is not true that p". And perhaps that sounds to you just like an analytic falsity (it sure sounded to me that way); and analytic falsities are the sorts of things one should paradigmatically have degree of belief 0 in.
But here's another observation that might give you pause (I owe this point to discussions with Peter Simons and John Hawthorne). Suppose p is indeterminate. Then we have ~Dp&~D~p. And given supervaluationism's conservatism, we also have pv~p. So by a bit of jiggery-pokery, we'll get (p&~Dp v ~p&~D~p). But in moods where I'm hyped up thinking that "p&~Dp" is analytically false and terrible, I'm equally worried by this disjunction. But that suggests that the source of my intuitive repulsion here isn't the sort of thing that the standard supervaluationist should be buying. Of course, the friend of Shafer functions could just say that this is another case where our credence in the disjunction is 1 while our credence in each disjunct is 0. That seems dialectically stable to me: after all, they'll have *independent* reason for thinking that p&~Dp should have credence 0. All I want to insist is that the "it sounds really terrible" reason for assigning p&~Dp credence 0 looks like it overgeneralizes, and so should be distrusted.
I also think that if we set aside truth-talk, there's some plausibility in the claim that "p&~Dp" should get non-zero credence. Suppose you're initially in a mindset where you should be about half-confident of a borderline case. Well, one thing that you absolutely want to say about borderline cases is that they're neither true nor false. So why shouldn't you be at least half-confident in the combination of these?
And yet, and yet... there's the fundamental implausibility of "p&it's not true that p" (the standard supervaluationist's reading of "p&~Dp") having anything other than credence 0. But ex hypothesi, we've lost the standard positive argument for that claim. So we're left, I think, with the bare intuition. But it's a powerful one, and something needs to be said about it.
Two defensive maneuvers for the standard supervaluationist:
(1) Say that what you're committed to is just "p & it's not supertrue that p". Deny that the ordinary concept of truth can be identified with supertruth (something that, as many have emphasized, is anyway quite plausible given the non-disquotational nature of supertruth). But crucially, don't seek to replace this with some other gloss on supertruth: just say that supertruth, superfalsity, and the gap between them are appropriate successor concepts, and that ordinary truth-talk is appropriate only when we're ignoring the possibility of the third case. If we disclaim conceptual analysis in this way, then it won't be appropriate to appeal to intuitions about the English word "true" to kick away independently motivated theoretical claims about supertruth. In particular, we can't appeal to intuitions to argue that "p&~supertrue that p" should be assigned credence 0. (There's a question of whether this should be seen as an error-theory about English "truth"-ascriptions. I don't see that it needs to be. It might be that the English word "true" latches on to supertruth because supertruth is what best fits the truth-role. On this model, "true" stands to supertruth as "dephlogisticated air", according to some, stands to oxygen. And so this is still a "truth=supertruth" standard supervaluationism.)
(2) The second maneuver is to appeal to supervaluational degrees of truth. Let the degree of supertruth of S be, roughly, the measure of the precisifications on which S is true. S is supertrue simpliciter when it is true on all the precisifications, i.e. on measure 1 of the precisifications. If we then identify degrees of supertruth with degrees of truth, the contention that truth is supertruth becomes something that many find independently attractive: that in the context of a degree theory, truth simpliciter should be identified with truth to degree 1. (I think that this tendency has something deeply in common with the temptation (following Unger) to think that nothing can be flatter than a flat thing: nothing can be truer than a true thing. I've heard people claim that Unger was right to think that a certain class of adjectives in English work this way.)
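As a sketch of how that might be made precise (my gloss, with $\mu$ a normalized measure over the precisifications):

\[
\mathrm{deg}(S) \;=\; \mu\{\, v : S \text{ is true on precisification } v \,\}, \qquad S \text{ is supertrue iff } \mathrm{deg}(S) = 1.
\]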
I think when we understand the supertruth=truth claim in that way, the idea that "p&~true that p" should be something in which we should always have degree of belief 0 loses much of its appeal. After all, compatibly with "p" not being absolutely perfectly true (=true), it might be something that's almost absolutely perfectly true. And it doesn't sound bad or uncomfortable to me to think that one should conform one's credences to the known degree of truth: indeed, that seems to be a natural generalization of the sort of thing that originally motivated our worries.
In summary. If you're a supervaluationist who takes the orthodox line on supervaluational logic, then it looks like there's a strong case for a non-classical take on what degrees of belief look like. That's a potentially vulnerable point for the theory. If you're a (standard, global, truth=supertruth) supervaluationist who's open to the sort of position I sketch in the paper below, prima facie we can run with a classical take on degrees of belief.
Let me finish off by mentioning a connection between all this and some material on probability and conditionals I've been working on recently. I think a pretty strong case can be constructed for thinking that for some conditional sentences S, we should be all-but-certain that S&~DS. But that's exactly of the form that we've been talking about throughout: and here we've got *independent* motivation to think that this should be high-probability, not probability zero.
Now, one reaction is to take this as evidence that "D" shouldn't be understood along standard supervaluationist lines. That was my first reaction too (in fact, I couldn't see how anyone but the epistemicist could deal with such cases). But now I'm thinking that this may be too hasty. What seems right is that (a) the standard supervaluationist with the Shafer-esque treatment of credences can't deal with this case. But (b) the standard supervaluationist articulated in one of the ways just sketched shouldn't think there's an incompatibility here.
My own preference is to go for the degrees-of-truth explication of all this. Perhaps, once we've bought into that, the "truth=degree 1 supertruth" element starts to look less important, and we'll find other useful things to do with supervaluational degrees of truth (a la Kamp, Lewis, Edgington). But I think the "phlogiston" model of supertruth is just about stable too.
[P.S. Thanks to Daniel Elstein, for a paper today at the CMM seminar which started me thinking again about all this.]
Supervaluations and logical revisionism paper
Happy news today: the Journal of Philosophy is going to publish my paper on the logic of supervaluationism. Swift moral: it ain't logically revisionary; and if it is, it doesn't matter.
This previous post gives an overview, if anyone's interested...
Now I've just got to figure out how to transmute my beautiful LaTeX symbols into Word...
Wednesday, October 24, 2007
London Logic and Metaphysics Forum (x-posted from MV)
If you're in London on a Tuesday evening, what better to do than to take in a talk by a young philosopher on logic or metaphysics?
Spotting this gap in the tourist offerings, the clever folks in the capital have set up the London Logic and Metaphysics forum. Looks an exciting programme, though I have my doubts about the joker on the 11th Dec...
Tues 30 Oct: David Liggins (Manchester)
Quantities
Tues 13 Nov: Oystein Linnebo (Bristol & IP)
Compositionality and Frege's Context Principle
Tues 27 Nov: Ofra Magidor (Oxford)
Epistemicism about vagueness and meta-linguistic safety
Tues 11 Dec: Robbie Williams (Leeds)
Is survival intrinsic?
8 Jan: Stephan Leuenberger (Leeds)
22 Jan: Antony Eagle (Oxford)
5 Feb: Owen Greenhall (Oslo & IP)
4 Mar: Guy Longworth (Warwick)
Full details can be found here.
In Rutgers
As Brian Weatherson reports here, there's a metaphysics/phil physics conference at Rutgers this weekend (26-28th). I'm in Rutgers for the week, and am responding to one of the papers at the event. I'm looking forward to what looks like a really interesting conference.
Tonight (24th) I'm giving a talk to a phil language group at Rutgers. I'm going to be presenting some material on modal accounts of indicative conditionals (a la Stalnaker, Weatherson, Nolan). This piece has evolved quite a bit during the last few weeks as I've been working on it. A bit unexpectedly, I've ended up with an argument for Weatherson's views.
Briefly, the idea is to look at what mileage we can get out of paradigmatic instances of the identification of the probability of a conditional "If A, B" with the conditional probability of B on A (CCCP). We know that in general that identification is highly problematic, thanks to notorious impossibility results due to David Lewis and, more recently, Ned Hall and Al Hajek. But I think it's interesting to divide the issue into two halves:
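For reference, the schema in question is the familiar one:

\[
P(\text{if } A\text{, } B) \;=\; P(B \mid A), \qquad \text{whenever } P(A) > 0.
\]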
First, what would a modal account of indicative conditionals that obeys (a handful of paradigmatic) instances of CCCP have to look like? I think there's a lot we can say about this: of the salient options, it'll look a lot like Weatherson's theory; it'll have to have a particular take on what kind of vagueness can affect the conditional; it'll have to say that any proposition you know should have probability 1.
Second, is this package sustainable in the face of impossibility results? Al Hajek (in his papers in the Eells/Skyrms probability and conditionals volume) does a really nice job of formulating the challenges here. If we're prepared to give up some instances of CCCP in recherché cases (like left-embedded conditionals, things of the form "if (if A, B), C"), then many of the general impossibility results won't apply. But nevertheless, there are a bunch of puzzles that remain: in particular, concerning how even the paradigmatic instances can survive when we receive new information.
I'll mostly be talking about the first part of the talk this evening.
Friday, October 12, 2007
Edgington vs. Stalnaker
One of the things I'm thinking about at the moment is Stalnaker-esque treatments of indicative conditionals. Stalnaker's story, roughly, is that indicative conditionals have almost exactly the same truth conditions as (on his theory) counterfactuals do. That is, A>B is true at w iff B is true at the nearest A-world to w. The difference comes only in the fine details about which worlds count as nearest. For counterfactuals, Stalnaker, like Lewis, thinks that some sort of similarity does the job. For indicatives, Stalnaker thinks that the nearness ordering is rooted in the same similarity metric, but distorted by the following overriding principle: if A and w are consistent with what we collectively presuppose, then the nearest A-worlds will also be consistent with what we collectively presuppose. In the jargon, all worlds outside the "context set" are pushed further out than they would be on the counterfactual ordering.
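In selection-function terms, the picture is roughly this (my sketch, with $f$ the function picking out the nearest antecedent-world and $C$ the context set of worlds compatible with what's collectively presupposed):

\[
w \models (\text{if } A\text{, } B) \;\iff\; f(A, w) \models B, \qquad \text{and if } A \text{ is compatible with } C \text{ and } w \in C\text{, then } f(A, w) \in C.
\]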
I'm interested in this sort of "push worlds" modal account of indicatives. (Others in a similar family include Daniel Nolan's theory, whereby it's knowledge that does the pushing rather than collective presuppositions). Lots of criticisms of Stalnaker's theory don't engage with the fine details of what he says about the closeness ordering, but more general aspects (e.g. its inability to sustain Adams' thesis that the conditional probability is the probability of the conditional; its handling of Gibbard cases; its sensitivity to fine factors of conversational context). An exception, however, is an argument that Dorothy Edgington puts forward in her SEP survey article (which, by the way, I very much recommend!)
Here's the case. Let's suppose that Jill is uncertain how much fuel is in Jane's car. The tank has a capacity for 100-miles'-worth, but Jill has no knowledge of what level it is at. Jane is going to drive it until it runs out of fuel. For Jill, the probability of the car being driven for exactly n miles, given that it's driven for no more than fifty, is 1/50 (for n < 51).
Suppose that in fact the tank is full. The most similar worlds to actuality in which the car is driven for no more than 50 miles, arguably, are those where the tank is 50 per cent full, and so where Jane drives exactly 50 miles. The same goes for any world where the tank is more than 50 per cent full. So, if nearness of worlds is determined by similarity, the conditional "if it goes for no more than 50 miles, it'll go for exactly 50 miles" is true as uttered at each of the worlds where the tank is more than 50 per cent full. So without knowing the details of the level of the tank, we should be at least 50 per cent confident that if it goes for no more than 50 miles, it'll go for exactly 50 miles. But this seems all wrong. Varying the numbers we can make the case even worse: we should be almost sure of "If it goes for no more than 3 miles, it'll go for exactly 3 miles", even though we regard 3, 2, 1 as equiprobable fuel levels.
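Here's a toy calculation of the mismatch, under stylized assumptions of my own (fuel levels of 1 to 100 miles'-worth, each equally likely for Jill; and the similarity-based verdict that the conditional is true at exactly the worlds whose tank holds at least 50 miles'-worth):

# Toy model of the fuel-tank case (my own stylization, not Edgington's exact numbers).
# Fuel level n = the car will go exactly n miles; for Jill, n = 1..100 are equally likely.
levels = range(1, 101)

# Intuitive verdict: probability of "goes exactly 50" given "goes no more than 50".
cond_prob = sum(1 for n in levels if n == 50) / sum(1 for n in levels if n <= 50)

# Similarity-based verdict: "if it goes no more than 50, it goes exactly 50" is true at any
# world with at least 50 miles'-worth of fuel (the closest antecedent-world there is the
# exactly-50 world), so Jill's credence in the conditional is the probability of such a world.
similarity_prob = sum(1 for n in levels if n >= 50) / len(levels)

print(cond_prob)        # 0.02  -- the intuitively right answer, 1/50
print(similarity_prob)  # 0.51  -- at least a half, as the objection says

Rerunning with 3 in place of 50 gives 1/3 against 0.98, which is the "almost sure" version of the worry.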
Of course, that's only to take into account the comparative similarity of worlds in determining the ordering, and Stalnaker and Nolan have the distorting factor to appeal to: worlds that are incompatible with something we presuppose/know to be true, can be pushed further out. But it doesn't seem in this case that anything relevant is being presupposed/known.
I don't think this objection works. To see that something is going wrong, notice that the argument, if successful, would work against other theories too. Consider, for example, Stalnaker's theory of the counterfactual conditional. Take the case as before, but suppose we're a day later and Jill doesn't know how far Jane drove. Consider the counterfactual "Had it stopped after no more than 50 miles, it'd have gone for exactly 50 miles". By the previous reasoning, the most similar worlds to over-50 worlds are exactly-50 worlds; so we should be half confident of the truth of that conditional. Varying the numbers, we should be almost sure that "If it had gone no more than 3, it'd go exactly 3", despite regarding the probabilities of 3, 2 and 1 as equally likely. But these all seem like bizarre results.
Moral: the counterfactual ordering of worlds isn't fixed by the kind of similarity that Edgington appeals to: the sort of similarity whereby a world in which the car stops after 53 miles is more similar to one in which the car stops after 50 miles than to one in which the car stops after 3 miles. Of course, in some sense (perhaps an "overall" sense) those similarity judgements are just right. But we know from the Fine/Bennett cases that the sense of similarity that supports the right counterfactual verdicts can't be all-in similarity (those cases concern counterfactuals starting "if Nixon had pushed the nuclear button in the 70s...": all-in similarity arguably says that the closest such worlds are ones where no missiles are released, leading to the wrong results).
Spelling out what the right notion of similarity is is tricky. Lewis gave us one recipe. In effect, we look for a little miracle that'll suffice to let the counterfactual world diverge from actual history to bring about the antecedent. Then we let events run on according to actual laws, and see what happens. So in worlds where the tank is full, say, let's look for the little miracle required to make it run for no more than 50 miles, and run things on. What are the plausible candidates? Perhaps Jane decided to take an extra journey yesterday, or forgot to fill up her car two days ago. Small miracles could suffice to get us into those sorts of worlds. But those sorts of divergences don't really suggest that she'll end up with exactly 50 miles'-worth of fuel in the tank, and so this approach undermines the case for "if at most 50, then exactly 50" being true in antecedent-false worlds. (Which is a good thing!)
If that's the right thing to say in the counterfactual case, the indicative case too will be sorted. For it's designed to be a case where presuppositions/knowledge don't have a relevant distorting effect. And so, once more, the case for "If the car goes for at most 50, then it'll go for exactly 50" doesn't work.
I think that the basic interest of push-worlds theories of indicatives like Stalnaker's and Nolan's is to connect up the counterfactual and indicative ordering: whether there's anything informative to say about the counterfactual ordering of worlds itself is an entirely different matter. So if the glosses of the position lead to problems, it's best to figure out whether the problems lie with the gloss of the counterfactual ordering (which then should be assessed in connection with that familiar and worked-through literature) or with the push-worlds maneuver itself (which has, I think, been less fully examined). I think Edgington's objection is really connected with the first facet, and I've tried to say why I think a more detailed theory will make the problem dissolve. But even if it did turn out to be a problem, the push-worlds thesis itself is still standing.
(Incidentally, I do think Edgington's setup (which she attributes to a student, James Studd) has wider interest. It looks to me like Jackson's modal theory of counterfactuals, and Davis' modal theory of indicatives, both deliver the wrong results in this case.)
[Actually, now I've written this out, it strikes me that maybe the anti-Stalnaker argument is fixable. The trick would be to specify the background state of the world to make the result for counterfactual probabilities seem plausible, but such that (given Jill's ignorance of the background conditions) the indicative probabilities still seem wrong. So maybe the example is at least a recipe for a counterexample to Stalnaker, even if the original case is resistable as described.]