Friday, March 30, 2007

West coast journeying

I'm currently in Atlanta airport.

I didn't mean to still be here. A combination of tiredness, a lack of care with my watch, and (I suspect) there being different timezones in different terminals meant that I missed my connecting flight.

On the positive side, I was happily making notes on excellent metametaphysics papers while missing my flight. Still, an all-things-considered bad, I think.

But the nice people at Delta rebooked me, and (modulo a taxi journey and quite possibly sleeping at San Jose airport) my travel plans are back in the swing.

So long as I don't miss another flight through blogging...

Monday, March 26, 2007

Probabilistic multi-conclusion validity

I've been thinking a bit recently about how to generalize standard results relating probability to validity to a multi-conclusion setting.

The standard result is the following (where the uncertainty of p is 1 minus the probability of p):

An argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises is at least as great as the uncertainty of the conclusion.

It'll help if we restate this as follows:

An argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises + the probability of the conclusion is at least 1.

Stated this way, there's a natural generalization available:

A multi-conclusion argument is classically valid
iff
for all classical probability functions, the sum of the uncertainties of the premises + the probabilities of the conclusions is greater than or equal to 1.
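
In symbols (my own notation, not drawn from the literature: Γ for the premise set, Δ for the conclusion set, P ranging over classical probability functions):

$$
\Gamma \models \Delta
\quad\text{iff}\quad
\text{for every classical probability function } P:\;
\sum_{\gamma\in\Gamma}\bigl(1 - P(\gamma)\bigr) \;+\; \sum_{\delta\in\Delta} P(\delta) \;\ge\; 1.
$$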

And once we've got it stated, it's a corollary of the standard result (I believe).
It's pretty easy to see directly that this works in the "if" direction, just by considering classical probability functions which only assign 1 or 0 to propositions: such functions correspond to classical valuations, and a valuation witnessing invalidity would give each premise uncertainty 0 and each conclusion probability 0, so the relevant sum would be 0 rather than at least 1.

In the "only if" direction (writing u for uncertainty and p for probability):

Consider A,B|=C,D. This holds iff A,B,~C,~D|= holds, by a standard premise/conclusion swap result. And we know u(~C)=p(C), u(~D)=p(D). By the standard result, a single-conclusion argument is valid iff, for all probability functions, the sum of the uncertainties of its premises is at least as great as the uncertainty of its conclusion; and here the (empty) conclusion is an absurdity, whose uncertainty is 1. That is, the single-conclusion argument holds iff u(A)+u(B)+u(~C)+u(~D) is greater than or equal to 1. But given the identities above, this holds iff u(A)+u(B)+p(C)+p(D) is greater than or equal to 1. This should generalize to arbitrary cases. QED.
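
For what it's worth, here's a quick numerical sanity check of the criterion (not a proof, and not from the papers I was reading: just my own Python sketch, using the valid multi-conclusion argument "A or B |= A, B" as the example). It samples random classical probability functions and confirms that the sum of the premises' uncertainties and the conclusions' probabilities never dips below 1.

```python
import itertools
import random

ATOMS = ["A", "B"]

def random_probability_function():
    """A classical probability function: random nonnegative weights
    over the four A/B truth-value assignments, normalised to sum to 1."""
    worlds = list(itertools.product([True, False], repeat=len(ATOMS)))
    weights = [random.random() for _ in worlds]
    total = sum(weights)
    return {world: w / total for world, w in zip(worlds, weights)}

def probability(P, sentence):
    """Probability of a sentence = total weight of the worlds where it's true."""
    return sum(weight for world, weight in P.items()
               if sentence(dict(zip(ATOMS, world))))

# A classically valid multi-conclusion argument:  A or B  |=  A, B
premises = [lambda v: v["A"] or v["B"]]
conclusions = [lambda v: v["A"], lambda v: v["B"]]

for _ in range(10000):
    P = random_probability_function()
    total = (sum(1 - probability(P, prem) for prem in premises)
             + sum(probability(P, concl) for concl in conclusions))
    # uncertainty(premises) + probability(conclusions) should be at least 1
    assert total >= 1 - 1e-9

print("criterion held for every sampled probability function")
```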

Sunday, March 25, 2007

Naturalness in Idaho (x-post from MV)

I'm off very soon to the INPC metametaphysics conference in Boise. Many other fun people will be there (not least fellow CMM-er Andy McGonigal, fresh from a spell at Cornell).

Together with Iris Einheuser, I'm going to be responding to Ted Sider's paper "Which disputes are substantive?". It's been great to have a serious think about the way that Ted thinks of this stuff, and how it relates to the Kit Fine inspired setting that I've been working on lately.

Anyway, the whole writing-a-response thing got way out of hand, and I've ended up with a 7,500-word first draft. I do think there are a couple of substantive issues raised therein for the kind of framework (otherwise really really attractive) that he's been pushing here and in recent work. The worry centres on quantification into the scope of Ted's "naturalness" operator. For any who are interested, I've put the draft response up online.

After the INPC, I'll be in San Fran for the Pacific APA, along with many other CMM and Leeds folks.



Friday, March 09, 2007

Fundamental and derivative truths

After a bit of to-ing and fro-ing, I've decided to post a first draft of "Fundamental and derivative truths" on my work in progress page.

I've been thinking about this material a lot lately, but I've found it surprisingly difficult to formulate and explain. I can see how everything fits together: I'm just not sure how best to go about explaining it to people. Different people react to it in such different ways!

The paper does a bunch of things:
  • offering an interpretation of Kit Fine's distinction between things that are really true, and things that are merely true. (So, e.g. tables might exist, but not really exist).
  • using Agustin Rayo's recent proposal for formulating a theory of requirements/ontological commitments in that explication.
  • putting forward a general strategy for formulating nihilist-friendly theories of requirements (set-theoretic nihilism and mereological nihilism being the illustrative cases used in the paper).
  • using this to give an account of "postulating" things into existence (e.g. sets, weirdo fusions).
  • sketching a general answer to the question: in virtue of what do our sentences have the ontological commitments they do (i.e. what makes a theory of requirements *the correct one* for this or that language?)
This is exploratory stuff: there's lots more to be said about each of these, and plenty more issues to address (e.g. how does this relate to fictionalist proposals?). But I'm at a stage where feedback and discussion are perhaps the most important things, so making it public seems a natural strategy...

I'm going to be talking in more detail about the case of mereological nihilism at the CMM Structure in Metaphysics workshop.

Thresholds for belief

I'm greatly enjoying reading David Christensen's Putting Logic in its Place at the moment. Some remarks he makes about threshold accounts of the relationship between binary and graded beliefs seemed particularly suggestive. I want to use them here to suggest a certain picture of the relationship between binary and graded belief. No claim to novelty here, of course, but I'd be interested to hear of any worries about this specific formulation (Christensen himself argues against the threshold account).

One worry about threshold accounts is that they'll make constraints on binary beliefs look very weird. Consider, for example, the lottery paradox. I am certain that someone will win, but for each individual ticket, I'm almost certain that it's a loser. Suppose that having a degree of belief of at least n (for some n less than 1) sufficed for binary belief. Then, by choosing a big enough lottery, we can make it the case that I believe a generalization (there will be a winner) while believing the negation of each of its instances. So I believe each member of a logically inconsistent set.

This sort of situation is very natural from the graded belief perspective: the beliefs in question meet constraints of probabilistic coherence. But there’s a strong natural thought that binary beliefs should be constrained to be logically consistent. And of course, the threshold account doesn’t deliver this.

What Christensen points to are some observations by Kyburg about limited consistency results that can be derived from the threshold account. Minimally, binary beliefs are required to be weakly consistent: for any threshold above zero, one cannot believe a single contradictory proposition. But there are stronger results too. For example, for any threshold above 0.5, one cannot believe a pair of mutually contradictory propositions. One can see why this is if one remembers the following result: a logically valid argument is such that the improbability of its conclusion cannot be greater than the sum of the improbabilities of its premises. For the case where the conclusion is absurd (i.e. the premises are jointly contradictory), the improbability of the conclusion is 1, so the sum of the improbabilities of the premises must be at least 1. But if each of a pair of propositions has probability above 0.5, their improbabilities sum to less than 1, so no such pair can be jointly contradictory.

In general, then, what we get is the following: if the threshold for binary belief is at least 1-1/n, then one cannot believe each of an inconsistent set of n propositions.
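
Here's a toy illustration (my own sketch, not Christensen's or Kyburg's presentation). Take an n-ticket lottery in which exactly one ticket wins, so the n propositions "ticket 1 loses", ..., "ticket n loses" are jointly inconsistent. Under the uniform credence function each gets probability 1 - 1/n, so any threshold below 1 - 1/n lets all of them count as believed, while a threshold of 1 - 1/n or above (reading belief as strictly exceeding the threshold) blocks this.

```python
from fractions import Fraction

def lottery_credences(n):
    """Uniform credences for an n-ticket lottery with exactly one winner:
    the credence in each member of the jointly inconsistent set
    {'ticket 1 loses', ..., 'ticket n loses'} is 1 - 1/n."""
    return [Fraction(n - 1, n)] * n

def believes_all(credences, threshold):
    """Threshold account (strict reading): believe p iff the credence in p exceeds the threshold."""
    return all(c > threshold for c in credences)

n = 1000
credences = lottery_credences(n)

# The improbabilities of a jointly inconsistent set must sum to at least 1;
# here they sum to exactly 1.
assert sum(1 - c for c in credences) == 1

print(believes_all(credences, Fraction(1, 2)))      # True: every member of an inconsistent set is believed
print(believes_all(credences, Fraction(n - 1, n)))  # False: a threshold of 1 - 1/n blocks this
```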

Here's one thought. Let's suppose that the threshold for binary belief is context dependent in some way (I mean to use this broadly, rather than committing to some particular, potentially controversial semantic analysis of belief attributions). The threshold that marks the shift to binary belief can vary depending on aspects of the context. The thought, crudely put, is that there'll be the following constraint on what thresholds can be set: in a context where n propositions are being entertained, the threshold for binary belief must be at least 1-1/n.

There is, of course, lots to clarify about this. But notice that now, relative to every context, we'll get logical consistency as a constraint on the pattern of binary belief (assuming that to believe that p is in part to entertain that p).

[As Christensen emphasises, this is not the same thing as getting closure to hold in every context. Suppose we consider the three propositions A, B, and A&B. Consistency means that we cannot accept the first two and also accept the negation of the last. And indeed, with the threshold set at 2/3, we get this result. But closure would tell us that in every situation in which we believe the first two, we should believe the last. And it's quite consistent to believe A and B (say, by having credence 2/3 in each) and to fail to believe A&B (say, by having credence 1/3 in this proposition), as the sketch after this aside illustrates. Probabilistic coherence isn't going to save the extendability of beliefs by deduction, for any reasonable choice of threshold.

Of course, if we allow a strong notion of disbelief or rejection, such that someone disbelieves that p iff their uncertainty of p is past the threshold (the same threshold as for belief), then we’ll be able to read off from the consistency constraint that in a valid argument, if one believes the premises, one should abandon disbelief in the conclusion. This is not closure, but perhaps it might sweeten the pill of giving up on closure.]
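
To put explicit numbers behind the bracketed counterexample, here's a small check (my own construction) that the credences mentioned are probabilistically coherent: spread weight 1/3 over each of the worlds where A&B, A&~B and ~A&B hold, and we get credence 2/3 in A, 2/3 in B, but only 1/3 in A&B. With the threshold at 2/3 (read non-strictly here, just for illustration), A and B are believed while A&B is not.

```python
from fractions import Fraction

# Explicit world-weights witnessing the coherence of the credences in the example.
# Worlds are (A, B) truth-value pairs; the weights form a probability distribution.
weights = {
    (True, True):   Fraction(1, 3),
    (True, False):  Fraction(1, 3),
    (False, True):  Fraction(1, 3),
    (False, False): Fraction(0),
}

def credence(sentence):
    """Probability of a sentence (given as a function of the A/B truth values)."""
    return sum(w for (a, b), w in weights.items() if sentence(a, b))

cr_A = credence(lambda a, b: a)           # 2/3
cr_B = credence(lambda a, b: b)           # 2/3
cr_AB = credence(lambda a, b: a and b)    # 1/3

threshold = Fraction(2, 3)
def believed(c):
    return c >= threshold                 # non-strict threshold, for illustration

print(cr_A, cr_B, cr_AB)                  # 2/3 2/3 1/3
print(believed(cr_A), believed(cr_B))     # True True: believe A, believe B
print(believed(cr_AB))                    # False: but not A&B, so closure fails
```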

Without logical consistency being a pro tanto normative constraint on believing, I'm sceptical that we're really dealing with a notion of binary belief at all. Suppose this is accepted. Then we can use the considerations above to argue (1) that if the threshold account of binary belief is right, then thresholds (if not extreme) must be context dependent, since for no fixed choice of threshold less than 1 will consistency be upheld; and (2) that there's a natural constraint on thresholds in terms of the number of propositions entertained.

The minimal conclusion, for this threshold theorist, is that the more propositions we entertain, the harder it will be for our attitudes to count as binary beliefs. Consider the lottery paradox construed this way:


1 loses

2 loses

...

N loses

So: everyone loses

Present this as the following puzzle: We can believe all the premises, and disbelieve the conclusion, yet the latter is entailed by the former.

We can answer this version of the lottery paradox using the resources described above. In a context where we're contemplating this many propositions, the threshold for belief is so high that we won't count as believing the individual premises. But we can explain why the paradox seems so compelling: entertain any one of them individually, and we will count as believing it (our credences remaining fixed throughout).
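
Putting rough numbers on that (a sketch of the proposal as I'm understanding it, with the constraint that the contextual threshold be at least 1 - 1/(number of propositions entertained)): with all N premises and the conclusion on the table, the threshold is at least N/(N+1), which outstrips the credence of (N-1)/N that we have in each individual "ticket i loses". Entertain a premise more or less on its own, though, and the threshold can drop low enough for that same credence to count as belief.

```python
from fractions import Fraction

def contextual_threshold(num_entertained):
    """The proposed constraint: in a context where n propositions are entertained,
    the threshold for binary belief is at least 1 - 1/n (take the minimum value)."""
    return 1 - Fraction(1, num_entertained)

N = 1000  # lottery size
credence_ticket_i_loses = 1 - Fraction(1, N)

# Context 1: the whole argument is on the table -- N premises plus the conclusion.
big_context = contextual_threshold(N + 1)
print(credence_ticket_i_loses >= big_context)    # False: the premises don't count as believed here

# Context 2: just one premise is entertained, alongside (say) its negation --
# two propositions in play; the choice of two is only illustrative.
small_context = contextual_threshold(2)
print(credence_ticket_i_loses >= small_context)  # True: the same credence now counts as belief
```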

Of course, there are other versions of the lottery paradox that we can formulate, e.g. relying on closure, for which we have no answer. Or at least, our answer is just to reject closure as a constraint on rational binary beliefs. But with a contextually variable threshold account, as opposed to a fixed threshold account, we don't have to retreat any further.

Thursday, March 08, 2007

Supervaluational consequence again

I've just finished a new version of my paper on supervaluational consequence. A pdf version is available here. I thought I'd post something explaining what's going on therein.

Let's start at the beginning. Classical semantics requires, inter alia, that there be a unique intended interpretation of the language. This single interpretation will assign to each name a single referent, to each predicate a set of individuals, and similarly for other grammatical categories.

But sometimes, the idea that there are such unique referents, extensions and so on, looks absurd. What supervaluationism (in the liberal sense I’m interested in) gives you is the flexibility to accommodate this. Supervaluationism requires, not a single intended interpretation, but a set of interpretations.

So if you're interested in the problem of the many, and think that there's more than one optimal candidate referent for "Kilimanjaro"; if you're interested in theory change, and think that relativistic mass and rest mass are equi-optimal candidate properties to be what "mass" picks out; if you're interested in inscrutability of reference, and think that rabbit-slices and undetached rabbit parts, as well as rabbits themselves, are in the running to be in the extension of "rabbit"; if you're interested in counterfactuals, and think that it's indeterminate which world is the closest one where Bizet and Verdi were compatriots; if you think vagueness can be analyzed as a kind of multiple-candidate indeterminacy of reference; if you find any of these ideas plausible, then you should care about supervaluationism.

It would be interesting, therefore, if supervaluationism undermined the tenets of the kind of logic that we rely on. For either, in the light of the compelling applications of supervaluationism, we will have to revise our logic to accommodate these phenomena; or else supervaluationism as a theory of these phenomena is itself misconceived. Either way, there's lots at stake.

Orthodoxy is that supervaluationism is logically revisionary, in that it involves admitting counterexamples to some of the most familiar classical inferential moves: conditional proof, reductio, argument by cases, contraposition. There's a substantial heterodox movement which recommends an alternative way of defining supervaluational consequence (so-called "local consequence") which is entirely non-revisionary.
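
To give a flavour of where the orthodox charge comes from, here's a toy model in Python (my own sketch of the standard sort of example from the literature, not anything lifted from the paper). Supertruth is truth on every admissible interpretation, and a "determinately" operator D is evaluated globally. Then p globally entails Dp, but the conditional p → Dp fails to be supertrue in a model where p is borderline, which is why conditional proof is standardly said to fail for global supervaluational consequence.

```python
# A borderline case: two admissible interpretations ("precisifications") disagree on p.
precisifications = [{"p": True}, {"p": False}]

def det(sentence):
    """'Determinately phi': true (at every point) iff phi is true on all precisifications."""
    return lambda point: all(sentence(q) for q in precisifications)

def supertrue(sentence):
    """Supertruth: truth on every precisification."""
    return all(sentence(point) for point in precisifications)

p = lambda point: point["p"]
d_p = det(p)
p_implies_dp = lambda point: (not p(point)) or d_p(point)

# Global consequence preserves supertruth: since p isn't supertrue in this model,
# the entailment p |= Dp isn't threatened here (and it holds in general).
print(supertrue(p))             # False
print(supertrue(d_p))           # False
# But the corresponding conditional is not supertrue: the usual alleged
# counterexample to conditional proof for global supervaluational consequence.
print(supertrue(p_implies_dp))  # False: false on the precisification where p is true
```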

My paper aims to do a number of things:

  1. to give persuasive arguments against the local consequence heterodoxy
  2. to establish, contra orthodoxy, that standard supervaluational consequence is not revisionary (this, granted a certain assumption)
  3. to show that, even if the assumption is rejected, the usual case for revisionism is flawed
  4. to give a final fallback option: even if supervaluational consequence is revisionary, it is not damagingly so, for it in no way involves revision of inferential practice.

It convinces me that supervaluationists shouldn't feel bad: they probably don't revise logic, and if they do, it's in a not-terribly-significant way.