Thursday, June 14, 2007

The fuzzy link

Following up on one of my earlier posts on quantum stuff, I've been reading an interesting literature relating ordinary talk to quantum mechanics. As before, caveats apply: please let me know if I'm making terrible technical errors, or if there's relevant literature I should be reading/citing.

The topic here is GRW. This way of doing things, recall, involved random localizations of the wavefunction. Let's think of the quantum wavefunction for a single particle system, and suppose it's initially pretty wide. So the amplitude of the wavefunction pertaining to the "position" of the particle is spread out over a wide span of space. But, if one of the random localizations occurs, the wavefunction collapses into a very narrow spike, within a tiny region of space.

But what does all this mean? What does it say about the position of the particle? (Here I'm following the Albert/Loewer presentation, and ignoring alternatives, e.g. Ghirardi's mass-density approach).

Well, one traditional line was that talk of position was only well defined when the particle was in an eigenstate of the position observable. Since on GRW the particle's wavefunction is pretty much always spread over all of space, on this view talk of a particle's location would never be well-defined.

Albert and Loewer's suggestion is that we alter the link. As previously, think of the wavefunction as giving a measure over different situations in which the particle has a definite location. Rather than saying x is located within region R iff the set of situations in which the particle lies in R is measure 1, they suggest that x is located within region R iff the set of situations in which the particle lies in R is almost measure 1. The idea is that even if not all of a particle's wavefunction places it right here, the vast majority of it is within a tiny subregion here. On the Albert/Loewer suggestion, we get to say on this basis, that the particle is located in that tiny subregion. They argue also that there are sensible choices of what "almost 1" should be that'll give the right results, though it's probably a vague matter exactly what the figure is.

Peter Lewis points out oddities with this. One oddity is that conjunction-introduction will fail. It might be true that marble i is in the box, for each i between 1 and 100, and yet fail to be true that all these marbles are in the box.

Here's another illustration of the oddities. Take a particle with a localized wavefunction. Choose some minimal region R around the peak of the wavefunction, such that enough of the wavefunction lies inside for the particle to count as within R. Then subdivide R into two pieces (the left half and the right half) such that the wavefunction is nonzero in each. The particle is within R. But it's not within the left half of R. Nor is it within the right half of R (in each case by modus tollens on the Albert/Loewer biconditional: neither half contains enough of the wavefunction on its own). But R is just the sum of the left half and right half of R. So either we're committed to some very odd combination of claims about location, or something is going wrong with modus tollens.
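
To see the numbers driving this, here's a minimal sketch in Python. The threshold and the wavefunction measures assigned to the regions are hypothetical figures of my own, chosen only to exhibit the structure of the problem; nothing in the Albert/Loewer proposal fixes them.

```python
# Toy model of the Albert/Loewer "fuzzy link". All numbers are hypothetical,
# chosen only to illustrate the structure of the oddity.

THRESHOLD = 0.9  # the "almost 1" figure; exactly where it lies is vague

# Hypothetical measure the wavefunction assigns to each region. Note that
# the two halves' measures sum to R's measure, as additivity requires.
measure = {
    "R": 0.92,
    "left half of R": 0.50,
    "right half of R": 0.42,
}

def located_within(region):
    """Albert/Loewer link: the particle is within `region` iff the measure
    of situations placing it there is at least the threshold."""
    return measure[region] >= THRESHOLD

for region in measure:
    print(f"particle is within {region}: {located_within(region)}")

# Prints:
#   particle is within R: True
#   particle is within left half of R: False
#   particle is within right half of R: False
# So the particle is within R, but within neither half of R, even though
# R is just the sum of its two halves.
```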

So clearly this proposal is pretty revisionary of well-entrenched principles. While I don't think it indefensible (after all, logical revisionism driven by science isn't a new idea), I do think it's a significant theoretical cost.

I want to suggest a slightly more general and, I think, much more satisfactory way of linking up the semantics of ordinary talk with the GRW wavefunction. The rule will be this:

"Particle x is within region R" is true to degree equal to the wavefunction-measure of the set of situations where the particle is somewhere in region R.

On this view, then, ordinary claims about position don't have a classical semantics. Rather, they have a degreed semantics (in fact, exactly the degreed-supervaluational semantics I talked about in a previous post). And ordinary claims about the location of a well-localized particle aren't going to be perfectly true, but only almost-perfectly true.

Now, it's easy but unwarranted to slide from "not perfectly true" to "not true". The degree theorist in general shouldn't concede that. It's an open question for now how to relate ordinary talk of truth simpliciter to the degree-theorist's setting.

One advantage of setting things up in this more general way is that we can take "off the peg" results about what sort of behaviour we can expect the language to exhibit. An example: it's well known that, in this sort of setting, if you have a classically valid argument, then the degree of untruth of the conclusion cannot exceed the sum of the degrees of untruth of the premises. This amounts to a "safety constraint" on arguments: we can put a cap on how badly wrong things can go, though there'll always be the phenomenon of slight degradations of truth value across arguments, unless we're working with perfectly true premises. So there's still some point in classifying arguments like conjunction-introduction as "valid" on this picture, for that captures a certain kind of important information.
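
In symbols (a sketch, writing v(s) for the degree of truth of s): if the argument from premises gamma_1, ..., gamma_n to conclusion phi is classically valid, then

\[
\gamma_1,\dots,\gamma_n \models \varphi
\quad\Longrightarrow\quad
1 - v(\varphi) \;\le\; \sum_{i=1}^{n}\bigl(1 - v(\gamma_i)\bigr)
\]

With a hundred premises each untrue to degree 0.01, for instance, the bound on the conclusion's untruth is 1: no constraint at all, which is exactly the room the marble case exploits.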

Say that the figure Albert and Loewer identified as sufficient for particle-location was 1-p. Then the way to generate something like the Albert and Loewer picture on this view is to identify truth with truth-to-degree-1-p. In the marbles case, the degrees of falsity of each premise "marble i is in the box" collectively "add up" in the conclusion to give a degree of falsity beyond the permitted limit. In the subdivided-region case, the diagnosis is similar: the small degrees of falsity of the individual location claims accumulate, step by step, until the threshold is breached.

An alternative to the Albert-Loewer suggestion for making sense of ordinary talk is to go for a universal error-theory, supplemented with the specification of a norm for assertion. To do this, we allow the identification of truth simpliciter with truth-to-degree-1. Since ordinary assertions of particle location won't be true to degree 1, they'll be untrue. But we might say that such sentences are assertible provided they're "true enough": true to the Albert/Loewer figure of 1-p, for example. No counterexamples to classical logic would then threaten (Peter Lewis's cases would all be unsound, for example). Admittedly, a related phenomenon would arise: we'd be able to go by classical reasoning from a set of premises all of which are assertible to a conclusion that is unassertible. But there are plausible mundane examples of this phenomenon, as exhibited in the preface "paradox".

But I'd rather not go either for the error-theoretic approach or for the identification of a "threshold" for truth, as the Albert/Loewer-inspired proposal suggests. I think there are more organic ways to handle utterance-truth within a degree-theoretic framework. It's a bit involved to go into here, but the basic ideas are extracted from recent work by Agustin Rayo, and involve only allowing "local" specifications of truth simpliciter, relative to a particular conversational context. The key thing is that on the semantic side, once we have the degree theory, we can take "off the peg" an account of how such degree theories interact with a general account of communication. So combining the degree-based understanding of what validity amounts to (in terms of limiting the creep of falsity into the conclusion) with this degree-based account of assertion, I think we've got a pretty powerful, pretty well-understood overview of how ordinary-language position-talk works.

Kripkenstein's monster

Though I've thought a lot about inscrutability and indeterminacy (well, I wrote my PhD thesis on it) I've always run a bit scared from the literature on Kripkenstein. Partly this is because the literature is so huge and sometimes intimidatingly complex. Partly it's because I was a bit dissatisfied/puzzled by some of the foundational assumptions that seemed to be around, and was setting the issue aside until I had time to think things through.

Anyway, I'm now thinking about making a start on thinking about the issue. So this post is something in the way of a plea for information: I'm going to set out how I understand the puzzle involved, and invite people to disabuse me of my ignorance, recommend good readings, or point out where these ideas have already been worked out.

To begin with, let's draw a rough divide between three types of facts:

  (A) Paradigmatically naturalistic facts (patterns of assent and dissent, causal relationships, dispositions, etc.).
  (B) Meaning-facts. (Of the form: "+" means addition; "67+56=123" is true; "Dobbin" refers to Dobbin.)
  (C) Linguistic norms. (Of the form: one should utter "67+56=123" in such-and-such circs.)

Kripkenstein’s strategy is to ask us to show how facts of kind (A) can constitute facts of kinds (B) and (C). (An oddity here: the debate seems to have centred on a “dispositionalist” account of the move from (A) to (B). But that’s hardly a popular option in the literature on naturalistic treatments of content, where variants of radical interpretation (Lewis, Davidson), causal (Fodor, Field) and teleological (Millikan) theories are far more prominent. Boghossian, in his state-of-the-art article in Mind, seems to say that these can all be seen as variants of the dispositionalist idea. But I don't quite understand how. Anyway...)

One of the major strategies in Kripkenstein is to raise doubts about whether this or that constitutive story can really found facts of kind (C). Notice that if one assumes that (B) and (C) are a joint package, then this will simultaneously throw into doubt naturalistic stories about (B).

In what sense might they be a joint package? Well, maybe some sort of constraint like the following is proposed: unless putative meaning-facts make immediately intelligible the corresponding linguistic norms, they don’t deserve the name “meaning-facts” at all.

To see an application, suppose that some of Kripke’s “technical” objections to the dispositionalist position were patched (e.g. suppose one could non-circularly identify a disposition of mine to return the intuitively correct verdicts on each and every arithmetical sum). Still, there’s the “normative” objection: why are those verdicts the ones one should return in those circumstances? And (rightly or wrongly) the Kripkenstein challenge is that this normative explanation is missing. So (according to the Kripkean) these ain’t the meaning-facts at all.

There's one purely terminological issue I'd like to settle at this point. I think we shouldn’t just build it into the definition of meaning-facts that they correspond to linguistic norms in this way. After all, there are lots of theoretical roles for meaning other than supporting linguistic norms (a predictive/explanatory role wrt understanding, for example). I propose to proceed as follows. Firstly, let’s speak of “semantic” or “meaning” facts in general (picked out, if you like, via other aspects of the theoretical role of meaning). Secondly, we'll look for arguments for or against the substantive claim that part of the job of a theory of meaning is to subserve, or make immediately intelligible, or whatever, facts like (C).

Onto details. The Kripkenstein paradox looks like it proceeds on the following assumptions. First, three principles are taken as targets (we can think of them as part of a "folk theory" of meaning):

  1. the meaning-facts are exactly as we take them to be: i.e. arithmetical truths are determinate “to infinity”; and
  2. the corresponding linguistic norms are determinate “to infinity” as well; and
  3. (1) and (2) are connected in the obvious way: if S is true, then in appropriate circumstances, we should utter S.

The “straight solutions” seem tacitly to assume that our story should take the following form. First, give some constitutive story about what fixes facts of kind (B), and suppose there are no obvious counterexamples, i.e. that the technical challenge is met. Then the Kripkensteinian looks to see whether this “really gives you meaning”, in the sense that we’ve also got a story underpinning (C). Given our earlier discussion, the Kripkensteinian challenge needs to be rephrased somewhat. Put the challenge as follows. First, the straight solution gives a theory of semantic facts, which is evaluated for success on grounds that set aside putative connections to facts of kind (C). Next, we ask: can we give an adequate account of facts of kind (C) on the basis of what we have so far? The Kripkensteinian suggests not.

The “sceptical solution” starts in the other direction. It takes as groundwork facts of kind (A) and (C) (perhaps explaining facts of kind (C) on the basis of those of kind (A)?) and then uses this in constructing an account of (something like) (B). One Kripkensteinian thought here is to base some kind of vindication of (B)-talk on the (C)-style claim that one ought to utter sentences involving semantic vocabulary such as " '+' means addition".

The basic idea one should be having at this point is more general, however. Rather than start by assuming that facts like (B) are prior in the order of explanation to facts like (C), why not consider other explanatory orderings? Two spring to mind: linguistic normativity and meaning-facts are explained independently; or linguistic normativity is prior in the order of explanation to meaning-facts.

One natural thought in the latter direction is to run a “radical interpretation” line. The first element of a radical interpretation proposal is to identify a “target set” of T-sentences, which the meaning-fixing T-theory for a language is (ceteris paribus) constrained to generate. Davidson suggests we pick the T-sentences by looking at what sentences people de facto hold true in certain circumstances. But, granted (C)-facts, when identifying the target set of T-sentences one might instead appeal to what persons ought to utter in such-and-such circs.
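
(To illustrate with the stock example: one target T-sentence might be: "Schnee ist weiss" is true-in-German iff snow is white. On Davidson's line it earns its place in the target set because German speakers de facto hold "Schnee ist weiss" true when confronted with white snow; on the normative variant, because those are the circumstances in which they ought to assert it.)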

There’s no obvious reason why such normative facts need be construed as themselves “semantic” in nature, nor any obvious reason why the naturalistically minded shouldn’t look for reductions of this kind of normativity (it might be a normativity on a par with that involved in weak hypothetical imperatives, e.g. the claim that I should eat this food in order to stay alive, which I take to be pretty unscary). So there's no need to give up on the reductionist project in doing things this way. Nor is it only radical interpretation that could build this sort of appeal to (C)-type facts into the account of meaning.

One nice thing about building normativity into the subvening base for semantic facts in this way is that we make it obvious that we’ll get something like a (perhaps restricted and hedged) form of (3). Running accounts of (B) and (C) separately would make the convergence of meaning-facts and linguistic norms seem like a coincidence, if it in fact holds in any form at all.

Is there anything particularly sceptical about the setup, so construed? Not in the sense in which Kripke’s own suggestion is. Two things about the Kripke proposal (as I suggested we read it). First, it’s clear that we’ve got some kind of projectionist/quasi-realist treatment of the semantic going on (it’s only the acceptability of semantic claims that’s being vindicated, not "semantic facts" as most naturalistic theories of meaning would conceive them). Second, the sort of norms to which we can reasonably appeal will be grounded in practices of praise and blame in a linguistic community to which we belong; and given the sheer absence of people doing very long sums, there just won't be a practice of praising and blaming people for uttering "x+y=z" for sufficiently large choices of x, y and z. The linguistic norms we can ground in this way might be much more restricted than one might at first think: maybe only finitely many sentences S are such that something of the following form holds: we should assert S in circs c. Though there might be norms governing apparently infinitary claims, there is no reason to suppose in this setup that there are infinitely many type-(C) facts. That'll mean that (2) and (3) are dropped.

In sum, Kripke's proposal is sceptical in two senses: it is projectionist, rather than realist, about meaning-facts. And it drops what one might take to be central planks of the folk theory of meaning, (2) and (3) above.

On the other hand, the modified radical interpretation or causal theory proposal I’ve been sketching can perfectly well be realist about meaning-facts, having them “stretch out to infinity” as much as you like (I’d be looking to combine the radical interpretation setting sketched earlier with something like Lewis’s eligibility constraints on correct interpretation, to secure semantic determinacy). So it's not "sceptical" in the first sense in which Kripke's theory is: it doesn't involve any dodgy projectivism about meaning-facts. But it is a "sceptical solution" in the second sense, since it gives up the claims that linguistic norms "stretch out" to infinity, and that truth-conditions of sentences are invariably paired with some such norm.


[Thanks (I think) are owed to Gerald Lang for the title to this post. A quick google search reveals that others have had the same idea...]

Wednesday, June 13, 2007

Why preserve the letter of Humean supervenience?

Today in the phil physics reading group here at Leeds we were discussing Tim Maudlin's paper "Why be Humean?".

The question arose of why we should hold to the letter of the Humean supervenience principle. What that requires is that everything there is should supervene on the distribution of fundamental (local, monadic) properties and spatio-temporal relations. Why not, e.g., allow further perfectly natural relations holding between point particles, so long as they are physically motivated and don't enter into necessary connections with other fundamental properties or relations?

Brian Weatherson's Lewis blog addressed something like this question at one point. His suggestion (I take it) was that the interest of tightly constrained Humean supervenience is methodological: roughly, if we can fit all important aspects of the manifest image (causality, intentionality, consciousness, laws, modality, whatever) into an HS world, then we should be confident that we could do the same in non-HS worlds, worlds which are more generous with the range of fundamentals they commit us to. If Brian's right about this, the motivation for going for the strongest formulation of HS is that allowing any more would make our stories about how to fit the manifest image into the world as described by science more dependent on exactly what science delivers.

If that's the motivation for HS, then it's not so interesting whether physics contradicts HS: what's interesting is whether the stories about causality, intentionality and the rest that Lewis describes with the HS equipment in mind, go through in the non-HS worlds with minimal alteration.

Jobs at Leeds

Just to note that there are currently a bunch of jobs in philosophy/history and philosophy of science being advertised at Leeds. These are fixed-term (one-year) lectureships, and are pretty nice. While some places make temporary positions into teaching drudgery, Leeds has a policy of appointing full lecturer replacements, and so people appointed to these posts have in the past had exactly the same teaching/admin load as the rest of us. Importantly for people looking to get out publications and secure permanent jobs, this means you get the same time to do research as a permanent lecturer. (Recent occupants of these roles have gone on to secure permanent jobs and postdoc positions in the UK.)

And of course you get to hang out with the lovely Leeds folk. So apply!

converting LaTeX into word...

I write (most of) my research in LaTeX format. But journals often demand .rtf or even .doc formats for the final version of my paper. Sometimes by speaking to them very nicely you can get them to accept tex versions (Phil Studies and Phil Perspectives both did this). But sometimes that's just not an option.

This leads to hours of heartache and potentially lots of typos, as I try ten ways of transferring the stuff over to my word processor. And I have to deal with getting logic into Word, which is never nice. I used to use a special compiler to get it into html format, and then "save as" Word. But that didn't actually save much time, so I've recently begun just to cut-and-paste the raw tex file, reformat it, and rewrite any code I've put in. I've downloaded a couple of trial applications that promise to convert stuff directly into .doc, but with no success (they throw a wobbly whenever they meet any dollar signs, it seems).

Does anyone know what the best way to do this is? Would it help to get Scientific Word? (More money to the man, I know, but at this stage I'm desperate.)

Friday, June 08, 2007

Worlds


[Image: "earths", originally uploaded by blue sometimes]

Hee hee

Supervaluations and revisionism once more

I've just spent the afternoon thinking about an error I found in my paper "supervaluational consequence" (see this previous post). I've figured out how to patch it now, so thought I'd blog about it.

The background is the orthodox view that supervaluational consequence will lead to revisions of classical logic. The strongest case I know for this (due to Williamson) is the following. Consider the claim "p&~Determinately(p)". This (it is claimed) cannot be true on any serious supervaluational model of our language. Equivalently, you can't have p and ~Determinately(p) both true in a single model. If classical reductio were an OK rule of inference, therefore, you'd be able to argue from ~Determinately(p) to ~p. But nobody thinks that's supervaluationally valid: any indeterminate sentence will be a counterexample to it. So classical reductio should be given up.

This is stronger than the more commonly cited argument: that supervaluational semantics vindicates the move from p to Determinately(p), but not the material conditional "if p then Determinately(p)" (a counterexample to conditional proof). The reason is that, if "Determinately" itself is vague, arguably the supervaluationist won't be committed to the former move. The key here is the thought that as well as things that are determinately sharpenings of our language, there may be interpretations which are borderline sharpenings. Perhaps interpretation X is an "admissible interpretation of our language" on some sharpenings, but not on others. If p is true at all the definite sharpenings, but false at X, then that may lead to a situation where p is supertrue, but Determinately(p) isn't.

But orthodoxy says that this sort of situation (non-transitivity in the accessibility relation among interpretations of our language) does nothing to undermine the case for revisionism I mentioned in the first paragraph.

One thing I do in the paper is construct what seems to me a reasonable-looking toy semantics for a language, on which one can have both p and ~Determinately(p). Here it is.

Suppose you have five colour patches, ranging from red to orange (non-red). Call them A,B,C,D,E.

Suppose that our thought and talk make it the case that only interpretations which put the cut-off between B and D are determinately "sharpenings" of the language we use. Suppose, however, that there's some fuzziness in what it is to be an "admissible interpretation". For example, an interpretation that places the cut-off between B and C counts as admissible both interpretations placing the cut-off between C and D and interpretations placing the cut-off between A and B. Likewise, an interpretation that places the cut-off between C and D counts interpretations placing the cut-off between B and C as admissible, but also counts interpretations placing the cut-off between D and E as admissible.

Modelling the situation with four interpretations, labelled AB, BC, CD, DE, for where they place the red/non-red cut-off, we can express the thought like this: each interpretation accesses (thinks admissible) itself and its immediate neighbours, but nothing else. But only BC and CD are the sharpenings.

My first claim is that all this is a perfectly coherent toy model for the supervaluationist: nothing dodgy or "unintended" is going on.

Now let's think about the truth values assigned to particular claims. Notice, to start with, that the claim "B is red" will be true at each sharpening. The claim "Determinately, B is red" will be true at the sharpening CD, but it won't be true at the sharpening BC, for BC accesses an interpretation on which B counts as non-red (viz. AB).

Likewise, the claim "D is not red" will be true at each sharpening; but "Determinately, D is not red", while true at the sharpening BC, fails at CD, due to the latter seeing the (non-sharpening) interpretation DE, at which D counts as red.

In neither of these atomic cases do we find "p and ~Det(p)" coming out true (that's where I made a mistake previously). But by considering the following, we can find such a case:

Consider "B is red and D is not red". It's easy to see that this is true at each of the sharpenings, from what's been said above. But also "Determinately(B is red and D is not red)" is false at each of the sharpenings. It's false at BC because of the accessible interpretation AB at which B counts as non-red. It's false at CD because of the accessible interpretation DE at which D counts as red.

So we've got "B is red and D is not red, & ~Determinately(B is red and D is not red)". And we've got that in a perfectly reasonable toy model for a language of colour predicates.
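
For the suspicious, here's a minimal sketch of the toy model in Python, just to check that the claims above come out as advertised. The encoding (and the helper names) are mine; the model itself is exactly the one described.

```python
# The toy model: patches A-E; interpretations labelled by where they put the
# red/non-red cut-off; each interpretation accesses itself and its neighbours.
patches = ["A", "B", "C", "D", "E"]
access = {
    "AB": ["AB", "BC"],
    "BC": ["AB", "BC", "CD"],
    "CD": ["BC", "CD", "DE"],
    "DE": ["CD", "DE"],
}
sharpenings = ["BC", "CD"]  # only these are (determinately) sharpenings

def red(patch, interp):
    """On interpretation XY, a patch is red iff it doesn't come after X."""
    return patches.index(patch) <= patches.index(interp[0])

def det(sentence, interp):
    """Determinately(s) holds at i iff s holds at everything i accesses."""
    return all(sentence(j) for j in access[interp])

def conj(i):
    """The conjunction "B is red and D is not red" at interpretation i."""
    return red("B", i) and not red("D", i)

for s in sharpenings:
    print(s, "conj:", conj(s), "Det(conj):", det(conj, s))
# BC conj: True Det(conj): False   (accessible AB counts B as non-red)
# CD conj: True Det(conj): False   (accessible DE counts D as red)

# "p & ~Det(p)" is true at every sharpening, i.e. supertrue:
print(all(conj(s) and not det(conj, s) for s in sharpenings))  # True
```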

(Why do people think otherwise? Well, the standard way of modelling the consequence relation in settings where the accessibility relation is non-transitive is to think of the sharpenings as *all the interpretations accessible from some designated interpretation*. And that imposes additional structure which, for example, the model just sketched doesn't satisfy. But the additional structure seems to me totally unmotivated, and I provide an alternative framework in the paper for freeing oneself from those assumptions. The key thing is not to try to define "sharpening" in terms of the accessibility relation.)

The conclusion: the best extant case for (global) supervaluational consequence being revisionary fails.

Wednesday, June 06, 2007

Bohm and Lewis

So I've been thinking and reading a bit about quantum theory recently (originally in connection with work on ontic vagueness). One thing that's been intriguing me is the Bohmian interpretation of non-relativistic quantum theory. The usual caveats apply: I'm no expert in this area, I'm on a steep learning curve, and I wouldn't be terribly surprised if there's some technical error in here somewhere.

What is Bohmianism? Well, to start with, it's quite a familiar picture. There are a bunch of particles, each supplied with non-dynamical properties (like charge and mass) and definite positions, which move around in a familiar three-dimensional space. The actual trajectories of those particles, though, are not what you'd expect from a classical point of view: they don't trace straight lines through the space, but rather wobbly ones, as if they were bobbing around on some wave.

The other part of the Bohmian picture, I gather, is that one appeals to a wavefunction that lives in a space of far higher dimension: configuration space. As mentioned in a previous post, I'm thinking of this as a set of (temporal slices of) possible worlds. The actual world is a point in configuration space, just as one would expect given this identification.
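
(For reference, and so far as I understand the standard presentation, the guidance equation is what links the two parts: the velocity of particle k is fixed by the wavefunction psi evaluated at the actual configuration Q = (Q_1, ..., Q_N):

\[
\frac{dQ_k}{dt} \;=\; \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\!(Q_1,\dots,Q_N)
\]

Hence the wobbly trajectories: the velocity field inherits the structure of the wave.)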

The first part of the Bohmian picture sounds all very safe from the metaphysician's perspective: the sort of world at which, for example, Lewis's project of Humean supervenience could get up and running, the sort of thing to give us the old-school worries about determinism and freedom (the evolution of a Bohmian world is totally deterministic). And so on and so forth.

But the second part is all a bit unexpected. What is a wave in modal space? Is that a physical thing (after all, it's invoked in fundamental physical theory)? How can a wave in modal space push around particles in physical space? Etc.

I'm sure there's lots of interesting phil physics and metaphysics to be done that takes the wave function seriously (I've started reading some of it). But I want to sketch a metaphysical interpretation of the above that treats it unseriously, for those of us with weak bellies.

The inspiration is Lewis's treatment of objective chance (as explained, for example, in his "Humean supervenience debugged"). The picture of chance he there sketches has some affinities with frequentism: when we describe what there is and how it is in fundamental terms, we never mention chances. Rather, we just describe patterns of instantiation: radioactive decay here, now; another radioactive decay there, then (for example). What one then has to work with are certain statistical regularities that emerge from the mosaic of non-chancy facts.

Now, it's very informative to be told about these regularities, but it's not obvious how to capture that information within a simple theory (we could just write down the actual frequencies, but that'd be pretty ugly, and wouldn't allow us to capture underlying patterns among the frequencies). So, Lewis suggests, when we're writing down the laws we should avail ourselves of a new notion "P", assigning numbers to proposition-time pairs and obeying the usual probability axioms. We'll count a P-theory as "fitting" the facts (roughly) to the extent that the P-values it assigns to propositions match up, overall, with the statistical regularities we mentioned earlier. Thus, if we're told that a certain P-theory is "best", we're given some (ceteris paribus) information about what the statistical regularities are. At not much cost in complexity, therefore, our theory gains enormously in informativeness.

The proposal, then, is that the chance of p at t is n iff the overall best theory assigns n to (p,t).
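
Here's a toy illustration of the "fit" idea in Python. The outcome data and the candidate P-theories are invented for illustration, and I'm ignoring the other Lewisian desiderata (simplicity, strength) that also go into picking the best theory:

```python
# Toy Lewisian "fit": each candidate P-theory assigns a constant chance of
# decay-per-interval; fit = the probability the theory gives to the actual
# history of outcomes. Data and candidate values are invented.
history = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = a decay observed, 0 = none

def fit(p, outcomes):
    """Probability the P-theory with chance p assigns to the actual outcomes."""
    prob = 1.0
    for o in outcomes:
        prob *= p if o == 1 else 1 - p
    return prob

candidates = [0.1, 0.3, 0.5]   # rival one-parameter P-theories
best = max(candidates, key=lambda p: fit(p, history))
print(best)  # 0.3, matching the actual frequency of decays in the history
```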

That's very rough, but I hope the overall idea is clear: we can be "selectively instrumentalist" about some of the vocabulary that appears in fundamental physical theory. Though many of the physical primitives will also be treated as metaphysically basic (as expressing "natural properties"), some bits that by the lights of independently motivated metaphysics are "too scary" can be regarded as just reflections of best theory, rather than part of the furniture of the world.

The question relevant here is: why stop at chance? If we've been able to get rid of one function over the space of possible worlds (the chance measure), why not do the same with another metaphysically troubling piece of theory: the wavefunction field?

Recall the first part of the Bohmian picture: particles moving through 3-space, in rather odd paths, "as if guided by a wave". Suppose this was all there (fundamentally) was. Well then, we're going to be in a lot of trouble finding a decent way of encapsulating all this data about the trajectories of particles: the theory would be terribly unwieldy if we had to write out the exact trajectories in longhand. As before, there's much to be gained in informativeness if we allow ourselves a new notion in the formulation of overall theory: L, say. L will assign scalar values (complex numbers) to proposition-time pairs, and we can then use L in writing down the wavefunction equations of quantum mechanics, which elegantly predict the future positions of particles on the basis of their present positions. The "best" L-theory, of course, will be the one whose predictions of the future positions of particles fit the actual future facts. The idea is that wavefunction talk is thereby allowed for: the wavefunction takes value z at region R of configuration space at time t iff the best L-theory assigns z to L(R,t).

So that's the proposal: we're selectively instrumentalist about the wavefunction, just as Lewis is selectively instrumentalist about objective chance. (I'm using "instrumentalist" in a somewhat picturesque sense, by the way: I'm certainly not denying that chance or wavefunction talk has robust, objective truth-conditions.) There are, of course, ways of being unhappy with this sort of treatment of basic physical notions in general (e.g. one might complain that the explanatory force has been sucked from the notions of chance or the wavefunction). But I can't see anything here that Humeans such as Lewis should be unhappy with.

(There's a really nice paper by Barry Loewer on Lewisian treatments of objective chance which I think is the thing to read on this stuff. Interestingly, at the end of that paper he canvasses the possibility of extending the account to the "chances" one (allegedly) finds in Bohmianism. It might be that he has in mind something that is, in effect, exactly the position sketched above. But there are also reasons for thinking there might be differences between the two ideas. Loewer's idea turns on the thought that one can have something that deserves the name objective chance even in a world where deterministic laws underpin what happens (as is the case for Bohmianism, and for the chancy laws of statistical mechanics in a world with deterministic underlying dynamics). I'm inclined to agree with Loewer on this; but even if that were given up, and one thought that the measure induced by the wavefunction isn't a chance-measure, the position I've sketched is still a runner: the fundamental idea is to use the Lewisian tactics to remove ideological commitment, not to remove ideological commitment to chance specifically. [Update: it turns out that Barry definitely wasn't thinking of getting rid of the wavefunction in the way I canvass in this post: the suggestion in the cited paper is just to deal with the Bohmian (deterministic) chances in the Lewisian way.])

[Update: I've just read through Jonathan Schaffer's BJPS paper which (inter alia) attacks the Loewer treatment of chance in statistical mechanics and Bohmian mechanics (though I think some of his arguments are more problematic in the Bohmian case than in the stat mech case). But anyway, if Jonathan is right, it still wouldn't matter for the purposes of the theory presented here, which doesn't need to claim that the measure determined by the wavefunction is anything to do with chance: it has a theoretical role, in formulating the deterministic dynamical laws, that's quite independent of the issues Jonathan raises.]

Academic careers

Others have already pointed this out, but it's worth highlighting.

Terence Tao - recent winner of the Fields Medal (a sort of Nobel prize for mathematics) - has written some really interesting career advice. It's aimed at mathematicians, but lots of it is more generally applicable, and certainly lots of it strikes a chord with academic philosophy. It's also not just for graduate students: I'm a recent graduate, and I'm sure there's lots there that I'm not doing, which it's good to be reminded of.

The advice to "use the wastebasket" is going to be more difficult now that the University of Leeds has decided to remove all wastebaskets from our offices...

HT: Shawn Standefer, Richard Zach


p.s. here's one thing that struck me as particularly transferable:

"Don't prematurely obsess on a single "big problem" or "big theory"
. This is a particularly dangerous occupational hazard in this subject - that one becomes focused, to the exclusion of other mathematical activity, on a single really difficult problem in a field (or on some grand unifying theory) before one is really ready (both in terms of mathematical preparation, and also in terms of one career) to devote so much of one's research time to such a project. When one begins to neglect other tasks (such as writing and publishing one's "lesser" results), hoping to use the eventual "big payoff" of solving a major problem or establishing a revolutionary new theory to make up for lack of progress in all other areas of one's career, then this is a strong warning sign that one should rebalance one's priorities. While it is true that several major problems have been solved, and several important theories introduced, by precisely such an obsessive approach, this has only worked out well when the mathematician involved (a) has a proven track record of reliably producing significant papers in the area already, and (b) has a secure career (e.g. a tenured position). If you do not yet have both (a) and (b), and if your ideas on how to solve a big problem still have a significant speculative component (or if your grand theory does not yet have a definite and striking application), I would strongly advocate a more balanced approach instead: keep the big problems and theories in mind, and tinker with them occasionally, but spend most of your time on more feasible "low-hanging fruit", which will build up your experience, mathematical power, and credibility for when you are ready to tackle the more ambitious projects. "

Pictures from St Andrews (with added commentary)

Courtesy of Brit over at Lemmings

You can find the originals via the link here.

We had a great time in St Andrews, by the way. Two good conferences, lots of fun time spent with interesting people. And conference-accommodation to die for...

AJP paper

My paper on a certain kind of argument for structural universals has just appeared in AJP. Very exciting from my perspective: I've had things "forthcoming" for so long, I think I thought they'd always have that status.

FWIW, the paper discusses a certain argument for the existence of structural universals (that is, universals "made out of" other universals, as "being water" might be thought to be made out of "being Hydrogen", "being Oxygen", etc.). The argument is based on the (alleged) possibility of worlds with no fundamental physical layer: where things "go down forever". Quite a few people use this argument in print, and many more raise it in conversation when you're pressing a microphysicalist metaphysics.

This is part of a wider project exploring an ontological microphysicalism, where the only things that really exist are the physical fundamentals. The recent stuff on ontological commitment is, in part, a continuation of that project.

On a more practical note, I can't figure out how you access AJP articles these days: my institution is supposed to have a subscription, but the links that take you to the pdf don't seem live. Any ideas of how to get into it would be gratefully received!

Vagueness and quantum stuff

I've finally put online a tidied-up version of my ontic vagueness paper, which'll be coming out in Phil Quarterly some time soon. One idea in the paper is to give an account of truths in an ontically vague world, making use of the idea that more than one possible world is actual. The result is a supervaluation-like framework, with "precisifications" replaced by precise possible worlds. For some reason, truth-functional multi-valued settings seem to have a much firmer grip on the ontic vagueness debate than on the vagueness debate more generally. That seems a mistake to me.

(The idea of giving supervaluation-style treatments of ontic vagueness isn't unknown in the literature, however: in a couple of papers, Ken Akiba argues for this kind of treatment of ontic vagueness, though his route to the framework is pretty different from the one I like. And Elizabeth Barnes has been thinking and writing about this kind of modal treatment of ontic vagueness for a while, and I owe huge amounts to conversations with her about all of these issues. Her take on these matters is (non-coincidentally) very close to the one I like, and those interested should check out her papers for systematic discussion and defence of the coherence of ontic vagueness in this spirit.)

The project in my paper wasn't to argue that there is ontic vagueness, or even to tell you what ontic vagueness (constitutively) is. The project was just to set up a framework for talking about, and reasoning about, metaphysically vague matters, with a particular eye to evaluating the Evans argument against ontically vague identity. In particular, the framework I gave has no chance of giving any sort of reduction of metaphysical indeterminacy, since that very notion was used in defining up bits of the framework. (I'm actually pretty attracted to the view that the right way to think about these things is to treat indeterminacy as a metaphysical primitive, in the way that some modalists might treat contingency. See this previous blog post. I was later pointed to this excellent paper by David Barnett, where he works out this sort of idea in far more detail.)

One thing that I've been thinking about recently is how the sort of "indeterminacy" that people talk about in quantum mechanics might relate to this setting. So I want to write a bit about this here.

Some caveats. First, this stuff clearly isn't going to be interpretation-neutral: if you think Bohm gave the right account of quantum ontology, then you're not going to think there's much indeterminacy around. So I'll be supposing something like the GRW interpretation. Second, I'm not going to be metaphysically neutral even given this interpretation: there are going to be a bunch of other ways of thinking about the metaphysics of GRW that I don't consider here (I do think, however, that independently motivated metaphysics can contribute to the interpretation of a physical theory). Third, I'm only thinking of non-relativistic quantum theory here: quantum field theory and the like is just beyond me at the moment. Finally, I'm on a steep learning curve with this stuff, so please excuse stupidities.

You can represent the GRW quantum ontology as a wavefunction over a certain space (configuration space). Mathematically speaking, that's a scalar field over a set of points in a high-dimensional space, which then determines a measure over those points. As time rolls forward, the equations of quantum theory tell you how this field changes its values. Picture it as a wave evolving through time over this space. GRW tells you that at random intervals this wave undergoes a certain drastic change, and this drastic change is what plays the role of "collapse".
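
In symbols, and as far as I understand the standard setup: where psi is the wavefunction, the measure it determines over a set S of points of configuration space at time t is

\[
\mu_t(S) \;=\; \int_S \lvert\psi(q,t)\rvert^2 \, dq
\]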

That's all highly abstract. So let me try parlaying that into something more familiar to metaphysicians.

Suppose you're interested in a world with N particles in it, at time t. Without departing from classical modes of thinking yet, think of the possible arrangements of those particles at t: a scattering of particles equipped with mass and charge over a 3-dimensional space, say (think of the particles haecceitistically for now). Collect all these possible world-slices together into a set. There'll be a certain implicit ordering on this set: if the worlds contain nothing but those N massy and chargey particles located in space-time, then we can describe a world-slice w by giving, for each of the N particles, the coordinates of its location within w: that is, by giving a list of 3N coordinates. What this means is that each world-slice can be regarded as a point in a 3N-dimensional space (the first 3 dimensions giving the position of the first particle in w, the second 3 the position of the second, etc.). And this is what I'm taking to be the "configuration space". So what is configuration space, on the way I'm thinking of it? It's a certain set of time-slices of possible worlds.
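
Spelled out: if particle i sits at (x_i, y_i, z_i) in world-slice w, the identification is

\[
w \;\longmapsto\; (x_1, y_1, z_1,\; x_2, y_2, z_2,\; \dots,\; x_N, y_N, z_N) \;\in\; \mathbb{R}^{3N}
\]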

One Bohmian picture of quantum ontology fits very naturally into the way we usually think of possible worlds at this point. For Bohm says that one point in configuration space is special: it gives the actual positions of the particles. And this fits the normal way of thinking of possible worlds: the special point in configuration space is just the slice of the actual world at t. (Bohmian mechanics doesn't dispense with the wave across configuration space, of course: just as some physical theories appeal to objective chance in their natural laws, which we can represent as a measure across a space of possible worlds, Bohmianism appeals to a scalar field determining a measure across configuration space: the wavefunction.)

But on the GRW interpretation, we don't get anything like this traditional picture. What we have is configuration space and the wavefunction over it. Sometimes the amplitude of that wavefunction is highly concentrated on a set of world-slices that are in certain respects very similar: say, they all contain particles arranged in a rough pointer-shape in a certain location. But nevertheless, no single world will be picked out, and some amplitude will be given to sets of worlds which have the particles in all sorts of odd positions.

But of course, the framework for ontic vagueness I like is up for monkeying around with the actuality of worlds: there needn't be a single designated actual world, on the way I was thinking of things. But the picture I described doesn't exactly fit the present situation, for I supposed (following the supervaluationist paradigm) that there'd be a set of worlds, all of which would be "co-actual".

Yet there are other closely related models that'd help here. In particular, Lewis, Kamp and Edgington have described what I'll call a "degree supervaluationist" picture that looks to be exactly what we need. Here's the story, in the original setting. Your classical semantic theorist looks at the set of all possible interpretations of the language, and says that one among them is the designated (or "intended") one. Truth is truth at the unique, designated, interpretation. Your supervaluationist looks at the same space, and says that there's a set of interpretations with equal claim to be "intended": so they should all be co-designated. Truth is truth at each of the co-designated interpretations. Your degree-supervaluationist looks at the set of all interpretations, and says that some are better than others: they are "intended" to different degrees. So the way to describe the semantic facts is to give a measure over the space of interpretations that (roughly) gives in each case the degree to which a given interpretation is designated. Degree supervaluationism will share some of the distinctive features of the classical and standard supervaluational setups: for example, since classical tautologies are true at all interpretations, the law of excluded middle and the like will be "true to degree 1" (i.e. true on a set of interpretations of designation-measure 1).

I don't see any reason why we can't take this across to the worlds setting I favoured. Just as the traditional view is that there's a unique actual world among the space of possible worlds, and I argued that we can make sense of there sometimes being a set of co-actual worlds among that space (with something being true iff it is true at all of them), I now suggest that we should be up for there being some measure across the space of possible worlds, expressing the degree to which those worlds are actual.

The suggestion this is building up to is that we regard the measure determined by the wavefunction in GRW as the "actuality measure". Things are determinately the case to the extent that the set of worlds where they're true is assigned a high measure.

So, for example, suppose that the amplitude of the wavefunction is concentrated on worlds where Sparky is located within region R (suppose the measure of that space of world-slices is 0.9). Then it'll be determinately the case to degree 0.9 that Sparky is in location R. Of course, in a set of worlds of measure 0.1, Sparky will be outside R. So it'll be determinately the case to degree 0.1 that Sparky is outside R. (Of course, it'll be determinate to degree 1 that Sparky is either inside R or outside R: at all the worlds, Sparky is located somewhere!)
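
A minimal sketch of the bookkeeping in Python, with an invented discretisation of the worlds (the positions and weights are hypothetical, chosen to echo the 0.9/0.1 figures above):

```python
from fractions import Fraction

# Toy "actuality measure": a few world-slices, each settling whether Sparky
# is in region R, weighted by the wavefunction-induced measure. Weights are
# invented; Fractions avoid floating-point noise in the output.
worlds = [
    {"sparky_in_R": True,  "weight": Fraction(6, 10)},
    {"sparky_in_R": True,  "weight": Fraction(3, 10)},
    {"sparky_in_R": False, "weight": Fraction(1, 10)},
]

def degree(holds):
    """Degree of determinacy = total actuality-measure of worlds where it holds."""
    return sum(w["weight"] for w in worlds if holds(w))

print(degree(lambda w: w["sparky_in_R"]))                          # 9/10
print(degree(lambda w: not w["sparky_in_R"]))                      # 1/10
print(degree(lambda w: w["sparky_in_R"] or not w["sparky_in_R"]))  # 1 (excluded middle)
```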

I don't expect this to shed much light at all on what the wavefunction means. Ontic indeterminacy, many think, is a pretty obscure notion taken cold, and I'm not expecting metaphysicians or anyone else to find the notion of "degrees of actuality" something they recognize. So I'm not saying that there's any illuminating metaphysics of GRW here. I think the illumination is likely to go in the other direction: if you can get a pre-philosophical grip on the "determinacy" and "no fact of the matter" talk in quantum physics, we've got a way of using that to explain talk of "degrees of actuality" and the like. Nevertheless, I think that, if this all works technically, then a bunch of substantive results follow. Here are a few thoughts in that direction:
  1. We've got a candidate for vagueness in the world, linked to a general story about how to think about ontic vagueness. Given that ontic vagueness isn't in the best repute in the philosophical community, there's an important "existence result" in the offing here.
  2. Recall the idea canvassed earlier that "determinacy" or an equivalent might just be a metaphysical primitive. Well, here we have the suggestion that (degrees of) determinacy be taken as a *physical* primitive. And taking the primitives of fundamental physics as a prima facie guide to metaphysical primitives is a well-trodden route, so I think some support for that idea can be found here.
  3. If there is ontic vagueness in the quantum domain, then we should be able to extract information about the appropriate way to think and reason in the presence of indeterminacy, by looking at an appropriately regimented version of how this goes in physics. And notice that there's no suggestion here that we go for a truth-functional degree theory, with the consequent revisions of classical logic: rather, a variant of the supervaluational setup seems to me to be the best regimentation. If that's right, then it lends support to the (currently rather heterodox) supervaluational-style framework for thinking about metaphysical vagueness.
  4. I think there's a bunch of alleged metaphysical implications of quantum theory that don't *obviously* go through if we buy into the sort of metaphysics of GRW just suggested. I'm thinking in particular of the allegation that quantum theory teaches us that certain systems of particles have "emergent properties" (Jonathan Schaffer has been using this recently as part of his defence of Monism). Bohmianism already shows, I guess, that this sort of claim won't be interpretation-neutral. But the above picture, I think, complicates the case for holism even within GRW.

(Thanks are owed to a bunch of people, particularly George Darby, for discussion of this stuff. They shouldn't be blamed for any misunderstandings of the physics, or indeed philosophy, that I'm making!)