Following up on one of my earlier posts on quantum stuff, I've been reading an interesting literature on relating ordinary talk to quantum mechanics. As before, caveats apply: please let me know if I'm making terrible technical errors, or if there's relevant literature I should be reading/citing.
The topic here is GRW. This way of doing things, recall, involves random localizations of the wavefunction. Think of the quantum wavefunction for a single-particle system, and suppose it's initially pretty wide: the amplitude of the wavefunction pertaining to the "position" of the particle is spread over a wide span of space. But if one of the random localizations occurs, the wavefunction collapses into a very narrow spike, concentrated within a tiny region of space.
But what does all this mean? What does it say about the position of the particle? (Here I'm following the Albert/Loewer presentation, and ignoring alternatives, e.g. Ghirardi's mass-density approach).
Well, one traditional line was that talk of position was only well defined when the particle was in an eigenstate of the position observable. Since on GRW a particle's wavefunction is always spread over all of space to some extent (even the post-collapse spike has nonzero tails everywhere), on this view talk of a particle's location would never be well defined.
Albert and Loewer's suggestion is that we alter the link. As before, think of the wavefunction as giving a measure over different situations in which the particle has a definite location. Rather than saying x is located within region R iff the set of situations in which the particle lies in R has measure 1, they suggest that x is located within region R iff the set of situations in which the particle lies in R has measure almost 1. The idea is that even if not all of a particle's wavefunction places it right here, the vast majority of it is within a tiny subregion here. On the Albert/Loewer suggestion, we get to say, on this basis, that the particle is located in that tiny subregion. They also argue that there are sensible choices of what "almost 1" should be that'll give the right results, though it's probably a vague matter exactly what the figure is.
Peter Lewis points out oddities with this. One oddity is that conjunction-introduction will fail. It might be true that marble i is in a particular region, for each i between 1 and 100, and yet fail to be true that all 100 marbles are in the box.
Here's another illustration of the oddities. Take a particle with a localized wavefunction. Choose some minimal region R around the peak of the wavefunction, such that enough of the wavefunction lies inside R for the particle to count as within R. Then subdivide R into two pieces (the left half and the right half) such that the wavefunction is nonzero in each. The particle is within R. But it's not within the left half of R. Nor is it within the right half of R (in each case by modus tollens on the Albert/Loewer biconditional: since R was minimal, neither half carries enough of the wavefunction-measure on its own). But R is just the union of its left and right halves. So either we're committed to some very odd combination of claims about location, or something is going wrong with modus tollens.
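The threshold rule makes this easy to see in toy form. Here's a small sketch (the 0.95 threshold and the measures are my own illustrative numbers, not Albert and Loewer's):

```python
# Toy version of the Albert/Loewer threshold rule for location talk.
# THRESHOLD stands in for their "almost 1" figure; the value 0.95 is
# a made-up choice, purely for illustration.

THRESHOLD = 0.95

def located_in(measure: float) -> bool:
    """True iff the wavefunction-measure of situations placing the
    particle in the region reaches the threshold."""
    return measure >= THRESHOLD

# Suppose a minimal region R carries measure 0.96 of the wavefunction,
# split unevenly between its left and right halves.
measure_R = 0.96
measure_left = 0.50
measure_right = 0.46

print(located_in(measure_R))      # the particle counts as within R
print(located_in(measure_left))   # but not as within the left half
print(located_in(measure_right))  # nor as within the right half
```

Whatever threshold we pick, so long as R is minimal, each half must fall below it on its own, so the three verdicts come apart in just this way.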
So clearly this proposal is looking like it's pretty revisionary of well-entrenched principles. While I don't think it indefensible (after all, logical revisionism from science isn't a new idea) I do think it's a significant theoretical cost.
I want to suggest a slightly more general and, I think, much more satisfactory way of linking up the semantics of ordinary talk with the GRW wavefunction. The rule will be this:
"Particle x is within region R" is true to degree equal to the wavefunction-measure of the set of situations where the particle is somewhere in region R.
On this view, then, ordinary claims about position don't have a classical semantics. Rather, they have a degreed semantics (in fact, exactly the degreed-supervaluational semantics I talked about in a previous post). And ordinary claims about the location of a well-localized particle aren't going to be perfectly true, but only almost-perfectly true.
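A minimal sketch of the degreed rule, using a made-up discretized Gaussian wavefunction (the grid and the width are my own choices, purely for illustration):

```python
# Degreed semantics sketch: "x is within [a, b]" is true to the degree
# given by the wavefunction-measure of situations placing x in [a, b].
# The "wavefunction" here is a toy discretized Gaussian centred at 0.

import math

xs = [i * 0.1 for i in range(-100, 101)]          # positions from -10 to 10
weights = [math.exp(-(x ** 2) / 0.02) for x in xs]
total = sum(weights)
prob = [w / total for w in weights]               # normalized measure

def degree_within(a: float, b: float) -> float:
    """Degree of truth of 'the particle is within [a, b]'."""
    return sum(p for x, p in zip(xs, prob) if a <= x <= b)

# A well-localized particle: the claim that it's near the peak is
# almost, but not perfectly, true, since the tails never vanish.
print(degree_within(-0.5, 0.5))
```

The printed degree is strictly less than 1 however wide we make the region (short of the whole line), which is exactly the "almost-perfectly true" verdict in the text.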
Now, it's easy but unwarranted to slide from "not perfectly true" to "not true". The degree theorist in general shouldn't concede that. It's an open question for now how to relate ordinary talk of truth simpliciter to the degree-theorist's setting.
One advantage of setting things up in this more general way is that we can take "off the peg" results about what sort of behaviour we can expect the language to exhibit. An example: it's well known that, for a classically valid argument in this sort of setting, the degree of untruth of the conclusion cannot exceed the sum of the degrees of untruth of the premises. This amounts to a "safety constraint" on arguments: we can put a cap on how badly wrong things can go, though there'll always be the phenomenon of slight degradations of truth value across arguments, unless we're working with perfectly true premises. So there's still some point in classifying arguments like conjunction introduction as "valid" on this picture, for that captures a certain kind of important information.
Say that the figure Albert and Loewer identified as sufficient for particle-location was 1-p. Then the way to generate something like the Albert/Loewer picture on this view is to identify truth with truth-to-degree-1-p. In the marbles case, the degrees of falsity of each premise "marble i is in the box" collectively "add up" in the conclusion to give a degree of falsity beyond the permitted limit. In the subdivided-region case, likewise, "the particle is within R" comes out almost perfectly true while the claims about each half have middling degrees of truth, so no odd bivalent combination of verdicts is forced on us.
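The marbles case can be run through the safety constraint numerically. A sketch, with my own toy degrees of truth:

```python
# Safety constraint: for a classically valid argument, the conclusion's
# degree of untruth is capped by the sum of the premises' degrees of
# untruth. The 0.999 figure below is a made-up illustrative degree.

def untruth(degree_of_truth: float) -> float:
    return 1.0 - degree_of_truth

# 100 premises: "marble i is in the box", each true to degree 0.999.
premise_truths = [0.999] * 100

# Conjunction introduction to "all 100 marbles are in the box":
# the constraint caps how untrue the conclusion can be.
bound = sum(untruth(t) for t in premise_truths)
print(bound)  # roughly 0.1: each tiny untruth "adds up"
```

So each premise is only a hair short of perfect truth, yet the cap on the conclusion's untruth is a hundred times that hair, which is why the conjunction can fall below any 1-p threshold the premises all clear.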
An alternative to the Albert/Loewer suggestion for making sense of ordinary talk is to go for a universal error theory, supplemented with the specification of a norm for assertion. To do this, we identify truth simpliciter with truth-to-degree-1. Since ordinary assertions of particle location won't be true to degree 1, they'll be untrue. But we might say that such sentences are assertible provided they're "true enough": true to the Albert/Loewer figure of 1-p, for example. No counterexamples to classical logic would threaten (Peter Lewis's cases would all be unsound, for example). Admittedly, a related phenomenon would arise: we'd be able to go by classical reasoning from a set of premises all of which are assertible to a conclusion that is unassertible. But there are plausible mundane examples of this phenomenon, as the preface paradox illustrates.
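The error-theoretic variant can be sketched the same way (again, the 0.95 and 0.999 figures are my own illustrative numbers):

```python
# Error theory plus a norm of assertion: nothing short of degree-1 is
# *true*, but a sentence is assertible when "true enough", i.e. true
# to at least degree 1 - p. ASSERTIBLE_AT is a hypothetical value.

ASSERTIBLE_AT = 0.95

def assertible(degree: float) -> bool:
    return degree >= ASSERTIBLE_AT

# Each premise "marble i is in the box" is true to degree 0.999,
# so each is comfortably assertible, though none is true simpliciter.
premises = [0.999] * 100

# By the safety constraint, classical reasoning guarantees the
# conclusion is true at least to this degree, and no more is promised:
worst_case_conclusion = 1 - sum(1 - d for d in premises)

print(all(assertible(d) for d in premises))   # every premise assertible
print(assertible(worst_case_conclusion))      # the conclusion need not be
```

This is the preface-paradox-like structure in miniature: assertibility, unlike truth, isn't guaranteed to survive classically valid reasoning from many premises.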
But I'd rather go neither for the error-theoretic approach nor for the identification of a "threshold" for truth, as the Albert/Loewer-inspired proposal suggests. I think there are more organic ways to handle utterance-truth within a degree-theoretic framework. It's a bit involved to go into here, but the basic ideas are extracted from recent work by Agustin Rayo, and involve only allowing "local" specifications of truth simpliciter, relative to a particular conversational context. The key thing is that, on the semantic side, once we have the degree theory, we can take "off the peg" an account of how such degree theories interact with a general account of communication. So combining the degree-based understanding of what validity amounts to (in terms of limiting the creep of falsity into the conclusion) with this degree-based account of assertion, I think we've got a pretty powerful, pretty well-understood picture of how ordinary-language position-talk works.