In connection with the survey article mentioned below, I was reading through Tim Williamson's "Vagueness in reality". It's an interesting paper, though I find its conclusions very odd.
As I've mentioned previously, I like a way of formulating claims of metaphysical indeterminacy that's semantically similar to supervaluationism: basically, we have ontic precisifications of reality, rather than semantic sharpenings of our meanings. It's similar to ideas put forward by Ken Akiba and Elizabeth Barnes.
Williamson formulates the question of whether there is vagueness in reality as the question of whether the following can ever be true:

(EX)(Ex)Vague[Xx]
Here X is a property-quantifier, and x an object quantifier. His answer is that the semantics force this to be false. The key observation is that, as he sets things up, the value assigned to a variable at a precisification and a variable assignment depends only on the variable assignment, and not at all on the precisification. So at all precisifications, the same value is assigned to the variable. That goes for both X and x, with the net result that if "Xx" is true relative to some precisification (at the given variable assignment), it's true at all of them. That means there cannot be a variable assignment that makes Vague[Xx] true.
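To see how the observation works, here's a minimal toy model of the semantics in Python. The precisifications, domain, and all names are my illustrative choices, not Williamson's:

```python
# Toy model: precisifications labelled 1 and 2, properties as extensions
# (sets of objects). All specifics here are illustrative.
PRECISIFICATIONS = [1, 2]

def true_at(p, X_ext, x_val):
    """Atomic 'Xx' at precisification p: the value of x lies in the
    extension assigned to X. Note that p plays no role when the
    assignment is precisification-independent."""
    return x_val in X_ext

def vague(truth_values):
    """Vague[phi]: phi is true at some precisifications, false at others."""
    return any(truth_values) and not all(truth_values)

# A rigid variable assignment fixes the same values at every precisification:
X_ext, x_val = {"a", "b"}, "a"
values = [true_at(p, X_ext, x_val) for p in PRECISIFICATIONS]

# Since neither value depends on p, 'Xx' gets the same truth value
# everywhere, so Vague[Xx] is false on this assignment.
print(vague(values))  # False
```

Nothing turns on the particular extension chosen: any rigid assignment gives "Xx" a constant truth profile across precisifications, so Vague[Xx] fails on all of them.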
You might think this is cheating. Why shouldn't variables receive different values at different precisifications (formally, it's very easy to do)? Williamson says that, if we allow this to happen, we'd end up making things like the following come out true:

(Ex)Def[Fx & ~Fx']
It's crucial to the supervaluationist's explanatory programme that this come out false (it's supposed to explain why we find the sorites premise compelling). But consider a variable assignment to x which, at each precisification, maps x to the object that marks the F/non-F cutoff relative to that precisification. It's easy to see that on this "variable assignment", Def[Fx & ~Fx'] comes out true, underpinning the truth of the existential.
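Here's a sketch of that cutoff-tracking assignment in a toy Python model. The sorites series, the two cutoff points, and the predicate are all illustrative stipulations:

```python
# Toy sorites series 0..9; each precisification draws the F/non-F cutoff
# at a different point. All specifics are illustrative.
SERIES = list(range(10))
CUTOFF = {1: 3, 2: 6}   # precisification p counts n as F iff n < CUTOFF[p]

def F(p, n):
    return n < CUTOFF[p]

def definitely(truth_values):
    """Def[phi]: phi is true at every precisification."""
    return all(truth_values)

# The troublesome precisification-relative "assignment" to x: at each
# precisification, pick out the last F object, i.e. that precisification's
# cutoff point.
x_at = {p: CUTOFF[p] - 1 for p in CUTOFF}

# At every precisification, x is F and its successor x' is not, so
# Def[Fx & ~Fx'] holds on this assignment, verifying the existential.
witness = definitely([F(p, x_at[p]) and not F(p, x_at[p] + 1) for p in CUTOFF])
print(witness)  # True
```

The assignment tracks the cutoff wherever each precisification puts it, which is exactly why a definite cutoff gets verified even though no single object is the cutoff at every precisification.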
Again, suppose that we were taking the variable assignment to X to be a precisification-relative matter. Take some object o that intuitively is perfectly precise. Now consider the assignment that maps X at precisification 1 to the whole domain, and X at precisification 2 to the empty set. Consider "Vague[Xx]", where o is assigned to x at every precisification, and the assignment to X is as above. The sentence will be true relative to these variable assignments, and so "(EX)Vague[Xx]" is true relative to an assignment of o to x, which is supposed to "say" that o has some vague property.
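The dual trivialization can be sketched the same way. Again this is a toy model: the domain and the gerrymandered assignment to X are my illustrative choices:

```python
# Toy model: o is meant to be a perfectly precise object, yet a wildly
# precisification-relative assignment to X makes Vague[Xx] true of it.
DOMAIN = {"o", "a", "b"}
X_at = {1: DOMAIN, 2: set()}   # whole domain at one precisification, empty at the other

def vague(truth_values):
    """Vague[phi]: true at some precisifications, false at others."""
    return any(truth_values) and not all(truth_values)

# o is assigned to x at every precisification; only X's extension varies.
values = ["o" in X_at[p] for p in (1, 2)]
print(vague(values))  # True: "(EX)Vague[Xx]" comes out true of o trivially
```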
Although Williamson's discussion is about the supervaluationist, the semantic point equally applies to the (pretty much isomorphic) setting that I like, and which is supposed to capture vagueness in reality. If one makes the variable assignments non-precisification relative, then trivially the quantified indeterminacy claims go false. If one makes the variable assignments precisification-relative, then it threatens to make them trivially true.
The thought I have is that the problem here is essentially one of mixing up abundant and natural properties. At least for property-quantification, we should go for the precisification-relative notion. It will indeed turn out that "(EX)Vague[Xx]" will be trivially true for every choice of x. But that's no more surprising than the analogous result in the modal case: quantifying over abundant properties, it turns out that every object (even things like numbers) has a great range of contingent properties: being such that grass is green, for example. Likewise, in the vagueness case, everything has a great many vague properties: being such that the cat is alive, for example (or whatever else is your favourite example of ontic indeterminacy).
What we need to get a substantive notion is to restrict these quantifiers to interesting properties. So, for example, the way to ask whether o has some vague sparse property is to ask whether the following is true: "(EX: Natural(X))Vague[Xx]". The extrinsically specified properties invoked above won't count.
If the question is formulated in this way, then we can't read off from the semantics whether there will be an object and a property such that it is vague whether the former has the latter. For this will turn, not on the semantics for quantifiers alone, but upon which among the variable assignments correspond to natural properties.
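One way to picture the restricted question, continuing in toy-model style. The "natural" predicate here is a bare stipulation, standing in for whatever substantive story metaphysics supplies:

```python
# Quantify over precisification-relative extension-assignments, but only
# those certified as natural. The certification is stipulated here; in a
# serious treatment it's supplied by metaphysics, not by the semantics.
PRECISIFICATIONS = (1, 2)

def vague(truth_values):
    return any(truth_values) and not all(truth_values)

# Candidate assignments: precisification -> extension.
precise_natural = {1: {"o"}, 2: {"o"}}       # same extension everywhere
gerrymandered = {1: {"o", "a"}, 2: set()}    # varies wildly across precisifications
candidates = [precise_natural, gerrymandered]

def natural(assignment):
    return assignment is precise_natural     # toy stand-in for sparseness

def exists_natural_vague(x_val):
    """(EX: Natural(X))Vague[Xx] at an assignment of x_val to x."""
    return any(
        natural(A) and vague([x_val in A[p] for p in PRECISIFICATIONS])
        for A in candidates
    )

# Unrestricted, the gerrymandered assignment verifies the existential;
# restricted to natural assignments, the question becomes substantive,
# and in this toy model it comes out false for o.
print(exists_natural_vague("o"))  # False
```

The semantics alone settles nothing here: the answer depends entirely on which assignments the `natural` filter lets through, which is just the point of the restriction.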
Something similar goes for the case of quantification over states of affairs. "(ES)Vague[S]" would be either vacuously true or vacuously false, depending on what semantics we assign to the variable "S". But if our interest is in whether there are sparse states of affairs which are such that it is vague whether they obtain, what we should do is e.g. let the assignment of values to S be functions from precisifications to truth values, and then ask the question:

(ES: Natural(S))Vague[S]
Where a function from precisifications to truth values is "natural" if it corresponds to some relatively sparse state of affairs (e.g. there being a live cat on the mat). So long as there's a principled story about which states of affairs these are (and it's the job of metaphysics to give us that) everything works fine.
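The same recipe, sketched for states of affairs. The two sample states and the "natural" list are illustrative stipulations:

```python
# States of affairs modelled as functions (dicts) from precisifications to
# truth values; naturalness is a stipulated list standing in for the
# metaphysician's inventory of sparse states.
PRECISIFICATIONS = (1, 2)

cat_alive = {1: True, 2: False}    # vague whether it obtains
grass_green = {1: True, 2: True}   # obtains at every precisification
NATURAL = [cat_alive, grass_green]

def vague(S):
    """Vague[S]: S obtains at some precisifications but not others."""
    values = [S[p] for p in PRECISIFICATIONS]
    return any(values) and not all(values)

# (ES: Natural(S))Vague[S]: is there a natural state such that it's
# vague whether it obtains?
print(any(vague(S) for S in NATURAL))  # True
```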
A final note. It's illuminating to think about the exactly analogous point that could be made in the modal case. If values are assigned to variables independently of the world, we'll be able to prove that the following is never true on any variable assignment:

Contingent[Xx]
Again, the extensions assigned to X and x are non-world dependent, so if "Xx" is true relative to one world, it's true at them all. Is this really an argument that there is no contingent instantiation of properties? Surely not. To capture the intended sense of the question, we have to adopt something like the tactic just suggested: first allow world-relative variable assignment, and then restrict the quantifiers to the particular instances of this that are metaphysically interesting.
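The modal analogue can be sketched in exactly the same toy style, with worlds in place of precisifications and Contingent in place of Vague (all names are illustrative):

```python
# Worlds in place of precisifications; Contingent[phi] in place of Vague[phi].
WORLDS = (1, 2)

def contingent(truth_values):
    """Contingent[phi]: phi is true at some worlds, false at others."""
    return any(truth_values) and not all(truth_values)

# World-independent assignment: same extension and object at every world,
# so 'Xx' never varies and Contingent[Xx] is never true.
X_ext, x_val = {"a"}, "a"
rigid = [x_val in X_ext for _ in WORLDS]
print(contingent(rigid))  # False

# World-relative assignment, of the metaphysically interesting sort
# (e.g. the extension of 'is sitting' varies from world to world):
X_at = {1: {"a"}, 2: set()}
relative = [x_val in X_at[w] for w in WORLDS]
print(contingent(relative))  # True
```

Just as before, the world-relative assignments then want restricting to the metaphysically interesting ones, on pain of every object trivially having contingent properties.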