Saturday, May 25, 2013

The Propositional Fracture Theorem


Suppose X is a topological space and U⊆X is an open subset, with closed complement K=X∖U. Then U and K are, of course, topological spaces in their own right, and we have X=U⊔K as a set. What additional information beyond the topologies of U and K is necessary to enable us to recover the topology of X on their disjoint union?


Recall that the subspace topologies of U and K say that for each open V⊆X, the intersections V∩U and V∩K are open in U and K, respectively. Thus, if a subset of X is to be open, it must yield open subsets of U and K when intersected with them. However, this condition is not in general sufficient for a subset of X to be open — it does define a topology on X, but it’s the coproduct topology, which may not be the original one.

One way we could start is by asking what sort of structure relating U and K we can deduce from the fact that both are embedded in X. For instance, suppose A⊆U is open. Then there is some open V⊆X such that V∩U=A. But we could also consider V∩K, and ask whether this defines something interesting as a function of A.

Of course, it’s not clear that V∩K is a function of A at all, since it depends on our choice of V such that V∩U=A. Is there a canonical choice of such V? Well, yes, there’s one obvious canonical choice: since U is open in X, A is also open as a subset of X, and we have A∩U=A. However, A∩K=∅, so choosing V=A wouldn’t be very interesting.

The choice V=A is the smallest possible V such that V∩U=A. But there’s also a largest such V, namely the union of all such V. This set is open in X, of course, since open sets are closed under arbitrary unions, and since intersections distribute over arbitrary unions, its intersection with U is still A.

Let's call this set i_*(A). In fact, it's part of a triple of adjoint functors i_! ⊣ i^* ⊣ i_* between the posets O(U) and O(X) of open sets in U and X, where i^*:O(X)→O(U) is defined by i^*(V)=V∩U, and i_!:O(U)→O(X) is defined by i_!(A)=A. Here i denotes the continuous inclusion U↪X.
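
For readers who like to see such things concretely, here is a minimal Python sketch on a made-up three-point space (the space and all names are purely illustrative) which computes i^*, i_!, and i_* by brute force and checks both adjunctions.

# A finite sanity check: model a topology as a set of frozensets and compute
# i^*(V) = V ∩ U, i_!(A) = A, and i_*(A) = the union of all opens V of X with
# V ∩ U = A.  The three-point space below is made up purely for illustration.
from itertools import chain

# X = {a, b, c} with opens ∅, {a}, {a,b}, X;  U = {a} is open in X.
opens_X = {frozenset(s) for s in ['', 'a', 'ab', 'abc']}
U = frozenset('a')
opens_U = {V & U for V in opens_X}                     # subspace topology on U

i_upper = lambda V: V & U                              # i^*
i_shriek = lambda A: A                                 # i_!
i_lower = lambda A: frozenset(chain.from_iterable(     # i_*
    V for V in opens_X if V & U == A))

for A in opens_U:
    for V in opens_X:
        # i_! ⊣ i^*:  i_!(A) ⊆ V  iff  A ⊆ i^*(V)
        assert (i_shriek(A) <= V) == (A <= i_upper(V))
        # i^* ⊣ i_*:  i^*(V) ⊆ A  iff  V ⊆ i_*(A)
        assert (i_upper(V) <= A) == (V <= i_lower(A))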

Now we can consider the intersection i_*(A)∩K, which I'll also denote j^*i_*(A), where j:K↪X is the inclusion. It turns out that this is interesting! Consider the following example, which is easy to visualize:

  • X=ℝ².
  • U={(x,y)∣x<0}, the open left half-plane.
  • K={(x,y)∣x≥0}, the closed right half-plane.

If an open subset A⊆U "doesn't approach the boundary" between U and K, such as the open disc of radius 1 centered at (−2,0), then it's fairly easy to see that i_*(A)=A∪{(x,y)∣x>0}, and therefore j^*i_*(A)={(x,y)∣x>0} is the open right half-plane.

On the other hand, consider some open subset A⊆U which does approach the boundary, such as

A = {(x,y) ∣ x² + y² < 1 and x < 0}

the intersection with U of the open disc of radius 1 centered at (0,0). A little thought should convince you that in this case, i_*(A) is the union of the open right half-plane with the whole open disc of radius 1 centered at (0,0). Therefore, j^*i_*(A) is the open right half-plane together with the segment {(0,y)∣−1<y<1}.

This example suggests that in general, j^*i_*(A) measures how much of the "boundary" between U and K is "adjacent" to A. I leave it to some enterprising reader to try to make that precise. Here's another nice exercise: what can you say about i^*j_*(B) for an open subset B⊆K?

Let us however go back to our original question of recovering the topology of X. Suppose A⊆U and B⊆K are open such that A∪B is open in X; how does this latter fact manifest as a property of A and B? Note first that (A∪B)∩U=A. Thus, since i_*(A) is the largest V such that V∩U=A, we have A∪B⊆i_*(A), and therefore B=j^*(A∪B)⊆j^*i_*(A). Let me say that again:

B ⊆ j^*i_*(A).

This is a relationship between A and B expressed purely in terms of the topological spaces U and K and the function j^*i_*:O(U)→O(K), and we have just shown that it is necessary for A∪B to be open in X.

In fact, it is also sufficient! For suppose this to be true. Since B is open in K, there is some open C⊆X such that C∩K=B. Given such a C, the union C∪U also has this property, since U∩K=∅. Note that in fact C∪U=B∪U, and also B∪U=j_*(B), the largest open subset of X whose intersection with K is B. (Since K, unlike U, is not open, there may not be a smallest such, but there is always a largest such.) Now I claim we have

A∪B = j_*(B) ∩ i_*(A)

To show this, it suffices to show that the two sides become equal after intersecting with U and with K.
For the first, we have

(j_*(B) ∩ i_*(A)) ∩ U = j_*(B) ∩ (i_*(A) ∩ U) = j_*(B) ∩ A = A = (A∪B) ∩ U

and for the second we have

(j_*(B) ∩ i_*(A)) ∩ K = (j_*(B) ∩ K) ∩ i_*(A) = B ∩ i_*(A) = B = (A∪B) ∩ K

using the assumption B ⊆ j^*i_*(A) = i_*(A)∩K at the step B∩i_*(A)=B.

In conclusion, the topology of X is entirely determined by

  • the induced topology of an open subspace U⊆X,
  • the induced topology on its closed complement K=X∖U, and
  • the induced function j^*i_*:O(U)→O(K).

Specifically, the open subsets of X are those of the form A∪B (or equivalently, by the above argument, i_*(A)∩j_*(B)) where A⊆U is open in U, B⊆K is open in K, and B ⊆ j^*i_*(A).
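
Here is a similar brute-force Python check of this description, on a made-up four-point space (again purely illustrative): the glueable pairs (A,B) reproduce exactly the topology of X, and A∪B agrees with i_*(A)∩j_*(B).

# Sanity check of the gluing description on a small example.
from itertools import chain

def union_all(sets):
    return frozenset(chain.from_iterable(sets))

# X = {a, b, c, d} with opens ∅, {a}, {a,b}, {a,b,c}, X;  U = {a,b}, K = {c,d}.
opens_X = {frozenset(s) for s in ['', 'a', 'ab', 'abc', 'abcd']}
U, K = frozenset('ab'), frozenset('cd')
opens_U = {V & U for V in opens_X}
opens_K = {V & K for V in opens_X}

i_star = lambda A: union_all(V for V in opens_X if V & U == A)   # i_*(A)
j_star = lambda B: union_all(V for V in opens_X if V & K == B)   # j_*(B)

# the glueable pairs are exactly those with B ⊆ j^* i_*(A) = i_*(A) ∩ K
pairs = [(A, B) for A in opens_U for B in opens_K if B <= i_star(A) & K]
assert {A | B for (A, B) in pairs} == opens_X
assert all(A | B == i_star(A) & j_star(B) for (A, B) in pairs)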

An obvious question to ask now is, suppose given two arbitrary topological spaces U and K and a function f:O(U)→O(K); what conditions on f ensure that we can define a topology on X≔U⊔K in this way, which restricts to the given topologies on U and K and induces f as j^*i_*? We may start by asking what properties j^*i_* has. Well, it preserves inclusion of open sets (i.e. A⊆A′ ⇒ j^*i_*(A)⊆j^*i_*(A′)) and also finite intersections (j^*i_*(A∩A′)=j^*i_*(A)∩j^*i_*(A′)), including the empty intersection (j^*i_*(U)=K). In other words, it is a finite-limit-preserving functor between posets. Perhaps surprisingly, it turns out that this is also sufficient: any finite-limit-preserving f:O(U)→O(K) allows us to glue U and K in this way; I'll leave that as an exercise too.
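
As one worked instance of that exercise (an illustration in the same brute-force style, not a proof), we can feed in made-up topologies on U and K together with a finite-intersection-preserving f:O(U)→O(K), declare the opens of X≔U⊔K to be the sets A∪B with B⊆f(A), and check that this really is a topology which restricts to the given topologies and induces f as j^*i_*.

# Gluing two made-up finite spaces along a finite-meet-preserving f.
from itertools import chain, combinations

def union_all(sets):
    return frozenset(chain.from_iterable(sets))

U, K = frozenset('uv'), frozenset('k')
opens_U = {frozenset(s) for s in ['', 'u', 'uv']}
opens_K = {frozenset(s) for s in ['', 'k']}
f = {frozenset(''): frozenset(''),          # f preserves finite intersections,
     frozenset('u'): frozenset('k'),        # including the empty one: f(U) = K
     frozenset('uv'): frozenset('k')}

opens_X = {A | B for A in opens_U for B in opens_K if B <= f[A]}

# it is a topology on U ⊔ K ...
assert frozenset() in opens_X and U | K in opens_X
assert all(V & W in opens_X for V in opens_X for W in opens_X)
for r in range(len(opens_X) + 1):
    for Vs in combinations(opens_X, r):
        assert union_all(Vs) in opens_X

# ... it restricts to the given topologies ...
assert {V & U for V in opens_X} == opens_U
assert {V & K for V in opens_X} == opens_K

# ... and it induces f as j^* i_*
i_star = lambda A: union_all(V for V in opens_X if V & U == A)
assert all(i_star(A) & K == f[A] for A in opens_U)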

Okay, that was some fun point-set topology. Now let’s categorify it. Open subsets of X are the same as 0-sheaves on it, i.e. sheaves of truth values, or of subsingleton sets, and the poset O(X) is the (0,1)-topos of 0-sheaves on X. So a certain sort of person immediately asks, what about n-sheaves for n>0?

In other words, suppose we have X, U, and K as above; what additional data on the toposes Sh(U) and Sh(K) of sheaves (of sets, or groupoids, or homotopy types, etc.) allows us to recover the topos Sh(X)? As in the posetal case, we have adjunctions i_! ⊣ i^* ⊣ i_* and j^* ⊣ j_* relating these toposes, and we may consider the composite j^*i_*:Sh(U)→Sh(K).

The corresponding theorem is then that Sh(X) is equivalent to the comma category of Id_{Sh(K)} over j^*i_*, i.e. the category of triples (A,B,ϕ) where A∈Sh(U), B∈Sh(K), and ϕ:B→j^*i_*(A). This is true for 1-sheaves, n-sheaves, ∞-sheaves, etc. Moreover, the condition on a functor f:Sh(U)→Sh(K) ensuring that its comma category is a topos is again precisely that it preserves finite limits. Finally, this all works for arbitrary toposes, not just sheaves on topological spaces. I mentioned in my last post some applications of gluing for non-sheaf toposes (namely, syntactic categories).

One new-looking thing does happen at dimension 1, though, relating to what exactly the equivalence

Sh(X) ≃ (Id_{Sh(K)} ↓ j^*i_*)

looks like. The left-to-right direction is easy: we send C∈Sh(X) to (i^*C, j^*C, ϕ) where ϕ:j^*C→j^*i_*i^*C is j^* applied to the unit of the adjunction i^* ⊣ i_*. But in the other direction, suppose given (A,B,ϕ); how can we reconstruct an object of Sh(X)?

In the case of open subsets, we obtained the corresponding object (an open subset of X) as A∪B, but now we no longer have an ambient "set of points" in which to take such a union. However, we also had the equivalent characterization of the open subset of X as i_*(A)∩j_*(B), and in the categorified case we do have objects i_*(A) and j_*(B) of Sh(X). We might initially try their cartesian product, but this is obviously wrong because it doesn't incorporate the additional datum ϕ. It turns out that the right generalization is actually the pullback of j_*(ϕ) and the unit of the adjunction j^* ⊣ j_* at i_*(A):

C      →      j_*(B)
↓                ↓ j_*(ϕ)
i_*(A) → j_*j^*i_*(A)

In particular, any object C∈Sh(X) can be recovered from i^*C and j^*C by this pullback:

C       →       j_*j^*C
↓                  ↓
i_*i^*C → j_*j^*i_*i^*C
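
As a toy set-level illustration of this square (my own example, not part of the original argument): take X to be the Sierpiński space, with one open point u and one closed point k. A sheaf C on X then amounts to a restriction map r:C(X)→C({u}), and on global sections the square above becomes the pullback of r against an identity, which indeed returns C(X).

# The recovery square on global sections, for a sheaf on the Sierpinski space.
def pullback(A, f, B, g):
    """Pullback {(a, b) : f(a) == g(b)} of f : A -> Z and g : B -> Z in Set."""
    return {(a, b) for a in A for b in B if f(a) == g(b)}

CU = {0, 1}                      # sections over the open point, C({u})
CX = {'x', 'y', 'z'}             # global sections, C(X)
r = {'x': 0, 'y': 0, 'z': 1}     # restriction map C(X) -> C({u})

P = pullback(CU, lambda a: a, CX, lambda b: r[b])
assert len(P) == len(CX)         # the pullback recovers the global sections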

Now let's shift perspective a bit, and ask what all this looks like in the internal language of the topos Sh(X). Inside Sh(X), the subtoposes Sh(U) and Sh(K) are visible through the left-exact idempotent monads i_*i^* and j_*j^*, whose corresponding reflective subcategories are equivalent to Sh(U) and Sh(K) respectively. In the internal type theory of Sh(X), i_*i^* and j_*j^* are modalities, which I will denote I_U and J_U respectively. Thus, inside Sh(X) we can talk about "sheaves on U" and "sheaves on K" by talking about I_U-modal and J_U-modal types (or sets).

Moreover, these particular modalities are actually definable in the internal language of Sh(X). Open subsets U⊆X can be identified with subterminal objects of Sh(X), a.k.a. h-propositions or "truth values" in the internal logic. Thus, U is such a proposition. Now I_U is definable in terms of U by

I_U(C) = (U → C)

I'm using type-theorists' notation here, so U→C is the exponential C^U in Sh(X). The other modality J_U is also definable internally, though a bit less simply: it's the following pushout:

U×C → C
↓       ↓
U  →  J_U(C)

In homotopy-theoretic language, J_U(C) is the join of C and U, written U ∗ C.
And if we identify Sh(U) and Sh(K) with their images under i_* and j_*, then the functor j^*i_*:Sh(U)→Sh(K) is just the modality J_U applied to I_U-modal types.
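
At the level of 0-sheaves, the formula I_U(C)=(U→C) is easy to check directly against the description of I_U as i_*i^*: for an open V⊆X, the exponential U→V in O(X) is the Heyting implication, i.e. the largest open W with W∩U⊆V; and since W∩U⊆V exactly when W∩U⊆V∩U, that largest W is precisely i_*(V∩U)=i_*i^*(V).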

Finally, the fact that Sh(X) is the gluing of Sh(U) with Sh(K) means internally that any type C can be recovered from I_U(C), J_U(C), and the induced map J_U(C)→J_U(I_U(C)) as a pullback:

C      →      J_U(C)
↓                ↓
I_U(C) → J_U(I_U(C))

Now recall that internally, U is a proposition: something which might be true or false. Logically, I_U(C)=(U→C) has a clear meaning: its elements are ways to construct an element of C under the assumption that U is true.

The logical meaning of J_U is somewhat murkier, but there is one case in which it is crystal clear. Suppose U is decidable, i.e. that it is true internally that "U or not U". If the law of excluded middle holds, then all propositions are decidable, but of course, internally to a topos, the LEM may fail to hold in general. If U is decidable, then we have U+¬U=1, where ¬U=(U→0) is its internal complement. It's a nice exercise to show that under this assumption we have J_U(C)=(¬U→C).

In other words, if U is decidable, then the elements of J_U(C) are ways to construct an element of C under the assumption that U is false. In the decidable case, we also have J_U(I_U(C))=1, so that C=I_U(C)×J_U(C), and this is just the usual way to construct an element of C by case analysis, doing one thing if U is true and another if it is false.

This suggests that we might regard internal gluing as a “generalized sort of case analysis” which applies even to non-decidable propositions. Instead of ordinary case analysis, where we have to do two things:

  • assuming U, construct an element of C; and
  • assuming not U, construct an element of C

in the non-decidable case we have to do three things:

  • assuming U, construct an element of C;
  • construct an element of the join U ∗ C; and
  • check that the two constructions agree in U ∗ (U→C).

I have no idea whether this sort of generalized case analysis is useful for anything. I kind of suspect it isn’t, since otherwise people would have discovered it, and be using it, and I would have heard about it. But you never know, maybe it has some application. In any case, I find it a neat way to think about gluing.

Let me end with a tantalizing remark (at least, tantalizing to me). People who calculate things in algebraic topology like to work by “localizing” or “completing” their topological spaces at primes, since it makes lots of things simpler. Then they have to try to put this “prime-by-prime” information back together into information about the original space. One important class of tools for this “putting back together” is called fracture theorems. A simple fracture theorem says that if X is a p-local space (meaning that all primes other than p are inverted) and some technical conditions hold, then there is a pullback square:

X   →   X_p^∧
↓           ↓
X_ℚ → (X_p^∧)_ℚ

where (−)_p^∧ denotes p-completion and (−)_ℚ denotes "rationalization" (inverting all primes). A similar theorem applies to any space X (with technical conditions), yielding a pullback square

X   →   ∏_p X_(p)
↓             ↓
X_ℚ → (∏_p X_(p))_ℚ

where (−)_(p) denotes localization at p.

Clearly, there is a formal resemblance to the pullback square involved in the gluing theorem. At this point I feel like I should be saying something about Spec(ℤ). Unfortunately, I don’t know what to say! Maybe some passing expert will enlighten us.

