Saturday, October 29, 2016

10/29/2016

My reference letters are all finally uploaded, and I've been working on cover letters all day. Things are starting to feel very real. I am feeling quite nervous about this whole thing, but maybe I will feel better once I submit my first application.

In other news, I presented twice at UNT this week. Both times I talked about Jonsson cardinals from the ZFC perspective. It was an interesting experience tailoring similar material to two different audiences: the graduate logic group and the algebra seminar. The talk for the graduate logic group seemed to go quite well, and I must thank Christopher Lambie-Hanson for his excellent presentation of this material during the UCI summer school, as my talk was based very strongly on it. For the algebra seminar I tried to use the topic to motivate some basic objects of set theory from algebraic concerns. I didn't get through all of the material I wanted to (in retrospect I was overambitious), and I initially felt it had not gone so well. A little after the talk, however, the organizer of the algebra seminar approached me and said she had enjoyed the talk, and she even reassured me about that style of presentation for job talks. She was almost certainly being too nice, but it made me feel better regardless. In any case, it seemed to go better than when I presented to them about Whitehead's problem years ago. I still haven't quite figured out a quick way of presenting ordinals and cardinals which preserves enough information to be worth it.

Monday, October 24, 2016

10/22/2016 UIC Workshop Day 3

Starting off this day, Justin Moore finished up his tutorial. Today was more about generating certain kinds of trees in such a way that CH is not violated. This, of course, continues the theme of the previous day, as the existence of certain trees is known to follow from MA and to be denied by diamond. So if one of these trees can be forced to exist while maintaining CH, it provides evidence that diamond was necessary. In particular, Moore covered some of the details of a forcing which is completely proper, adds a special tree, and is still completely proper in the extension. It's interesting to me that when I read through the material on trees in Kunen, I mistakenly thought of it as a kind of curiosity and as mostly completed work. I was way wrong, but I'm happy that I was, as the discussion of these tree properties, square properties, and other combinatorial properties generates a lot of interesting mathematics.

Anush Tserunyan was the next speaker, and her talk was on "Integer Cost and Ergodic Actions." She spent a little while acclimating the group to the ideas of invariant descriptive set theory, and although that was review for me, her perspective and intuitions were helpful. In particular, she focused on the interplay between group actions, equivalence relations, and graphs. These connections provide background for the questions "What is the most efficient presentation of an equivalence relation E?" and "What is the smallest number of edges required to create a graph G which captures E?" Towards a qualitative answer to these questions, you can ask whether there is an acyclic Borel graph which captures the equivalence relation. A good example here is the equivalence relation generated by the free action of a free group. More quantitatively, using a measure you can integrate the number of edges and then take an infimum, obtaining a cost function. Gaboriau showed in 1998 that this infimum is achieved precisely when the equivalence relation can be captured by an acyclic Borel graph. Hjorth continued in this vein, looking more specifically at countable ergodic measure-preserving equivalence relations. If the cost is an integer or infinite, then the equivalence relation can be seen as generated by the free action of a free group with at most countably many generators. Anush created tools which not only simplify the proof of this substantially, but allow her to strengthen its conclusion to add that the subactions generated by the individual generators are ergodic as well. Anush is always a good speaker, and since I normally see her speak at descriptive set theory sessions, it was interesting to see how she tailored her material to a more pure set theory audience. Her ability to live in both the world of set theory and that of analysis is something I strive for. As a final note, something in her discussion of these graph representations shook loose some ideas I've been having about the qualitative characterizations of various quotients of the reals. Even though E_0 cannot be linearly ordered, it can be represented by an acyclic graph, and in that sense its partial order structure is less complicated than, say, its product with the equivalence relation generated by the free group on two generators acting freely. So it seems that to separate E_0 and l_1 qualitatively, I should be looking at ways in which they can be represented.
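(As a quantitative aside, here is the definition of cost as I understand it; the normalization differs by a factor of 2 across sources, so treat the constant as a convention. For a Borel graph G which captures E on a probability space (X,μ),

Cμ(G) = (1/2) ∫X degG(x) dμ(x),   and   Cμ(E) = inf {Cμ(G) : G captures E},

where degG(x) counts the G-neighbors of x. The result quoted above then says that this infimum is attained exactly when E can be captured by an acyclic Borel graph.)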

In the afternoon, Matt Foreman did the first two parts of his tutorial, "Applications of Descriptive Set Theory to Classical Dynamical Systems." This was based on the same slides as when I saw Foreman speak at CUNY, although this time he tailored the talk to an audience of peers as opposed to graduate students. As such, he included a number of insights that were not in the CUNY talk. Also, he had more time here, so he was able to go into more detail. As before, the problem is to determine whether it is possible to classify the ergodic measure-preserving diffeomorphisms of compact manifolds. This problem was originally posed by von Neumann in the 1930s. You can argue that attempts to solve it are in the background of many of the significant developments in dynamics, such as the notion of entropy, although it's not clear that the field is still motivated by it today. Regardless, finding a solution to an 80-year-old problem is impressive. The answer is no: there is no simple way to classify the ergodic measure-preserving diffeomorphisms of compact manifolds. More precisely, this problem is more complicated than the most complicated problems which have algebraic invariants: those whose isomorphism relation can be seen as arising from an action of the infinite permutation group. If you drop the diffeomorphism aspect and just look at ergodic measure-preserving transformations, the problem is complete analytic, which is as bad as it could be. This uses the fact that odometer-based dynamical systems are in some sense generic in the space of dynamical systems. Foreman and Weiss, however, really wanted to solve von Neumann's problem. To do this, they created a new type of dynamical system, motivated by odometer systems, called circular systems. While it is still unknown whether an arbitrary odometer-based system can be realized as an ergodic measure-preserving diffeomorphism, they were able to show that circular systems can be. There is also a nice isomorphism between the class of odometer-based systems and the class of circular systems. Putting this all together, you get that the diffeomorphism problem is also complete analytic.

Finishing off the third day, Maxwell Levine spoke on "Weak Squares and Very Good Scales." I saw a twenty-minute version of this talk at the CUNY graduate student conference. When I saw it there I knew absolutely no pcf theory, and I had a hard time tracking what was happening. Now that I've done the pcf summer school and Maxwell had an hour, I think I got a much better idea of what he is doing. There are three kinds of combinatorial principles in play in his work: square principles, stationary reflection, and scales. Stationary reflection is a kind of compactness property, and can be provided by large cardinals. Square principles are a kind of incompactness; they occur in models like L, and for example can provide for the existence of an almost metrizable space which is not itself metrizable. A theorem of Shelah says that scales exist, but the existence of a very good scale contradicts stationary reflection. What's somewhat odd is that square properties, even fairly strong ones, can coexist with stationary reflection. The weakest square property can even coexist with simultaneous stationary reflection, but it is not immediately clear whether the second weakest square can. One way to check this is to see whether the second weakest square implies the existence of a very good scale. Maxwell was able to construct a model where the weakest square holds and there are no very good scales; however, the second weakest square still fails in this model. So he next tried to see whether it is possible for the second weakest square to hold strictly. The answer is yes, and it can even happen at relatively small cardinals (aleph_omega). Complicating the picture even more, in the presence of large cardinals this second weakest square property implies that the strongest form of simultaneous stationary reflection fails. In fact, Maxwell is able to explicitly characterize why this reflection fails. What I like about this work is that Maxwell is basically taking the theory of cardinal arithmetic apart piece by piece and seeing which steps were really necessary. At the very least I feel like I have a better understanding of the machinery of cardinal arithmetic after seeing him speak.

Sunday, October 23, 2016

10/21/2016 UIC Workshop Day 2

Justin Moore started off day 2 with a tutorial on "Iterated Forcing and the Continuum Hypothesis." The goal is to develop forcings which preserve CH while obtaining partial consequences of combinatorial principles which, in their full strength, contradict CH. As most of the forcings required to instantiate these principles require iterated forcing, the bulk of this talk focused on possible conditions on a poset which guarantee that even after an iteration of some length, new reals have not been added. This is more subtle than it may initially seem, as the act of iteration can add reals even when none of the forcings being iterated do so on their own. The first thing one of these posets must do is preserve stationary subsets of aleph_1. There is a reasonably straightforward example which adds reals after being iterated, and the precise reason it does so is that it fails to preserve stationary sets. This leads to the notion of proper forcing. However, this is not enough. A more subtle example shows that an iteration of proper posets which individually do not add reals can still add reals in the end. One answer is to look at something called completely proper posets. This is a technical definition, but the upshot is that for a completely proper poset, certain information about its generic doesn't depend on the model being forced over. A theorem of Shelah shows that countable support iterations of completely proper posets satisfying one of two extra assumptions are still completely proper, and thus do not add reals. Interestingly enough, these extra assumptions really are separate and not obviously subsumed by some more general assumption. Again, these advanced forcing talks are beyond my skill set with forcing, but I am enjoying soaking up the culture and philosophy around them.

In the first part of the afternoon, Menachem Magidor finished up his tutorial on "Compactness and Incompactness Principles for Chromatic Numbers and Other Cardinal Sins." This talk started off with examples of both compactness and incompactness theorems for bounds on chromatic numbers in different settings. If stationary reflection fails, then bounds on chromatic numbers are not compact. On the other hand, in the presence of a countably complete ultrafilter, having chromatic number bounded by aleph_0 is a strongly compact property. After this, Magidor talked about some of his joint work with Bagaria in which they relate these compactness properties to other abstract compactness principles. While compactness principles are obtainable at large cardinals, getting them at small cardinals, say aleph_n for some n, is another matter, and involves large cardinals of noticeably higher consistency strength. I do want to check out how some of these chromatic number questions play out under AD.

Omar Ben Neria went next. He spoke on "The Distance Between HOD and V." The question is, up to consistency, how small can HOD be? There are a couple of ways to measure this: one way for HOD to be large is for it to satisfy a kind of covering lemma, and a way for HOD to be small is for cardinals in V to be large cardinals in HOD. Woodin has shown from an extendible cardinal that a version of Jensen's dichotomy holds between V and HOD. Ben Neria and Unger were able to strengthen a result of Cummings, Friedman, and Golshani which shows that it is consistent for covering to fail for HOD. In fact, it is consistent that every uncountable regular cardinal in V is huge in HOD. One important piece of technology for the proof is weakly homogeneous forcing; this ensures that the HOD of the forcing extension does not escape the ground model. Another important technique is iterated forcing with non-stationary support. I don't understand the intense modified iterated forcing going on here, but the result is nice. I asked about getting all uncountable cardinals of V to be Jonsson cardinals in HOD, and it seems that their construction does achieve this, although at the cost of at least a measurable cardinal at the moment. I'm not sure what the optimal consistency strength would be. Ben was kind enough to point me to some information which may help me attack the consistency problem of all uncountable cardinals below Theta being Jonsson in L(R).

To finish out the day, Nam Trang gave a talk on the "Compactness of Omega_1." Under the axiom of determinacy, omega_1 has some supercompactness-type properties. For a fixed set X, Trang was interested in finding minimal models satisfying the claim "omega_1 is X-supercompact." Even under ZF, this implies some failures of square. More than just implying that omega_1 is strongly compact with respect to the reals, it turns out that AD and this property are equiconsistent. The statement that omega_1 is supercompact with respect to the reals is equiconsistent with an even stronger form of determinacy. Moving up with the power set operation on R, the compactness properties become equiconsistent with stronger forms of determinacy still. Trang, in joint work with Rodriguez, was able to show that there is a unique model of a certain form which thinks that omega_1 is R-supercompact. I always enjoy hearing about these higher forms of determinacy and their corresponding models, and these equiconsistency results add yet another connection between the theory of large cardinals and the theory of determinacy.

Saturday, October 22, 2016

10/20/2016 UIC Workshop Day 1

Menachem Magidor started the morning off with the first tutorial, speaking on "Compactness and Incompactness for Chromatic Numbers and Other Cardinal Sins." He phrases compactness in the intuitive way: "If every small substructure has a property, then the whole structure does," where small means of lower cardinality. He gave quite a few examples of compactness phenomena, stretching across algebra, topology, and set theory. I really liked his intuitive descriptions of weakly compact, strongly compact, and supercompact cardinals.

  • A cardinal κ is weakly compact with respect to a certain property if the property passes from the small pieces of a structure of size κ to the structure itself,
  • A cardinal κ is strongly compact with respect to a certain property if the property passes from the small pieces of a structure of any size to the structure itself, and
  • A cardinal κ is supercompact if it is strongly compact with respect to all second order properties.
He then spoke on chromatic numbers, coloring numbers, stationary reflection, and the relationships between the three. The chromatic number is highly malleable and so hard to study; in particular, fixing a bound on the chromatic number is a hard property to obtain compactness for. Enough stationary reflection, on the other hand, provides compactness for the property of bounding the coloring number. This is as far as we got today; I'm curious to see where it goes next. For reference, the two graph invariants are recorded below.
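(For my own reference, the standard definitions, not verbatim from the talk: for a graph G,

Chr(G) = the least κ such that the vertices of G can be partitioned into κ independent sets, and
Col(G) = the least κ such that there is a well-ordering of the vertices in which every vertex has fewer than κ neighbors preceding it.

Coloring greedily along such a well-ordering shows Chr(G) ≤ Col(G), which is one sense in which the coloring number is the better-behaved invariant of the two.)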

The first talk in the afternoon was by Sherwood Hachtman. He spoke on "Forcing Analytic Determinacy." Essentially, this is related to the reverse math question of how much set theory you need to prove that lightface analytic determinacy is equivalent to the existence of 0#. More combinatorially, you can ask about determinacy with respect to Turing cones. While Woodin conjectured that Z_2 is enough strength (and it certainly is for boldface analytic determinacy), recent work by Cheng and Schindler seems to indicate that Z_3 may be necessary. Hachtman didn't have concrete answers to this yet, but inspired by the technique of genericity iterations, he has forced Turing cones to be inside of some sets which should contain them.

This was all pretty interesting to me, as I didn't know that anybody was thinking about reverse math questions surrounding determinacy. The classic result about Borel determinacy is certainly cool enough to warrant more attention, so I'm not sure why I had never really thought about it. It would be neat if lightface analytic determinacy really were forceable, giving an interesting intermediate stage between Borel determinacy and full boldface analytic determinacy.

Finishing out the first day, Spencer Unger presented the "Poor Man's Tree Property." People in the audience said it was like watching a mini James Cummings present. The idea is to replace the tree property with the weaker assumption that there are no special trees. This is consistency work, so the goal is to find a forcing which can ensure that there are no special trees. Because of some obstructions, such a model will have to fail GCH and SCH everywhere. Combining approaches of Magidor and Gitik, Unger was able to create a large interval of cardinals, starting at aleph_2 and passing some singular cardinals, on which there are no special trees. The modified Magidor approach works from the ground up and can ensure that aleph_2 allows for no special trees. This approach can be extended to finitely many cardinals if you thread the forcings together appropriately. The Gitik approach can be used to ensure that a singular cardinal doesn't allow for special trees. For his result, Unger put these approaches together in a supercompact Prikry-like forcing which very carefully collapses certain intervals of cardinals. The details are much too complicated for me to understand with my current knowledge of forcing.

Even though it is outside of my skill set, these super high tech forcings are quite interesting, and it's good to see people like Unger, who have intuition and a big picture in mind, present on them.

Monday, October 3, 2016

Generalizing Silver's Theorem Part 2

In this post, I will be proving Silver’s theorem the forcing way. Recall the statement.

Theorem 1   
Suppose that E is a lightface co-analytic equivalence relation on the reals (in this case taken to be Cantor space: infinite strings of 0’s and 1’s). Then either E has countably many equivalence classes, or there is a perfect set of E-inequivalent elements.
Proof.
From here on out I will be referring to lightface analytic sets as Σ11, co-analytic sets as Π11, and sets which are both as Δ11.

As in the topological proof, I will be using lightface analytic sets, and exploiting their properties. The first two of these properties are reasonably straightforward:
  • There are only countably many Σ11 sets, Π11 sets, and Δ11 sets.
  • Disjoint Σ11 sets can be separated by a Δ11 set. That is, whenever A and B are disjoint Σ11 sets, there is a Δ11 set C so that A ⊆ C and B ∩ C = ∅. This is equivalent to the reduction property for Π11: for any Π11 sets A and B, we can find Π11 sets A′ ⊆ A and B′ ⊆ B so that A′ ∩ B′ = ∅ and A ∪ B = A′ ∪ B′. (The direction used here is sketched just below.)
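To see how separation follows from reduction (a standard argument, sketched in the notation above): let A and B be disjoint Σ11 sets, so the complements ℝ ∖ A and ℝ ∖ B are Π11 sets whose union is all of ℝ. Reduction gives disjoint Π11 sets A′ ⊆ ℝ ∖ A and B′ ⊆ ℝ ∖ B with A′ ∪ B′ = ℝ. Since A′ and B′ partition ℝ, each is the complement of the other, so both are Δ11. Now take C = B′: from A ∩ A′ = ∅ we get A ⊆ B′ = C, and from C ⊆ ℝ ∖ B we get B ∩ C = ∅.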
The next property will require a little bit of set-up. In descriptive set theory, we try to code everything we possibly can with real numbers. In the case of countable collections, we try to do even better, and code them by natural numbers. In particular, we can obtain a Π11 set U ⊆ ω × ℝ so that for every Π11 set B, there is an n ∈ ω so that B = {x ∈ ℝ : (n,x) ∈ U}. That is, every Π11 set shows up as a vertical section of U. I am presenting these facts without proof because they are pretty standard and this post is going to be long as it is.

This proof really consists of a test, the result of which provides a construction principle. In words, the test is as follows: for every real x, is there a Δ11 set A so that x ∈ A ⊆ [x]E? If the answer is yes, then the fact that Δ11 is countable implies that E has countably many equivalence classes: a witnessing set A determines the class [x]E, so the assignment of witnesses to classes is injective, and we certainly cannot have more equivalence classes than Δ11 sets. So what happens if the answer is no? This is where we will use forcing to create a copy of 2ω which consists of pairwise E-inequivalent elements.

Let Q be the collection of non-empty Σ11 sets. For A, B ∈ Q, we say that A ≤ B (A extends B) iff A ⊆ B. I will refer to elements of Q as conditions. This is because they include some reals and exclude others. Intuitively, extending conditions in this partial order corresponds to filtering out reals. Now let S = {x ∈ ℝ : (∃ A ∈ Δ11)[x ∈ A ⊆ [x]E]} and set W to be the complement of S. If the answer to the question is no, W is non-empty. What I want now is for W to be a Σ11 set. That way I can force below W. The information that W imparts will guarantee that the forcing produces pairwise E-inequivalent reals.

Claim 1 
W is Σ11.
Reason
It suffices to show that S is Π11. We create a coding of the Δ11 sets in order to make this computation. Let U ⊆ ω × ℝ be universal for Π11. Define Π11 sets V1 and V2 by
  • V1(⟨ n,m ⟩,x)  ⇐⇒ U(n,x) and
  • V2(⟨ n,m ⟩,x)  ⇐⇒ U(m,x).
Using the reduction property for Π11, we can find Π11 sets V1′ ⊆ V1 and V2′ ⊆ V2 so that V1′ ∩ V2′ = ∅ and V1 ∪ V2 = V1′ ∪ V2′. Then ⟨ n,m ⟩ codes a Δ11 set iff
(∀ y ∈ ℝ)[V1′(⟨ n,m ⟩,y) ∨ V2′(⟨ n,m ⟩,y)],
that is, ⟨ n,m ⟩ finds a pair of sections of U which code complementary Π11 sets. Call the set of such codes C. C is Π11. Now
x ∈ S  ⇐⇒ (∃ n,m ∈ ω)[⟨ n,m ⟩ ∈ C ∧ V1′(⟨ n,m ⟩,x) ∧ (∀ y ∈ ℝ)[¬ V2′(⟨ n,m ⟩,y) ⇒ x E y]].
This shows that S is Π11.


With this knowledge of W in hand, it makes sense to speak of the conditions in Q below W. Call this poset P. I will use this poset to produce pairwise E-inequivalent reals. For the uninitiated, forcing with a poset involves three major pieces.
  • Dense subsets of P (these are dense in the intuitive way: every condition can be extended to one inside the set). Topologically these correspond roughly to comeager sets.
  • Generic filters for P: G is a generic filter if it is closed upwards in the ≤ order, is closed under "meets" in the sense that if p, q ∈ G, then there is an r ∈ G so that r ≤ p, q, and G has non-empty intersection with every dense subset of P.
  • Names for objects which can be described by P. These are essentially collections of pairs (A,τ) where A is a condition in P and τ is another name. The construction of names is inherently recursive.
These three objects come together in a synthesis. G builds actual objects using the names as a guide: it pulls objects into a set it is building according to the conditions in G and the names those conditions are tagged with. This process is recursive. In notation, the interpretation of the name τ by the generic G is written as τG. The dense sets are how the poset controls what G can build. The creation of clever dense sets forces G to make decisions about desired properties of the sets it constructs. If we call V our initial universe of objects, V[G] is the universe of objects which can be created from G in this way. Standard forcing theorems show that V is contained in V[G] and G is an element of V[G]. Syntactically, this means there are names for elements of V and for G which get interpreted appropriately for all generics G. If x ∈ V, the standard name for x is denoted ẋ.
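To make the recursion explicit (this is the standard definition, written with the pair-ordering from the list above):

τG = {σG : (∃ A ∈ G)[(A,σ) ∈ τ]}.

So a name σ tagged by a condition A contributes its own interpretation σG exactly when G includes A, with the same unwinding having already happened inside σ.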

For sentences ϕ (which are allowed to reference names) and conditions A, the notation A ⊩ ϕ means that whenever G is generic, if A ∈ G, then ϕ is true in V[G]. This symbol is read as "A forces ϕ." It is an important theorem in forcing that if a statement ϕ is true in V[G], then it was forced to be true by some condition in the poset. To work with the forcing in this context, I will restrict down to a countable structure M for which enough of ZFC is true for the proofs to go through. This allows for the explicit construction of generics. Actually, what I want is a little bit stronger. By first taking a sub-universe of enough of ZFC, and then taking a countable elementary substructure M, I can assume that there is an embedding ϕ: M → V which preserves all of the first order statements which come up in this proof. This small universe does not have Q or P, but M has ϕ−1(P) and ϕ−1(Q), which I will refer to as P̄ and Q̄. As general notation, the bar will be used to denote the ϕ-inverse of a set.
Now the poset Q I am considering has a particularly nice interplay with its generics.

Claim 2   
Suppose G is a generic filter on Q. Then G gives rise to a real xG so that xG ∈ A iff A ∈ G.
Reason
I use cylinder sets to construct the name. For a finite string s of 0’s and 1’s, N(s) is the set of those reals which have s as an initial segment. Consider the name ẋ = {(ṡ,A) : A ⊆ N(s)}. Then if G is a generic filter for Q over M, this name at least guarantees that ẋG ∈ ℝ. It suffices to show that A ⊩ ẋ ∈ A for every A ∈ P. Since A is Σ11, I can write A = p[T], where T is a recursive tree on ω × ω. Then
A ⊩ ẋ ∈ A  ⇐⇒ A ⊩ (∃ y ∈ ℝ)(∀ n ∈ ω)[(ẋ|n,y|n) ∈ T].
I will first check that this does not depend on our choice of T. A could have many representations, and this computation should not depend on the representation. So suppose T0 and T1 are recursive trees on ω × ω so that A = p[T0] = p[T1]. Suppose G is generic for P over M. I need to see that
ẋ ∈ p[T0]  ⇐⇒ ẋ ∈ p[T1].
In M, it is true that (∀ x ∈ ℝ)[x ∈ p[T0]  ⇐⇒ x ∈ p[T1]]. Since this is a simple enough statement (Π12), Shoenfield absoluteness declares it true in M[H] as well, no matter which generic H is chosen. So the forcing statement is true.

Now suppose towards a contradiction that A ⊮ ẋ ∈ A. It is part of the formal definition of ⊩ that there must then be a condition B ≤ A so that B ⊩ ẋ ∉ A. Since B ⊆ A, also B ⊩ ẋ ∉ B. Thus WLOG A ⊩ ẋ ∉ A. Call this condition A0. This is not quite a contradiction, but it is almost there. Write A0 = p[T0]. I will use A0 to build a generic filter G so that in M[G], ẋG ∉ A0 and yet ẋG ∈ p[T0]. In V, let {Dn : n ≥ 1} enumerate the dense subsets of Q̄ from M. I want to fix the first coordinate of the reals in A0, fix a valid first coordinate for a witness that those reals are in A0, and extend A0 to meet D1. Let s0 ∈ ω and t00 ∈ ω be so that
A0′ = {x : (∃ y ∈ ℝ)[(x(0),y(0)) = (s0,t00) ∧ (x,y) ∈ [T0]]} ⊆ A0.
This fixes the first digit of the reals inside A0 and the first digit of the potential witnesses y which prove that those reals are in A0. Let A1 ≤ A0′ ∩ N(s0) be so that A1 ∈ D1.

Continue this way inductively, defining An, Tn, sn, tnm, and An′ for n ∈ ω and m ≤ n so that
  • An = p[Tn],
  • sn, tnm ∈ ωn,
  • sn ⊆ sn+1,
  • tnm ⊆ tn+1m,
  • An′ = {x : sn ⊆ x ∧ (∃ y ∈ ∏m ≤ n N(tnm))(∀ m ≤ n)[(x,ym) ∈ [Tm]]},
  • An ∈ Dn for n ≥ 1, and
  • An ≥ An′ ≥ An+1.
We can then define a filter G by taking the ≤-upward closure of the An. Explicitly, define a filter G on Q by
A ∈ G  ⇐⇒ (∃ n ∈ ω)[An ≤ A].
This is a generic filter for Q over M. By construction, ẋG = limn → ∞ sn. I also assumed that A0 ⊩ ẋ ∉ A0; thus ẋG ∉ A0. Let y = limn → ∞ tn0. Then (ẋG,y) ∈ [T0]. So ẋG ∈ p[T0] = A0. This is a contradiction.


With this in mind, I will freely identify a real x as Q-generic (or P-generic). At this point, to meaningfully interact with the equivalence relation, I need to be working in ℝ2. Unfortunately, there are two ways to lift Q and P up to ℝ2. The first way is the simpler, and will be denoted Q × Q. Conditions in Q × Q are simply ordered pairs (A,B), where A, B ∈ Q. The second way is slightly more complicated. Q2 is the version of Q defined off of ℝ2. That is, A ∈ Q2 iff A ⊆ ℝ2, A is Σ11, and A ≠ ∅. Syntactically, this means there is a recursive tree T on ω × ω × ω so that (x,y) ∈ A iff there is a z ∈ ℝ so that (x,y,z) ∈ [T].
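To keep the two straight, here they are side by side (the product ordering is the usual coordinatewise one; nothing here goes beyond what was just said):
  • Q × Q: conditions are pairs (A,B) with A, B ∈ Q, ordered by (A,B) ≤ (A′,B′) iff A ⊆ A′ and B ⊆ B′.
  • Q2: conditions are non-empty Σ11 subsets of ℝ2, ordered by A ≤ B iff A ⊆ B.
The map (A,B) ↦ A × B embeds Q × Q into Q2, but Q2 also has non-rectangular conditions, such as (A × A) ∩ (ℝ2 ∖ E) in Claim 4 below; that extra freedom is the reason for carrying both posets around.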

Let P × P be the product poset of P, and P2 = {A ∈ Q2 : A ⊆ W × W}. Note that if G is Q2-generic (or P2-generic), then G defines a pair of reals (xG,yG). With the name (ẋ,ẏ) = {(ṡ,ṫ,A) : A ⊆ N(s) × N(t)}, the same proof as for one dimension shows that
(ẋ,ẏ)G ∈ A  ⇐⇒ A ∈ G.

Now the question is how these two-dimensional posets interact with the one-dimensional version. The answer is quite well! I just have to define some projection operations, and then there is a smooth translation between the posets. For A ∈ Q2, we let
πL(A) = {x ∈ ℝ : (∃ y ∈ ℝ)[(x,y) ∈ A]}  and  πR(A) = {y ∈ ℝ : (∃ x ∈ ℝ)[(x,y) ∈ A]}.
Then πL(A), πR(A) ∈ Q, as Σ11 is closed under projections. If G is Q2-generic, let
πL(G) = {A ∈ Q : (∃ B ∈ G)[πL(B) ⊆ A]},
and πR(G) is similarly defined.
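To spell out why the projections stay in Q (just unwinding the tree representation above): if A = p[T] for a recursive tree T on ω × ω × ω, then

x ∈ πL(A)  ⇐⇒ (∃ y ∈ ℝ)(∃ z ∈ ℝ)(∀ n ∈ ω)[(x|n, y|n, z|n) ∈ T]  ⇐⇒ (∃ w ∈ ℝ)(∀ n ∈ ω)[(x|n, w|n) ∈ T′],

where T′ is the recursive tree obtained from T by interleaving the y and z coordinates into the single witness w. So πL(A) = p[T′] is Σ11, and it is non-empty whenever A is. The same computation works for πR.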

One particularly nice interaction is that generic reals pass back and forth. I am proving this for Q2 and the barred version, for notational simplicity and because I can.

Claim 3 
If (x,y) is Q2-generic, then x and y are generic for Q.
Reason
Suppose (x,y) is Q2-generic. Let G2 be the associated Q2-generic filter. I will show that x is Q-generic; the argument for y is symmetric. Let G = πL(G2). I will show that G is generic. Let D be dense for Q. Let A2 ∈ Q2 be arbitrary, and set A = πL[A2]. Then as A ∈ Q, there is a B ≤ A in Q so that B ∈ D. Define B2 by
B2 = (B × ℝ) ∩ A2.
Then B2 is Σ11 and B2 ≠ ∅. So B2 ∈ Q2. Note that πL[B2] = B ∈ D. Thus {X ∈ Q2 : πL[X] ∈ D} is dense in Q2. So there is an X ∈ G2 so that πL[X] ∈ D. Thus G ∩ D ≠ ∅, and so G is generic. Moreover,
x ∈ A  ⇐⇒ (x,y) ∈ A × ℝ  ⇐⇒ A × ℝ ∈ G2  ⇐⇒ A ∈ G.
So G is the generic associated to x.


Putting this all together, it finally comes out that forcing below W produces reals which are pairwise E-inequivalent.

Claim 4   
If (x,y) is P × P-generic, then (x,y) ∉ E.
Reason
BWOC, suppose that (A,B) ∈ P × P is such that (A,B) ⊩P × P ((ẋ,ẏ) ∈ E). Consider (A × A, B) ∈ P2 × P. Note that (A × A) ∩ (ℝ2 ∖ E) is Σ11 and non-empty. Thus (A × A) ∩ (ℝ2 ∖ E) ∈ P2. Let G2 be P2-generic over V so that (A × A) ∩ (ℝ2 ∖ E) ∈ G2. Let G be P-generic over V[G2] so that B ∈ G. Then in V[G2][G], G2 × G defines a generic triple (x,y,z). Now
  • in V[G2], (x,y) ∉ E (as (A × A) ∩ (ℝ2 ∖ E) ∈ G2),
  • in V[πL(G2)][G], x E z (as B ∈ G, (x,z) is P × P-generic, and A × B ∈ πL(G2) × G), and
  • in V[πR(G2)][G], y E z (as B ∈ G, (y,z) is P × P-generic, and A × B ∈ πR(G2) × G).
Thus,
in V[G2][G], [(x E z) ∧ (y E z) ∧ ((x,y) ∉ E)].
Therefore, in V[G2][G], E is not an equivalence relation. But the statement that E is an equivalence relation is Π12. So by Shoenfield absoluteness again, in V[G2][G], E is an equivalence relation. This is a contradiction.


Now I will build a perfect set of generics for P. To do so, I have to return to the countable sufficiently elementary submodel M. Note that by elementarity,
M ⊨ [∅ ⊩P × P (ẋ,ẏ) ∉ E].
The construction is inductive. I will create Σ11 sets As for all finite binary strings s in such a way that incompatible strings correspond to disjoint Σ11 sets and each descending sequence of A’s produces a generic. If I can follow this through, the set of such generics will be perfect, and they must also be pairwise E-inequivalent.

Let {Dn : n ∈ ω} enumerate the dense sets in P × P. Let i0 ∈ ω and (A0,A1) ∈ P × P be so that A0 ⊩P ẋ(i0) = a0 and A1 ⊩P ẋ(i0) = b0 with a0 ≠ b0. Below A0 in P there are A00, A01 so that A00 ∩ A01 = ∅, and below A1 there are A10, A11 so that A10 ∩ A11 = ∅. I can then find Aij′ ≤ Aij so that Aij′ ∈ D0. I simply continue in this way, defining As, As′ ∈ P for s ∈ 2<ω so that
  • for all s,t ∈ 2<ω, if s ⊆ t, then At ≤ As,
  • for all s ∈ 2<ω and i ∈ 2, As ⁀ i ≤ As′ ≤ As,
  • for all s ∈ 2<ω, As ⁀ 0 ∩ As ⁀ 1 = ∅, and
  • for all s ∈ 2<ω with |s| ≥ 1, As′ ∈ D|s|−1.
The easiest way to check that this construction has worked is through the language of embeddings. Define ψ: 2ω → ℝ by letting ψ(x) be the unique element of ∩n Ax|n. If x ∈ 2ω, then the filter generated by x (A ∈ Gx  ⇐⇒ ψ(x) ∈ A) is a generic filter for P over M. In fact, if x ≠ y ∈ 2ω, then (Gx,Gy) is P × P-generic over M. Thus in M[(Gx,Gy)], (ψ(x),ψ(y)) ∉ E. By Σ11-absoluteness, it must then be true in V that (ψ(x),ψ(y)) ∉ E. This verifies the construction and finishes the proof.


This version is reasonably slick, and doesn’t use the axiom of choice in ways which are incompatible with AD. However, it doesn’t actually generalize all that well. If instead of being co-analytic the equivalence relation is merely co-κ-Suslin (a set A is κ-Suslin if it can be written as A = p[T], where T is a tree on ω × κ), the same proof will not go through, as I will lose the features of Σ11 which made some of these complexity computations come out just so. So what’s the solution? Infinitary logic and admissible ordinals. Or, in my case, ∞-Borel codes and admissible ordinals. Look for that next time.