I've created this blog to document my experience as a research mathematician. My broad interests are logic and set theory, and I am currently focused on descriptive inner model theory. In addition to posting my daily progress, I hope to include the occasional opinion post or interesting article.
Saturday, October 29, 2016
My reference letters are all finally uploaded, and I've been working on cover letters all day. Things are starting to feel very real. I am feeling quite nervous about this whole thing, but maybe I will feel better once I submit my first application.
In other news, I presented twice at UNT this week. Both times I talked about Jonsson cardinals from the ZFC perspective. It was an interesting experience catering similar information to different audiences: the graduate logic group and the algebra seminar. The talk for the graduate logic group seemed to go quite well, and I must thank Christopher Lambie-Hanson for his excellent presentation of this material during the UCI summer school, as my talk was based very strongly on it. For the algebra seminar, I tried to use the topic to motivate some basic objects of set theory from algebraic concerns. I didn't get through all of the material I wanted to (in retrospect I was overambitious), and I initially felt it had not gone so well. A little after the talk, however, the organizer of the algebra seminar approached me and said she had enjoyed the talk, and even reassured me about that style of presentation for job talks. She was almost certainly being too nice, but it made me feel better regardless. In any case, it seemed to go better than when I presented to them about Whitehead's problem years ago. I still haven't quite figured out a quick way of presenting ordinals and cardinals which preserves enough information to be worth it.
Monday, October 24, 2016
10/22/2016 UIC Workshop Day 3
Starting off this day, Justin Moore finished up his tutorial. Today was more about generating certain kinds of trees in such a way that CH is not violated. This, of course, continues the theme of the previous day, as the existence of certain trees is known to follow from MA and be denied by diamond. So if one of these trees can be forced to exist while maintaining CH, it provides evidence that diamond was necessary. In particular, Moore covered some of the details of a forcing which is completely proper, adds a special tree, and is still completely proper in the extension. It's interesting to me that when I read through the material on trees in Kunen, I mistakenly thought of it as a kind of curiosity and as mostly completed work. I was way wrong, but I'm happy that I was, as the discussion of these tree properties, square properties, and other combinatorial properties generates a lot of interesting mathematics.
Anush Tserunyan was the next speaker, and her talk was on "Integer Cost and Ergodic Actions." She spent a little while acclimating the group to the ideas of invariant descriptive set theory, and although that was review for me, her perspective and intuitions were helpful. In particular, she focused on the interplay between group actions, equivalence relations, and graphs. These connections provide background to try and answer the questions "What is the most efficient presentation of an equivalence relation E?" and "What is the smallest number of edges required to create a graph G which understands E?" Towards a qualitative answer to these questions, you can see if there is an acyclic Borel graph which captures the equivalence relation. A good example here is the equivalence relation generated from the free action of a free group. More quantitatively, using a measure you can integrate the number of edges and then take an infimum to obtain a cost function. Gaboriau showed in 1998 that this infimum is achieved precisely when the equivalence relation can be captured by an acyclic Borel graph. Hjorth continued in this vein, looking more specifically at countable ergodic measure preserving equivalence relations. If the cost is an integer or infinite, then the equivalence relation can be seen as generated by the free action of a free group with at most countably many generators. Anush created tools which not only simplify the proof of this substantially, but allow her to strengthen its conclusion to add that the subactions generated by the individual generators are ergodic as well. Anush is always a good speaker, and since I normally see her speak at descriptive set theory sessions, it was interesting to see how she catered her material to a more pure set theory audience. Her ability to live in both the world of set theory and that of analysis is something I strive for. As a final note, something in her discussion of these graph representations shook loose some ideas I've been having about the qualitative characterizations of various quotients of the reals. Even though E_0 cannot be linearly ordered, it can be represented by an acyclic graph, and in that sense its partial order structure is less complicated than, say, its product with the equivalence relation generated by the free group on two generators acting freely. So it seems that to separate E_0 and l_1 qualitatively, I should be looking at ways in which they can be represented.
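For my own notes, here is the cost machinery as I understand it from the literature (Levitt, Gaboriau); this is my reconstruction, not notation from the talk. A graphing of E is a Borel graph on the underlying space X whose connectedness relation is exactly E, and the cost integrates half the degree:

\[
C_\mu(\mathcal{G}) \;=\; \frac{1}{2}\int_X \deg_{\mathcal{G}}(x)\, d\mu(x),
\qquad
C_\mu(E) \;=\; \inf\{\, C_\mu(\mathcal{G}) : \mathcal{G} \text{ a graphing of } E \,\}.
\]

In this notation, the attainment result above says the infimum is realized exactly by the acyclic graphings (treeings), and Hjorth's theorem concerns ergodic measure preserving E with C_μ(E) ∈ ℕ ∪ {∞}.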
In the afternoon, Matt Foreman did the first two parts of his tutorial, "Applications of Descriptive Set Theory to Classical Dynamical Systems." This was based on the same slides as when I saw Foreman speak at CUNY, although this time he catered the talk to an audience of peers as opposed to graduate students. As such, he included a number of insights that were not in the CUNY talk. Also, he had more time here, so he was able to go into more detail. As before, the problem is to see if it is possible to classify the ergodic measure preserving diffeomorphisms of compact manifolds. This problem was originally posed by von Neumann in the 1930s. You can argue that attempts to solve it are in the background of many of the significant developments in dynamics, such as the notion of entropy, although it's not clear that the field is still motivated by it today. Regardless, finding a solution to an 80-year-old problem is impressive. The answer is no: there is no simple way to classify the ergodic measure preserving diffeomorphisms of compact manifolds. More precisely, this problem is more complicated than the most complicated problems which have algebraic invariants: those isomorphism problems which can be seen as arising from actions of the infinite permutation group. If you drop the diffeomorphism aspect and just look at ergodic measure preserving transformations, the problem is complete analytic, which is as bad as it could be. This uses the fact that odometer based dynamical systems are in some sense generic in the space of dynamical systems. Foreman and Weiss, however, really wanted to solve von Neumann's problem. To do this, they created a new type of dynamical system motivated by odometer systems, called circular systems. While it is still unknown if an arbitrary odometer system can be realized as an ergodic measure preserving diffeomorphism, they were able to show that circular systems can be. There is also a nice isomorphism between the class of odometer based systems and the class of circular based systems. Putting this all together, you get that the diffeomorphism problem is also complete analytic.
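If I have the statement right, the headline result here (due to Foreman, Rudolph, and Weiss, with the diffeomorphism upgrade by Foreman and Weiss; my phrasing, so check their papers for the precise setting) is that isomorphism, viewed as a set of pairs, is as complicated as an analytic set can be:

\[
\{(S,T) \in \mathrm{Erg} \times \mathrm{Erg} : S \cong T\} \text{ is complete analytic,}
\]

where Erg is the Polish space of ergodic measure preserving transformations, and the same complexity already appears among the measure preserving diffeomorphisms of compact manifolds.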
Finishing off the third day, Maxwell Levine spoke on "Weak Squares and Very Good Scales." I saw a twenty minute version of this talk at the CUNY graduate student conference. When I saw it there, I knew absolutely no pcf theory, and I had a hard time tracking what was happening. Now that I've done the pcf summer school and Maxwell had an hour, I think I got a much better idea of what he is doing. There are three kinds of combinatorial principles in play in his work: square principles, stationary reflection, and scales. Stationary reflection is a kind of compactness property, and can be provided by large cardinals. Square principles are a kind of incompactness; they occur in models like L, and for example can provide for the existence of an almost metrizable space which is not itself metrizable. A theorem of Shelah says that scales exist, but the existence of a very good scale contradicts stationary reflection. What's somewhat odd is that square properties can coexist with stationary reflection: even a strong square property can. The weakest square property in particular can even coexist with simultaneous stationary reflection, but it is not immediately clear if the second weakest square can. One way to check this is to see if this second weakest square implies the existence of a very good scale. Maxwell was able to construct a model where the weakest square holds and there are no very good scales; however, the second weakest square still fails in this model. So he next tried to see if it is possible for the second weakest square to strictly hold. The answer is yes, and it can even happen at relatively small cardinals (aleph_omega). Complicating the picture even more, in the presence of large cardinals this second weakest square property implies that the strongest form of simultaneous stationary reflection fails. In fact, Maxwell is able to explicitly characterize why this reflection fails. What I like about this work is that Maxwell is basically taking the theory of cardinal arithmetic apart piece by piece and seeing which steps were really necessary. At the very least, I feel like I have a better understanding of the machinery of cardinal arithmetic after seeing him speak.
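Since I always have to re-derive these, here are the standard definitions as I remember them (any imprecision is mine). Fix a singular cardinal μ and an increasing sequence ⟨μ_i : i < cf(μ)⟩ cofinal in μ:

\[
\langle f_\alpha : \alpha < \mu^+ \rangle \subseteq \prod_{i < \mathrm{cf}(\mu)} \mu_i \text{ is a scale iff it is } <^*\text{-increasing and } <^*\text{-cofinal},
\]
\[
\alpha \text{ is very good iff there are a club } C \subseteq \alpha \text{ and } i_0 \text{ with } \beta \mapsto f_\beta(i) \text{ strictly increasing on } C \text{ for all } i \geq i_0.
\]

A very good scale is then one on which club-many points of uncountable cofinality are very good, and the tension Maxwell exploits is that very good scales conflict with stationary reflection.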
Sunday, October 23, 2016
10/21/2016 UIC Workshop Day 2
Justin Moore started off day 2 with a tutorial on "Iterated Forcing and the Continuum Hypothesis." The goal is to try and develop forcings which preserve CH while obtaining partial consequences of combinatorial principles which, in their full strength, contradict CH. As most of the models required to instantiate these principles are built by iterated forcing, the bulk of this talk focused on possible conditions on a poset which guarantee that even after an iteration of some length, new reals have not been added. This is more subtle than it may initially seem, as the act of iteration can add reals even when none of the forcings being iterated do so on their own. The first thing one of these posets must do is preserve stationary subsets of aleph_1. There is a reasonably straightforward example which adds reals after being iterated, and the precise reason it does so is that it fails to preserve stationary sets. This leads to the notion of proper forcing. However, this is not enough. A more subtle example shows that an iteration of proper posets which individually do not add reals can still add reals in the end. One answer is to look at something called completely proper posets. This is a technical definition, but the upshot is that for a completely proper poset, certain information about its generic doesn't depend on the model being forced over. A theorem of Shelah shows that countable support iterations of completely proper posets with one of two extra assumptions are still completely proper, and thus do not add reals. Interestingly enough, these extra assumptions really are separate and not obviously subsumed by some more general assumption. Again, these advanced forcing talks really are beyond my skill set with forcing, but I am enjoying soaking up the culture and philosophy around them.
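For reference, the preservation notion in play, as I understand it (standard definition, my phrasing): P is proper iff for all sufficiently large θ and club-many countable M ≺ H_θ with P ∈ M, every p ∈ P ∩ M extends to an (M,P)-generic condition q, where

\[
q \text{ is } (M,P)\text{-generic iff for every dense } D \subseteq P \text{ with } D \in M, \ D \cap M \text{ is predense below } q.
\]

Complete properness strengthens this so that, roughly, the relevant M-generic filters exist in a way that does not depend on the surrounding universe, which is what blocks new reals from appearing at limit stages of the iteration.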
In the first part of the afternoon, Menachem Magidor finished up his tutorial on "Compactness and Incompactness Principles for Chromatic Numbers and Other Cardinal Sins." This talk started off with examples of both compactness and incompactness theorems for bounds on chromatic numbers in different settings. If stationary reflection fails, then bounds for chromatic numbers are not compact. On the other hand, in the presence of a countably complete ultrafilter, having chromatic number bounded by aleph_0 is a strongly compact property. After this, Magidor talked about some of his joint work with Bagaria, in which they relate these compactness properties to other abstract compactness principles. While compactness principles are obtainable for large cardinals, getting them at small cardinals, say aleph_n for some n, is another matter, and involves large cardinals of noticeably higher consistency strength. I do want to check out how some of these chromatic number questions play out under AD.
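A classical datapoint here (my addition, not from the talk) is the De Bruijn–Erdős theorem, which says that finite chromatic bounds are fully compact:

\[
\text{If every finite subgraph of } G \text{ has chromatic number} \leq k < \aleph_0, \text{ then } \chi(G) \leq k.
\]

The phenomena Magidor discussed begin at chromatic number aleph_0 and beyond, where compactness becomes a large cardinal matter.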
Omar Ben Neria went next; he spoke on "The Distance Between HOD and V." The question is, up to consistency, how small can HOD be? There are a couple of ways to measure this: one way for HOD to be large is for it to satisfy a kind of coloring lemma, and a way for HOD to be small is for cardinals in V to be large cardinals in HOD. Woodin has shown from an extendible cardinal that a version of Jensen's dichotomy holds between V and HOD. Ben Neria and Unger were able to strengthen a result of Cummings, Friedman, and Golshani which shows that it is consistent for covering to fail for HOD. In fact, it is consistent that every uncountable regular cardinal in V is huge in HOD. One important piece of technology for the proof is weakly homogeneous forcing. This ensures that the HOD of the forcing extension does not escape the ground model. Another important technique is to use iterated forcing with non-stationary support. I don't understand the intense modified iterated forcing going on here, but the result is nice. I asked about getting all uncountable cardinals of V to be Jonsson cardinals in HOD, and it seems that their construction does achieve this, although at the cost of at least a measurable cardinal at the moment. I'm not sure what an optimal consistency strength would be. Ben was kind enough to point me to some information which may help me to attack the consistency problem of all uncountable cardinals below Theta being Jonsson in L(R).
To finish out the day, Nam Trang gave a talk on the "Compactness of Omega_1." Under the axiom of determinacy, omega_1 has some supercompactness-type properties. For some fixed set X, Trang was interested in finding minimal models satisfying the claim "omega_1 is X-supercompact." Even under ZF, this implies some failures of square. More than just implying that omega_1 is strongly compact with respect to the reals, it turns out that AD and this property are equiconsistent. The statement that omega_1 is supercompact with respect to the reals is equiconsistent with an even stronger form of determinacy. Moving up with the power set operation on R, the compactness properties become equiconsistent with stronger forms of determinacy still. Trang, in joint work with Rodriguez, was able to show that there is a unique model of a certain form which thinks that omega_1 is R-supercompact. I always enjoy hearing about these higher forms of determinacy and their corresponding models, and these equiconsistency results add yet another connection between the theory of large cardinals and the theory of determinacy.
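My understanding of the definitions here, following how these notions are usually set up in the choiceless context (I may be glossing details):

\[
\omega_1 \text{ is } X\text{-strongly compact iff there is a fine, countably complete measure on } \mathcal{P}_{\omega_1}(X),
\]
\[
\omega_1 \text{ is } X\text{-supercompact iff there is such a measure which is moreover normal.}
\]

For X = ℝ, the first of these is the property shown equiconsistent with AD.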
Saturday, October 22, 2016
10/20/2016 UIC Workshop Day 1
Menachem Magidor started the morning off with the first tutorial. He is speaking on "Compactness and Incompactness for Chromatic Numbers and Other Cardinal Sins." He phrases compactness in the intuitive way: "If every small substructure has a property, then the whole structure does," where small means lower cardinality. He gave quite a few examples of compactness phenomena, stretching across algebra, topology, and set theory. I really liked his intuitive descriptions of weakly compact, strongly compact, and supercompact cardinals:
- A cardinal is weakly compact with respect to a certain property if the property passes to small pieces of it,
- A cardinal is strongly compact with respect to a certain property if the property passes to all pieces of it, and
- A cardinal is supercompact if it is strongly compact with respect to all second order properties.
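To pin the intuitions above down, the usual formal characterizations (standard facts, my summary):

\[
\kappa \text{ weakly compact} \iff \kappa \to (\kappa)^2_2 \text{ (for uncountable } \kappa\text{)},
\]
\[
\kappa \text{ strongly compact} \iff \text{every } \kappa\text{-complete filter extends to a } \kappa\text{-complete ultrafilter},
\]
\[
\kappa \text{ supercompact} \iff \text{for every } \lambda \geq \kappa \text{ there is a normal fine measure on } \mathcal{P}_\kappa(\lambda).
\]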
The first talk in the afternoon was by Sherwood Hachtman. He spoke on "Forcing Analytic Determinacy." Essentially, this is related to the reverse math question of how much set theory you need to prove that lightface analytic determinacy is equivalent to the existence of 0#. More combinatorially, you can ask about determinacy with respect to Turing cones. While Woodin conjectured that Z_2 is enough strength (and it certainly is for boldface analytic determinacy), recent work by Cheng and Schindler seems to indicate that Z_3 may be necessary. Hachtman didn't have concrete answers to this yet, but inspired by the technique of genericity iterations, he has forced Turing cones to be inside of some sets which should have them.
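The calibration being referenced is, I believe, the Martin–Harrington theorem:

\[
\text{lightface } \Sigma^1_1 \text{ determinacy} \iff 0^\# \text{ exists}, \qquad
\text{boldface } \boldsymbol{\Sigma}^1_1 \text{ determinacy} \iff (\forall x \in \mathbb{R})\ x^\# \text{ exists}.
\]

The reverse math question is then over which base theory this equivalence is provable, with Z_2 and Z_3 denoting second and third order arithmetic.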
This was all pretty interesting to me, as I didn't know that anybody was thinking about reverse math questions surrounding determinacy. The classic result about Borel determinacy is certainly cool enough to warrant more attention, so I'm not sure why I had never really thought about it. It would be neat if lightface analytic determinacy really were force-able, and thus an interesting intermediate stage between Borel determinacy and full boldface analytic determinacy.
Finishing out the first day, Spencer Unger presented the "Poor Man's Tree Property." People in the audience said it was like watching a mini James Cummings present. The idea is to replace the tree property with the weaker assumption that there are no special trees. This is consistency work, so the goal is to find a forcing which can ensure that there are no special trees. Because of some obstructions, such a model will have to fail GCH and SCH everywhere. Combining approaches of Magidor and Gitik, Unger was able to create a large interval of cardinals, starting at aleph_2 and passing some singular cardinals, on which there are no special trees. The modified Magidor approach works from the ground up and can ensure that aleph_2 allows for no special trees. This approach can be modified for finitely many cardinals, if you thread the forcings together appropriately. The Gitik approach can be used to ensure that a singular cardinal doesn't allow for special trees. For his result, Unger put these approaches together in a supercompact Prikry-like forcing which very carefully collapses certain intervals of cardinals. The details are much too complicated for me to understand with my current knowledge of forcing.
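For reference, the standard definition being weakened here (my phrasing): for an infinite cardinal κ,

\[
\text{a } \kappa^+\text{-tree } T \text{ is special iff } T = \bigcup_{\alpha < \kappa} A_\alpha \text{ with each } A_\alpha \text{ an antichain},
\]
\[
\text{equivalently, iff there is } f : T \to \kappa \text{ which is injective on chains}.
\]

A special tree has no cofinal branch in any outer model in which κ⁺ remains a cardinal, which is why "no special trees" is a natural poor man's weakening of the tree property.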
Even though it is outside of my skill set, these super high tech forcings are quite interesting, and it's good to see people like Unger, who have intuition and a big picture in mind, present about them.
Monday, October 3, 2016
Generalizing Silver's Theorem Part 2
In this post, I will be proving Silver’s theorem the forcing way. Recall the statement.
Theorem 1
Suppose that E is a lightface co-analytic equivalence relation on the reals (in this case taken to be Cantor space: infinite strings of 0’s and 1’s). Then either E has countably many equivalence classes, or there is a perfect set of pairwise E-inequivalent elements.
Proof.
From here on out I will be referring to lightface analytic sets as Σ11, co-analytic sets as Π11, and sets which are both as Δ11.
As in the topological proof, I will be using lightface analytic sets, and exploiting their properties. The first two of these properties are reasonably straightforward:
- There are only countably many Σ11 sets, Π11 sets, and Δ11 sets.
- Disjoint Σ11 sets can be separated by a Δ11 set. That is, whenever A and B are disjoint Σ11 sets, there is a Δ11 set C so that A ⊆ C and B ∩ C = ∅. This is equivalent to the reduction property for Π11: for any Π11 sets A and B, we can find disjoint Π11 sets A′ and B′ so that A′ ⊆ A, B′ ⊆ B and A′ ∪ B′ = A ∪ B.
This proof really consists of a test, the results of which provide construction principles. In words, the test is as follows: for every real x, is there a Δ11 set A so that x ∈ A ⊆ [x]E? If the answer is yes, then the fact that Δ11 is countable implies that E has countably many equivalence classes. We certainly cannot have more equivalence classes than elements of Δ11 in this case. So what happens if the answer is no? This is where we will use forcing to create a copy of 2ω which consists of pairwise E-inequivalent elements.
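To spell out the counting in the "yes" case (routine, but worth recording): fix an enumeration ⟨A_n : n ∈ ω⟩ of the Δ11 sets, and to each E-class C assign

\[
n(C) = \min\{\, n : \emptyset \neq A_n \subseteq C \,\}.
\]

This is well-defined, since applying the positive answer to any x ∈ C produces some A_n with x ∈ A_n ⊆ [x]_E = C; and it is injective, since distinct classes are disjoint while ∅ ≠ A_n ⊆ C pins down C. So there are at most countably many classes.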
Let Q be the collection of non-empty Σ11 sets. For A,B ∈ Q, we say that A ≤ B (A extends B) iff A ⊆ B. I will refer to elements of Q as conditions. This is because they include some reals and exclude others. Intuitively, extending conditions in this partial order corresponds to filtering out reals. Now let S = {x ∈ ℝ : (∃ A ∈ Δ11) [x ∈ A ⊆ [x]E]} and set W to be the complement of S. If the answer to the question is no, W is non-empty. What I want now is for W to be a Σ11 set. That way I can force below W. The information that W imparts will guarantee that the forcing produces pairwise E-inequivalent reals.
Claim 1
W is Σ11.
Reason.
It suffices to show that S is Π11. We create a coding of the Δ11 sets in order to make this computation. Let U ⊆ ω × ℝ be universal for Π11. Define Π11 sets V1 and V2 by
- V1(⟨ n,m ⟩,x) ⇐⇒ U(n,x) and
- V2(⟨ n,m ⟩,x) ⇐⇒ U(m,x).
Apply the reduction property to the pair (V1, V2) to obtain disjoint Π11 sets V1′ and V2′. Then consider those ⟨ n,m ⟩ for which

(∀ y ∈ ℝ)[V1′(⟨ n,m ⟩,y) ∨ V2′(⟨ n,m ⟩,y)],

that is, ⟨ n,m ⟩ finds a section of U which codes complementary Π11 sets. Call the set of such codes C. C is Π11. Now
x ∈ S ⇐⇒ (∃ n,m ∈ ω)[⟨ n,m ⟩ ∈ C ∧ V1′(⟨ n,m ⟩,x) ∧ (∀ y ∈ ℝ)[¬ V2′(⟨ n,m ⟩,y) ⇒ x E y]].

This shows that S is Π11.
With this knowledge of W in hand, it makes sense to speak of the conditions in Q below W. Call this poset P. I will use this poset to produce pairwise E-inequivalent reals. For the uninitiated, forcing with a poset involves three major pieces.
- Dense subsets of P (these are dense in the intuitive way; every condition can be extended to be inside the set). Topologically these correspond roughly to comeager sets.
- Generic filters for P: G is a generic filter if it is closed upwards in the ≤ order, is closed under "meets" in the sense that if p,q ∈ G, then there is an r ∈ G so that r ≤ p,q, and G has non-empty intersection with every dense subset of P.
- Names for objects which can be described by P. These are essentially collections of pairs (A,τ) where A is a condition in P and τ is another name. The construction of names is inherently recursive.
For sentences ϕ (which are allowed to reference names) and conditions A, the notation A ⊩ ϕ means that whenever G is generic, if A ∈ G, then ϕ is true in V[G]. This symbol means that A forces ϕ. It is an important theorem in forcing that if a statement ϕ is true about V[G], then it was forced to be true by some condition in the poset. To work with the forcing in this context, I will restrict down to a countable structure M for which enough of ZFC is true for the proofs to go through. This allows for the explicit construction of generics. Actually, what I want is a little bit stronger. By first taking a sub-universe of enough of ZFC, and then taking a countable elementary substructure M, I can assume that there is an embedding ϕ:M → V which preserves all of the first order statements which come up in this proof. This small universe does not have Q or P, but M has ϕ−1(P) and ϕ−1(Q), which I will refer to as P̄ and Q̄. As general notation, the bar will be used to denote the ϕ-inverse of a set.
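The "important theorem" invoked above is the forcing theorem; in the notation just introduced, its content is

\[
V[G] \models \varphi \iff (\exists A \in G)\ [A \Vdash \varphi].
\]

The right-to-left direction is just the definition of ⊩ given above; the left-to-right direction is the substantive half.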
Now the poset Q I am considering has a particularly nice interplay with its generics.
Claim 2
Suppose G is a generic filter on Q. Then G gives rise to a real xG so that xG ∈ A iff A ∈ G.
Reason.
I use cylinder sets to construct the name. For a finite string s of 0’s and 1’s, N(s) is those reals which have s as an initial segment. Consider the name ẋ = {(ṡ,A) : A ⊆ N(s)}. Then if G is a generic filter for Q over M, this name at least guarantees that ẋG ∈ ℝ. It suffices to show that A ⊩ ẋ ∈ A for every A ∈ P. Since A is Σ11, I can write A = p[T], where T is a recursive tree on ω × ω. Then
A ⊩ ẋ ∈ A ⇐⇒ A ⊩ (∃ y ∈ ℝ)(∀ n ∈ ω)[(ẋ|n,y|n) ∈ T].

I will first check that this does not depend on our choice of T. A could have many representations, and this computation should not depend on the representation. So suppose T0 and T1 are recursive trees on ω × ω so that A = p[T0] = p[T1]. Suppose G is generic for P over M. I need to see that
ẋG ∈ p[T0] ⇐⇒ ẋG ∈ p[T1].

In M, it is true that (∀ x ∈ ℝ)[x ∈ p[T0] ⇐⇒ x ∈ p[T1]]. Since this is a simple enough statement (Π21), Shoenfield absoluteness declares it true in M[H] as well, no matter which generic H is chosen. So the forcing statement is true.
Now suppose towards a contradiction that A ⊮ ẋ ∈ A. It is part of the formal definition of ⊩ that there then must be a condition B ≤ A so that B ⊩ ẋ ∉ A. Then B ⊩ ẋ ∉ B. Thus WLOG A ⊩ ẋ ∉ A. Call this element A0. This is not quite a contradiction, but it is almost there. Write A0 = p[T0]. I will use A0 to build a generic filter G so that in M[G], ẋG ∉ A0 and ẋG ∈ p[T0]. In V, let {Dn : n ∈ ω} be the dense subsets of Q from M. I want to fix the first coordinate of the reals in A0, fix a valid first coordinate for a witness that those reals are in A0, and extend A0 to meet D1. Let s0 ∈ ω and t00 ∈ ω be so that
A0′ = {x : (∃ y ∈ ℝ)[(x(0),y(0)) = (s0,t00) ∧ (x,y) ∈ [T0]]} ⊆ A0.

This fixes the first digit of reals inside A0 and the first digit of a potential witness y which proves that those reals are in A0. Let A1 ⊆ A0′ ∩ N(s0) be so that A1 ∈ D1.
Continue this way inductively, defining An, Tn, sn, tnm, and An′ for n ∈ ω and m ≤ n so that
- An = p[Tn],
- sn,tnm ∈ ωn,
- sn ⊑ sn+1,
- tnm ⊑ tn+1m,
- An′ = {x : sn ⊑ x ∧ (∃ y⃗ ∈ ∏m ≤ n N(tnm))(∀ m ≤ n)[(x,ym) ∈ [Tm]]},
- An ∈ Dn, and
- An ≥ An′ ≥ An+1.
Define G by

A ∈ G ⇐⇒ (∃ n ∈ ω)[An ≤ A].

This is a generic filter for Q. By construction, ẋG = limn → ∞ sn. I also assumed that A0 ⊩ ẋ ∉ A0. Thus ẋG ∉ A0. Let y = limn → ∞ tn0. Then (ẋG,y) ∈ [T0]. So ẋG ∈ p[T0] = A0. This is a contradiction.
With this in mind, I will freely identify a real x as Q-generic (or P-generic). At this point, to meaningfully interact with the equivalence relation, I need to be working in ℝ2. Unfortunately, there are two ways to lift Q and P up to ℝ2. The first way is the simpler, and will be denoted Q × Q. Conditions in Q × Q are simply ordered pairs (A,B), where A,B ∈ Q. The second way is slightly more complicated. Q2 is the version of Q defined off of ℝ2. That is A ∈ Q2 iff A ⊆ ℝ2, A is Σ11 and A ≠ ∅. Syntactically, this means there is a recursive tree T on ω × ω × ω so that (x,y) ∈ A iff there is a z ∈ ℝ so that (x,y,z) ∈ [T].
Let P × P be the product poset of P, and P2 = {A ∈ Q2 : A ≤ W × W}. Note that if G is Q2 generic (or P2-generic), then G defines a pair of reals (xG,yG). With the name (x,y) = {(ṡ,ṫ,A) : A ⊆ N(s) × N(t)}, the same proof as for one dimension shows that
(x,y)G ∈ A ⇐⇒ A ∈ G.
Now the question is how do these two-dimensional posets interact with the one-dimensional version. The answer is quite well! I just have to define some projection operations, and then there is a smooth translation between the posets. For A ∈ Q2, we let
πL(A) = {x ∈ ℝ : (∃ y ∈ ℝ)[(x,y) ∈ A]} and πR(A) = {y ∈ ℝ : (∃ x ∈ ℝ)[(x,y) ∈ A]}.
Then πL(A),πR(A) ∈ Q, as Σ11 is closed under projections. If G2 is Q2-generic, let

πL(G2) = {A ∈ Q : (∃ B ∈ G2)[πL(B) ⊆ A]},

and πR(G2) is similarly defined.
One particularly nice interaction is that generic reals pass back and forth. I am proving this for Q2 and the barred version for notational simplicity and because I can.
Claim 3
If (x,y) is Q2 generic, then x and y are generic for Q.
Reason.
Suppose (x,y) is Q2 generic. Let G2 be the associated Q2 generic. I will show that x is Q-generic, and the argument for y is symmetric. Let G = πL(G2). I will show that G is generic. Let D be dense for Q. Let A2 ∈ Q2, and set A = πL[A2]. Then as A ∈ Q, there is a B ≤ A in Q so that B ∈ D. Define B2 by
B2 = (B × ℝ) ∩ A2.
Then B2 is Σ11 and B2 ≠ ∅. So B2 ∈ Q2. Note that πL[B2] = B ∈ D. Thus {X ∈ Q2 : πL[X] ∈ D} is dense in Q2. So there is an X ∈ G2 so that πL[X] ∈ D. Thus G ∩ D ≠ ∅, and so G is generic. Moreover,

x ∈ A ⇐⇒ (x,y) ∈ A × ℝ ⇐⇒ A × ℝ ∈ G2 ⇐⇒ A ∈ G.

So G is the generic associated to x.
Putting this all together, it finally comes out that forcing below W produces reals which are pairwise E-inequivalent.
Claim 4
If (x,y) is P × P-generic, then (x,y) ∉ E.
Reason.
BWOC, suppose that (A,B) ∈ P × P is such that (A,B) ⊩P × P ((x,y) ∈ E). Consider (A × A, B) ∈ P2 × P, writing A2 for A × A. Note that A2 ∩ (ℝ2 − E) is Σ11 and non-empty: otherwise A × A ⊆ E, so that A ⊆ [x]E for any x ∈ A, and separating the disjoint Σ11 sets A and ℝ − [x]E by a Δ11 set C would give x ∈ A ⊆ C ⊆ [x]E, contradicting A ≤ W. Thus A2 ∩ (ℝ2 − E) ∈ P2. Let G2 be P2-generic over V so that A2 ∩ (ℝ2 − E) ∈ G2. Let G be P-generic over V[G2] so that B ∈ G. Then in V[G2][G], G2 × G defines a generic triple (x,y,z). Now
- in V[G2], (x,y) ∉ E (as A2 ∩ (ℝ2 − E) ∈ G2),
- in V[πL(G2)][G], x E z (as B ∈ G, (x,z) is P × P-generic, and (A,B) ∈ πL(G2) × G) and
- in V[πR(G2)][G], y E z (as B ∈ G, (y,z) is P × P-generic, and (A,B) ∈ πR(G2) × G).
in V[G2][G], [(x E z) ∧ (y E z) ∧ ((x,y) ∉ E)].

Therefore, in V[G2][G], E is not an equivalence relation. But to say E is an equivalence relation is a Π21 statement. So by Shoenfield absoluteness again, in V[G2][G], E is an equivalence relation. This is a contradiction.
Now I will build a perfect set of generics for P. To do so, I have to return to the countable sufficiently elementary submodel M. Note that by elementarity,
M ⊨ [∅ ⊩P × P ((x,y) ∉ E)].

The construction is inductive. I will create Σ11 sets As for all finite strings s in such a way that incompatible strings correspond to disjoint Σ11 sets and each descending sequence of A’s produces a generic. If I can follow this through, the set of such generics will be perfect, and they must also be pairwise E-inequivalent.
Let {Dn : n ∈ ω} enumerate the dense sets in P × P which lie in M. Let (A0,A1) ∈ P × P be so that A0 ⊩P ẋ(i0) = a0 and A1 ⊩P ẋ(i0) = b0 for some i0, with a0 ≠ b0. There are A00, A01 ≤ A0 and A10, A11 ≤ A1 so that A00 ∩ A01 = ∅ and A10 ∩ A11 = ∅. I can then find A′ij ≤ Aij so that A′ij ∈ D0. I simply continue in this way (a short wrap-up of how this finishes follows the list), defining As,A′s ∈ P for s ∈ 2<ω so that
- for all s,t ∈ 2<ω, if s ⊑ t, then At ≤ A′s,
- for all s ∈ 2<ω, A′s ≤ As,
- for all s ∈ 2<ω, As⌢0 ∩ As⌢1 = ∅, and
- for all s ∈ 2<ω, A′s ∈ D|s|−1.
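To record how the construction finishes (my write-up of the step being set up above, modulo the bookkeeping in the list): each branch of 2ω induces a generic.

\[
\text{For } z \in 2^\omega, \text{ let } G_z = \{ A : (\exists n)\,[A'_{z \restriction n} \leq A] \}, \qquad x_z = \dot{x}^{G_z}.
\]

For z ≠ z′, the associated pair of filters meets every Dn, so (x_z, x_{z′}) is generic for the product poset over M; by the statement forced above (together with the absoluteness of the Π11 relation E between M's extensions and V), the reals x_z are pairwise E-inequivalent. The disjointness clause As⌢0 ∩ As⌢1 = ∅ makes z ↦ x_z injective and continuous, so its range is the desired perfect set.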
This version is reasonably slick, and doesn’t use the axiom of choice in ways which are incompatible with AD. However, it doesn’t actually generalize all that well. If instead of being co-analytic, the equivalence relation is merely co-κ-Suslin (a set A is κ-Suslin if it can be written as A = p[T], where T is a tree on ω × κ), the same proof will not go through, as I will lose features of Σ11 which made some of these complexity computations come out just so. So what’s the solution? Infinitary logic and admissible ordinals. Or, in my case, ∞-Borel codes and admissible ordinals. Look for that next time.