I've created this blog to document my experience as a research mathematician. My broad interests are logic and set theory, and I am currently focused on descriptive inner model theory. In addition to posting my daily progress, I hope to include the occasional opinion post or interesting article.
Saturday, October 29, 2016
My reference letters are all finally uploaded, and I've been working on cover letters all day. Things are starting to feel very real. I am feeling quite nervous about this whole thing, but maybe I will feel better once I submit my first application.
In other news, I presented twice at UNT this week. Both times I talked about Jonsson cardinals from the ZFC perspective. It was an interesting experience catering similar information to different audiences: the graduate logic group and the algebra seminar. The talk for the graduate logic group seemed to go quite well, and I must thank Christopher Lambie-Hanson for his excellent presentation of this material during the UCI summer school, as my talk was based very strongly on this material. For the algebra seminar, I tried to use the topic to motivate some basic objects of set theory from algebraic concerns. I didn't get through all of the material I wanted to (in retrospect I was overambitious), and I initially felt it had not gone so well. A little after the talk, however, the organizer of the algebra seminar approached me and said she had enjoyed the talk, and she even reassured me about that style of presentation for job talks. She was almost certainly being too nice, but it made me feel better regardless. In any case, it seemed to go better than when I presented to them about Whitehead's problem years ago. I still haven't quite figured out a quick way of presenting ordinals and cardinals which preserves enough information to be worth it.
Monday, October 24, 2016
10/22/2016 UIC Workshop Day 3
Starting off this day, Justin Moore finished up his tutorial. Today was more about generating certain kinds of trees in such a way that CH is not violated. This, of course, continues the theme of the previous day, as the existence of certain trees is known to follow from MA and be denied by diamond. So if one of these trees can be forced to exist while maintaining CH, it provides evidence that diamond was necessary. In particular, Moore covered some of the details of a forcing which is completely proper, adds a special tree, and is still completely proper in the extension. It's interesting that when I read through the material on trees in Kunen, I mistakenly thought of it as a kind of curiosity and as mostly completed work. I was way wrong, but I'm happy that I was, as the discussion of these tree properties, square properties, and other combinatorial properties generates a lot of interesting mathematics.
Anush Tserunyan was the next speaker, and her talk was on "Integer Cost and Ergodic Actions." She spent a little while acclimating the group to the ideas of invariant descriptive set theory, and although that was review for me, her perspective and intuitions were helpful. In particular, she focused on the interplay between group actions, equivalence relations, and graphs. These connections provide background to try and answer the questions "What is the most efficient presentation of an equivalence relation E?" and "What is the smallest number of edges required to create a graph G which understands E?" Towards a qualitative answer to these questions, you can see if there is an acyclic Borel graph which captures the equivalence relation. A good example here is the equivalence relation generated from the free action of a free group. More quantitatively, using a measure you can integrate the number of edges, then take an infimum and obtain a cost function, as sketched below. Gaboriau showed in 1998 that this infimum is achieved precisely when the equivalence relation can be captured by an acyclic Borel graph. Hjorth continued in this vein, looking more specifically at countable ergodic measure preserving equivalence relations. If the cost is an integer or infinite, then the equivalence relation can be seen as generated by the free action of a free group with at most countably many generators. Anush created tools which not only simplify the proof of this substantially, but allow her to strengthen its conclusion to add that the subactions generated by individual generators are ergodic as well. Anush is always a good speaker, and since I normally see her speak at descriptive set theory sessions, it was interesting to see how she catered her material to a more pure set theory audience. Her ability to live in both the world of set theory and that of analysis is something I strive for. As a final note, something in her discussion of these graph representations shook loose some ideas I've been having about the qualitative characterizations of various quotients of the reals. Even though E_0 cannot be linearly ordered, it can be represented by an acyclic graph, and in that sense its partial order structure is less complicated than, say, its product with the equivalence relation generated by the free group on two generators acting freely. So it seems that to separate E_0 and l_1 qualitatively, I should be looking at ways in which they can be represented.
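For reference, the quantitative definition alluded to above, as I understand it (my transcription of the standard definition, not taken from her slides): given a Borel graphing G of E, that is, a Borel graph on a probability space (X, μ) whose connected components are exactly the E-classes,

```latex
% Cost of a graphing G, and the cost of the equivalence relation E:
C_\mu(G) = \frac{1}{2}\int_X \deg_G(x)\, d\mu(x),
\qquad
\mathrm{Cost}_\mu(E) = \inf\{\, C_\mu(G) : G \text{ is a Borel graphing of } E \,\}.
```

The factor of 1/2 just keeps each edge from being counted at both of its endpoints.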
In the afternoon, Matt Foreman gave the first two parts of his tutorial, "Applications of Descriptive Set Theory to Classical Dynamical Systems." This was based on the same slides as when I saw Foreman speak at CUNY, although this time he catered the talk to an audience of peers as opposed to graduate students. As such, he included a number of insights that were not in the CUNY talk. Also, he had more time here, so he was able to go into more detail. As before, the problem is to see if it is possible to classify the ergodic measure preserving diffeomorphisms of compact manifolds. This problem was originally posed by von Neumann in the 1930s. You can argue that attempts to solve it are in the background of many of the significant developments in dynamics, such as the notion of entropy, although it's not clear that the field is still motivated by it today. Regardless, finding a solution to an 80-year-old problem is impressive. The answer is no: there is no simple way to classify the ergodic measure preserving diffeomorphisms of compact manifolds. More precisely, this problem is more complicated than the most complicated problems which have algebraic invariants: those isomorphisms which can be seen as actions of the infinite permutation group. If you drop the diffeomorphism aspect, and just look at ergodic measure preserving transformations, the problem is complete analytic, which is as bad as it could be. This uses the fact that odometer based dynamical systems are in some sense generic in the space of dynamical systems. Foreman and Weiss, however, really wanted to solve von Neumann's problem. To do this, they created a new type of dynamical system motivated by odometer systems, called circular systems. While it is still unknown if an arbitrary odometer system can be realized as an ergodic measure preserving diffeomorphism, they were able to show that circular systems can be. There is also a nice isomorphism between the class of odometer based systems and the class of circular based systems. Putting this all together, you get that the diffeomorphism problem is also complete analytic.
Finishing off the third day, Maxwell Levine spoke on "Weak Squares and Very Good Scales." I saw a twenty minute version of this talk at the CUNY graduate student conference. When I saw it there I knew absolutely no pcf theory, and I had a hard time tracking what was happening. Now that I've done the pcf summer school and Maxwell had an hour, I think I got a much better idea of what he is doing. There are three kinds of combinatorial principles in play in his work: square principles, stationary reflection, and scales. Stationary reflection is a kind of compactness property, and can be provided by large cardinals. Square principles are a kind of incompactness; they occur in models like L, and for example can provide for the existence of an almost metrizable space which is not itself metrizable. A theorem of Shelah says that scales exist, but the existence of a very good scale contradicts stationary reflection. What's somewhat odd is that square properties can coexist with stationary reflection; even a strong square property can. The weakest square property in particular can even coexist with simultaneous stationary reflection, but it is not immediately clear if the second weakest square can. One way to check this is to see if the second weakest square implies the existence of a very good scale. Maxwell was able to construct a model where the weakest square holds and there are no very good scales; however, the second weakest square still fails in this model. So he next tried to see if it is possible for the second weakest square to hold while there are no very good scales. The answer is yes, and it can even happen at relatively small cardinals (aleph_omega). Complicating the picture even more, in the presence of large cardinals this second weakest square property implies that the strongest form of simultaneous stationary reflection fails. In fact, Maxwell is able to explicitly characterize why this reflection is failing. What I like about this work now is that Maxwell is basically taking the theory of cardinal arithmetic apart piece by piece and seeing which steps were really necessary. At the very least I feel like I have a better understanding of the machinery of cardinal arithmetic after seeing him speak.
Sunday, October 23, 2016
10/21/2016 UIC Workshop Day 2
Justin Moore started off day 2 with a tutorial on "Iterated Forcing and the Continuum Hypothesis." The goal is to develop forcings which preserve CH while obtaining partial consequences of combinatorial principles which, in their full strength, contradict CH. As most of the forcings required to instantiate these principles require iterated forcing, the bulk of this talk focused on possible conditions on a poset which guarantee that even after an iteration of some length, new reals have not been added. This is more subtle than it may initially seem, as the act of iteration can add reals even when none of the forcings being iterated do so on their own. The first thing one of these posets must do is preserve stationary subsets of aleph_1. There is a reasonably straightforward example which adds reals after being iterated, and the precise reason it does so is because it fails to preserve stationary sets. This leads to the notion of proper forcing. However, this is not enough. A more subtle example shows that an iteration of proper posets which individually do not add reals can still add reals in the end. One answer is to look at something called completely proper posets. This is a technical definition, but the upshot is that for a completely proper poset, certain information about its generics doesn't depend on the model being forced over. A theorem of Shelah shows that countable support iterations of completely proper posets with one of two extra assumptions are still completely proper, and thus do not add reals. Interestingly enough, these extra assumptions really are separate and not obviously subsumed by some more general assumption. Again, these advanced forcing talks really are beyond my skill set with forcing, but I am enjoying soaking up the culture and philosophy around them.
In the first part of the afternoon, Menachem Magidor finished up his tutorial on "Compactness and Incompactness Principles for Chromatic Numbers and Other Cardinal Sins." This talk started off with examples of both compactness and incompactness theorems for bounds on chromatic numbers in different settings. If stationary reflection fails, then bounds for chromatic numbers are not compact. On the other hand, in the presence of a countably complete ultrafilter, having chromatic number bounded by aleph_0 is a strongly compact property. After this, Magidor talked about some of his joint work with Bagaria in which they relate these compactness properties to other abstract compactness principles. While compactness principles are obtainable for large cardinals, getting them at small cardinals, say aleph_n for some n, is another matter, and involves large cardinals of noticeably higher consistency strength. I do want to check out how some of these chromatic number questions play out under AD.
Omar Ben Neria went next. He spoke on "The Distance Between HOD and V." The question is, up to consistency, how small can HOD be? There are a couple of ways to measure this: one way for HOD to be large is for it to satisfy a kind of covering lemma, and a way for HOD to be small is for cardinals in V to be large cardinals in HOD. Woodin has shown from an extendible cardinal that a version of Jensen's dichotomy holds between V and HOD. Ben Neria and Unger were able to strengthen a result of Cummings, Friedman, and Golshani which shows that it is consistent for covering to fail for HOD. In fact, it is consistent that every uncountable regular cardinal in V is huge in HOD. One important piece of technology for the proof is weakly homogeneous forcing. This ensures that the HOD of the forcing extension does not escape the ground model. Another important technique is to use iterated forcing with non-stationary support. I don't understand the intense modified iterated forcing going on here, but the result is nice. I asked about getting all uncountable cardinals of V to be Jonsson cardinals in HOD, and it seems that their construction does achieve this, although at the cost of at least a measurable cardinal at the moment. I'm not sure what an optimal consistency strength would be. Ben was kind enough to point me to some information which may help me attack the consistency problem of all uncountable cardinals below Theta being Jonsson in L(R).
To finish out the day, Nam Trang gave a talk on the "Compactness of Omega_1." Under the axiom of determinacy, omega_1 has some supercompactness-type properties. For some fixed set X, Trang was interested in finding minimal models satisfying the claim "omega_1 is X-supercompact." Even under ZF, this implies some failures of square. More than just implying that omega_1 is strongly compact with respect to the reals, it turns out that AD and this property are equiconsistent. The statement that omega_1 is supercompact with respect to the reals is equiconsistent with an even stronger form of determinacy. Moving up with the power set operation on R, the compactness properties become equiconsistent with stronger forms of determinacy still. Trang, in joint work with Rodriguez, was able to show that there is a unique model of a certain form which thinks that omega_1 is R-supercompact. I always enjoy hearing about these higher forms of determinacy and their corresponding models, and these equiconsistency results add yet another connection between the theory of large cardinals and the theory of determinacy.
Saturday, October 22, 2016
10/20/2016 UIC Workshop Day 1
Menachem Magidor started the morning off with the first tutorial. He is speaking on "Compactness and Incompactness for Chromatic Numbers and Other Cardinal Sins." He phrases compactness in the intuitive way: "if every small substructure has a property, then the whole structure does," where small means lower cardinality. He gave quite a few examples of compactness phenomena, stretching across algebra, topology, and set theory. I really liked his intuitive descriptions of weakly compact, strongly compact, and supercompact cardinals:
- A cardinal is weakly compact with respect to a certain property if the property passes to small pieces of it,
- A cardinal is strongly compact with respect to a certain property if the property passes to all pieces of it, and
- A cardinal is supercompact if it is strongly compact with respect to all second order properties.
The first talk in the afternoon was by Sherwood Hachtman. He spoke on "Forcing Analytic Determinacy." Essentially, this is related to the reverse math question of how much set theory you need to get that lightface analytic determinacy is equivalent to 0# existing. More combinatorially, you can ask about determinacy with respect to Turing cones. While Woodin conjectured that Z_2 is enough strength (and it certainly is for boldface analytic determinacy), recent work by Cheng and Schindler seems to indicate that Z_3 may be necessary. Hachtman didn't have concrete answers to this yet, but inspired by the technique of genericity iterations, he has forced Turing cones to be inside of some sets which should have them.
This all was pretty interesting to me as I didn't know that anybody was thinking about reverse math questions surrounding determinacy. The classic result about Borel determinacy is certainly cool enough to warrant more attention, so I'm not sure why I had never really thought about it. It would be neat if the lightface analytic determinacy really was force-able, and thus an interesting intermediate stage between Borel determinacy and full boldface analytic determinacy.
Finishing out the first day, Spencer Unger presented the "Poor Man's Tree Property." People in the audience said it was like watching a mini James Cummings present. The idea is to replace the tree property with the weaker assumption that there are no special trees. This is consistency work, so the goal is to find a forcing which can ensure that there are no special trees. Because of some obstructions, such a model will have to fail GCH and SCH everywhere. Combining approaches of Magidor and Gitik, Unger was able to create a large interval of cardinals, starting at aleph_2 and passing some singular cardinals, on which there are no special trees. The modified Magidor approach works from the ground up and can ensure that aleph_2 allows for no special trees. This approach can be modified for finitely many cardinals, if you thread the forcings together appropriately. The Gitik approach can be used to ensure that a singular cardinal doesn't allow for special trees. For his result, Unger put these approaches together in a supercompact Prikry-like forcing which very carefully collapses certain intervals of cardinals. The details are much too complicated for me to understand with my current knowledge of forcing.
Even though it is outside of my skill set, these super high tech forcings are quite interesting, and it's good to see people like Unger, who have intuition and a big picture in mind, present about them.
Monday, October 3, 2016
Generalizing Silver's Theorem Part 2
In this post, I will be proving Silver’s theorem the forcing way. Recall the statement.
Theorem 1
Suppose that E is a lightface co-analytic equivalence relation on the reals (in this case taken to be Cantor space: infinite strings of 0’s and 1’s). Then either E has countably many equivalence classes, or there is a perfect set of E-inequivalent elements.
Proof.
From here on out I will be referring to lightface analytic sets as Σ11, co-analytic sets as Π11, and sets which are both as Δ11.
As in the topological proof, I will be using lightface analytic sets, and exploiting their properties. The first two of these properties are reasonably straightforward:
- There are only countably many Σ11 sets, Π11 sets, and Δ11 sets.
- Disjoint Σ11 sets can be separated by a Δ11 set. That is, whenever A and B are disjoint Σ11 sets, there is a Δ11 set C so that A ⊆ C and B ∩ C = ∅. This is equivalent to the reduction property for Π11: for any Π11 sets A and B, we can find disjoint Π11 sets A′ and B′ so that A′ ⊆ A, B′ ⊆ B, and A ∪ B = A′ ∪ B′.
This proof really consists of a test, the results of which provide construction principles. In words, the test is as follows: for every real x, is there a Δ11 set A so that x ∈ A ⊆ [x]E? If the answer is yes, then the fact that Δ11 is countable implies that E has countably many equivalence classes: we certainly cannot have more equivalence classes than elements of Δ11 in this case. So what happens if the answer is no? This is where we will use forcing to create a copy of 2ω which consists of pairwise E-inequivalent elements.
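Schematically, the dichotomy behind the test (just the preceding paragraph in symbols, with the lightface classes written with their usual superscripts):

```latex
% The two outcomes of the test:
(\forall x)(\exists A \in \Delta^1_1)\,[\, x \in A \subseteq [x]_E \,]
  \implies E \text{ has countably many classes},
\\
(\exists x)(\forall A \in \Delta^1_1)\,[\, x \in A \Rightarrow A \not\subseteq [x]_E \,]
  \implies \text{there is a perfect set of pairwise } E\text{-inequivalent reals}.
```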
Let Q be the collection of non-empty Σ11 sets. For A,B ∈ Q, we say that A ≤ B (A extends B) iff A ⊆ B. I will refer to elements of Q as conditions. This is because they include some reals and exclude others. Intuitively, extending conditions in this partial order corresponds to filtering out reals. Now let S = {x ∈ ℝ : (∃ A ∈ Δ11)[x ∈ A ⊆ [x]E]} and set W to be the complement of S. If the answer to the question is no, W is non-empty. What I want now is for W to be a Σ11 set. That way I can force below W. The information that W imparts will guarantee that the forcing produces pairwise E-inequivalent reals.
Claim 1
W is Σ11.
Reason.
It suffices to show that S is Π11. We create a coding of the Δ11 sets in order to make this computation. Let U ⊆ ω × ℝ be universal for Π11. Define Π11 sets V1 and V2 by
- V1(⟨ n,m ⟩,x) ⇐⇒ U(n,x) and
- V2(⟨ n,m ⟩,x) ⇐⇒ U(m,x).
Apply the reduction property to V1 and V2 to obtain disjoint Π11 sets V1′ and V2′. Then ⟨ n,m ⟩ codes a Δ11 set exactly when
(∀ y ∈ ℝ)[V1′(⟨ n,m ⟩,y) ∨ V2′(⟨ n,m ⟩,y)],
that is, ⟨ n,m ⟩ finds a section of U which codes complementary Π11 sets. Call the set of such codes C. C is Π11. Now
x ∈ S ⇐⇒ (∃ n,m ∈ ω)[⟨ n,m ⟩ ∈ C ∧ V1′(⟨ n,m ⟩,x) ∧ (∀ y ∈ ℝ)[¬ V2′(⟨ n,m ⟩,y) ⇒ x E y]].
This shows that S is Π11.
With this knowledge of W in hand, it makes sense to speak of the conditions in Q below W. Call this poset P. I will use this poset to produce pairwise E-inequivalent reals. For the uninitiated, forcing with a poset involves three major pieces.
- Dense subsets of P (these are dense in the intuitive way; every condition can be extended to be inside the set). Topologically these correspond roughly to comeager sets.
- Generic filters for P: G is a generic filter if it is closed upwards in the ≤ order, is closed under "meets" in the sense that if p,q ∈ G, then there is an r ∈ G so that r ≤ p,q, and G has non-empty intersection with every dense subset of P.
- Names for objects which can be described by P. These are essentially collections of pairs (A,τ) where A is a condition in P and τ is another name. The construction of names is inherently recursive.
For sentences ϕ (which are allowed to reference names) and conditions A, the notation A ⊩ ϕ means that whenever G is generic, if A ∈ G, then ϕ is true in V[G]. This symbol is read "A forces ϕ." It is an important theorem in forcing that if a statement ϕ is true in V[G], then it was forced to be true by some condition in the poset. To work with the forcing in this context, I will restrict down to a countable structure M for which enough of ZFC is true for the proofs to go through. This allows for the explicit construction of generics. Actually, what I want is a little bit stronger. By first taking a sub-universe of enough of ZFC, and then taking a countable elementary substructure M, I can assume that there is an embedding j : M → V which preserves all of the first order statements which come up in this proof. This small universe does not have Q or P, but M has j−1(P) and j−1(Q), which I will refer to as P̄ and Q̄. As general notation, the bar will be used to denote the j-inverse of a set.
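For reference, here are the two standard forcing facts just used, in symbols (my transcription; ϕ is a sentence of the forcing language and G ranges over generic filters):

```latex
% Definition of the forcing relation, and the forcing theorem:
A \Vdash \varphi \;\iff\; \text{for all generic } G,\ \big( A \in G \implies V[G] \models \varphi \big),
\\
V[G] \models \varphi \;\implies\; (\exists A \in G)\,[\, A \Vdash \varphi \,].
```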
Now the poset Q I am considering has a particularly nice interplay with its generics.
Claim 2
Suppose G is a generic filter on Q. Then G gives rise to a real xG so that xG ∈ A iff A ∈ G.
Reason.
I use cylinder sets to construct the name. For a finite string s of 0’s and 1’s, N(s) is those reals which have s as an initial segment. Consider the name ẋ = {(ṡ,A) : A ⊆ N(s)}. Then if G is a generic filter for Q over M, this name at least guarantees that ẋG ∈ ℝ. It suffices to show that A ⊩ ẋ ∈ A for every A ∈ P. Since A is Σ11, I can write A = p[T], where T is a recursive tree on ω × ω. Then
A ⊩ ẋ ∈ A ⇐⇒ A ⊩ (∃ y ∈ ℝ)(∀ n ∈ ω)[(ẋ|n,y|n) ∈ T].
I will first check that this does not depend on our choice of T. A could have many representations, and this computation should not depend on the representation. So suppose T0 and T1 are recursive trees on ω × ω so that A = p[T0] = p[T1]. Suppose G is generic for P over M. I need to see that
ẋ ∈ p[T0] ⇐⇒ ẋ ∈ p[T1].
In M, it is true that (∀ x ∈ ℝ)[x ∈ p[T0] ⇐⇒ x ∈ p[T1]]. Since this is a simple enough statement (Π21), Shoenfield absoluteness declares it true in M[H] as well, no matter which generic H is chosen. So the forcing statement is true.
Now suppose towards a contradiction that A ⊮ ẋ ∈ A. It is part of the formal definition of ⊩ that there then must be a condition B ≤ A so that B ⊩ ẋ ∉ A. Then B ⊩ ẋ ∉ B. Thus WLOG A ⊩ ẋ ∉ A. Call this element A0. This is not quite a contradiction, but it is almost there. Write A0 = p[T0]. I will use A0 to build a generic filter G so that in M[G], ẋG ∉ A0 and ẋG ∈ p[T0]. In V, let {Dn : n ∈ ω} be the dense subsets of Q from M. I want to fix the first coordinate of the reals in A0, fix a valid first coordinate for a witness that those reals are in A0, and extend A0 to meet D1. Let s0 ∈ ω and t00 ∈ ω be so that
A0′ = {x : (∃ y ∈ ℝ)[(x(0),y(0)) = (s0,t00) ∧ (x,y) ∈ [T0]]} ⊆ A0.
This fixes the first digit of reals inside A0 and the first digit of potential witnesses y which prove that those reals are in A0. Let A1 ⊆ A0′ ∩ N(s0) be so that A1 ∈ D1.
Continue this way inductively, defining An, Tn, sn, tnm, and An′ for n ∈ ω and m ≤ n so that
- An = p[Tn],
- sn,tnm ∈ ωn,
- sn ⊑ sn+1,
- tnm ⊑ tn+1m,
- An′ = {x : sn ⊑ x ∧ (∃ y⃗ ∈ ∏m ≤ n N(tnm))(∀ m ≤ n)[(x,ym) ∈ [Tm]]},
- An ∈ Dn, and
- An ≥ An′ ≥ An+1.
Now define G by
A ∈ G ⇐⇒ (∃ n ∈ ω)[An ≤ A].
This is a generic filter for Q. By construction, ẋG = limn → ∞ sn. I also assumed that A0 ⊩ ẋ ∉ A0. Thus ẋG ∉ A0. Let y = limn → ∞ tn0. Then (ẋG,y) ∈ [T0]. So ẋG ∈ p[T0] = A0. This is a contradiction.
With this in mind, I will freely identify a real x as Q-generic (or P-generic). At this point, to meaningfully interact with the equivalence relation, I need to be working in ℝ2. Unfortunately, there are two ways to lift Q and P up to ℝ2. The first way is the simpler, and will be denoted Q × Q. Conditions in Q × Q are simply ordered pairs (A,B), where A,B ∈ Q. The second way is slightly more complicated. Q2 is the version of Q defined off of ℝ2. That is A ∈ Q2 iff A ⊆ ℝ2, A is Σ11 and A ≠ ∅. Syntactically, this means there is a recursive tree T on ω × ω × ω so that (x,y) ∈ A iff there is a z ∈ ℝ so that (x,y,z) ∈ [T].
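To keep the two liftings straight, here they are side by side (just a restatement of the definitions above, with the superscripts written out):

```latex
% Rectangles versus arbitrary non-empty analytic subsets of the plane:
Q \times Q = \{\, (A,B) : A, B \in Q \,\},
\qquad
Q^2 = \{\, A \subseteq \mathbb{R}^2 : A \in \Sigma^1_1,\ A \neq \emptyset \,\}.
```

A condition in Q × Q constrains each coordinate separately, while a condition in Q2 can correlate the two coordinates; the projection operations defined next mediate between the two.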
Let P × P be the product poset of P, and P2 = {A ∈ Q2 : A ≤ W × W}. Note that if G is Q2-generic (or P2-generic), then G defines a pair of reals (xG,yG). With the name (x,y) = {(ṡ,ṫ,A) : A ⊆ N(s) × N(t)}, the same proof as for one dimension shows that
(x,y)G ∈ A ⇐⇒ A ∈ G.
Now the question is how do these two-dimensional posets interact with the one-dimensional version. The answer is quite well! I just have to define some projection operations, and then there is a smooth translation between the posets. For A ∈ Q2, we let
πL(A) = {x ∈ ℝ : (∃ y ∈ ℝ)[(x,y) ∈ A]} and πR(A) = {y ∈ ℝ : (∃ x ∈ ℝ)[(x,y) ∈ A]}.
Then πL(A),πR(A) ∈ Q, as Σ11 is closed under projections. If G2 is Q2-generic, let
πL(G2) = {A ∈ Q : (∃ B ∈ G2)[πL(B) ⊆ A]},
and πR(G2) is similarly defined.
One particularly nice interaction is that generic reals pass back and forth. I am proving this for Q2 and the barred version for notational simplicity and because I can.
Claim 3
If (x,y) is Q2 generic, then x and y are generic for Q.
Reason.
Suppose (x,y) is Q2-generic. Let G2 be the associated Q2-generic filter. I will show that x is Q-generic, and the argument for y is symmetric. Let G = πL(G2). I will show that G is generic. Let D be dense in Q. Let A2 ∈ Q2, and set A = πL[A2]. Then as A ∈ Q, there is a B ≤ A in Q so that B ∈ D. Define B2 by
B2 = (B × ℝ) ∩ A2.
Then B2 is Σ11 and B2 ≠ ∅. So B2 ∈ Q2. Note that πL[B2] = B ∈ D. Thus {X ∈ Q2 : πL[X] ∈ D} is dense in Q2. So there is an X ∈ G2 so that πL[X] ∈ D. Thus G ∩ D ≠ ∅, and so G is generic. Moreover,
x ∈ A ⇐⇒ (x,y) ∈ A × ℝ ⇐⇒ A × ℝ ∈ G2 ⇐⇒ A ∈ G.
So G is the generic associated to x.
Putting this all together, it finally comes out that forcing below W produces reals which are pairwise E-inequivalent.
Claim 4
If (x,y) is P × P-generic, then (x,y) ∉ E.
Reason.
BWOC, suppose that (A,B) ∈ P × P is so that (A,B) ⊩P × P ((x,y) ∈ E). Consider (A × A, B) ∈ P2 × P. Note that (A × A) ∩ (ℝ2 − E) is Σ11 and non-empty. Thus (A × A) ∩ (ℝ2 − E) ∈ P2. Let G2 be P2-generic over V so that (A × A) ∩ (ℝ2 − E) ∈ G2. Let G be P-generic over V[G2] so that B ∈ G. Then in V[G2][G], G2 × G defines a generic triple (x,y,z). Now
- in V[G2], (x,y) ∉ E (as (A × A) ∩ (ℝ2 − E) ∈ G2),
- in V[πL(G2)][G], x E z (as B ∈ G, (x,z) is P × P-generic, and (A,B) ∈ πL(G2) × G), and
- in V[πR(G2)][G], y E z (as B ∈ G, (y,z) is P × P-generic, and (A,B) ∈ πR(G2) × G).
Since E is Π11, the statements x E z and y E z persist from these submodels up to V[G2][G] by absoluteness, so
in V[G2][G], [(x E z) ∧ (y E z) ∧ ((x,y) ∉ E)].
Therefore, in V[G2][G], E is not an equivalence relation. But to say that E is an equivalence relation is a Π21 statement. So by Shoenfield absoluteness again, in V[G2][G], E is an equivalence relation. This is a contradiction.
Now I will build a perfect set of generics for P. To do so, I have to return to the countable sufficiently elementary submodel M. Note that by elementarity,
M ⊨ [∅ ⊩P × P (x,y) ∉ E].
The construction is inductive. I will create Σ11 sets As for all finite strings s in such a way that incompatible strings correspond to disjoint Σ11 sets and each descending sequence of A's produces a generic. If I can follow this through, the set of such generics will be perfect, and they must also be pairwise E-inequivalent.
Let {Dn : n ∈ ω} enumerate the dense subsets of P × P in M. Let (A0,A1) ∈ P × P be so that A0 ⊩P ẋ(i0) = a0 and A1 ⊩P ẋ(i0) = b0 with a0 ≠ b0. In P × P there are A00, A01, A10, A11 so that A00 ∩ A01 = ∅ and A10 ∩ A11 = ∅. I can then find A′ij ≤ Aij so that A′ij ∈ D0. I simply continue in this way, defining As, A′s ∈ P for s ∈ 2<ω so that
- for all s,t ∈ 2<ω, if s ⊑ t, then At ≤ A′s,
- for all s ∈ 2<ω, A′s ≤ As,
- for all s ∈ 2<ω, As⁀0 ∩ As⁀1 = ∅, and
- for all s ∈ 2<ω, A′s ∈ D|s|−1.
This version is reasonably slick, and doesn't use the axiom of choice in ways which are incompatible with AD. However, it doesn't actually generalize all that well. If instead of being co-analytic, the equivalence relation is merely co-κ-Suslin (a set A is κ-Suslin if it can be written as A = p[T], where T is a tree on ω × κ), the same proof will not go through, as I will lose features of Σ11 which made some of these complexity computations come out just so. So what's the solution? Infinitary logic and admissible ordinals. Or, in my case, infinity Borel codes and admissible ordinals. Look for that next time.
Wednesday, September 28, 2016
Generalizing Silver's Theorem Part 1
I should probably start this off with what Silver's theorem is and why it matters. In invariant descriptive set theory, quotients of the real line and other Polish spaces are studied. They are compared to each other via Borel reductions, which are maps that inject one quotient space into another. In this sense, different sizes are generated by taking quotients. For instance, 1 is achieved by identifying all points in the real line with each other. 2 could be formed by identifying all negative numbers together and all non-negative numbers together. In similar ways, all of the finite numbers are constructed. To construct ω, one could identify all points in [n, n+1) together for each n. Finally, the real line itself comes from using the identity equivalence relation. But what about other, more complicated sets? Can larger ordinals be constructed as quotients of the real line? Are there quotients which fit in between ω and the continuum?
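To make "inject one quotient space into another" precise, here is the standard definition of Borel reducibility (my phrasing, not from the original post):

```latex
% E on X Borel-reduces to F on Y:
E \le_B F \;\iff\; \text{there is a Borel } f : X \to Y \text{ such that }
(\forall x_1, x_2 \in X)\,[\, x_1 \mathrel{E} x_2 \iff f(x_1) \mathrel{F} f(x_2) \,].
```

Such an f descends to an injection of X/E into Y/F, which is the sense in which one quotient is at most as big as another.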
One way that ω1 can be created is to consider reals (now as elements of ωω) as coding partial orders on the naturals. Those that code well-orders can be assigned the corresponding ordinal that well-order is in bijection with. Give reals which don't code well-orders a formal rank of ∞. Then identify reals together if they have the same rank. The resulting quotient is in bijection with ω1. This construction heavily used the set of reals which code well-orders, which is a prime example of a non-Borel set. It is in fact co-analytic. That makes this equivalence relation much more complicated than Borel. It is in fact more complicated than analytic or co-analytic, as seeing if two reals are equivalent requires searching a co-analytic set. So if we allow equivalence relations which are more than co-analytic, complicated sets can arise as quotients. What about co-analytic sets themselves? Silver's theorem tells us that if a quotient space is created by a co-analytic equivalence relation, then either the quotient is countable or the real line embeds into the quotient. Basically, for co-analytic equivalence relations, we have obtained a form of the continuum hypothesis.
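As a toy illustration of the coding step (my own sketch: the function names and the choice of Cantor pairing are mine, not from the post), here is how a single real, viewed as a function from the naturals to {0,1}, codes a binary relation on the naturals:

```python
# A sketch of coding a binary relation on the naturals by a real,
# i.e. by an infinite 0-1 sequence, via a pairing function.

def pair(n: int, m: int) -> int:
    """Cantor pairing function: a bijection from N x N to N."""
    return (n + m) * (n + m + 1) // 2 + m

def relates(x, n: int, m: int) -> bool:
    """Interpret the real x as a relation R, where n R m holds iff
    the pair(n, m)-th bit of x is 1."""
    return x(pair(n, m)) == 1

def codes_less_than(k: int) -> int:
    """The particular real whose bits code the usual order < on the
    naturals.  Inverts the pairing function by brute force, which is
    fine for a demo since n, m <= pair(n, m)."""
    for n in range(k + 1):
        for m in range(k + 1):
            if pair(n, m) == k:
                return 1 if n < m else 0
    return 0

# The coded relation agrees with <:
assert relates(codes_less_than, 2, 5)      # 2 < 5
assert not relates(codes_less_than, 5, 2)  # not 5 < 2
```

Whether such a coded relation is actually a well-order is what cannot be checked in a Borel way: it quantifies over all infinite descending sequences, which is where the co-analyticity comes from.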
In these posts I am going to prove Silver's theorem twice, and then I will prove its generalization, the Harrington-Shelah theorem. Silver's theorem is commonly proved topologically, using the topology generated by the lightface analytic sets (sets which can be written as the projection of the branches of a recursive tree on ω × ω). Since that is easily found, I am first going to prove it in the language of forcing, with the lightface analytic sets as conditions. This is the way that almost generalizes, but doesn't quite. The proof is to come next time.
9/28/2016 Almost Ready
Everything is almost lined up. The statements are written, minus some editing of the research statement. My letter writers are lined up. The first draft of the paper Steve and I are working on is done. The next phase is to do more research on the universities I am going to apply to and work up cover letters. There are so many things to do right now, but they are all full of strange small tasks. Oh yeah, I am giving three talks at UNT this semester, attending a set theory workshop at UIC in a few weeks, and speaking at Ohio University in November.
Research-wise, Steve and I finally cracked the code between infinitary logic and infinity Borel codes. That filled the last gaps in our purely descriptive set theoretic proof of the Harrington-Shelah theorem. Moving forward, we will be applying these tools to try and generalize the Harrington-Kechris-Louveau theorem.
I'm going to work up a series of posts about Silver's theorem and how it generalizes. The first one should be up tonight.
Saturday, July 30, 2016
7/30/2016 UCI Summer School Week 1
I have now been embedded into the choice users for a week. The weekend provides a brief break I can use to safely gather my thoughts. Initially the choice users were very sensitive to my presence, apologizing to me whenever they blasphemed and applied the dreaded principle. By the end of the week however, they had already become desensitized to my presence. Choice is a hammer to them, and with it in hand, they see most problems as nails of various sizes. Blind persistence is the winning approach to these problems; if you keep hitting a nail it will eventually sink into the wood. And yet, we have discovered common ground. The shapeless, cyclopean morass generated from the power set operation is abhorrent to them as well. While I restrict my attention to the describable and the real to avoid this particular mathematical abomination, they use the tools of pcf and pseudo-power, which, when studied under the lens of choice, are house cats relative to the tiger that is the full power set. One, of course, should never underestimate a house cat.
On a serious note, the talks this week have been generally good, although 6 hours of new math a day for a whole week is genuinely exhausting. Pretty much all of what we have been learning is the work of Shelah: the structure he created to be able to answer questions of singular cardinal combinatorics. James Cummings started the week and provided vital context and pictures that really help motivate and elucidate this somewhat dense material. Bill Chen also presented this week, and by all accounts, he covered a breathtaking amount of information. Thanks to him, we were able to see some of the major theorems of the initial setting of pcf theory (a singular cardinal with uncountable cofinality which is not a fixed point of the Aleph operation). James emphasized the topological connections of this material early on, and with this in mind I think much of what Bill proved can be seen as advanced topological theorems about ordinal spaces. In topology one can study the interplay between density and covering properties, and in pcf theory you can study the interplay between cofinality and covering properties. I think arguments can be made for cofinality being a reasonable replacement for density in ordinal spaces.
Although I am tired, I am looking forward to using this weekend to get some energy back and then jump right into more applications and developments of pcf theory next week.
Monday, July 18, 2016
7/18/2016
Over the weekend, I started working on the introduction to my dissertation and built the first draft of my CV. I've now read through the first chapters of "Introduction to Cardinal Arithmetic" in preparation for the summer school in set theory, which starts next week. I also began reading Magidor's cardinal arithmetic chapter in the handbook today.
I believe we are really quite close to cracking the Harrington-Shelah paper. One more definition needs to be sorted out, and then the rest of the details, while non-trivial, should be able to be filled in. It's frustrating that the original write-up is so incomplete, particularly since we need to generalize out from it. I still agree with Jackson that re-writing the arguments in terms of infinity Borel codes and admissible ordinals is the right approach, but the translation is not quite straightforward. It would be great if there were a source on this.
Friday, July 1, 2016
Some Thoughts on Size and Choice
My thesis (which is admittedly biased given my research topic) is that, while choice makes certain combinatorial questions tractable, it also covers up interesting problems regarding cardinalities.
The first of these distinctions that it obscures is the fundamental difference between surjections and injections. In the choice context, injections and surjections are perfectly dual: any covering map can be turned into an embedding going the other way, and any embedding can be turned into a covering map. Even with the axiom of choice, though, this perfect duality breaks down once more structure is involved. Without choice you can still get surjections from injections, but divining injections from surjections is largely impossible.
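A finite toy sketch in Python (mine, purely pedagogical, all names made up) makes the asymmetry visible: reversing an injection requires no decisions, while reversing a surjection means picking one element out of every fiber, which is exactly the license the axiom of choice grants when the fibers are arbitrary sets.

```python
# Everything here is finite, so all "choices" are mechanical;
# the point is only to show where the choosing happens.

def surjection_from_injection(f, A, default):
    # Given an injection f: A -> B and a fixed element `default` of A,
    # build g: B -> A with g(f(a)) = a for all a in A. Each fiber of f
    # has at most one element, so nothing is chosen: no choice needed.
    inverse = {f(a): a for a in A}
    return lambda b: inverse.get(b, default)

def injection_from_surjection(g, B):
    # Given a surjection g: B -> A, build f: A -> B with g(f(a)) = a.
    # Now each fiber g^{-1}(a) may have many elements, and we must pick
    # one from each. Finitely this is trivial; for arbitrary fibers it
    # is precisely the axiom of choice.
    f = {}
    for b in B:
        f.setdefault(g(b), b)  # the "choice": keep the first representative seen
    return lambda a: f[a]

A, B = [0, 1, 2], ["a0", "a1", "b0", "b1", "c"]
g = lambda b: {"a0": 0, "a1": 0, "b0": 1, "b1": 1, "c": 2}[b]
f = injection_from_surjection(g, B)
assert all(g(f(a)) == a for a in A)
```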
Although this is a simple point, I think it is worth noting how structure is preserved differently by injections and surjections. In particular I want to focus on order structures, although I am sure there are interesting things to say about other types of structure. Now if A maps onto B and has some order structure, nothing about this needs to pass on to B; the fibers of the surjection can interact with the order in very messy ways. However, if A maps into B, then we know two things: that B is capable of containing an ordered substructure with the same order as A, and that an order structure can be put on A which satisfies at least the non-quantified properties of the order structure on B. For example, the power set of the reals maps onto the reals, but without extra universe assumptions, the power set of the reals cannot be linearly ordered. If you can inject a set A into the reals, though, it can clearly be linearly ordered. A need not inherit all of the nice properties of the reals: for instance, the rationals map into the reals but are not closed under suprema. Note also that just because the reals map onto some set, it doesn't mean that set can even be linearly ordered. The example here would be any hyperfinite quotient of the reals.
One kind of structure which acts differently with respect to this distinction is the well-order. If an ordinal maps onto a set B, then we can pull representatives from the fibers of the corresponding surjection. So that ordinal would have to contain a perfect facsimile of B. In particular, if B has a certain order structure, then that ordinal would have to admit an order structure which contains B's order structure as a substructure. On the other hand, it would also mean that B has to itself be well-orderable. In a universe with sets that cannot be well-ordered, this puts severe limitations on what ordinals can map onto. It is actually easier to map an ordinal into a set, as this only forces the set to contain a copy of that ordinal. In particular, the set can fail to admit a global well-order while admitting order structures which are "locally" well-orderable. An interesting example here is that, in L(R), many ordinals can map into the power set of the reals; as we mentioned before, however, this power set cannot even be linearly ordered, let alone well-ordered.
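The choice-free direction deserves to be written down; here is a quick sketch (the standard argument, in my phrasing rather than from any particular source) of why a surjection from an ordinal well-orders its target:

```latex
% If f maps the ordinal alpha onto B, then B is well-orderable, with no choice:
\[
  f\colon \alpha \twoheadrightarrow B
  \quad\Longrightarrow\quad
  h\colon B \hookrightarrow \alpha, \qquad h(b) = \min f^{-1}(b).
\]
% Each fiber f^{-1}(b) is a nonempty set of ordinals, so the min exists, and
% h is injective since distinct points of B have disjoint fibers. Then
\[
  b_1 \prec b_2 \iff h(b_1) < h(b_2)
\]
% well-orders B. An injection alpha -> B gives nothing like this: it only
% plants a copy of alpha inside B.
```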
Without the axiom of choice, the question "Which partial order structures does A admit?" is non-trivial. The question "Is |A| < |B|?" then becomes "For every partial order structure which A admits, is there a partial order structure on B which contains it as a substructure?" I think this re-interpretation is the most intuitive way to "explain" why it's not always possible to compare the cardinalities of two sets.
This leaves me wondering how much size questions should depend on structure questions. The axiom of choice looks to minimize this distinction. On the other end of the spectrum, why should A inject into B unless we have explicitly built B on top of A? In this light, I suspect the field of invariant descriptive set theory is exploring what happens when this distinction is maximized (up to at least assuming countable choice). The framework of determinacy appears to be somewhere in between these two perspectives.
I have a final stray thought. Are different versions of the axiom of choice just different assertions about the lattice structure of the universe? If so, are there unexplored variants of the axiom of choice corresponding to different lattice structures? What kind of structure is compatible with ZF in general?
Wednesday, June 29, 2016
6/29/2016
We finally started to make some headway on the long union situation. It seems like L(R) can in fact understand enough about these unions; it just has to work extra hard for it. The main obstacle currently is that we need to adapt an argument of Shelah and Harrington to generalize the Harrington-Kechris-Louveau result. Their argument is only a page long, but it is thoroughly unreadable. Fortunately, Jackson knows a way of putting the argument together.
In other news, I got first place at BEST. I think the whole judging thing is kind of goofy, but the money is cool and it's nice to feel appreciated.
Friday, June 17, 2016
6/16/2016
Day 2 of BEST.
Erin Carmody was the special speaker for the morning. Her talk was entitled "Killing Them Softly." After describing degrees of inaccessibility with words to a ridiculous level (inaccessible, hyper-inaccessible, richly inaccessible, utterly inaccessible, deeply, truly, eternally, ...), she introduced a better way of describing these degrees using meta-ordinals and something like Cantor normal form. She was able to use variants of this approach to describe degrees of Mahlo cardinals and degrees of measurable cardinals. The real work here, though, is the creation of clever forcings which can bump large cardinals down in degree very sensitively (basically one degree at a time).
Next Douglas Ulrich talked about a new notion of cardinality for countable first order theories. I believe I saw his co-author present this in Utah. It had been a while, and I still find the idea of considering Scott sentences for models outside of your current universe pretty exciting. It's also interesting how looking at the number of models up to back and forth equivalence doesn't suffice to distinguish different first order theories.
After that I spoke. I think it went okay; people had questions. The only thing is I should have waited a bit longer to prepare my slides, as I had already kind of moved beyond them with what I have actually proved. People did like the little graphics I put in.
Up next was Monroe Eskew. He spoke on rigid ideals. This is one of those curious problems where the difference between MA and CH really shows up. Under MA it seems that the results on rigid ideals are uninteresting. Under just CH (really GCH), however, all kinds of consistency questions show up. Monroe is able to employ some clever forcing and large cardinal conditions to get the existence of rigid ideals.
The last talk before lunch was Daniel Soukup's. He talked about orientations of graphs with large chromatic number. Basically, the chromatic number of a graph can determine some structure of its subgraphs. When you orient a graph, that number might change, and the structure of the oriented subgraphs is different from the structure of the subgraphs. This problem is intractable when the chromatic number is finite, but Soukup is able to get results when the chromatic number is uncountable.
The next special speaker was Martin Zeman. His talk was on master conditions from huge embeddings. Although the title wouldn't suggest it, the talk was mostly about the nonstationary ideal and the properties of precipitousness, pre-saturation, saturation, and other analogous properties. So this really took me back on a nostalgia trip to the stationary tower. The huge embeddings and master conditions come into play because the problem reduces to a problem about the closure properties of a certain forcing (which is actually a different forcing modded out by the master condition).
Paul Ellis then spoke on a Borel amalgamation property. The sentence "FAP implies BAP implies SAP" was in this talk. It's great. This material is Ellis' attempt to generalize out some properties that certain Fraisse structures have. Basically: is the complexity of the isomorphism problem for the associated group of automorphisms sufficiently hard? The generalization is natural and the results are interesting: it's not clear what conditions guarantee the hardness.
The second to last speaker was John Clemens. His talk was on the relative primeness of equivalence relations. He is essentially studying a sort of pigeonhole property for quotients of the reals. I think his work and mine will eventually have some intersection. Mostly his work right now is a repackaging of previous results, but as we all know, having the right words to describe a problem is half the battle.
The last speaker was Liljana Babinkostova. Her talk was titled "On a property of Corson." It was basically on games which can describe topological properties. In particular, she liked to use this context to look at modified versions of the standard games to get a hold on some more technical topological notions. For instance, instead of density, she was interested in theta-density, which only requires that the set meet the closure of every open set.
Thursday, June 16, 2016
6/15/2016
Day 1 of the 23rd Best Conference (insert joke here)
This is one of the more social conferences I've been to. It probably helps that there is a nationally renowned brewery down the road.
The morning started off with a special one-hour talk by Natasha Dobrinen on Ramsey spaces coding universal triangle-free graphs and applications to Ramsey degrees. She really gave quite a good talk, and it was full of things I sort of understood. You start off with a Fraisse structure, you then want to know things about colorings of it, and the methodology involves a rather clever coding of an infinite graph into the full binary tree. She used the "choose" notation for the coloring. It's interesting how pure set theory and topological Ramsey theory can have such different notation for the same idea. I think I like the "choose" notation more. She half apologized for just scanning in hand-drawn pictures of her trees, but I think I actually prefer those to a LaTeXed-up version. Plus, LaTeXing all of that up might be the very definition of a waste of time.
Shezad Ahmed went next on Jonsson cardinals and pcf theory. This was close to the same talk as at CUNY, but I definitely got more out of it this time around. pcf theory is tough to talk about because there is so much background. The "tension," as he describes it, that a successor of a singular cardinal being Jonsson introduces into the objects of pcf theory is interesting. It seems like it's one of those situations where it makes your life complicated because it's allowing you to study phenomena at a level of detail previously unavailable.
Then William Chan gave a talk on how every analytic equivalence relation with all Borel classes is Borel somewhere. The work is actually quite general, and ends up essentially being a problem in determinacy theory, although aspects of the problem are resolvable in ZFC (and a little more). This work might entail one of the more creative uses of the Martin-Solovay tree construction that I've seen. For the biggest results he has, he uses very strong forms of determinacy. I wonder how much of that is truly necessary (I bet he does too).
Next, Kameryn Williams spoke on minimal models of Kelley-Morse set theory. Again, this was essentially the same talk as at CUNY. It's still funny that the result is actually that there is no least model of KM. This talk hadn't stuck with me as much as Shezad's had, so on this second hearing I actually picked up quite a bit more of what was going on. The clever trick that allows him to even get off the ground is the idea of looking at a slight strengthening of KM which then has a nice first order characterization in terms of an inaccessible cardinal. I'm intrigued by the possible interplay between large cardinal notions and second order set theories.
Kaethe Minden finished out the morning with a talk on subcomplete forcing and trees. This was also essentially the same talk as at CUNY. I'm still really impressed with her pictures and with her ability to explain a very complicated forcing concept. I've said it before: I have a hard time with forcing talks in general. But this one is nice.
After lunch, Simon Thomas gave a special one-hour talk. He spoke on the isomorphism and bi-embeddability relations for finitely generated groups. It was great; he is consistently a very good speaker. I think my favorite line this time around was "A Kazhdan group is a punchline." That there are very hard problems about just finitely generated groups is fascinating. I guess I'll just record some more lines from the talk. "The old work was great, but you can see the sweat. You shouldn't be able to see the sweat." And so he comes up with a much better topological space to approach his problem. Finally, "Some people think you should believe in your conjectures. I strongly disagree with this." One more: "No presentation should be perfect. You should always introduce an intentional error. This is the intentional error for this talk. Some people don't need to introduce intentional errors." Although related to his talk at Utah, this was obviously quite different, as he had some more time.
Next up was Cody Dance. He talked about indiscernibles for L[T_2, x]. I think I like this work better than his work on the embedding generated by the club filter in L(R). Of course, I'm probably just being biased because I like determinacy theory more than inner model theory. Basically, because Jackson recently proved some new things about the Martin-Solovay tree construction, it seems that we might be able to use less description theory and more indiscernible theory to get the weak and strong partition properties. If the methods hold out, this would probably finally allow Jackson to push the analysis of these properties past the first inductive pointclass.
After Cody was Kyle Douglas Beserra. He spoke on the conjugacy problem for automorphisms of countable regular trees. Kyle is coming to UNT in the fall, having just finished his masters. Instead of looking at the complexity of classifying trees up to isomorphism, he was looking at the complexity of classifying isomorphisms of trees up to conjugacy. It seems one can mostly lift the methods used to study the trees to study their automorphisms as well. The really clever bit, though, is when the trees are unrooted and finitely branching and the automorphism fixes a tree. Then he gets to do some invariant descriptive set theory and find some nice universal countable equivalence relations to Borel-reduce to and from.
At this point my yellow notepad ran out of pages. It's okay, I had a backup.
Dan Hathaway then talked about disjoint Borel functions. He reminded the audience of the notion of a conflict response relation, which other people seemed familiar with, but I was not. Later I found out that the notion has connections to cardinal invariants. He used it to try to get a handle on the minimum size of a set of Borel functions you would need to always be able to respond to someone else's Borel function with a disjoint one. Interestingly enough, this problem reduces to looking at just real numbers and any ordering that sort of looks like the ordering L puts on the reals. To show this reduction he has to create his own variant of Hechler forcing (which itself looks like a variant of Mathias forcing to me). The great part is that this analysis is quite flexible, and the functions don't have to be Borel: if you look into the projective hierarchy, you just start using the mouse orders for the mice with the correct number of Woodin cardinals. It's cool to see someone exploit the interplay between the mice and the projective classes.
Almost finishing out the day was Paul McKenney. He was looking at the classic problem of whether or not the power sets of ω and ω1 are isomorphic mod finite, and in particular at some consequences of this. He had a few different results. The first was that if there is a cardinal preserving automorphism of the power set of ω1 mod finite, then the continuum function collapses at ω1. The next was about Q_B sets. This is frustrating because this talk is very hard to write about on here, but it really was very good.
The last talk of the day (this was quite a lot of talks) was given by Paul Corazza. He talked about the axiom of infinity, QFT, and large cardinals. This was basically just an alternate way to talk about the strength of elementary embeddings, stuffed inside a somewhat goofy package. There is no real connection here to QFT. He does get some new large cardinal notions out of it, though.
Monday, June 13, 2016
6/13/2016
I finally finished up the proofreading today. Now all that's left for this phase of the project is to finish adding in the Rowbottom and Ramsey results. It was strangely exhausting to go through the paper, although maybe not so strange given how long it is. I leave for San Diego tomorrow to go to BEST, so that's pretty exciting. It's basically the one place in California that I haven't been to.