This was a long day. There were a lot of short talks to attend, so this will be a pretty lengthy report (probably why I've been putting it off). As such, I am splitting it into two parts.
Talk 1: "Ligeti's Combinatorial Tonality" by Clifton Callendar
I decided to attend some math and music talks, and I'm glad I did, as they were a fun way to start the morning. This first one, as suggested by the title, was a study of some of Ligeti's later works, particularly the étude "Désordre." The pieces are neither harmonious nor atonal, but a blend of the two. To describe this phenomenon, Callender uses the notions of inter-harmonies, intra-harmonies, and interval content. In particular, he performed a statistical analysis of the standard deviation of the interval content of "Désordre." The piece corresponds well to the statistically expected interval content, but with more emphasis on tritones. Interestingly enough, however, the piece corresponds better with the expected intervals and notes as it goes on. There are two proposed reasons for this. First, the left and right hands of the piano become more separated as the piece goes on, so the increased dissonance is less noticeable to human ears. Second, the piece has an isorhythmic structure, and as the piece becomes denser towards the end, this structure becomes compressed, allowing for fewer possible notes and constraining the randomness. Both reasons seem to be partially correct, based on a statistical analysis. This was a really fun talk, particularly because we got to listen to the piece and to hear the speaker play on a keyboard.
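For my own reference, here is a little sketch of what "interval content" means computationally: the interval-class vector of a set of pitch classes, together with the standard deviation of its entries. This is my own illustration, not Callender's actual analysis.

```python
from itertools import combinations
from statistics import pstdev

def interval_class_vector(pitch_classes):
    """Count interval classes 1-6 among all pairs of pitch classes (mod 12)."""
    vector = [0] * 6
    for a, b in combinations(pitch_classes, 2):
        interval = abs(a - b) % 12
        ic = min(interval, 12 - interval)  # fold intervals into classes 1..6
        vector[ic - 1] += 1
    return vector

# Example: the C major scale, as a stand-in for a sonority from the piece
c_major_scale = [0, 2, 4, 5, 7, 9, 11]
icv = interval_class_vector(c_major_scale)
print("interval-class vector:", icv)   # [2, 5, 4, 3, 6, 1]
print("std dev of entries:", pstdev(icv))
```

The actual analysis in the talk compared observed interval content against what a random distribution of notes would predict; the snippet above only shows the raw bookkeeping.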
Talk 2: "Voicing Transformations and Linear Representation of Uniform Triadic Transformations" by Thomas M. Fiore
Thomas' motivation for this work was Webern's Concerto for Nine Instruments (op. 24), because of its rich patterns. Thomas is proposing a more robust system for understanding triadic transformations, based on global reflections, contextual reflections, and voicing reflections. In terms of linear algebra, these are matrices which act on the note space. Thomas takes the semidirect product of the symmetric group on 3 elements with the matrix group generated by these reflections to create what he calls J. J ends up having a very nice structure. It can also be used to describe a wide variety of classical progressions. For instance, there are elements of J whose orbits produce the diatonic fifths sequence, and a subgroup of J is isomorphic to the Hook group. The Hook group generates the uniform triadic transformations and was known before, but this is the first time a linear description of this group and its action has been realized. The major downside of this work is that it ignores inversions, so a progression is only represented up to inversion. Still pretty cool. I like how this material is essentially a clever application of undergraduate mathematics.
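As a toy illustration of the linear-algebra viewpoint (my own simplified sketch, not Fiore's actual group J), one can let voice permutations and reflections act on a three-voice chord in Z_12^3; the semidirect product in the talk combines exactly these two kinds of ingredients.

```python
def apply_matrix(matrix, chord):
    """Act on a 3-voice chord (pitch classes mod 12) by a 3x3 integer matrix."""
    return tuple(sum(matrix[i][j] * chord[j] for j in range(3)) % 12 for i in range(3))

swap_upper = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]   # permutation matrix: exchange voices 2 and 3
reflect = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]   # reflection x -> -x in every voice

c_major = (0, 4, 7)  # C, E, G
print(apply_matrix(swap_upper, c_major))  # (0, 7, 4): same chord, voices exchanged
print(apply_matrix(reflect, c_major))     # (0, 8, 5): F minor, the inversion about C
```

The reflections Fiore actually uses (global, contextual, voicing) are more refined than the single reflection above; this is only meant to show what "matrices acting on the note space" looks like in practice.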
Talk 3: "Hypergraph Regularity, Ultraproducts, and a Game Semantics" by Henry Towsner
This was another talk that was related to that fateful blog post by Terry Tao on non-standard analysis. Henry is studying the ultraproducts of finite (or finitely generated/finite dimensional) objects. There is a recurring phenomenon where properties of the ultraproduct translate into uniform statements about the class of finite objects forming the ultraproduct. An example of this is metastability. For finite graphs, this is the Szemerédi regularity lemma. A corresponding statement can be proved for hypergraphs, but it is horribly complicated to state and prove. Henry, proceeding philosophically, wondered about the role of the ultraproduct in these statements, and whether a better semantics was possible which could describe this phenomenon. Building on the game-theoretic form of first order logic semantics, Henry described a game setup that can describe these uniform statements, called uniform semantics. This avoids ultraproducts, and can actually be used to obtain specific bounds, which the ultraproduct method could not. This is honestly super cool, and I'm really intrigued by the further applications of this that Henry mentioned.
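For reference, the finite statement being generalized here is the graph regularity lemma; roughly, and quoting the standard form from memory:

```latex
% Szemerédi regularity lemma, standard graph form (stated roughly, from memory)
\forall \varepsilon > 0 \;\; \exists M(\varepsilon) \text{ such that every finite graph admits a vertex partition }
V = V_1 \cup \cdots \cup V_k, \; k \le M(\varepsilon),
\text{ in which all but at most } \varepsilon k^2 \text{ of the pairs } (V_i, V_j) \text{ are } \varepsilon\text{-regular.}
```

Here a pair (A, B) is ε-regular if every pair of large subsets has edge density within ε of d(A, B). The hypergraph version Henry mentioned replaces pairs of parts with higher-arity cells, which is where the statement becomes genuinely painful.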
Talk 4: "Measurable Chromatic Numbers" by Clinton Conley
This is essentially what it sounds like (if you know the terminology). The objects of study here are graph structures on Polish spaces, where the edge relation is Borel. The chromatic number is the minimum number of colors needed to color the graph in such a way that any two nodes sharing an edge get different colors. This, however, splits into three numbers: the raw number, where the axiom of choice can be used to select starting points in components of the graph; the Borel number, where the colorings under consideration have to be Borel; and the measure number, where the colorings under consideration have to be (let's say Lebesgue) measurable and you only have to color the graph up to a set of full measure. For countable graphs, there is a neat result called Brooks' theorem, which states that if the maximum degree of the graph is at most d, then the chromatic number is at most d+1, and if moreover the graph contains no complete subgraph on d+1 vertices and no odd cycles, then the chromatic number is at most d. This extends to the Borel chromatic number, except that Marks proved that there are acyclic Borel graphs of degree d (for any d) whose Borel chromatic number is d+1. The examples produced by Marks are of high complexity, however, which is maybe the only way one could ask for this result to be improved. For the measure chromatic number, there are actually two variants: the first listed above, where you color up to a set of full measure, and another where, for every epsilon, you color the graph up to a set of measure 1 - epsilon. These two numbers can be different, as seen in the example of the graph generated by the irrational rotation of the circle (3 for the full measure version, 2 for the epsilon version). In fact, any graph generated by the Borel action of an amenable group has epsilon measure chromatic number 2. So for the measure chromatic number, the irrational rotation of the circle is essentially the only new exception that needs to be added to Brooks' theorem. With this in hand, and the fact that the Borel actions of amenable groups generate hyperfinite graphs (up to measure), one might hope that Marks' examples somehow rely on being of high complexity. Indeed, it's not possible to use smooth graphs for such a class of examples. However, by tweaking Marks' example in a very non-trivial way, Clinton was able to create (for all d) hyperfinite acyclic graphs of degree d whose Borel chromatic number is d+1.
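The finite d+1 bound is just the greedy argument, and roughly speaking the subtlety in the Borel and measurable settings is that "pick the first available color" has to be carried out in a definable way. A minimal sketch of the finite fact (my own illustration, not from the talk):

```python
def greedy_coloring(adjacency):
    """Properly color a finite graph with at most (max degree + 1) colors."""
    coloring = {}
    for v in adjacency:
        used = {coloring[u] for u in adjacency[v] if u in coloring}
        coloring[v] = next(c for c in range(len(adjacency)) if c not in used)
    return coloring

# A 5-cycle: max degree 2, and (being an odd cycle) it genuinely needs 3 = d + 1 colors
cycle5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colors = greedy_coloring(cycle5)
assert all(colors[u] != colors[v] for u in cycle5 for v in cycle5[u])
print(colors, "-> uses", len(set(colors.values())), "colors")
```

In the Borel world there is no canonical "first vertex to start from" in each component, which is exactly the role the axiom of choice plays for the raw chromatic number.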
Wednesday, January 11, 2017
JMM 2017 Day 2
I didn't go to quite as many talks this day, as I had to attend a reception for graduate students who had received money to attend, and I gave my own talk this afternoon as well.
Talk 1: "Descriptive Set Theory and Borel Probabilities" by Chris Caruvana
In Chris's research, he is studying the space of Borel probability measures. Since this is a Polish space, one can talk about comeager collections of measures in a coherent way. In particular, the sets of reals which are measure zero for a comeager collection of measures are an object of interest. These sets form an ideal which is closed under countable unions, and which at least contains the meager sets. Thus there is an associated notion of "smallness" that comes from these sets. With this notion of smallness comes an analogue of being Baire measurable, that is, having small difference with some Borel set. Using this analogue, Chris was able to prove an automatic continuity result. There are interesting open questions floating around about exactly what relationship between these sets and the universally null sets is consistent.
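If I followed the setup correctly, the ideal in question is

```latex
% Sets that are null for comeagerly many measures (as I understood the talk)
\mathcal{I} \;=\; \big\{\, A \subseteq \mathbb{R} \;:\; \{\mu \in P(\mathbb{R}) : \mu(A) = 0\} \text{ is comeager in } P(\mathbb{R}) \,\big\},
```

where P(R) is the Polish space of Borel probability measures. Closure under countable unions then comes from two facts: measures are countably additive, and a countable intersection of comeager sets is comeager.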
Talk 2: "Rationales for Mathematics and its Significance in Recently-Excavated Bamboo Texts from Ancient China" by Joseph W. Dauben
A whole slew of ancient Chinese mathematical texts have been recovered recently. They were written on flattened bamboo strips, and some are really well preserved. They tend to follow a student-master dialogue style and cover a pretty wide range of arithmetic and word problems. It's interesting that they seemed to have a very Pythagorean view of things, where everything is number (and in particular expressible with fractions? That point wasn't totally clear). In one of the dialogues, the student is upset because he can't seem to master literature and math at the same time. The teacher tells him to learn math if he has to choose, because "math can help you understand literature, but literature cannot help you understand math." This was a fun talk; it was definitely cool to see pictures of the bamboo strips. I don't know much about ancient Chinese math, so I didn't understand a lot of the speaker's references, but I'm kind of curious now to look at some of the important texts that have been uncovered (translated into English, of course).
Talk 3: "Modal logic axioms valid in quotient spaces of finite CW-complexes and other families of topological sets" by Maria Nogin
This talk mostly dealt with realizing the semantics of modal logic in certain topologies: in particular, particular point topologies (where a set is open iff it contains a fixed point), excluded point topologies (open means missing a fixed point), and quotients of finite CW-complexes. Different extensions of S4 can be realized across these spaces. I've never looked into this kind of thing too much, beyond understanding the relationship between compactness in topological products and compactness in classical logic. I remember a colleague once looking into this sort of representation for intuitionistic logic. I think this level of understanding of the semantics can be helpful for these more complicated logics.
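Just to make the topological interpretation concrete for myself: in the standard topological semantics, the box is interpreted as topological interior, so the S4 axioms (□p → p and □p → □□p) hold in every space, and the interesting question is which further axioms a particular space validates. Here is a tiny sketch (my own toy, not from the talk) checking the two S4 axioms in the particular point topology on a three-element set:

```python
from itertools import chain, combinations

points = {0, 1, 2}
particular = 0  # open = empty or contains the particular point 0

def subsets(s):
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

def is_open(a):
    return not a or particular in a

def interior(a):
    """Largest open subset of a; this interprets the modal box."""
    return max((o for o in subsets(a) if is_open(o)), key=len)

for p in subsets(points):
    box_p = interior(p)
    assert box_p <= p                  # T axiom: box p -> p
    assert interior(box_p) == box_p    # 4 axiom: box p -> box box p
print("T and 4 hold for every subset in the particular point topology on", points)
```

Which extensions of S4 hold then depends on finer features of the space, which is what the talk classified for these families.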
Talk 4: "On a variant of continuous logic" by Simon Cho
Another pure logic talk, this one with more of a non-standard analysis bent. Simon was looking at something called geodesic logic, which expands on continuous logic by loosening the requirement that the semantics consist of continuous functions, while adding linear and geodesic structure to the semantics. Using this, uniform convergence results can be obtained for functions which are not continuous. In some sense, this is an extension of ideas presented in a really good sequence of blog posts by Terry Tao on non-standard analysis, convergence, and ergodic theory. I really enjoy seeing where this line of thought is progressing. In some sense it may be easiest to approach intractable analysis problems by first figuring out what the appropriate logic is to study them with.
Talk 1: "Descriptive Set Theory and Borel Probabilities" by Chris Caruvana
In Chris's research, he is studying the space of Borel probability measures. Since this is a Polish space, one can talk about comeager collections of measures in a coherent way. In particular, the sets of reals which are measure zero for a comeager collection of measures is an object of interest. This forms a countably closed ideal, and it at least contains the meager sets. Thus there is an associated notion of "smallness" that comes from these sets. With this notion of smallness comes an analogue of being Baire Measurable, that is your difference with a Borel set is small. Using this analogue, Chris was able to prove an automatic continuity result. There are interesting open questions floating around about exactly what relationship between these sets and universally null sets is consistent.
Talk 2: "Rationales for Mathematics and its Significance in Recently-Excavated Bamboo Texts from Ancient China" by Joseph W. Dauben
A whole slew of ancient Chinese mathematical texts have been recovered recently. They were written on flattened bamboo strips, and some are really well preserved. They tend to follow a student-master dialogue style, and covered a pretty wide range of arithmetic and word problems. It's interesting that they seemed to have a very Pythagorean view of things, where everything is number (and in particular expressible with fractions? That point wasn't totally clear). In one of the dialogues, the student is upset, because he can't seem to master literature and math at the same time. The teacher tells him to learn math if he has to choose, because "math can help you understand literature, but literature cannot help you understand math." This was a fun talk, it was definitely cool to see pictures of the bamboo strips. I don't know much about ancient Chinese math, so I didn't understand a lot of the speaker's references, but I'm kind of curious now to look at some of the important texts that have been uncovered (translated into English of course).
Talk 3: "Modal logic axioms valid in quotient spaces of finite CW-complexes and other families of topological sets" by Maria Nogin
This talk mostly dealt with realizing the semantics of modal logic in certain topologies. In particular point topologies (where open means containing a particular point), excluded point topologies (open means missing a particular point) and quotients of finite CW-complexes. Different extensions of S4 can be realized across these spaces. I've never looked into this kind of thing too much, beyond understanding the relationship between compactness in topological products and compactness in classical logic. I remember a colleague once looking into this sort of representation for intuitionistic logic. I think this level of understanding of the semantics can be helpful for these more complicated logics.
Talk 4: "On a variant of continuous logic" by Simon Cho
Another pure logic talk. This one had more of a non-standard analysis bent. Simon was looking at something called geodesic logic, which expands on continuous logic by loosening the requirement for the semantics to consist of continuous functions, but adds linear and geodesic structure into the semantics. Using this, uniform convergence results can be obtained for functions which are not continuous. In some sense, this is an extension of ideas presented in a really good sequence of blog posts by Terry Tao on non-standard analysis, convergence and ergodic theory. I really enjoy seeing where this line of thought is progressing. In some sense it may be easiest to approach intractable analysis problems by first figuring out what the appropriate logic is to study the problems with.
Saturday, January 7, 2017
JMM 2017 Day 1
For this year's joint meeting I decided to mix it up a bit and attend a wider variety of talks. I think I had a better time because of it. On this first day, I attended some MAA talks on teaching, some history of math talks, and my friend James' contributed talk.
Talk 1: "Instant Insanity: Using Colored Blocks to Teach Graph Theory" by Stephen Adams
Instant Insanity is a game using four colored blocks. The object is to stack the blocks into a column so that each color appears at most once on each side of the stack. It's easy to do with three blocks, but pretty difficult with four. Stephen had built his own blocks, and since he had a small class, he let the students attempt the game on their own and keep the blocks during the subsequent lecture. Graph theory can be used to label the blocks in such a way that a relatively simple algorithm can then be applied to achieve a successful stacking. I think this is an interesting way to introduce graphs. I like how it uses visual, tactile, and formal reasoning skills on the part of the student.
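I don't remember the details of the graph labeling itself, but for comparison, here is a brute-force sketch of the puzzle: enumerate the orientations of each block and look for a stack in which every side shows all four colors. The cube colorings below are my own constructed example (chosen so that a solution exists), not the commercial puzzle's.

```python
from itertools import product

# Each cube: faces listed so that (0,1), (2,3), (4,5) are opposite pairs.
# Example cubes of my own construction, colored R/G/B/Y.
cubes = [
    ["R", "G", "B", "Y", "R", "R"],
    ["G", "B", "Y", "R", "G", "G"],
    ["B", "Y", "R", "G", "B", "B"],
    ["Y", "R", "G", "B", "Y", "Y"],
]

def orientations(faces):
    """All ways to pick (front, back, left, right) from two distinct opposite-face pairs."""
    pairs = [(faces[0], faces[1]), (faces[2], faces[3]), (faces[4], faces[5])]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue  # front/back and left/right must come from different axes
            for fb in (pairs[i], pairs[i][::-1]):
                for lr in (pairs[j], pairs[j][::-1]):
                    yield fb + lr  # (front, back, left, right)

def solve(cubes):
    for choice in product(*(list(orientations(c)) for c in cubes)):
        # Each of the four sides of the stack must show four distinct colors.
        if all(len({cube_sides[k] for cube_sides in choice}) == 4 for k in range(4)):
            return choice
    return None

solution = solve(cubes)
if solution:
    for n, (front, back, left, right) in enumerate(solution, 1):
        print(f"cube {n}: front={front} back={back} left={left} right={right}")
else:
    print("no solution for these cubes")
```

The point of the graph-theoretic method from the talk is precisely that it avoids this kind of exhaustive search.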
Talk 2: "A Magic Trick That is Full of Induction" by Robert W. Vallin
The students have shuffled the cards. They are behind your back. You have told them that you can tell red cards from black cards by touch, and, of course, they are skeptical. Miraculously, you consistently alternate pulling red and black cards, without looking. Now they want to know the trick. It's actually quite simple: you prep the deck to alternate red and black ahead of time. So even though you let the students cut the deck repeatedly, those cuts don't change the alternating structure. But what about the riffle shuffle you performed before putting the cards behind your back? How did that not mess up the order? Well, it did, but in a very specific way, so that if you pull cards in the right way, you will still alternate red and black. That's the real magic. What's the proof that it works? It goes by induction.
Robert brought this up as an interesting way of introducing students to induction proofs, more interesting than the usual summation formulas at least. If you extend the trick to arbitrary shuffled permutations, it only sometimes works. There is a nice characterization of exactly when it works (Gilbreath permutations), and related to this characterization is a problem which needs to be proved with strong induction. So a magic trick could be the motivating example for all of the theoretical material involved in teaching induction in a discrete math class.
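Here is a quick simulation of what I take to be the underlying fact, namely the standard first Gilbreath principle (the handling in Robert's version of the trick may differ in its details): start from a strictly alternating deck, deal off a packet (which reverses it), riffle the two packets together arbitrarily, and then every consecutive pair from the top contains one red and one black card.

```python
import random

def riffle(left, right):
    """Interleave two packets in a random order, preserving each packet's internal order."""
    left, right, merged = list(left), list(right), []
    while left or right:
        source = left if (left and (not right or random.random() < 0.5)) else right
        merged.append(source.pop(0))
    return merged

def gilbreath_trial(deck_size=52, seed=None):
    random.seed(seed)
    deck = ["R" if i % 2 == 0 else "B" for i in range(deck_size)]  # strictly alternating
    cut = random.randint(1, deck_size - 1)
    dealt = deck[:cut][::-1]           # dealing cards one at a time reverses the packet
    shuffled = riffle(dealt, deck[cut:])
    # Gilbreath principle: each consecutive pair from the top has one red and one black
    return all({shuffled[i], shuffled[i + 1]} == {"R", "B"} for i in range(0, deck_size, 2))

print(all(gilbreath_trial(seed=s) for s in range(1000)))  # True
```

The induction Robert mentioned is essentially over how the top pair of the shuffled deck was formed; the simulation only gives evidence, not the proof.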
Talk 3: "From L'Hopital to Lagrange: Analysis Textbooks in the 18th Century" by Robert Bradley
At the beginning of the 18th century, calculus (although developed by Newton and Leibniz) was still understood by only a very few people. Over the course of the 18th century it spread across the intellectual world, in part because of key textbooks. L'Hopital wrote one, the "Analyse des infiniment petits," in 1696, with a second edition in 1715. From Leibniz to the Bernoullis to L'Hopital, the axioms for infinitesimals had been reduced to two. L'Hopital didn't deal with functions but with curves, and so although his book contains derivative rules for rational functions, it does not have the chain rule. It also does not have rules for transcendental functions.
Euler's textbook was the "Institutiones calculi differentialis," published in 1755, although there is an unpublished version from 1727. In the 1727 version, he does talk about functions, begins with finite differences, then treats logarithms and exponentials, and uses axioms similar to L'Hopital's. He does not talk about trig functions. In the published version he has changed his viewpoint on infinitesimals: they are now algebraically zero, but not geometrically. Euler also wrote a precalculus book, the "Introductio." In this book he defines functions, distinguishes between variables and constants, and uses series and power series representations of functions. He also defined the trig functions from the unit circle.
The last book Robert discussed was the "Théorie des fonctions analytiques" by Lagrange. Lagrange extends Euler's view and begins the book with power series representations. He does this to avoid using infinitesimals. From power series, derivatives are algebraically definable. He is the first of the three to talk about derivatives as functions, and although his reasoning is fundamentally circular, he talks a great deal about rigor and purity in mathematics. Apparently this approach became the standard one in the 19th century, surviving the actual rigorization of calculus. It wasn't until the 20th century that the rigorous sequence of limits -> derivatives -> power series was incorporated into calculus textbooks.
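Concretely, Lagrange's move (as I understand it) was to take the expansion itself as the starting point and read the derivative off algebraically as a coefficient:

```latex
% Lagrange's "derived function", read off from the power series expansion
f(x + i) = f(x) + p(x)\,i + q(x)\,i^2 + r(x)\,i^3 + \cdots, \qquad f'(x) := p(x).
```

For example, f(x) = x^2 gives f(x + i) = x^2 + 2x i + i^2, so f'(x) = 2x, with no limits or infinitesimals in sight. Presumably the circularity mentioned above is that the existence of such an expansion is exactly what needs justification.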
I'm definitely interested in looking at these books myself at some point. It was interesting to see how some aspects of the precalc/calc curriculum have been around for a long time, while others (like late transcendentals) are relatively recent. Also, this trickle of pure math results dripping down into the lower math curriculum is really interesting.
Talk 4: "Jean le Rond D'Alembert: l'enfant terrible of the French Enlightenment" by Lawrence D'Antonio
D'Alembert was a very prominent French mathematician who was a contemporary of Euler. He was very influential in both the French and German academies. I wasn't aware that he was the inventor of the ratio test for series, which is kind of cool. This talk was really entertaining: D'Alembert's life was very dramatic and Lawrence presented it well. D'Alembert was eventually enemies with just about everybody, including Euler (who basically scooped him on a result about the equinoxes). He was also co-editor, along with Diderot, of the Encyclopedie, a mega collection of writings of Enlightenment thinkers. It's translated and online here: http://encyclopedie.uchicago.edu/, which is awesome for us now. I didn't write any of them down, unfortunately, but there were some really hilarious quotes from D'Alembert.
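For reference, the test in question, in its modern form:

```latex
% d'Alembert's ratio test for a series \sum a_n
L = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|:
\quad L < 1 \Rightarrow \text{the series converges absolutely}, \qquad
L > 1 \Rightarrow \text{it diverges}, \qquad
L = 1 \Rightarrow \text{the test is inconclusive}.
```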
Talk 5: "Packing Measure of Super Separated Iterated Function Systems" By James Reid
James discussed one of his recent research projects, which was to find an algorithm for computing the packing measure of a certain family of fractals. Two things about this are very cool. The first is that these dimension and measure computations are usually not even remotely computable; the dual notion, Hausdorff dimension and measure, certainly is not computable in the broad context James was working in. The second is that, although the packing measure had been given a nice formula for one-dimensional examples, people had yet to push this to higher dimensions. Although James' formula only applies to objects whose packing dimension is bounded by 1, the objects themselves live in the plane, R^3, and so on. It will be interesting to see if having packing dimension greater than 1 is really an obstacle, or if the nice behavior will somehow continue. Oh yeah, the talk had some really fun pictures of fractals generated by polygons as well.
Talk 1: "Instant Insanity: Using Colored Blocks to Teach Graph Theory" by Stephen Adams
Instant insanity is a game using four colored blocks. The object is to stack the blocks into a column so that each color appears at most once per column. It's easy to set three up, but pretty difficult to get four. Stephen had built his own blocks, and since he had a small class, he let them attempt the game on their own, and keep the blocks during the subsequent lecture. Graph theory can be used to label the blocks in such a way that a relatively simple algorithm can then be applied to achieve a successful stacking. I think this is an interesting way to introduce graphs. I like how it uses visual, tactile, and formal reasoning skills on the part of the student.
Talk 2: "A Magic Trick That is Full of Induction" by Robert W. Vallin
The students have shuffled the cards. They are behind your back. You have told them that you can tell red cards from black cards by touch, and, of course, they are skeptical. Miraculously, you consistently alternate pulling and red and black cards, without looking. Now they want to know the trick. It's actually quite simple, you prep the deck to alternate red and black ahead of time. So even though you let the students cut the deck repeatedly, those don't change this alternating feature. But what about the riffle shuffle you performed before putting the cards behind your back. How did that not mess up the order? Well it did, but in a very specific way, so that if you pull cards in the right way, you will still alternate red and black. That's the read magic. What's the proof that it works? It goes by induction.
Robert brought this up as an interesting way of introducing students to induction proofs, more interesting than the usual summation formula at least. If you extend the concept to shuffled permutations, it sometimes works. There is a nice characterization of exactly when it works (Gilbreath permutations), and related to this characterization is a problem which needs to be proved with strong induction. So a magic trick could be the motivating example for all of the theoretical material involved in teaching induction in a discrete class.
Talk 3: "From L'Hopital to Lagrange: Analysis Textbooks in the 18th Century" by Robert Bradley
At the beginning of the 18th century, calculus (although developed by Newton and Leibniz) was still only understood by very few people. Over the course of the 18th century it spread across the intellectual world, in part because of key textbooks. L'Hopital wrote one called "Analyse des infiniment petits" in 1696, with a second edition in 1715. From Leibniz to the Bernoullis to L'Hopital the axioms for infinitesimals had been reduced to two. L'Hopital didn't deal with functions, but curves, and so although his book contains derivative rules for rational functions, it does not have the chain rule. It also does not have rules for transcendental functions.
Euler's textbook was called "Calculus Differentials", and was published in 1750, although there is an unpublished version from 1727. In the 1727 versions, he does talk about functions, begins with finite differentials, then talks about log and exponent, and uses similar axioms to L'Hopital. He does not talk about trig functions. In 1750 he has changed his viewpoint on infinitesimals. Now there algebraically 0, but not geometrically. Euler also wrote a pre-calculus book, "Introductio." In this book he defines functions, distinguishes between variables and constants, and uses series and power series representations of functions. He also defined the trig functions from the unit circle.
The last book Robert discussed was the "Theorie des Function Analytiques" by Lagrange. Lagrange extends Eulers view, and begins the book with power series representations. He does to avoid using infinitesimals. From power series, derivatives are algebraically definable. He is the first of the three to talk about derivatives as functions, and although his reasoning is fundamentally circular, he talks much about rigor and purity in mathematics. Apparently this approach became the standard one in the 19th century, surviving the actual rigorization of calculus. It wasn't until the 20th century that the rigorous sequence of limits -> derivatives -> power series was incorporated into calculus textbooks.
I'm definitely interested in looking at these books myself at some point. It was interesting to see how some aspects of the pre-cal/calc curicculum have been around for a long time, and others (like late transcendentals) are relatively recent. Also this trickle of pure math results dripping down into lower math curriculum is really interesting.
Talk 4: "Jean le Rond D'Alembert: l'enfant terrible of the French Enlightenment" by Lawrence D'Antonio
D'Alembert was a very prominent Frech mathematician who was a contemporary of Euler. He was very influential in both the French and German Academies. I wasn't aware that he was the inventor of the ratio test for series, which is kind of cool. This talk was really entertaining: D'Alembert's life is very dramatic and Lawrence presented it well. D'Alembert was eventually enemies with just about everybody, including Euler (who basically scooped him on a result about equinoxes). He was also co-editor, along with Diderot of the Encyclopedie, a mega collection of writings of enlightenment thinkers. Its translated and on-line here http://encyclopedie.uchicago.edu/, which is awesome for us now. I didn't write any of them down, unfortunately, but there were some really hilarious quotes from D'Alembert.
Talk 5: "Packing Measure of Super Separated Iterated Function Systems" By James Reid
James discussed one of his recent research projects, which was to find an algorithm for computing the packing measure of a certain family of fractals. Two things about this are very cool. The first is that these dimension and measure computations are usually not even remotely computable. The dual notion, Hausdorff dimension and measure, certainly are not computable in the broad context that James was working in. The second is that, although the packing measure had been given a nice formula for one dimensional examples, people had yet to push this to higher dimensions. Although James' formula only applies to objects whose packing dimensions is bounded by 1, they are in the plane, R^3, and so on. It will be interesting to see if having packing dimension greater than 2 is really an obstacle, or if the nice behavior will somehow continue. Oh yeah, the talk has some really fun pictures of fractals generated by polygons as well.
Saturday, October 29, 2016
10/29/2016
My reference letters are all finally uploaded, and I've been working on cover letters all day. Things are starting to feel very real. I am feeling quite nervous about this whole thing, but maybe I will feel better once I submit my first application.
In other news, I presented twice at UNT this week. Both times I talked about Jonsson cardinals from the ZFC perspective. It was an interesting experience catering similar information to different audiences: the graduate logic group and the algebra seminar. The talk for the graduate logic group seemed to go quite well, and I must thank Christopher Lambie-Hanson for his excellent presentation of this material during the UCI summer school, as my talk was based very strongly on that material. For the algebra seminar I tried to use the topic to motivate some basic objects of set theory from algebraic concerns. I didn't get through all of the material I wanted to (in retrospect I was overambitious), and I initially felt it had not gone so well. A little after the talk, however, the organizer of the algebra seminar approached me and said she had enjoyed the talk, and even reassured me about that style of presentation for job talks. She was almost certainly being too nice, but it made me feel better regardless. In any case, it seemed to go better than when I presented to them about Whitehead's problem years ago. I still haven't quite figured out a quick way of presenting ordinals and cardinals which preserves enough information to be worth it.
Monday, October 24, 2016
10/22/2016 UIC Workshop Day 3
Starting off this day, Justin Moore finished up his tutorial. Today was more about generating certain kinds of trees in such a way that CH is not violated. This, of course, continues the theme of the previous day, as the existence of certain trees is known to follow from MA and to be denied by diamond. So if one of these trees can be forced to exist while maintaining CH, it provides evidence that diamond was necessary. In particular, Moore covered some of the details of a forcing which is completely proper, adds a special tree, and is still completely proper in the extension. It's interesting to me that when I read through the material on trees in Kunen, I mistakenly thought of it as a kind of curiosity and as mostly completed work. I was way wrong, but I'm happy that I was, as the discussion of these tree properties, square properties, and other combinatorial properties generates a lot of interesting mathematics.
Anush Tserunyan was the next speaker, and her talk was on "Integer Cost and Ergodic Actions." She spent a little while acclimating the group to the ideas of invariant descriptive set theory, and although that was review for me, her perspective and intuitions were helpful. In particular, she focused on the interplay between group actions, equivalence relations, and graphs. These connections provide background for the questions "What is the most efficient presentation of an equivalence relation E?" and "What is the smallest number of edges required to create a graph G which understands E?" Towards a qualitative answer, you can ask whether there is an acyclic Borel graph which captures the equivalence relation. A good example here is the equivalence relation generated by the free action of a free group. More quantitatively, using a measure you can integrate the number of edges, take an infimum, and obtain a cost function. Gaboriau showed in 1998 that this infimum is achieved precisely when the equivalence relation can be captured by an acyclic Borel graph. Hjorth continued in this vein, looking more specifically at countable ergodic measure preserving equivalence relations. If the cost is an integer or infinite, then the equivalence relation can be seen as generated by a free action of a free group with at most countably many generators. Anush created tools which not only simplify the proof of this substantially, but allow her to strengthen its conclusion, adding that the subactions generated by the individual generators are ergodic as well. Anush is always a good speaker, and since I normally see her speak at descriptive set theory sessions, it was interesting to see how she catered her material to a more pure set theory audience. Her ability to live in both the world of set theory and that of analysis is something I strive for. As a final note, something in her discussion of these graph representations shook loose some ideas I've been having about the qualitative characterizations of various quotients of the reals. Even though E_0 cannot be linearly ordered, it can be represented by an acyclic graph, and in that sense its partial order structure is less complicated than, say, its product with the equivalence relation generated by the free group on two generators acting freely. So it seems that, to separate E_0 and l_1 qualitatively, I should be looking at ways in which they can be represented.
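To pin down the quantitative notion, as I recall the standard definitions (the normalization may differ by convention): for a measure preserving countable Borel equivalence relation E on a probability space (X, μ), a graphing G is a Borel graph whose connectedness relation is exactly E, and

```latex
% Cost of a graphing and of an equivalence relation (standard definitions, from memory)
C_\mu(G) \;=\; \frac{1}{2} \int_X \deg_G(x) \, d\mu(x),
\qquad
C_\mu(E) \;=\; \inf \{\, C_\mu(G) : G \text{ a Borel graphing of } E \,\}.
```

The "infimum is achieved by an acyclic graphing" statement above is then a statement about when this inf is actually a minimum.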
In the afternoon, Matt Foreman did the first two parts of his tutorial, "Applications of Descriptive Set Theory to Classical Dynamical Systems." This was based on the same slides as when I saw Foreman speak at CUNY, although this time he catered the talk to an audience of peers as opposed to graduate students. As such, he included a number of insights that weren't in the CUNY talk. Also, he had more time here, so he was able to go into more detail. As before, the problem is to see if it is possible to classify the ergodic measure preserving diffeomorphisms of compact manifolds. This problem was originally posed by von Neumann in the 1930s. You can argue that attempts to solve it are in the background of many of the significant developments in dynamics, such as the notion of entropy, although it's not clear that the field is still motivated by it today. Regardless, finding a solution to an 80-year-old problem is impressive. The answer is no: there is no simple way to classify the ergodic measure preserving diffeomorphisms of compact manifolds. More precisely, this problem is more complicated than the most complicated problems which have algebraic invariants: those isomorphism problems which can be seen as actions of the infinite permutation group. If you drop the diffeomorphism aspect and just look at ergodic measure preserving transformations, the problem is complete analytic, which is as bad as it could be. This uses the fact that odometer based dynamical systems are in some sense generic in the space of dynamical systems. Foreman and Weiss, however, really wanted to solve von Neumann's problem. To do this, they created a new type of dynamical system, motivated by odometer systems, called circular systems. While it is still unknown whether an arbitrary odometer system can be realized as an ergodic measure preserving diffeomorphism, they were able to show that circular systems can be. There is also a nice isomorphism between the class of odometer based systems and the class of circular based systems. Putting this all together, you get that the diffeomorphism problem is also complete analytic.
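For intuition about the term "odometer based": the basic odometer is just "add one with carry" on binary sequences. Here is a tiny truncated sketch (my own illustration of the term, nothing like the actual Foreman-Weiss odometer-based or circular systems):

```python
def odometer(x):
    """Add 1 with carry to a binary sequence (least significant digit first)."""
    y = list(x)
    for i, digit in enumerate(y):
        if digit == 0:
            y[i] = 1
            return tuple(y)
        y[i] = 0          # carry and continue
    return tuple(y)       # overflow wraps around to all zeros

# Orbit of the all-zeros point under a 4-digit truncation: cycles through all 16 states
point = (0, 0, 0, 0)
for _ in range(5):
    print(point)
    point = odometer(point)
```

The genuine object acts on infinite sequences (the 2-adic integers) and preserves the natural coin-flipping measure; the finite truncation above is only meant to show why the name "odometer" fits.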
Finishing off the third day, Maxwell Levine spoke on "Weak Squares and Very Good Scales." I saw a twenty minute version of this talk at the CUNY graduate student conference. When I saw it there I knew absolutely no pcf theory, and I had a hard time tracking what was happening. Now that I've done the pcf summer school and Maxwell had an hour, I think I got a much better idea of what he is doing. There are three kinds of combinatorial principles in play in his work: square principles, stationary reflection, and scales. Stationary reflection is a kind of compactness property, and can be provided by large cardinals. Square principles are a kind of incompactness; they occur in models like L, and can, for example, provide for the existence of an almost metrizable space which is not itself metrizable. A theorem of Shelah says that scales exist, but the existence of a very good scale contradicts stationary reflection. What's somewhat odd is that square properties can coexist with stationary reflection: even a strong square property can coexist with stationary reflection. The weakest square property can in fact coexist with simultaneous stationary reflection, but it is not immediately clear whether the second weakest square can. One way to check this is to see if the second weakest square implies the existence of a very good scale. Maxwell was able to construct a model where the weakest square holds and there are no very good scales; however, the second weakest square still fails in this model. So he next tried to see if it is possible for the second weakest square to hold strictly. The answer is yes, and it can even happen at relatively small cardinals (aleph_omega). Complicating the picture even more, in the presence of large cardinals this second weakest square property implies that the strongest form of simultaneous stationary reflection fails. In fact, Maxwell is able to explicitly characterize why this reflection fails. What I like about this work is that Maxwell is basically taking the theory of cardinal arithmetic apart piece by piece and seeing which steps were really necessary. At the very least I feel like I have a better understanding of the machinery of cardinal arithmetic after seeing him speak.
Sunday, October 23, 2016
10/21/2016 UIC Workshop Day 2
Justin Moore started off day 2 with a tutorial on "Iterated Forcing and the Continuum Hypothesis." The goal is to develop forcings which preserve CH while obtaining partial consequences of combinatorial principles which, in their full strength, contradict CH. As most of the forcings required to instantiate these principles require iterated forcing, the bulk of this talk focused on possible conditions on a poset which guarantee that, even after an iteration of some length, new reals have not been added. This is more subtle than it may initially seem, as the act of iteration can add reals even when none of the forcings being iterated do so on their own. The first thing one of these posets must do is preserve stationary subsets of aleph_1. There is a reasonably straightforward example which adds reals after being iterated, and the precise reason it does so is that it fails to preserve stationary sets. This leads to the notion of proper forcing. However, this is not enough: a more subtle example shows that an iteration of proper posets which individually do not add reals can still add reals in the end. One answer is to look at something called completely proper posets. This is a technical definition, but the upshot is that for a completely proper poset, certain information about its generics doesn't depend on the model being forced over. A theorem of Shelah shows that countable support iterations of completely proper posets satisfying one of two extra assumptions are still completely proper, and thus do not add reals. Interestingly enough, these extra assumptions really are separate, and not obviously subsumed by some more general assumption. Again, these advanced forcing talks really are beyond my skill set with forcing, but I am enjoying soaking up the culture and philosophy around them.
In the first part of the afternoon, Menachem Magidor finished up his tutorial on "Compactness and Incompactness Principles for Chromatic Numbers and Other Cardinal Sins." This talk started off with examples of both compactness and incompactness theorems for bounds on chromatic numbers in different settings. If stationary reflection fails, then bounds for chromatic numbers are not compact. On the other hand, in the presence of a countably complete ultrafilter, having chromatic number bounded by aleph_0 is a strongly compact property. After this, Magidor talked about some of his joint work with Bagaria in which they relate these compactness properties to other abstract compactness principles. While compactness principles are obtainable at large cardinals, getting them at small cardinals, say aleph_n for some n, is another matter, and involves large cardinals of noticeably higher consistency strength. I do want to check out how some of these chromatic number results play out under AD.
Omar Ben Neria went next; he spoke on "The Distance Between HOD and V." The question is, up to consistency, how small can HOD be? There are a couple of ways to measure this: one way for HOD to be large is for it to satisfy a kind of covering lemma, and a way for HOD to be small is for cardinals in V to be large cardinals in HOD. Woodin has shown, from an extendible cardinal, that a version of Jensen's dichotomy holds between V and HOD. Ben Neria and Unger were able to strengthen a result of Cummings, Friedman, and Golshani which shows that it is consistent for covering to fail for HOD. In fact, it is consistent that every uncountable regular cardinal in V is huge in HOD. One important piece of technology for the proof is weakly homogeneous forcing; such forcings ensure that the HOD of the forcing extension does not escape the ground model. Another important technique is to use iterated forcing with non-stationary support. I don't understand the intense modified iterated forcing going on here, but the result is nice. I asked about getting all uncountable cardinals of V to be Jonsson cardinals in HOD, and it seems that their construction does achieve this, although at the cost of at least a measurable cardinal at the moment. I'm not sure what the optimal consistency strength would be. Ben was kind enough to point me to some information which may help me attack the consistency problem of all uncountable cardinals below Theta being Jonsson in L(R).
To finish out the day, Nam Trang gave a talk on the "Compactness of Omega_1." Under the axiom of determinacy, omega_1 has some supercompactness-type properties. For a fixed set X, Trang is interested in finding minimal models satisfying the claim "omega_1 is X-supercompact." Even under ZF, this implies some failures of square. More than just implying that omega_1 is strongly compact with respect to the reals, it turns out that AD and this property are equiconsistent. The statement that omega_1 is supercompact with respect to the reals is equiconsistent with an even stronger form of determinacy. Moving up with the power set operation on R, the compactness properties become equiconsistent with stronger forms of determinacy still. Trang, in joint work with Rodriguez, was able to show that there is a unique model of a certain form which thinks that omega_1 is R-supercompact. I always enjoy hearing about these higher forms of determinacy and their corresponding models, and these equiconsistency results add yet another connection between the theory of large cardinals and the theory of determinacy.
Saturday, October 22, 2016
10/20/2016 UIC Workshop Day 1
Menachem Magidor started the morning off with the first tutorial, speaking on "Compactness and Incompactness for Chromatic Numbers and Other Cardinal Sins." He phrases compactness in the intuitive way "if every small substructure has a property, then the whole structure does," where small means of lower cardinality. He gave quite a few examples of compactness phenomena, stretching across algebra, topology, and set theory. I really liked his intuitive descriptions of weakly compact, strongly compact, and supercompact cardinals:
- A cardinal is weakly compact with respect to a certain property if the property passes to small pieces of it,
- A cardinal is strongly compact with respect to a certain property if the property passes to all pieces of it, and
- A cardinal is supercompact if it is strongly compact with respect to all second order properties.
The first talk in the afternoon was by Sherwood Hachtman, who spoke on "Forcing Analytic Determinacy." Essentially, this is related to the reverse math question of how much set theory you need to prove that lightface analytic determinacy is equivalent to the existence of 0#. More combinatorially, you can ask about determinacy with respect to Turing cones. While Woodin conjectured that Z_2 is enough strength (and it certainly is for boldface analytic determinacy), recent work by Cheng and Schindler seems to indicate that Z_3 may be necessary. Hachtman didn't have concrete answers to this yet, but inspired by the technique of genericity iterations, he has forced Turing cones to be inside some sets that should contain them.
This all was pretty interesting to me, as I didn't know that anybody was thinking about reverse math questions surrounding determinacy. The classic result about Borel determinacy is certainly cool enough to warrant more attention, so I'm not sure why I had never really thought about it. It would be neat if lightface analytic determinacy really were forceable, and thus an interesting intermediate stage between Borel determinacy and full boldface analytic determinacy.
Finishing out the first day, Spencer Unger presented the "Poor Man's Tree Property." People in the audience said it was like watching a mini James Cummings present. The idea is to replace the tree property with the weaker assumption that there are no special trees. This is consistency work, so the goal is to find a forcing which can ensure that there are no special trees. Because of some obstructions, such a model will have to fail GCH and SCH everywhere. Combining approaches of Magidor and Gitik, Unger was able to create a large interval of cardinals, starting at aleph_2 and passing some singular cardinals, on which there are no special trees. The modified Magidor approach works from the ground up and can ensure that aleph_2 allows for no special trees. This approach can be modified to handle finitely many cardinals, if you thread the forcings together appropriately. The Gitik approach can be used to ensure that a singular cardinal doesn't allow for special trees. For his result, Unger put these approaches together in a supercompact Prikry-like forcing which very carefully collapses certain intervals of cardinals. The details are much too complicated for me to understand with my current knowledge of forcing.
Even though it is outside of my skill set, these super high tech forcings are quite interesting, and it's good to see people like Unger, who have intuition and a big picture in mind, present them.