Monday, January 16, 2017

JMM 2017 Day 3 (Part 1)

This was a long day. There were a lot of short talks to attend, so this will be a pretty lengthy report (probably why I've been putting it off). As such, I am splitting it into two parts.

Talk 1: "Ligeti's Combinatorial Tonality" by Clifton Callendar

I decided to attend some math and music talks, and I'm glad I did, as they were a fun way to start the morning. This first one, as suggested by the title, was a study of some of Ligeti's later works, particularly the étude "Désordre." The pieces are neither conventionally tonal nor atonal, but a blend of the two. To describe this phenomenon, Callender uses the notions of inter-harmonies, intra-harmonies, and interval content. In particular, he performed a statistical analysis of the standard deviation of the interval content of "Désordre." The piece corresponds well to the statistically expected interval content, but with more emphasis on tritones. Interestingly enough, the piece corresponds better with the expected intervals and notes as it goes on. There are two proposed reasons for this. First, the left and right hands of the piano become more separated as the piece goes on, so the increased dissonance is less noticeable to human ears. Second, the piece has an isorhythmic structure, and as the piece becomes denser towards the end, this structure becomes compressed, allowing fewer possible notes and constraining the randomness. Based on a statistical analysis, both reasons seem to be partially correct. This was a really fun talk, particularly because we got to listen to the piece and hear the speaker play on a keyboard.

Talk 2: "Voicing Transformations and Linear Representation of Uniform Triadic Transformations" by Thomas M. Fiore

Thomas's motivation for this work was Webern's Concerto for Nine Instruments, op. 24, because of its rich pattern. He is proposing a more robust system for understanding triadic transformations, based on global reflections, contextual reflections, and voicing reflections. In terms of linear algebra, these are matrices which act on the note space. Thomas takes the semidirect product of the symmetric group on 3 elements with the matrix group generated by these reflections to create what he calls J. J ends up having very nice structure. It can also be used to describe a wide variety of classical progressions. For instance, there are elements of J whose orbits produce the diatonic fifths sequence, and a subgroup of J is isomorphic to the Hook group. The Hook group generates the uniform triadic transformations and was known before, but this is the first time a linear description of this group and its action has been realized. The major downside of this work is that it ignores inversions, so a progression is only represented up to inversion. Still pretty cool. I like how this material is essentially clever applications of undergraduate mathematics.
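To get a concrete feel for this kind of linear action, here is a small sketch. The matrices below are illustrative stand-ins I chose, not Fiore's actual generators; they just show what "matrices acting on the note space" looks like.

```python
import numpy as np

# Toy model: a voicing is a vector in (Z/12)^3, one pitch class per voice.
MOD = 12

def act(matrix, voicing):
    """Apply a linear transformation to a voicing, mod 12."""
    return tuple(int(x) for x in (matrix @ np.array(voicing)) % MOD)

# A global reflection: invert every voice around pitch class 0.
global_reflection = -np.eye(3, dtype=int)

# An element of S_3 acting as a permutation matrix (swap the outer voices).
swap_outer = np.array([[0, 0, 1],
                       [0, 1, 0],
                       [1, 0, 0]])

c_major = (0, 4, 7)                     # C-E-G as pitch classes
print(act(global_reflection, c_major))  # (0, 8, 5): an F minor voicing
print(act(swap_outer, c_major))         # (7, 4, 0): same notes, voices permuted
```

The semidirect product structure then amounts to composing such permutation matrices with the reflection group, keeping track of how the permutations shuffle the reflections around.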

Talk 3: "Hypergraph Regularity, Ultraproducts, and a Game Semantics" by Henry Towsner

This was another talk related to that fateful blog post by Terry Tao on non-standard analysis. Henry studies ultraproducts of finite (or finitely generated/finite dimensional) objects. There is a recurring phenomenon where properties of the ultraproduct translate into uniform statements about the class of finite objects forming the ultraproduct. An example of this is metastability. For finite graphs, this is the Szemerédi regularity lemma. A corresponding statement can be proved for hypergraphs, but it is horribly complicated to state and prove. Henry, proceeding philosophically, wondered about the role of the ultraproduct in these statements, and whether a better semantics was possible which could describe this phenomenon. Building on the game-theoretic form of first-order logic semantics, Henry described a game setup, called uniform semantics, that can describe these uniform statements. This avoids ultraproducts, and can actually be used to obtain specific bounds, which the ultraproduct method could not. This is honestly super cool, and I'm really intrigued by the further applications that Henry mentioned.

Talk 4: "Measurable Chromatic Numbers" by Clinton Conley

This is essentially what it sounds like (if you know the terminology). The objects of study here are graph structures on Polish spaces, where the edge relation is Borel. The chromatic number is the minimum number of colors needed to color the graph in such a way that any two nodes sharing an edge get different colors. This, however, splits into three numbers: the raw number, where the axiom of choice can be used to select starting points in components of the graph; the Borel number, where the colorings under consideration have to be Borel; and the measure number, where the colorings under consideration have to be (let's say Lebesgue) measurable and you only have to color the graph up to a set of full measure. For countable graphs, there is a neat result called Brooks' theorem, which states that if the degree of the graph is at most d, the chromatic number is at most d+1, and if the graph additionally contains no complete graph on d+1 vertices and no odd cycles, then the chromatic number is at most d. This extends to the Borel chromatic number, except that Marks proved that there are acyclic Borel graphs of degree d (for any d) whose Borel chromatic number is d+1. The examples produced by Marks are of high complexity, however, which is maybe the only way one could ask for this result to be improved. For the measure chromatic number, there are actually two variants: the first listed above, where you color up to a set of full measure, and another where, for every epsilon, you color the graph up to a set of measure 1 - epsilon. These two numbers can be different, as seen in the example of the graph generated by the irrational rotation of the circle (3 for the full-measure version, 2 for the epsilon version). In fact, any graph generated by the Borel action of an amenable group has epsilon measure chromatic number 2. So for the measure chromatic number, the irrational rotation of the circle is essentially the only new criterion that needs to be added to Brooks' theorem.
With this in hand, and the fact that the Borel actions of amenable groups generate hyperfinite graphs (up to measure), one might hope that Marks' examples somehow rely on being of high complexity. Indeed, it's not possible to use smooth graphs for such a class of examples. However, by tweaking Marks' examples in a very non-trivial way, Clinton was able to create (for all d) hyperfinite acyclic graphs of degree d whose Borel chromatic number is d+1.
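The finite combinatorial core of the d+1 degree bound is just the greedy algorithm; a quick sketch (pure finite combinatorics, nothing Borel about it):

```python
def greedy_coloring(adj):
    """Properly color a finite graph greedily.  If every vertex has degree
    at most d, this uses at most d+1 colors, since a vertex's neighbors can
    block at most d colors.  The Borel and measurable refinements of this
    bound are exactly where such choice-laden arguments break down."""
    coloring = {}
    for v in adj:
        used = {coloring[u] for u in adj[v] if u in coloring}
        coloring[v] = next(c for c in range(len(adj)) if c not in used)
    return coloring

# A 5-cycle: degree d = 2 everywhere, and (being an odd cycle) it genuinely
# needs 3 = d + 1 colors.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(greedy_coloring(cycle5))
```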

Wednesday, January 11, 2017

JMM 2017 Day 2

I didn't go to quite as many talks this day, as I had to attend a reception for graduate students who had received money to attend, and I gave my own talk that afternoon as well.

Talk 1: "Descriptive Set Theory and Borel Probabilities" by Chris Caruvana

In his research, Chris studies the space of Borel probability measures. Since this is a Polish space, one can talk about comeager collections of measures in a coherent way. In particular, the collection of sets of reals which are measure zero for a comeager collection of measures is an object of interest. This forms a countably closed ideal, and it contains at least the meager sets. Thus there is an associated notion of "smallness" that comes from these sets. With this notion of smallness comes an analogue of being Baire measurable, that is, having small difference with some Borel set. Using this analogue, Chris was able to prove an automatic continuity result. There are interesting open questions floating around about exactly what relationship between these sets and the universally null sets is consistent.

Talk 2: "Rationales for Mathematics and its Significance in Recently-Excavated Bamboo Texts from Ancient China" by Joseph W. Dauben

A whole slew of ancient Chinese mathematical texts has been recovered recently. They were written on flattened bamboo strips, and some are really well preserved. They tend to follow a student-master dialogue style and cover a pretty wide range of arithmetic and word problems. It's interesting that they seem to have a very Pythagorean view of things, where everything is number (and in particular expressible with fractions? That point wasn't totally clear). In one of the dialogues, the student is upset because he can't seem to master literature and math at the same time. The teacher tells him to learn math if he has to choose, because "math can help you understand literature, but literature cannot help you understand math." This was a fun talk; it was definitely cool to see pictures of the bamboo strips. I don't know much about ancient Chinese math, so I didn't understand a lot of the speaker's references, but I'm kind of curious now to look at some of the important texts that have been uncovered (translated into English, of course).

Talk 3: "Modal logic axioms valid in quotient spaces of finite CW-complexes and other families of topological sets" by Maria Nogin

This talk mostly dealt with realizing the semantics of modal logic in certain topologies: in particular, particular point topologies (where a set is open iff it contains a designated point), excluded point topologies (open iff it misses a designated point), and quotients of finite CW-complexes. Different extensions of S4 can be realized across these spaces. I've never looked into this kind of thing too much, beyond understanding the relationship between compactness in topological products and compactness in classical logic. I remember a colleague once looking into this sort of representation for intuitionistic logic. I think this level of understanding of the semantics can be helpful for these more complicated logics.

Talk 4: "On a variant of continuous logic" by Simon Cho

Another pure logic talk, this one with more of a non-standard analysis bent. Simon was looking at something called geodesic logic, which expands on continuous logic by loosening the requirement that the semantics consist of continuous functions, while adding linear and geodesic structure to the semantics. Using this, uniform convergence results can be obtained for functions which are not continuous. In some sense, this is an extension of ideas presented in a really good sequence of blog posts by Terry Tao on non-standard analysis, convergence, and ergodic theory. I really enjoy seeing where this line of thought is progressing. It may be easiest to approach intractable analysis problems by first figuring out the appropriate logic to study them with.

Saturday, January 7, 2017

JMM 2017 Day 1

For this year's joint meeting I decided to mix it up a bit and attend a wider variety of talks. I think I had a better time because of it. On this first day, I attended some MAA talks on teaching, some history of math talks, and my friend James' contributed talk.

Talk 1: "Instant Insanity: Using Colored Blocks to Teach Graph Theory" by Stephen Adams

Instant Insanity is a game using four colored blocks. The object is to stack the blocks into a column so that each color appears exactly once on each side of the column. It's easy to set three up, but pretty difficult to get four. Stephen had built his own blocks, and since he had a small class, he let the students attempt the game on their own and keep the blocks during the subsequent lecture. Graph theory can be used to label the blocks in such a way that a relatively simple algorithm can then be applied to achieve a successful stacking. I think this is an interesting way to introduce graphs. I like how it uses visual, tactile, and formal reasoning skills on the part of the student.
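For a sense of the search space, here is a brute-force solver. This is not the graph-labeling algorithm from the talk, just a direct search over orientations, run on a hypothetical set of cubes I constructed to be solvable; real Instant Insanity sets differ.

```python
from itertools import product

def orientations(pairs):
    """All 24 stack-relevant orientations of a cube, given its three
    opposite-face pairs.  Yields (front, back, left, right); the top and
    bottom faces are hidden inside the stack and don't matter."""
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            for f, b in (pairs[i], pairs[i][::-1]):
                for l, r in (pairs[j], pairs[j][::-1]):
                    yield (f, b, l, r)

def solve(cubes):
    """Try every combination of orientations; a stacking works when each of
    the four visible sides of the column shows four distinct colors."""
    for choice in product(*(list(orientations(c)) for c in cubes)):
        if all(len({o[side] for o in choice}) == 4 for side in range(4)):
            return choice
    return None

# A hypothetical solvable set (colors R, G, B, W), built so that using each
# cube's first pair as front/back and second pair as left/right works.
cubes = [[("R", "G"), ("B", "W"), ("R", "R")],
         [("G", "B"), ("W", "R"), ("G", "G")],
         [("B", "W"), ("R", "G"), ("B", "B")],
         [("W", "R"), ("G", "B"), ("W", "W")]]
print(solve(cubes))
```

The search space is 24^4 = 331,776 stackings, which is exactly why the graph-theoretic shortcut in the talk is so satisfying by comparison.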

Talk 2: "A Magic Trick That is Full of Induction" by Robert W. Vallin

The students have shuffled the cards. They are behind your back. You have told them that you can tell red cards from black cards by touch, and, of course, they are skeptical. Miraculously, you consistently alternate pulling red and black cards, without looking. Now they want to know the trick. It's actually quite simple: you prep the deck to alternate red and black ahead of time. Even though you let the students cut the deck repeatedly, cuts don't change this alternating feature. But what about the riffle shuffle you performed before putting the cards behind your back? How did that not mess up the order? Well, it did, but in a very specific way, so that if you pull cards in the right way, you will still alternate red and black. That's the real magic. What's the proof that it works? It goes by induction.

Robert brought this up as an interesting way of introducing students to induction proofs, more interesting than the usual summation formulas at least. If you extend the concept to other shuffled permutations, it sometimes works. There is a nice characterization of exactly when it works (the Gilbreath permutations), and related to this characterization is a problem which needs to be proved with strong induction. So a magic trick could be the motivating example for all of the theoretical material involved in teaching induction in a discrete class.
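The underlying fact (Gilbreath's first principle) is easy to simulate. A sketch, assuming the standard setup where a packet is dealt off the top (which reverses it) before the riffle, and cards are then pulled in consecutive pairs:

```python
import random

def riffle(a, b, rng):
    """Riffle two piles together: interleave them randomly while
    preserving the internal order of each pile."""
    a, b = list(a), list(b)
    merged = []
    while a and b:
        # drop the next card from one pile, weighted by pile size
        src = a if rng.random() < len(a) / (len(a) + len(b)) else b
        merged.append(src.pop(0))
    return merged + a + b

def trick_works(n_pairs=26, seed=0):
    """Prep a deck alternating red (R) and black (B), deal off a random
    packet (reversing it), riffle the packet into the rest, and check that
    every consecutive pair from the top still has one card of each color."""
    rng = random.Random(seed)
    deck = ["R", "B"] * n_pairs
    k = rng.randrange(1, 2 * n_pairs)       # deal off k cards, reversing them
    packet, rest = deck[:k][::-1], deck[k:]
    shuffled = riffle(packet, rest, rng)
    return all({shuffled[2 * i], shuffled[2 * i + 1]} == {"R", "B"}
               for i in range(n_pairs))

print(all(trick_works(seed=s) for s in range(100)))  # holds for every trial
```

So "pulling cards the right way" just means pulling them in pairs; each pair is guaranteed to contain one red and one black, no matter how the riffle fell.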

Talk 3: "From L'Hopital to Lagrange: Analysis Textbooks in the 18th Century" by Robert Bradley

At the beginning of the 18th century, calculus (although developed by Newton and Leibniz) was still understood by only very few people. Over the course of the century it spread across the intellectual world, in part because of key textbooks. L'Hopital wrote one called "Analyse des infiniment petits" in 1696, with a second edition in 1715. From Leibniz to the Bernoullis to L'Hopital, the axioms for infinitesimals had been reduced to two. L'Hopital didn't deal with functions, but with curves, and so although his book contains derivative rules for rational functions, it does not have the chain rule. It also does not have rules for transcendental functions.

Euler's textbook was the "Institutiones Calculi Differentialis," published in 1750, although there is an unpublished version from 1727. In the 1727 version, he does talk about functions, begins with finite differences, then treats logarithms and exponentials, and uses axioms similar to L'Hopital's. He does not talk about trig functions. By 1750 he has changed his viewpoint on infinitesimals: now they are algebraically 0, but not geometrically. Euler also wrote a pre-calculus book, the "Introductio." In this book he defines functions, distinguishes between variables and constants, and uses series and power series representations of functions. He also defined the trig functions from the unit circle.

The last book Robert discussed was the "Théorie des fonctions analytiques" by Lagrange. Lagrange extends Euler's view and begins the book with power series representations. He does this to avoid using infinitesimals. From power series, derivatives are algebraically definable. He is the first of the three to talk about derivatives as functions, and although his reasoning is fundamentally circular, he talks much about rigor and purity in mathematics. Apparently this approach became the standard one in the 19th century, surviving the actual rigorization of calculus. It wasn't until the 20th century that the rigorous sequence of limits -> derivatives -> power series was incorporated into calculus textbooks.
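Lagrange's "algebraic" derivative is easy to state in modern terms: differentiate a power series term by term, with no limits involved. A minimal sketch of that idea:

```python
from fractions import Fraction
from math import factorial

def formal_derivative(coeffs):
    """Term-by-term derivative of a truncated power series:
    d/dx (sum a_n x^n) = sum n * a_n x^(n-1).
    Purely algebraic, in the spirit of Lagrange's definition."""
    return [n * a for n, a in enumerate(coeffs)][1:]

# e^x as a truncated series: its formal derivative is itself (one term shorter).
exp_coeffs = [Fraction(1, factorial(n)) for n in range(8)]
print(formal_derivative(exp_coeffs) == exp_coeffs[:-1])  # True
```

The circularity the talk mentioned is hiding in the assumption that every function of interest has such a power series expansion in the first place.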

I'm definitely interested in looking at these books myself at some point. It was interesting to see how some aspects of the pre-calc/calc curriculum have been around for a long time, while others (like late transcendentals) are relatively recent. This trickle of pure math results dripping down into the lower math curriculum is really interesting, too.

Talk 4: "Jean le Rond D'Alembert: l'enfant terrible of the French Enlightenment" by Lawrence D'Antonio

D'Alembert was a very prominent French mathematician who was a contemporary of Euler. He was very influential in both the French and German academies. I wasn't aware that he was the inventor of the ratio test for series, which is kind of cool. This talk was really entertaining: D'Alembert's life was very dramatic, and Lawrence presented it well. D'Alembert eventually made enemies of just about everybody, including Euler (who basically scooped him on a result about the equinoxes). He was also co-editor, along with Diderot, of the Encyclopédie, a mega-collection of writings of Enlightenment thinkers. It's translated and online at http://encyclopedie.uchicago.edu/, which is awesome for us now. There were some really hilarious quotes from D'Alembert, though unfortunately I didn't write any of them down.

Talk 5: "Packing Measure of Super Separated Iterated Function Systems" By James Reid

James discussed one of his recent research projects, which was to find an algorithm for computing the packing measure of a certain family of fractals. Two things about this are very cool. The first is that these dimension and measure computations are usually not even remotely computable; the dual notions, Hausdorff dimension and measure, certainly are not computable in the broad context James was working in. The second is that, although a nice formula had been given for the packing measure of one-dimensional examples, people had yet to push this to higher dimensions. Although James' formula only applies to objects whose packing dimension is bounded by 1, the objects themselves can live in the plane, R^3, and so on. It will be interesting to see whether having packing dimension greater than 1 is really an obstacle, or whether the nice behavior will somehow continue. Oh yeah, the talk had some really fun pictures of fractals generated by polygons as well.