Please visit www.mathematicalgemstones.com, where this blog will be updated from now on.
I have exciting news today: The first ever joint paper by Monks, Monks, Monks, and Monks has been accepted for publication in Discrete Mathematics.
These four Monkses are my two brothers, my father, and myself. We worked together last summer on the notorious 3x+1 conjecture (also known as the Collatz conjecture), an open problem so easy to state that a child can understand the question, yet one that has stumped mathematicians for over 70 years.
The 3x+1 problem asks the following: Suppose we start with a positive integer, and if it is odd then multiply it by 3 and add 1, and if it is even, divide it by 2. Then repeat this process as long as you can. Do you eventually reach the integer 1, no matter what you started with?
For instance, starting with 5, it is odd, so we apply 3x+1. We get 16, which is even, so we divide by 2. We get 8, and then 4, and then 2, and then 1. So yes, in this case, we eventually end up at 1.
That’s it! It’s an addictive and fun problem to play around with (see http://xkcd.com/710/), but it is frustratingly difficult to solve completely.
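The iteration is easy to experiment with on a computer. Here is a minimal Python sketch (the function name is my own choice) that computes the trajectory of a starting value:

```python
def collatz_trajectory(n):
    """Return the 3x+1 trajectory of n, stopping once 1 is reached."""
    if n < 1:
        raise ValueError("n must be a positive integer")
    trajectory = [n]
    while n != 1:
        # odd: multiply by 3 and add 1; even: divide by 2
        n = 3 * n + 1 if n % 2 == 1 else n // 2
        trajectory.append(n)
    return trajectory

print(collatz_trajectory(5))  # [5, 16, 8, 4, 2, 1], as in the example above
```

Of course, if the conjecture is false, this function would run forever on a counterexample; that is exactly the open question.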
So far, it is known that all numbers less than $5 \times 2^{60}$, or about five billion billion, eventually reach 1 when this process is applied. (See Silva’s and Roosendaal’s websites, where calculations are continually being run to verify the conjecture for higher and higher integers.)
But what about the general case?
Let’s define the Collatz function $T$ on the positive integers by:

$$T(n) = \begin{cases} \frac{3n+1}{2} & \text{if } n \text{ is odd} \\ \frac{n}{2} & \text{if } n \text{ is even.} \end{cases}$$
Then the conjecture states that for every positive integer $n$, the sequence $n, T(n), T(T(n)), \ldots$ has $1$ as a term. Notice that $T(1) = 2$ and $T(2) = 1$, so $1$ is a cyclic point of $T$.
We can draw a graph on the positive integers in which we connect $n$ to $T(n)$ with an arrow for each $n$, and color the arrow red if $n$ is odd and black if $n$ is even. The portion of the graph near $1$ looks like this:
We just want to show that this graph is connected – that there are no other components, containing very large integers, that don’t connect back to $1$.
Last summer, we started out with some previous ideas and partial progress. In 2006, one member of our family research team, my brother Ken M. Monks, demonstrated that it suffices to prove the conjecture for some arithmetic sequence in the Collatz graph. (The paper is available here.) With this as a starting point, we worked to understand how arithmetic sequences are distributed across the Collatz graph. Where do they occur? How far apart are their elements?
By the end of the summer, we realized that there was some beautiful structure in the Collatz graph lying before us. We proved several surprising number-theoretic results, for instance, that every $T$-orbit must contain an integer congruent to $2$ modulo $9$.
We’re not sure if this will lead to a proof of the conjecture, but we found a few big gemstones that might give us a boost in the right direction. A preprint of the paper can be found here:
Enjoy, and feel free to post a comment with any ideas on the conjecture you may have!
Over the last few weeks I’ve been writing about several little gemstones that I have seen in symmetric function theory. But one of the main overarching beauties of the entire area is that there are at least five natural bases with which to express any symmetric function: the monomial ($m_\lambda$), elementary ($e_\lambda$), power sum ($p_\lambda$), complete homogeneous ($h_\lambda$), and Schur ($s_\lambda$) bases. As a quick reminder, here is an example of each for $\lambda = (2,1)$, in three variables $x, y, z$:

$$m_{(2,1)} = x^2y + x^2z + y^2x + y^2z + z^2x + z^2y$$
$$e_{(2,1)} = e_2e_1 = (xy + xz + yz)(x + y + z)$$
$$p_{(2,1)} = p_2p_1 = (x^2 + y^2 + z^2)(x + y + z)$$
$$h_{(2,1)} = h_2h_1 = (x^2 + y^2 + z^2 + xy + xz + yz)(x + y + z)$$
$$s_{(2,1)} = x^2y + x^2z + y^2x + y^2z + z^2x + z^2y + 2xyz$$
Since we can usually transition between the bases fairly easily, this gives us lots of flexibility in attacking problems involving symmetric functions; it’s sometimes just a matter of picking the right basis.
So, to wrap up my recent streak on symmetric function theory, I’ve posted below a list of rules for transitioning between the bases. (The only rules I have not mentioned are those for expressing the monomial symmetric functions in terms of the others; these are rarely needed and also rather difficult.)
$$e_\lambda = \sum_\mu M_{\lambda\mu} m_\mu,$$ where $M_{\lambda\mu}$ is the number of $0$-$1$ matrices with row sums $\lambda_i$ and column sums $\mu_j$.
$$h_\lambda = \sum_\mu N_{\lambda\mu} m_\mu,$$ where $N_{\lambda\mu}$ is the number of matrices with nonnegative integer entries with row sums $\lambda_i$ and column sums $\mu_j$.
$$p_\lambda = \sum_\mu R_{\lambda\mu} m_\mu,$$ where $R_{\lambda\mu}$ is the number of ways of sorting the parts of $\lambda$ into a number of ordered blocks in such a way that the sum of the parts in the $i$th block is $\mu_i$.
$$s_\lambda = \sum_\mu K_{\lambda\mu} m_\mu,$$ where $K_{\lambda\mu}$ is the number of semistandard Young tableaux of shape $\lambda$ and content $\mu$.
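These counting rules are easy to check by brute force in small cases. The sketch below (the function name and interface are my own, not from any library) counts the matrices appearing in the first two rules, and recovers, for instance, the coefficient of $m_{(1,1,1)}$ in $e_{(2,1)} = e_2e_1$:

```python
from itertools import product

def count_matrices(row_sums, col_sums, zero_one=False):
    """Count matrices with nonnegative integer entries (0-1 entries when
    zero_one=True) having the given row and column sums, by brute force."""
    r, c = len(row_sums), len(col_sums)
    top = 1 if zero_one else max(row_sums, default=0)  # entries can't exceed their row sum
    count = 0
    for entries in product(range(top + 1), repeat=r * c):
        rows = [entries[i * c:(i + 1) * c] for i in range(r)]
        if all(sum(row) == s for row, s in zip(rows, row_sums)) and \
           all(sum(col) == s for col, s in zip(zip(*rows), col_sums)):
            count += 1
    return count

# Coefficient of m_{(1,1,1)} in e_{(2,1)}:
# count 0-1 matrices with row sums (2,1) and column sums (1,1,1)
print(count_matrices([2, 1], [1, 1, 1], zero_one=True))  # 3
```

Indeed, $e_2e_1 = (xy + xz + yz)(x + y + z)$ contributes the monomial $xyz$ in three ways.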
There is a reason I’ve been building up the theory of symmetric functions in the last few posts, one gemstone at a time: all this theory is needed for the proof of the beautiful Murnaghan-Nakayama rule for computing the characters of the symmetric group.
What do symmetric functions have to do with representation theory? The answer lies in the Frobenius map, the keystone that completes the bridge between these two worlds.
The Frobenius map essentially takes a character of a representation of the symmetric group and assigns it a symmetric function. To define it, recall that characters are constant across conjugacy classes, and in $S_n$, the conjugacy classes correspond to the partitions of $n$ by associating a permutation $\sigma$ with its cycle type $\mathrm{type}(\sigma)$. For instance, the permutations $(123)(45)$ and $(14)(235)$ both have cycle type $(3,2)$. So, any character $\chi$ satisfies $\chi(\sigma) = \chi(\tau)$ whenever $\mathrm{type}(\sigma) = \mathrm{type}(\tau)$, and we may write $\chi(\lambda)$ for the value of $\chi$ on any permutation of cycle type $\lambda$.
We can now define the Frobenius map $\mathrm{Frob}$. For any character $\chi$ of a representation of $S_n$, define

$$\mathrm{Frob}(\chi) = \frac{1}{n!} \sum_{\sigma \in S_n} \chi(\sigma)\, p_{\mathrm{type}(\sigma)},$$
where $p_k = x_1^k + x_2^k + x_3^k + \cdots$ is the $k$th power sum symmetric function. (Recall that $p_\lambda = p_{\lambda_1} p_{\lambda_2} \cdots p_{\lambda_k}$.)
Then, combining the permutations with the same cycle type in the sum, we can rewrite the definition as a sum over partitions of size $n$:

$$\mathrm{Frob}(\chi) = \sum_{\lambda \vdash n} \frac{1}{z_\lambda} \chi(\lambda)\, p_\lambda$$
for some constants $z_\lambda$. (It’s a fun little combinatorial problem to show that if $\lambda$ has $m_1$ $1$’s, $m_2$ $2$’s, and so on, then $z_\lambda = 1^{m_1} m_1!\, 2^{m_2} m_2!\, 3^{m_3} m_3! \cdots$.)
As an example, let’s take a look at the character table of $S_3$, whose conjugacy classes correspond to the partitions $(1,1,1)$, $(2,1)$, and $(3)$:

$$\begin{array}{c|ccc} & (1,1,1) & (2,1) & (3) \\ \hline \chi^{(3)} & 1 & 1 & 1 \\ \chi^{(1,1,1)} & 1 & -1 & 1 \\ \chi^{(2,1)} & 2 & 0 & -1 \end{array}$$
Consider the second row, the sign character $\chi^{(1,1,1)}$, and let us work over three variables $x, y, z$. Then the Frobenius map sends $\chi^{(1,1,1)}$ to

$$\frac{1}{6} p_{(1,1,1)} - \frac{1}{2} p_{(2,1)} + \frac{1}{3} p_{(3)}$$
by our definition above. This simplifies to:

$$\frac{1}{6}(x+y+z)^3 - \frac{1}{2}(x^2+y^2+z^2)(x+y+z) + \frac{1}{3}(x^3+y^3+z^3),$$
which can be written as $xyz$. Notice that, by the combinatorial definition of the Schur functions, this is precisely the Schur function $s_{(1,1,1)}$! In fact:
The Frobenius map sends the irreducible character $\chi^\lambda$ to the Schur function $s_\lambda$, for all partitions $\lambda$ of $n$.
And therein lies the bridge.
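As a concrete sanity check on the theorem, we can evaluate $\mathrm{Frob}$ of the sign character of $S_n$ at a sample point using exact rational arithmetic; the result should agree with $s_{(1,1,\ldots,1)} = e_n = x_1 x_2 \cdots x_n$. This brute-force sketch (with names of my own choosing) does that for $n = 3$:

```python
from fractions import Fraction
from itertools import permutations

def cycle_type(perm):
    """Cycle type of a permutation given as a tuple of images of 0..n-1."""
    n, seen, lengths = len(perm), set(), []
    for i in range(n):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return sorted(lengths, reverse=True)

def frobenius_of_sign(values):
    """Evaluate Frob(sign) = (1/n!) * sum_sigma sign(sigma) * p_{type(sigma)}
    at the sample point `values`, using exact arithmetic."""
    n = len(values)
    total = Fraction(0)
    for perm in permutations(range(n)):
        ct = cycle_type(perm)
        sign = (-1) ** (n - len(ct))       # sign of a permutation with these cycles
        p = 1
        for k in ct:                       # p_lambda = product of power sums p_k
            p *= sum(v ** k for v in values)
        total += sign * p
    factorial = 1
    for i in range(2, n + 1):
        factorial *= i
    return total / factorial

vals = (2, 3, 5)
print(frobenius_of_sign(vals))  # 30, i.e. e_3(2,3,5) = 2*3*5 = s_{(1,1,1)}(2,3,5)
```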
Why is this the case? Read on to find out…
In the last few posts (see here and here), I’ve been talking about various bases for the symmetric functions: the monomial symmetric functions $m_\lambda$, the elementary symmetric functions $e_\lambda$, the power sum symmetric functions $p_\lambda$, and the homogeneous symmetric functions $h_\lambda$. As some of you aptly pointed out in the comments, there is one more important basis to discuss: the Schur functions!
When I first came across the Schur functions, I had no idea why they were what they were, why every symmetric function can be expressed in terms of them, or why they were useful or interesting. I first saw them defined using a simple, but rather arbitrary-sounding, combinatorial approach:
First, define a semistandard Young tableau (SSYT) to be a way of filling in the squares of a partition diagram (Young diagram) with numbers such that they are nondecreasing across rows and strictly increasing down columns. For instance, the Young diagram of $(3,2)$ is:

$$\begin{array}{ccc} \square & \square & \square \\ \square & \square & \end{array}$$

and one possible SSYT of this shape is:

$$\begin{array}{ccc} 1 & 1 & 2 \\ 2 & 3 & \end{array}$$
(Fun fact: The plural of tableau is tableaux, pronounced exactly the same as the singular, but with an x.)
Now, given an SSYT $T$ with numbers of size at most $n$, let $m_i$ be the number of $i$’s written in the tableau. Given variables $x_1, \ldots, x_n$, we can define the monomial $x^T = x_1^{m_1} x_2^{m_2} \cdots x_n^{m_n}$. Then the Schur function $s_\lambda$ is defined to be the sum of all monomials $x^T$ where $T$ is an SSYT of shape $\lambda$.
For instance, if $\lambda = (2,1)$ and $n = 2$, then the possible SSYT’s of shape $\lambda$ with numbers of size at most $2$ are:

$$\begin{array}{cc} 1 & 1 \\ 2 & \end{array} \qquad \begin{array}{cc} 1 & 2 \\ 2 & \end{array}$$

So the Schur function in variables $x$ and $y$ is $s_{(2,1)}(x,y) = x^2y + xy^2$.
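The combinatorial definition translates directly into a brute-force computation. This Python sketch (my own naming, not a standard library routine) enumerates SSYT of a given shape with entries at most $n$ and collects the monomials $x^T$ as exponent vectors:

```python
from itertools import product
from collections import Counter

def schur_poly(shape, n):
    """Brute-force s_shape in n variables: return a Counter mapping exponent
    tuples (m_1, ..., m_n) to coefficients, one monomial x^T per SSYT T."""
    cells = [(r, c) for r, row_len in enumerate(shape) for c in range(row_len)]
    poly = Counter()
    for filling in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, filling))
        # rows weakly increase left to right; columns strictly increase downward
        rows_weak = all(T[r, c] <= T[r, c + 1] for (r, c) in cells if (r, c + 1) in T)
        cols_strict = all(T[r, c] < T[r + 1, c] for (r, c) in cells if (r + 1, c) in T)
        if rows_weak and cols_strict:
            poly[tuple(filling.count(i) for i in range(1, n + 1))] += 1
    return poly

# Shape (2,1) in two variables: x^2 y + x y^2
print(schur_poly([2, 1], 2))
```

This is exponential in the number of cells, so it is only a toy, but it is handy for checking small examples of the properties below.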
This combinatorial definition seemed rather out-of-the-blue when I first saw it. Even more astonishing is that the Schur functions have an abundance of nice properties: among other things, they are symmetric, and they form a basis for the space of symmetric functions.
All of this is quite remarkable – but why is it true? It is not even clear that they are symmetric, let alone a basis for the symmetric functions.
After studying the Schur functions for a few weeks, I realized that while this combinatorial definition is very useful for quickly writing down a given , there is an equivalent algebraic definition that is perhaps more natural in terms of understanding its role in symmetric function theory. (Turn to page 2!)
Time for another gemstone from symmetric function theory! (I am studying for my Ph.D. qualifying exam at the moment, and as a consequence, the next several posts will feature yet more gemstones from symmetric function theory. You can refer back to this post for the basic definitions.)
Start with a polynomial in a variable $t$ that factors as

$$P(t) = (t - x_1)(t - x_2) \cdots (t - x_n).$$
The coefficients of $P(t)$ are symmetric functions in $x_1, \ldots, x_n$ – in fact, they are, up to sign, the elementary symmetric functions in $x_1, \ldots, x_n$.
In particular, if $e_k$ denotes $e_k(x_1, \ldots, x_n)$, then

$$P(t) = t^n - e_1 t^{n-1} + e_2 t^{n-2} - \cdots + (-1)^n e_n.$$
(These coefficients are sometimes referred to as Vieta’s Formulas.)
Since $P(x_i) = 0$ for all $i$, we can actually turn the equation above into a symmetric function identity by plugging in $t = x_i$:

$$x_i^n - e_1 x_i^{n-1} + e_2 x_i^{n-2} - \cdots + (-1)^n e_n = 0,$$
and then summing these equations over $i = 1, \ldots, n$:

$$\sum_i x_i^n - e_1 \sum_i x_i^{n-1} + e_2 \sum_i x_i^{n-2} - \cdots + (-1)^{n-1} e_{n-1} \sum_i x_i + (-1)^n n\, e_n = 0.$$
…and we have stumbled across another important basis of the symmetric functions, the power sum symmetric functions. Defining $p_k = x_1^k + x_2^k + \cdots + x_n^k$, every symmetric function in $x_1, \ldots, x_n$ can be uniquely expressed as a linear combination of products of these $p_k$’s (we write $p_\lambda = p_{\lambda_1} p_{\lambda_2} \cdots p_{\lambda_k}$).
So, we have

$$p_n - e_1 p_{n-1} + e_2 p_{n-2} - \cdots + (-1)^{n-1} e_{n-1} p_1 + (-1)^n n\, e_n = 0.$$
This is called the $n$th Newton–Girard identity, and it gives us a recursive way of expressing the $e$’s in terms of the $p$’s.
Well, almost. We have so far only shown that this identity holds when dealing with exactly $n$ variables. However, we can plug in any number of zeros for the $x_i$’s to see that the identity also holds for fewer than $n$ variables. And, if we had more than $n$ variables, we can compare coefficients of each monomial individually, which can only involve at most $n$ of the variables at a time since the equation is homogeneous of degree $n$. Setting the rest of the variables equal to zero for each such monomial will do the trick.
So now we have it! The Newton–Girard identities allow us to recursively solve for $e_n$ in terms of the $p$’s. Wikipedia does this nicely and explains the computation, and the result is:

$$e_n = \frac{1}{n!} \det \begin{pmatrix} p_1 & 1 & 0 & \cdots & 0 \\ p_2 & p_1 & 2 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ p_{n-1} & p_{n-2} & \cdots & p_1 & n-1 \\ p_n & p_{n-1} & \cdots & p_2 & p_1 \end{pmatrix}$$
For instance, this gives us $e_2 = \frac{1}{2}(p_1^2 - p_2)$, which is true:

$$\frac{1}{2}\left( (x_1 + x_2 + \cdots)^2 - (x_1^2 + x_2^2 + \cdots) \right) = \sum_{i < j} x_i x_j.$$
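The recursion is also easy to run on a computer. The following sketch (my own code, using exact rational arithmetic) recovers the $e$’s from the $p$’s via the Newton–Girard identities and checks them at a concrete point:

```python
from fractions import Fraction

def e_from_p(p):
    """Given power sums p[1..n] (index 0 is a placeholder), compute the
    elementary symmetric functions e[0..n] via the Newton-Girard recursion
    m * e_m = sum_{k=1}^{m} (-1)^(k-1) * e_{m-k} * p_k."""
    n = len(p) - 1
    e = [Fraction(1)] + [Fraction(0)] * n
    for m in range(1, n + 1):
        e[m] = sum((-1) ** (k - 1) * e[m - k] * p[k] for k in range(1, m + 1)) / Fraction(m)
    return e

# Sanity check at the point (x_1, x_2, x_3) = (2, 3, 5)
xs = (2, 3, 5)
p = [None] + [Fraction(sum(x ** k for x in xs)) for k in range(1, 4)]
print(e_from_p(p)[1:])  # e_1 = 10, e_2 = 31, e_3 = 30
```

Indeed, $e_1(2,3,5) = 10$, $e_2(2,3,5) = 6 + 10 + 15 = 31$, and $e_3(2,3,5) = 30$.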
The Wikipedia page derives a similar identity for expressing the $p$’s in terms of the $e$’s. It also does the same for expressing the complete homogeneous symmetric functions $h_n$ in terms of the $p$’s and vice versa. However, it does not explicitly express the $h$’s in terms of the $e$’s or vice versa. In the name of completeness of the Internet, let’s treat these here.
Fix some number of variables $n$. For any $k$, define $h_k$ to be the sum of all monomials of degree $k$ in $x_1, \ldots, x_n$. This is clearly symmetric, and we define

$$h_\lambda = h_{\lambda_1} h_{\lambda_2} \cdots h_{\lambda_k}$$
for any partition $\lambda = (\lambda_1, \ldots, \lambda_k)$, as we did for the elementary symmetric functions last week. The $h_\lambda$’s, called the complete homogeneous symmetric functions, form a basis for the space of symmetric functions.
It’s a fun exercise to derive the following generating function identities for the $h$’s and $e$’s:

$$\sum_{k \ge 0} h_k t^k = \prod_{i=1}^n \frac{1}{1 - x_i t}, \qquad \sum_{k \ge 0} e_k t^k = \prod_{i=1}^n (1 + x_i t).$$
(The first requires expanding out each factor as a geometric series, and then comparing coefficients. Try it!)
From these, we notice that $\left( \sum_k h_k t^k \right) \left( \sum_k e_k (-t)^k \right) = 1$, and by multiplying the generating functions together and comparing coefficients, we find the identities

$$\sum_{k=0}^n (-1)^k e_k h_{n-k} = 0 \qquad (n \ge 1).$$
Just as before, this gives us a recursion for $h_n$ in terms of the $e$’s. With a bit of straightforward algebra, involving Cramer’s Rule, we can solve for $h_n$:

$$h_n = \det \begin{pmatrix} e_1 & 1 & 0 & \cdots & 0 \\ e_2 & e_1 & 1 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ e_{n-1} & e_{n-2} & \cdots & e_1 & 1 \\ e_n & e_{n-1} & \cdots & e_2 & e_1 \end{pmatrix}$$
We can also use the same equations to solve for $e_n$ in terms of the $h$’s:

$$e_n = \det \begin{pmatrix} h_1 & 1 & 0 & \cdots & 0 \\ h_2 & h_1 & 1 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ h_{n-1} & h_{n-2} & \cdots & h_1 & 1 \\ h_n & h_{n-1} & \cdots & h_2 & h_1 \end{pmatrix}$$
I find these two formulas to be more aesthetically appealing than the standard Newton–Girard formulas between the $e$’s and $p$’s, since they lack the pesky integer coefficients that appear in the first column of the matrix in the $p$-to-$e$ case. While perhaps not as mainstream, they are gemstones in their own right, and deserve a day to shine.
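As a quick check of the $h$-from-$e$ recursion, the sketch below (my own code) computes $h_n$ two ways at a concrete point: via $h_n = e_1 h_{n-1} - e_2 h_{n-2} + \cdots + (-1)^{n-1} e_n h_0$, which is the identity above rearranged, and directly as the sum of all degree-$n$ monomials:

```python
from itertools import combinations, combinations_with_replacement

def elementary(xs, k):
    """e_k(xs): sum over all k-element subsets of products."""
    total = 0
    for combo in combinations(xs, k):
        prod = 1
        for v in combo:
            prod *= v
        total += prod
    return total

def homogeneous_from_e(xs, n):
    """Compute h_n from the e's via the recursion
    h_n = sum_{k=1}^{n} (-1)^(k-1) * e_k * h_{n-k}."""
    h = [1]  # h_0 = 1
    for m in range(1, n + 1):
        h.append(sum((-1) ** (k - 1) * elementary(xs, k) * h[m - k]
                     for k in range(1, m + 1)))
    return h[n]

def homogeneous_direct(xs, n):
    """h_n(xs) directly: the sum of all degree-n monomials."""
    total = 0
    for combo in combinations_with_replacement(xs, n):
        prod = 1
        for v in combo:
            prod *= v
        total += prod
    return total

xs = (2, 3, 5)
print(homogeneous_from_e(xs, 3), homogeneous_direct(xs, 3))  # both 410
```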
Last week I posted about the Fundamental Theorem of Symmetric Function Theory. Zarathustra Brady pointed me to the following alternate proof in Serge Lang’s book Algebra. While not as direct or useful in terms of changing basis from the $m$’s to the $e$’s, it is a nice, clean inductive proof that I thought was worth sharing:
Assume for contradiction that the $e_\lambda$’s do not form a basis of the space of symmetric functions. We have shown that they span the space, so there is a dependence relation: some nontrivial linear combination of $e_\lambda$’s, all necessarily of the same degree, is equal to zero. Among all such linear combinations, choose one (say $P = 0$) that holds for the smallest possible number of variables $n$. Furthermore, among the possible linear combinations for $n$ variables, choose $P$ to have minimal degree.
If the number of variables is $1$, then the only elementary symmetric functions are $e_1^k = x_1^k$ for some $k$, and so there is clearly no linear dependence relation. So, $n \ge 2$. Furthermore, if $P$ has degree $1$ as a polynomial in the $e_i$’s, then since all its terms have the same degree, it can involve only a single $e_k$, and so it cannot be identically zero. So $P$ has degree at least $2$, in at least $2$ variables.
Now, if $P$ is divisible by $x_n$, then by symmetry it is divisible by each of the variables. So, it is divisible by $x_1 x_2 \cdots x_n = e_n$, and so we can divide the equation by $e_n$ and get a relation of smaller degree, contradicting our choice of $P$. Otherwise, if $P$ is not divisible by $x_n$, set $x_n = 0$. Then we get another nontrivial relation among the $e_1, \ldots, e_{n-1}$ in the smaller number of variables $n - 1$, again contradicting the choice of $P$. QED!