This blog has moved

Please visit www.mathematicalgemstones.com, where this blog will be updated from now on.

Posted in Uncategorized | Leave a comment

The 3x+1 problem!

I have exciting news today: The first ever joint paper by Monks, Monks, Monks, and Monks has been accepted for publication in Discrete Mathematics.

These four Monkses are my two brothers, my father, and myself. We worked together last summer on the notorious 3x+1 conjecture (also known as the Collatz conjecture), an open problem so easy to state that a child can understand the question, yet one that has stumped mathematicians for over 70 years.

The 3x+1 problem asks the following: Suppose we start with a positive integer, and if it is odd then multiply it by 3 and add 1, and if it is even, divide it by 2. Then repeat this process as long as you can. Do you eventually reach the integer 1, no matter what you started with?

For instance, starting with 5, it is odd, so we apply 3x+1. We get 16, which is even, so we divide by 2. We get 8, and then 4, and then 2, and then 1. So yes, in this case, we eventually end up at 1.
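The process just described is easy to play with on a computer. Here is a minimal sketch (the function name is my own):

```python
def collatz_orbit(n):
    """Apply the 3x+1 process starting at n, recording each step until 1."""
    orbit = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 == 1 else n // 2
        orbit.append(n)
    return orbit

print(collatz_orbit(5))  # [5, 16, 8, 4, 2, 1]
```

Of course, the loop terminates only if the conjecture holds for n, which is exactly what is unknown in general.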

That’s it! It’s an addictive and fun problem to play around with (see http://xkcd.com/710/), but it is frustratingly difficult to solve completely.

So far, it is known that all numbers less than 20\cdot 2^{58}, or about 5.8 billion billion, eventually reach 1 when this process is applied. (See Silva’s and Roosendaal’s websites, where calculations are continually being run to verify the conjecture for higher and higher integers.)

But what about the general case?

Let’s define the Collatz function T on the positive integers by:
T(x)=\begin{cases}\frac{3x+1}{2} & x\text{ is odd} \\ \frac{x}{2} & x\text{ is even}\end{cases}.

Then the conjecture states that the sequence x, T(x), T(T(x)), T(T(T(x))),\ldots has 1 as a term. Notice that T(1)=2 and T(2)=1, so 1 is a cyclic point of T.

We can draw a graph on the positive integers in which we connect x to T(x) with an arrow for all x, and color it red if x is odd and black if x is even. The portion of the graph near 1 looks like this:

We just want to show that this graph is connected: that there are no other components, made up of very large integers, that never connect back to 1.

Last summer, we started out with some previous ideas and partial progress. In 2006, one member of our family research team, my brother Ken M. Monks, demonstrated that it suffices to prove the conjecture for some arithmetic sequence in the Collatz graph. (The paper is available here.) With this as a starting point, we worked to understand how arithmetic sequences are distributed across the Collatz graph. Where do they occur? How far apart are their elements?

By the end of the summer, we realized that there was some beautiful structure in the Collatz graph lying before us. We proved several surprising number-theoretic results, for instance, that every T-orbit must contain an integer congruent to 2 modulo 9.
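The mod-9 statement concerns arbitrary T-orbits, but for orbits that do reach 1 it is easy to check empirically (once an orbit hits 2 the statement holds trivially, since 2 ≡ 2 mod 9, though a term congruent to 2 mod 9 usually appears much earlier). A quick sanity-check sketch, with helper names of my own choosing:

```python
def T(x):
    """The Collatz function T defined above: (3x+1)/2 if x is odd, x/2 if even."""
    return (3 * x + 1) // 2 if x % 2 == 1 else x // 2

def first_two_mod_nine(n):
    """Return the first term of the T-orbit of n that is congruent to 2 mod 9."""
    while n % 9 != 2:
        n = T(n)
    return n

# every orbit tested here contains a term congruent to 2 mod 9
assert all(first_two_mod_nine(n) % 9 == 2 for n in range(1, 10000))
```

(The loop terminates for every n tested because these orbits are known to reach 1; a hypothetical divergent orbit is precisely the case the theorem is designed to constrain.)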

We’re not sure if this will lead to a proof of the conjecture, but we found a few big gemstones that might give us a boost in the right direction. A preprint of the paper can be found here:

http://arxiv.org/abs/1204.3904

Enjoy, and feel free to post a comment with any ideas on the conjecture you may have!

Posted in Amber, Gemstones | 1 Comment

Summary: Symmetric functions transition table

Over the last few weeks I’ve been writing about several little gemstones that I have seen in symmetric function theory. But one of the main overarching beauties of the entire area is that there are at least five natural bases with which to express any symmetric function: the monomial (m_\lambda), elementary (e_\lambda), power sum (p_\lambda), complete homogeneous (h_\lambda), and Schur (s_\lambda) bases. As a quick reminder, here is an example of each, in three variables x,y,z:

m_{(3,2,2)}=x^3y^2z^2+y^3x^2z^2+z^3y^2x^2

e_{(3,2,2)}=e_3e_2e_2=xyz(xy+yz+zx)^2

p_{(3,2,2)}=p_3p_2p_2=(x^3+y^3+z^3)(x^2+y^2+z^2)^2

h_{(2,1)}=h_2h_1=(x^2+y^2+z^2+xy+yz+zx)(x+y+z)

s_{(3,1)}=m_{(3,1)}+m_{(2,2)}+2m_{(2,1,1)}
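These examples can be double-checked by brute force. Below is a small sketch (not from the post; all names are mine) that represents polynomials in x,y,z as dictionaries mapping exponent tuples to coefficients, and verifies two of the expansions above:

```python
from itertools import combinations, combinations_with_replacement
from collections import Counter

N = 3  # number of variables x, y, z

def poly_mul(p, q):
    """Multiply two polynomials stored as {exponent-tuple: coefficient}."""
    r = Counter()
    for a, ca in p.items():
        for b, cb in q.items():
            r[tuple(i + j for i, j in zip(a, b))] += ca * cb
    return r

def e(d):
    """Elementary symmetric polynomial e_d in N variables."""
    p = Counter()
    for cols in combinations(range(N), d):
        p[tuple(1 if i in cols else 0 for i in range(N))] += 1
    return p

def h(d):
    """Complete homogeneous symmetric polynomial h_d in N variables."""
    p = Counter()
    for cols in combinations_with_replacement(range(N), d):
        exp = [0] * N
        for i in cols:
            exp[i] += 1
        p[tuple(exp)] += 1
    return p

# h_{(2,1)} = h_2 h_1: 6 monomials in h_2 times 3 in h_1, with multiplicity
assert sum(poly_mul(h(2), h(1)).values()) == 6 * 3
# e_{(2,1)} = e_2 e_1 = m_{(2,1)} + 3 m_{(1,1,1)}
e21 = poly_mul(e(2), e(1))
assert e21[(2, 1, 0)] == 1 and e21[(1, 1, 1)] == 3
```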

Since we can usually transition between the bases fairly easily, this gives us lots of flexibility in attacking problems involving symmetric functions; it’s sometimes just a matter of picking the right basis.

So, to wrap up my recent streak on symmetric function theory, I’ve posted below a list of rules for transitioning between the bases. (The only transitions I have not included are those expressing the monomial symmetric functions m_\lambda in terms of the others; these are rarely needed and also rather difficult.)

  • Elementary to monomial:

    e_\lambda=\sum_\mu M_{\lambda\mu} m_\mu

    where M_{\lambda\mu} is the number of 0,1-matrices with row sums \lambda_i and column sums \mu_j.

  • Elementary to homogeneous:

    e_n=\det \left(\begin{array}{cccccc}     h_1       & 1       & 0       & 0        &\cdots & 0 \\    h_2       & h_1     & 1       & 0        & \cdots & 0 \\    h_3       & h_2     & h_1     &  1       & \cdots & 0 \\    \vdots   & \vdots & \vdots  & \vdots &  \ddots & \vdots \\    h_{n-1}   & h_{n-2} & h_{n-3} &  h_{n-4}  & \ddots & 1  \\    h_n       & h_{n-1} & h_{n-2} &  h_{n-3}   & \cdots & h_1   \end{array}\right)

  • Elementary to power sum:

    e_n=\frac{1}{n!}\det\left(\begin{array}{cccccc}     p_1       & 1       & 0       & 0        &\cdots & 0 \\    p_2       & p_1     & 2       & 0        & \cdots & 0 \\    p_3       & p_2     & p_1     &  3       & \cdots & 0 \\    \vdots   & \vdots & \vdots  & \vdots &  \ddots & \vdots \\    p_{n-1}   & p_{n-2} & p_{n-3} &  p_{n-4}  & \ddots & n-1  \\    p_n       & p_{n-1} & p_{n-2} &  p_{n-3}   & \cdots & p_1   \end{array}\right)

  • Elementary to Schur:

    e_\lambda=\sum_{\mu} K_{\mu'\lambda}s_\mu

  • Homogeneous to monomial:

    h_\lambda=\sum_\mu N_{\lambda\mu} m_\mu

    where N_{\lambda\mu} is the number of matrices with nonnegative integer entries with row sums \lambda_i and column sums \mu_j.

  • Homogeneous to elementary:

    h_n=\det\left(\begin{array}{cccccc}     e_1        & 1       & 0       & 0        &\cdots & 0 \\    e_2       & e_1     & 1       & 0        & \cdots & 0 \\    e_3       & e_2     & e_1     &  1       & \cdots & 0 \\    \vdots    & \vdots & \vdots  & \vdots &  \ddots & \vdots \\    e_{n-1} & e_{n-2} & e_{n-3} &  e_{n-4}  & \ddots & 1  \\    e_n       & e_{n-1} & e_{n-2} &  e_{n-3}   & \cdots & e_1   \end{array}\right)

  • Homogeneous to power sum:

    h_n=\frac{1}{n!}\det\left(\begin{array}{cccccc}     p_1       & -1       & 0       & 0        &\cdots & 0 \\    p_2       & p_1     & -2       & 0        & \cdots & 0 \\    p_3       & p_2     & p_1     &  -3       & \cdots & 0 \\    \vdots   & \vdots & \vdots  & \vdots &  \ddots & \vdots \\    p_{n-1}   & p_{n-2} & p_{n-3} &  p_{n-4}  & \ddots & -(n-1)  \\    p_n       & p_{n-1} & p_{n-2} &  p_{n-3}   & \cdots & p_1   \end{array}\right)

  • Homogeneous to Schur:

    h_{\lambda}=\sum_\mu K_{\mu\lambda}s_\mu

  • Power sum to monomial:

    p_\lambda=\sum_\mu R_{\lambda\mu}m_\mu

    where R_{\lambda,\mu} is the number of ways of sorting the parts of \lambda into a number of ordered blocks in such a way that the sum of the parts in the jth block is \mu_j.

  • Power sum to elementary (Newton-Girard identities):

    p_n=\det\left(\begin{array}{cccccc}     e_1        & 1       & 0       & 0        &\cdots & 0 \\    2e_2       & e_1     & 1       & 0        & \cdots & 0 \\    3e_3       & e_2     & e_1     &  1       & \cdots & 0 \\    \vdots    & \vdots & \vdots  & \vdots &  \ddots & \vdots \\  (n-1)e_{n-1} & e_{n-2} & e_{n-3} &  e_{n-4}  & \ddots & 1  \\    ne_n       & e_{n-1} & e_{n-2} &  e_{n-3}   & \cdots & e_1   \end{array}\right)

  • Power sum to homogeneous:

    p_n=(-1)^{n-1}\det\left(\begin{array}{cccccc}     h_1        & 1       & 0       & 0        &\cdots & 0 \\    2h_2       & h_1     & 1       & 0        & \cdots & 0 \\    3h_3       & h_2     & h_1     &  1       & \cdots & 0 \\    \vdots    & \vdots & \vdots  & \vdots &  \ddots & \vdots \\  (n-1)h_{n-1} & h_{n-2} & h_{n-3} &  h_{n-4}  & \ddots & 1  \\    nh_n       & h_{n-1} & h_{n-2} &  h_{n-3}   & \cdots & h_1   \end{array}\right)

  • Power sum to Schur: Let \chi^\lambda be the irreducible character of the symmetric group S_n indexed by \lambda, where n=|\lambda|, and write \chi^\lambda(\mu) to denote the value of \chi^{\lambda} at any permutation with cycle type \mu. Then for any partition \mu\vdash n, we have:

    p_\mu=\sum_{\lambda\vdash n} \chi^\lambda(\mu) s_\lambda

  • Schur to monomial:

    s_{\lambda}=\sum_{\mu\vdash |\lambda|} K_{\lambda \mu}m_\mu

    where K_{\lambda\mu} is the number of semistandard Young tableaux of shape \lambda and content \mu.

  • Schur to elementary: (Dual Jacobi-Trudi Identity.)

    s_{\lambda/\mu} = \det \left(e_{\lambda'_i-\mu'_j-i+j}\right)_{i,j=1}^n

  • Schur to homogeneous: (Jacobi-Trudi Identity.)

    s_{\lambda/\mu} = \det \left(h_{\lambda_i-\mu_j-i+j}\right)_{i,j=1}^n

  • Schur to power sum:

    s_\lambda=\sum_{\nu\vdash |\lambda|} z_\nu^{-1} \chi^{\lambda}(\nu) p_\nu
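As a sanity check on the first rule in the table, the 0,1-matrices can be counted directly. Here is a small sketch (function name is mine) verifying e_{(2,1)} = m_{(2,1)} + 3m_{(1,1,1)}, i.e. the coefficients M_{(2,1),\mu}:

```python
from itertools import combinations, product

def zero_one_matrices(rows, cols):
    """Count 0,1-matrices with row sums `rows` and column sums `cols`."""
    k = len(cols)
    # all 0,1 rows of length k achieving each required row sum
    choices = [[tuple(1 if j in pos else 0 for j in range(k))
                for pos in combinations(range(k), r)] for r in rows]
    count = 0
    for mat in product(*choices):
        if all(sum(col) == c for col, c in zip(zip(*mat), cols)):
            count += 1
    return count

# e_{(2,1)} = 0*m_{(3)} + 1*m_{(2,1)} + 3*m_{(1,1,1)}
assert zero_one_matrices((2, 1), (3,)) == 0
assert zero_one_matrices((2, 1), (2, 1)) == 1
assert zero_one_matrices((2, 1), (1, 1, 1)) == 3
```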

Posted in Diamond, Uncategorized | Leave a comment

A bridge between two worlds: the Frobenius map

There is a reason I’ve been building up the theory of symmetric functions in the last few posts, one gemstone at a time: all this theory is needed for the proof of the beautiful Murnaghan-Nakayama rule for computing the characters of the symmetric group.

What do symmetric functions have to do with representation theory? The answer lies in the Frobenius map, the keystone that completes the bridge between these two worlds.

The Frobenius map essentially takes a character of a representation of the symmetric group and assigns it a symmetric function. To define it, recall that characters are constant across conjugacy classes, and in S_n, the conjugacy classes correspond to the partitions of n by associating a permutation \pi with its cycle type c(\pi). For instance, the permutations \pi=(12)(345) and \sigma=(35)(142) both have cycle type \lambda=(3,2). So, any character \chi satisfies \chi(\pi)=\chi(\sigma).

We can now define the Frobenius map F. For any character \chi of a representation of S_n, define

F\left(\chi\right)=\frac{1}{n!}\sum_{\pi\in S_n}\chi(\pi)p_{c(\pi)}

where p_\lambda=p_{\lambda_1}p_{\lambda_2}\cdots p_{\lambda_k} is the \lambdath power sum symmetric function. (Recall that p_{\lambda_i}=x_1^{\lambda_i}+x_2^{\lambda_i}+\cdots, summed over all of the variables.)

Then, combining the permutations with the same cycle type in the sum, we can rewrite the definition as a sum over partitions of size n:

F\left(\chi\right)=\sum_{|\lambda|=n}\frac{1}{z_\lambda}\chi(\lambda)p_{\lambda}

for some constants z_\lambda. (It’s a fun little combinatorial problem to show that if \lambda has m_1 1’s, m_2 2’s, and so on, then z_\lambda=1^{m_1}m_1!2^{m_2}m_2!\cdots.)
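The combinatorial meaning of z_\lambda is that n!/z_\lambda counts the permutations of cycle type \lambda, which is exactly why it appears when permutations of the same cycle type are grouped. A brute-force check in S_4 (helper names are my own):

```python
from itertools import permutations
from math import factorial
from collections import Counter

def cycle_type(perm):
    """Cycle type of a permutation in one-line notation on 0..n-1."""
    n, seen, lengths = len(perm), set(), []
    for i in range(n):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

def z(lam):
    """z_lambda = prod_i i^{m_i} m_i!, where m_i = multiplicity of i in lambda."""
    result = 1
    for i, mi in Counter(lam).items():
        result *= i ** mi * factorial(mi)
    return result

# n!/z_lambda counts permutations of cycle type lambda; check all of S_4
n = 4
counts = Counter(cycle_type(p) for p in permutations(range(n)))
for lam, c in counts.items():
    assert c == factorial(n) // z(lam)
```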

As an example, let’s take a look at the character table of S_3 (rows indexed by the irreducible characters \chi^\lambda, columns by cycle types \mu):

                 \mu=(1,1,1)   (2,1)   (3)
\chi^{(3)}            1          1      1
\chi^{(2,1)}          2          0     -1
\chi^{(1,1,1)}        1         -1      1

Consider the second row, \chi^{(2,1)}, and let us work over three variables x,y,z. Then the Frobenius map sends \chi^{(2,1)} to

F\chi^{(2,1)}=\frac{1}{6}(2p_{(1,1,1)}-2p_3)

by our definition above. This simplifies to:

\frac{1}{3}((x+y+z)^3-(x^3+y^3+z^3))=x^2y+y^2x+x^2z+z^2x+y^2z+z^2y+2xyz

which can be written as m_{2,1}+2m_{1,1,1}. Notice that, by the combinatorial definition of the Schur functions, this is precisely the Schur function s_{2,1}! In fact:

The Frobenius map sends the irreducible character \chi^\lambda to the Schur function s_\lambda for all \lambda.

And therein lies the bridge.
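The simplification of F\chi^{(2,1)} computed above is easy to confirm numerically; a throwaway check (names are mine):

```python
from random import random

def lhs(x, y, z):
    """(1/3)((x+y+z)^3 - (x^3+y^3+z^3)), i.e. F of chi^{(2,1)} in 3 variables."""
    return ((x + y + z) ** 3 - (x ** 3 + y ** 3 + z ** 3)) / 3

def rhs(x, y, z):
    """m_{(2,1)} + 2 m_{(1,1,1)} = s_{(2,1)} in three variables."""
    return (x * x * y + y * y * x + x * x * z + z * z * x
            + y * y * z + z * z * y + 2 * x * y * z)

for _ in range(100):
    x, y, z = random(), random(), random()
    assert abs(lhs(x, y, z) - rhs(x, y, z)) < 1e-9
```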

Why is this the case? Read on to find out…

Posted in Quartz, Uncategorized | Leave a comment

The hidden basis

In the last few posts (see here and here), I’ve been talking about various bases for the symmetric functions: the monomial symmetric functions m_\lambda, the elementary symmetric functions e_\lambda, the power sum symmetric functions p_\lambda, and the homogeneous symmetric functions h_\lambda. As some of you aptly pointed out in the comments, there is one more important basis to discuss: the Schur functions!

When I first came across the Schur functions, I had no idea why they were what they were, why every symmetric function can be expressed in terms of them, or why they were useful or interesting. I first saw them defined using a simple, but rather arbitrary-sounding, combinatorial approach:

First, define a semistandard Young tableau (SSYT) to be a way of filling in the squares of a partition diagram (Young diagram) with numbers such that they are nondecreasing across rows and strictly increasing down columns. For instance, the Young diagram of (5,3,1,1) is:

and one possible SSYT of this shape is:

(Fun fact: The plural of tableau is tableaux, spelled with an x but pronounced exactly the same as the singular.)

Now, given an SSYT T with entries of size at most n, let \alpha_i be the number of i's written in the tableau. Given variables x_1,\ldots,x_n, we can define the monomial x^T=x_1^{\alpha_1}\cdots x_n^{\alpha_n}. Then the Schur function s_\lambda is defined to be the sum of all monomials x^T where T is an SSYT of shape \lambda.

For instance, if \lambda=(2,1), then the possible SSYT’s of shape \lambda with numbers of size at most 2 are:

So the Schur function s_{(2,1)} in 2 variables x and y is x^2y+y^2x.
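The combinatorial definition translates directly into code. Here is a brute-force sketch (names are my own) that enumerates SSYT of a given shape:

```python
from itertools import product

def ssyt(shape, maxval):
    """Yield all SSYT of the given shape with entries in 1..maxval:
    rows weakly increasing left to right, columns strictly increasing down."""
    cells = [(r, c) for r, width in enumerate(shape) for c in range(width)]
    for vals in product(range(1, maxval + 1), repeat=len(cells)):
        t = dict(zip(cells, vals))
        rows_ok = all(t[r, c] <= t[r, c + 1] for r, c in cells if (r, c + 1) in t)
        cols_ok = all(t[r, c] < t[r + 1, c] for r, c in cells if (r + 1, c) in t)
        if rows_ok and cols_ok:
            yield t

# shape (2,1) with entries at most 2: the two tableaux giving x^2y + xy^2
assert len(list(ssyt((2, 1), 2))) == 2
# with entries at most 3 there are 8, matching s_{(2,1)}(1,1,1) = 6 + 2 = 8
assert len(list(ssyt((2, 1), 3))) == 8
```

(This filter-everything approach is exponential in the number of cells, so it is only good for small shapes, but it is a faithful transcription of the definition.)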

This combinatorial definition seemed rather out-of-the-blue when I first saw it. Even more astonishing is that the Schur functions have an abundance of nice properties. To name a few:

  • The Schur functions are symmetric. Interchanging any two of the variables results in the same polynomial.
  • The Schur functions form a basis for the symmetric functions. Like the elementary symmetric functions, every symmetric polynomial can be expressed uniquely as a linear combination of s_\lambda's.
  • The Schur functions arise as the characters of the irreducible polynomial representations of the general linear group. This was proven by Issai Schur and was the first context in which the Schur functions were defined. Here, a polynomial representation is a matrix representation in which the entries are given by polynomials in the entries of the elements of GL_n.
  • The transition matrix between the power sum symmetric functions and the Schur functions is precisely the character table of the symmetric group S_n. This fact is essential in proving the Murnaghan-Nakayama rule that I mentioned in a previous post.

All of this is quite remarkable – but why is it true? It is not even clear that they are symmetric, let alone a basis for the symmetric functions.

After studying the Schur functions for a few weeks, I realized that while this combinatorial definition is very useful for quickly writing down a given s_\lambda, there is an equivalent algebraic definition that is perhaps more natural in terms of understanding its role in symmetric function theory. (Turn to page 2!)

Posted in Quartz, Uncategorized | 3 Comments

Theme and variations: the Newton-Girard identities

Time for another gemstone from symmetric function theory! (I am studying for my Ph.D. qualifying exam at the moment, and as a consequence, the next several posts will feature yet more gemstones from symmetric function theory. You can refer back to this post for the basic definitions.)

Start with a polynomial p(x) that factors as

p(x)=(x-\alpha_1)(x-\alpha_2)\cdots(x-\alpha_n).

The coefficients of p(x) are symmetric functions in \alpha_1,\ldots,\alpha_n – in fact, they are, up to sign, the elementary symmetric functions in \alpha_1,\ldots,\alpha_n.

In particular, if e_i denotes \sum_{j_1<\cdots<j_i} \alpha_{j_1}\alpha_{j_2}\cdots\alpha_{j_i}, then

p(x)=x^n-e_1x^{n-1}+e_{2}x^{n-2}-\cdots+(-1)^n e_n.

(These relations between the coefficients and the roots are known as Vieta’s formulas.)

Since p(\alpha_i)=0 for all i, we can actually turn the equation above into a symmetric function identity by plugging in \alpha_1,\ldots,\alpha_n:

\alpha_1^n-e_1\alpha_1^{n-1}+e_{2}\alpha_1^{n-2}-\cdots+(-1)^n e_n=0
\alpha_2^n-e_1\alpha_2^{n-1}+e_{2}\alpha_2^{n-2}-\cdots+(-1)^n e_n=0
\vdots

and then summing these equations:

(\alpha_1^n+\cdots+\alpha_n^n)-e_1(\alpha_1^{n-1}+\cdots+\alpha_n^{n-1})+\cdots+(-1)^n\cdot ne_n=0

…and we have stumbled across another important basis of the symmetric functions, the power sum symmetric functions. Defining p_i=\alpha_1^i+\cdots+\alpha_n^i, every symmetric function in \{\alpha_j\} can be uniquely expressed as a linear combination of products of these p_i's (we write p_\lambda=p_{\lambda_1}p_{\lambda_2}\cdots p_{\lambda_k}).

So, we have

p_n-e_1p_{n-1}+e_2 p_{n-2}-\cdots+(-1)^n\cdot ne_n=0.

This is called the nth Newton-Girard identity, and it gives us a recursive way of expressing the p's in terms of the e's.

Well, almost. We have so far only shown that this identity holds in exactly n variables. However, we can plug in zeros for some of the \alpha_i's to see that the identity also holds for any number of variables k<n. And if we have more than n variables, we can compare coefficients of each monomial individually; since the equation is homogeneous of degree n, each monomial involves at most n of the variables, so setting the remaining variables equal to zero for each such monomial does the trick.

So now we have it! The Newton-Girard identities allow us to recursively solve for p_n in terms of the e_i's. Wikipedia does this nicely and explains the computation, and the result is:

p_n=\det\left(\begin{array}{cccccc}     e_1        & 1       & 0       & 0        &\cdots & 0 \\    2e_2       & e_1     & 1       & 0        & \cdots & 0 \\    3e_3       & e_2     & e_1     &  1       & \cdots & 0 \\    \vdots    & \vdots & \vdots  & \vdots &  \ddots & \vdots \\  (n-1)e_{n-1} & e_{n-2} & e_{n-3} &  e_{n-4}  & \ddots & 1  \\    ne_n       & e_{n-1} & e_{n-2} &  e_{n-3}   & \cdots & e_1   \end{array}\right)

For instance, this gives us p_2=e_1^2-2e_2, which is true:

x^2+y^2+z^2=(x+y+z)^2-2(xy+yz+zx).
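The nth identity is easy to spot-check numerically for several n at once; a quick sketch with random inputs (names are mine):

```python
from itertools import combinations
from math import prod
from random import random

xs = [random() for _ in range(5)]  # five variables, chosen at random

def p(k):
    """Power sum p_k in the variables xs."""
    return sum(x ** k for x in xs)

def e(k):
    """Elementary symmetric function e_k in the variables xs."""
    return sum(prod(c) for c in combinations(xs, k))

# p_n - e_1 p_{n-1} + e_2 p_{n-2} - ... + (-1)^n n e_n = 0
for n in range(1, 6):
    total = sum((-1) ** k * e(k) * p(n - k) for k in range(n)) \
            + (-1) ** n * n * e(n)
    assert abs(total) < 1e-9
```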

The Wikipedia page derives a similar identity for expressing the e's in terms of the p's. It also does the same for expressing the complete homogeneous symmetric functions h_n in terms of the p's and vice versa. However, it does not explicitly express the e's in terms of the h's or vice versa. In the name of completeness of the Internet, let's treat these here.

Fix some number of variables x_1,\ldots,x_n. For any d, define h_d to be the sum of all monomials of degree d in x_1,\ldots,x_n. This is clearly symmetric, and we define

h_\lambda = h_{\lambda_1}h_{\lambda_2}\cdots h_{\lambda_k}

for any partition \lambda, as we did for the elementary symmetric functions last week. The h_\lambda's, called the complete homogeneous symmetric functions, form a basis for the space of symmetric functions.

It’s a fun exercise to derive the following generating function identities for e_n and h_n:

H(t)=\sum_n h_n t^n = \prod_{i=1}^n \frac{1}{1-x_i t}
E(t)=\sum_n e_n t^n = \prod_{i=1}^n (1+x_i t)

(The first requires expanding out each factor as a geometric series, and then comparing coefficients. Try it!)

From these, we notice that H(t)E(-t)=1, and by multiplying the generating functions together and comparing coefficients, we find the identity

h_n=h_{n-1}e_1-h_{n-2}e_2+\cdots+(-1)^{n-1}e_n

Just as before, this gives us a recursion for h_n in terms of the e_i's. With a bit of straightforward algebra involving Cramer's Rule, we can solve for h_n:

h_n=\det\left(\begin{array}{cccccc}     e_1        & 1       & 0       & 0        &\cdots & 0 \\    e_2       & e_1     & 1       & 0        & \cdots & 0 \\    e_3       & e_2     & e_1     &  1       & \cdots & 0 \\    \vdots    & \vdots & \vdots  & \vdots &  \ddots & \vdots \\    e_{n-1} & e_{n-2} & e_{n-3} &  e_{n-4}  & \ddots & 1  \\    e_n       & e_{n-1} & e_{n-2} &  e_{n-3}   & \cdots & e_1   \end{array}\right)

We can also use the same equations to solve for e_n in terms of h_n:

e_n=\det \left(\begin{array}{cccccc}     h_1       & 1       & 0       & 0        &\cdots & 0 \\    h_2       & h_1     & 1       & 0        & \cdots & 0 \\    h_3       & h_2     & h_1     &  1       & \cdots & 0 \\    \vdots   & \vdots & \vdots  & \vdots &  \ddots & \vdots \\    h_{n-1}   & h_{n-2} & h_{n-3} &  h_{n-4}  & \ddots & 1  \\    h_n       & h_{n-1} & h_{n-2} &  h_{n-3}   & \cdots & h_1   \end{array}\right)
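A numeric spot-check of the recursion above (and of the smallest cases of both determinant formulas), in the same spirit as before; all names are mine:

```python
from itertools import combinations, combinations_with_replacement
from math import prod
from random import random

xs = [random() for _ in range(4)]  # four variables, chosen at random

def e(k):
    """Elementary symmetric function e_k in the variables xs."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k):
    """Complete homogeneous symmetric function h_k in the variables xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

# h_n = h_{n-1} e_1 - h_{n-2} e_2 + ... + (-1)^{n-1} e_n, from H(t)E(-t) = 1
for n in range(1, 7):
    rhs = sum((-1) ** (k - 1) * e(k) * h(n - k) for k in range(1, n + 1))
    assert abs(h(n) - rhs) < 1e-9

# 2x2 cases of the determinant formulas: h_2 = e_1^2 - e_2 and e_2 = h_1^2 - h_2
assert abs(h(2) - (e(1) ** 2 - e(2))) < 1e-9
assert abs(e(2) - (h(1) ** 2 - h(2))) < 1e-9
```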

I find these two formulas more aesthetically appealing than the standard Newton-Girard formulas relating the p_n's and e_n's, since they lack the pesky integer coefficients that appear in the first column of the matrix in the p-to-e case. While perhaps not as mainstream, they are gemstones in their own right, and deserve a day to shine.

Posted in Obsidian, Uncategorized | 2 Comments

Addendum: An alternate proof of the FTSFT

Last week I posted about the Fundamental Theorem of Symmetric Function Theory. Zarathustra Brady pointed me to the following alternate proof in Serge Lang’s book Algebra. While not as direct or useful in terms of changing basis from the e_\lambda's to the m_\lambda's, it is a nice, clean inductive proof that I thought was worth sharing:

Assume for contradiction that the e_\lambda's do not form a basis of the space of symmetric functions. We have shown that they span the space, so there is a dependence relation: some nontrivial linear combination of e_\lambda's, all necessarily of the same degree, is equal to zero. Among all such linear combinations, choose one (say P) that holds for the smallest possible number of variables x_1,\ldots,x_n. Furthermore, among the possible linear combinations for n variables, choose P to have minimal degree.

If the number of variables is 1, then the only elementary symmetric functions e_\lambda are the powers x_1^k, and so there is clearly no linear dependence relation. So, n\ge 2. Furthermore, if P has degree 1 as a polynomial in the x_i's, then it can involve only e_1, and so it cannot be identically zero. So P has degree at least 2, in at least 2 variables.

Now, if P is divisible by x_n, then by symmetry it is divisible by each of the variables. So, it is divisible by e_n, and we can divide the equation by e_n to get a relation of smaller degree, contradicting our choice of P. Otherwise, if P is not divisible by x_n, set x_n=0. Then we get another nontrivial relation among the e_\lambda's in the smaller number of variables x_1,\ldots,x_{n-1}, again contradicting the choice of P. QED!

Posted in Obsidian, Uncategorized | Leave a comment