In which I start to grasp representability

I have recently been reading through Emily Riehl’s “Category Theory in Context”, when I came across the following puzzle:

To whet the reader’s appetite, let us pose the following puzzle. Fixing two objects A, B in a locally small category C, define a functor C(A, −) × C(B, −): C → Set that carries an object X to the set C(A, X) × C(B, X) whose elements are pairs of maps a: A → X and b: B → X in C. What would it mean for this functor to be representable?

My initial response was almost one of bewilderment. I still don’t really feel like I understand representability of functors, and while I can follow the arguments that prove (for example) that $\mathbb{Z}$ represents the forgetful functor Grp → Set, I don’t really have a strong idea for how one would come up with a representing object for a given functor.

I had mostly moved past this point in the book, but came back to idly muse over it, and I found the thought-process involved—and how I “solved” this puzzle—to be quite interesting, as it seems to touch on a lot of the benefits of having a broad base of mathematical knowledge which affords multiple ways to think about the same concept.

Here is roughly how my thoughts went. I first noted that if this functor were representable, all we would be saying is that it is naturally isomorphic to a functor of the form C(R, −) for some object R in C. That is, we would have a natural isomorphism

$\mathbf{C}(A, X) \times \mathbf{C}(B, X) \xrightarrow{\cong} \mathbf{C}(R,X)$

for every X in C. This is nothing but writing down the definition of a representable functor.

Next, I idly noted that we sometimes write as a shorthand—at least, in some categories—the set of morphisms C(A, X) as $X^A$. If we treat this as a purely formal replacement, we might try something like the following:

$X^A \times X^B = X^{A + B}$

which is true for integers. But it is also true in other contexts, depending on how you define ‘+’! For example, if we define for sets A and B their sum A + B to be their disjoint union (often denoted $A \coprod B$), then this is still correct. It then hit me that the key is that $A \coprod B$ is a coproduct. And then it further hit me that that’s exactly what the solution to the puzzle is: the functor is representable exactly when A and B have a coproduct in the category C!
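As a small sanity check on this formal manipulation, here is a toy computation of my own (not from the book) in the category of finite sets, where the coproduct really is the disjoint union; the sets A, B, X here are made up purely for illustration:

```python
# In Set, the coproduct A ⊔ B is the disjoint union, which we can model
# as tagged pairs ('A', x) and ('B', y). The puzzle's functor sends X to
# pairs of maps (a: A → X, b: B → X); representability says these pairs
# correspond exactly to single maps A ⊔ B → X.

A = ['a1', 'a2']
B = ['b1']
X = [0, 1, 2]

disjoint_union = [('A', a) for a in A] + [('B', b) for b in B]

def copair(f, g):
    """The unique map A ⊔ B → X induced by f: A → X and g: B → X."""
    return {('A', a): f[a] for a in A} | {('B', b): g[b] for b in B}

# Count both sides of C(A, X) × C(B, X) ≅ C(A ⊔ B, X):
pairs_of_maps = len(X) ** len(A) * len(X) ** len(B)   # |X|^|A| · |X|^|B|
maps_from_coproduct = len(X) ** len(disjoint_union)   # |X|^(|A| + |B|)
assert pairs_of_maps == maps_from_coproduct == 27
```

The `copair` function is exactly the unique factorization that the universal property of the coproduct promises.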

So I found myself, for the first time, truly understanding a part of representability. And it came from a strange source: performing formal manipulations (which may not even make sense!), just to see what happens, and connecting it to some other concept that I knew. It was kind of fun to watch myself work my way through this, a mix of idle play and insight that led to a better understanding of a concept of which I’d always had a murky understanding.

However, I came to feel a little unsatisfied with this as my resolution of the puzzle. It seemed to rely on a trick, a bit of knowledge without which I would not have been able to resolve it. In a sense, I feel that it didn’t really give me a better hands-on understanding; I just formally danced my way past really digging into it. In particular, it did not require me at any point to dig into the Yoneda lemma, which still feels like it remains an obstacle in my understanding.

So let’s look at it in that light, which may illuminate the concept a little bit more. According to my understanding of Yoneda, we should produce a diagram like so:

$\begin{matrix} \mathbf{C}(R, R) & \longrightarrow &\mathbf{C}(A, R) \times \mathbf{C}(B, R) \\ \downarrow && \downarrow\\ \mathbf{C}(R, X) & \longrightarrow &\mathbf{C}(A, X) \times \mathbf{C}(B, X) \end{matrix}$

where the vertical maps are induced by the (hypothetical) map R → X, and the horizontal ones by the natural isomorphism. But this diagram simply says that a pair of maps A, B → X must factor through R: that is, we have a unique (!) factorization through the vertical arrow R → X below.

$\begin{matrix} A & \longrightarrow & R & \longleftarrow &B\\ &\searrow&\downarrow&\swarrow \\ &&X \end{matrix}$

Ok, so maybe I do understand this a little better now.

As an aside: here are what some other coproducts are, just to be clear that they are not all disjoint unions:

• In Ab, the coproduct is the same as the product, i.e. the direct product.
• In Grp, the coproduct is the free product of groups.
• In Set* and in Top*, the coproduct is the same as the pushout

$\begin{matrix} pt & \longrightarrow & A \\ \downarrow &&\downarrow \\ B & \longrightarrow & A \coprod B \end{matrix}$

i.e. it is a type of gluing.

Hot off the presses!

So let’s talk a little more in detail about the guts of the previous paper I just discussed. Just as a recap, the idea of that paper came from the fact that in MacMahon’s paper there were 8 variations discussed, and I had previously shown that two of them satisfied Nice Properties™. I was curious about the rest of them, and so I went digging.

To understand what this paper is about, we need to talk a little bit more about theta functions, which were mentioned in one sense in the first post in this series. However, in this case we are looking at them a little differently. In a sense, all of the work in this paper comes from obtaining a better understanding of the ins-and-outs and the quirks of theta functions[1].

We’ve probably all burned our hands when touching a hot piece of metal; a pan, an oven rack, whatever. If only there were some way to predict how hot your fire poker would be…

It turns out that mathematicians long(ish) ago came up with a way to describe how heat flows in materials. They creatively called the resulting equation the “Heat Equation”; if you figure out the solution to it, then you can describe how heat propagates in whatever piece of metal you’re holding. (Of course, heat isn’t too too interesting as it propagates; it basically does what you think it does, and disperses evenly throughout whatever it’s in. If you start with an uneven distribution of heat, you end up with something smoother. But that’s not important right now.)

Figure 1: Sort of like this! Bumpy to the left, smooth to the right.
These are the ravages of time, but the opposite way that people face them.
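The smoothing behaviour is easy to see numerically. Here is a toy sketch of my own (not anything from the paper) of a discrete heat equation on a rod, where each site relaxes toward the average of its neighbours:

```python
def heat_step(u, alpha=0.1):
    """One explicit finite-difference step of u_t = u_xx on a rod
    whose ends are insulated (no heat escapes)."""
    n = len(u)
    new = u[:]
    for i in range(n):
        left = u[max(i - 1, 0)]
        right = u[min(i + 1, n - 1)]
        new[i] += alpha * (left - 2 * u[i] + right)
    return new

u = [1.0, 5.0, 0.5, 4.0, 1.5, 3.0, 0.0, 2.0]   # bumpy to start
total = sum(u)
for _ in range(200):
    u = heat_step(u)

# The total heat is unchanged, but the bumps have evened out.
assert abs(sum(u) - total) < 1e-9
assert max(u) - min(u) < 1.0
```

Run it for long enough and the profile flattens toward the average temperature, which is exactly the “bumpy to the left, smooth to the right” picture.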

So what does this have to do with theta functions? Well, the theta function that was described earlier, in relation to counting how many dots are at a fixed distance from a starting point:

$1x^0 + 2x^1 + 0x^2 + 0x^3 + 2x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 2x^9 + \cdots$

Figure 2: This one

is in fact a solution to the heat equation[2].

This is one of those functions that shows up in many places in mathematics, two of which we see here. On the one hand, we have a function that describes the propagation of heat. On the other hand, we have something that is related to counting points in a lattice—magically, these are in fact the same thing. Math!

The upshot of this sort of connection is that you can use ideas from one domain to help study the other domain. So in particular, the fact that the theta function describes the propagation of heat has some strong implications for the coefficients that describe the counting of lattice points.

The hard part

So the easy part is what I just described, where we are able to connect the counting functions of stacks of boxes:

Figure 3: It’s so minimalistically Scandinavian…

to theta functions. This is done much in the same way that I learned from George Andrews (something that is, once you learn it, a pretty standard application of the so-called Jacobi Triple Product, a subject that I intend to write about in the future).

The idea is to take the functions that count the boxes and realize that you can write them as a big product that is secretly a collection of modified theta functions. After all of this, you end up with the following beautiful expression:

$\displaystyle \sum_{k=0}^\infty (-1)^k A_{S, n, k}(q) x^{2k} = \prod_{j=1}^s\frac{\vartheta_{\alpha(n, \ell_j)}(q^n, -\zeta)}{\vartheta_{\alpha(n, \ell_j)}(q^n)}$

Figure 4: Ta-da!

Well, trust me that it’s beautiful.

But now we want to move on to understanding this elusive property, modularity. The parts are all there; theta functions tend to beget modular forms, but these are a little twisted so it’s not quite so clear. While trying to understand if and how these functions were modular—and determining how that is the case in the end—I ended up finding out something far more interesting.

In the end, I was studying a whole bunch of functions (the 8 families that MacMahon studied, as well as some other ones that come up pretty naturally) which happen to have my nice property, modularity. While interesting, this is a pretty broad property. However, it turned out that there is a special way in which these are modular—namely, all of the building blocks are the same! It’s sort of like if you saw a bunch of different sculptures from far away, and then when you got closer you realized that they were made out of Lego—that they all consisted of the same pieces used over and again to produce something magnificent.

Amazingly, this all follows from paying careful attention to the formula shown above in Figure 4. Even more amazing, this technique applies to all of the box-counting functions that MacMahon investigated, as well as the infinitely many more suggested by his work. Unlike my first paper which used special techniques, the ideas in this paper worked far more broadly.

In some sense, this paper is my favourite of all that I have written. It started with a natural question that had arisen in my work, and it bounced around in my head for a few years and through a few separate institutions. When I finally came to tackling it, not only did it answer in the affirmative the question I’d been asking, but it told an even richer tale.

However, as I mentioned in the previous post, there is still one outstanding part of this puzzle. All of the above is lovely and true—but where is the geometry? In fact, the nicer answer discovered in this paper teases me even more, since the fact that the building blocks are all the same suggests to me some deeper underlying regularity. But unfortunately, that is a question for another time, and another (as yet unwritten) paper.

[1] Not the least of which is how mathematicians write them. A normal Greek lowercase letter theta, as anyone clearly knows, is written

$\theta$

However, mathematicians also use a variant cursive font (which we pronounce—I kid you not—“vartheta”) which is written as

$\vartheta$

The more you know! Basically, mathematicians care far more than most anyone about fonts. We’re a bit weird.

[2] Well, sort of. There is a tweak that you have to put in place to account for variation over space and time.

Like cats, mathematicians like boxes.

Different ways of counting

To introduce the topic of my next paper, I want to step back a little bit to something classically studied by mathematicians. In fact, it has a pretty rich and interesting history going back to a major result due to Leonhard Euler, which I will get to in time.

So suppose that you’re trying to organize some boxes (of books, naturally) in the back corner of the room. For stability, you put them all up in one corner, maybe like so.

Of course, this isn’t the only way that you could have stacked these boxes. You also could have done like so.

Or more inspired by Ikea, even like the following.

A natural question is: how many different ways can you do this? For the 12 boxes shown above, it turns out that there are 77 different ways of stacking them (if you want a painful exercise, try to draw all of them!).

By adding up the heights of the columns, we can view these as different ways of writing sums that total 12. The three above are, respectively,

5 + 3 + 2 + 2 = 12

4 + 3 + 3 + 1 + 1 = 12

3 + 3 + 3 + 3 = 12.

We mathematicians call these sums (or diagrams—which we call Young diagrams) partitions of 12. As stated above, you could enumerate somewhat painfully that there are 77 different partitions of 12.
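If you would rather not draw all 77 diagrams, a short dynamic-programming computation (a standard counting technique, sketched here in a few lines) confirms the count:

```python
def partition_count(n):
    """Count the partitions of n. p[m] is built up by allowing parts
    of size at most k, one value of k at a time."""
    p = [1] + [0] * n
    for k in range(1, n + 1):
        for m in range(k, n + 1):
            p[m] += p[m - k]
    return p[n]

assert partition_count(12) == 77   # the 77 stackings of 12 boxes
assert [partition_count(n) for n in range(1, 5)] == [1, 2, 3, 5]
```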

Here are all of the partitions of the numbers 1 through 4:

1 is the loneliest partition of 1;

2 and 1 + 1 are better than one;

3, 2 + 1, and 1 + 1 + 1 are a bit of a crowd;

4, 3 + 1, 2 + 2, and 2 + 1 + 1, with 1 + 1 + 1 + 1 being the fifth wheel;

and they go on after that. Note that while in the first three cases the number of boxes is the same as the number of diagrams, this is coincidental—what we are interested in is the number of diagrams for a fixed number of boxes.[1]

Now as you may expect from my previous post, we probably want to make a generating function for these things. Euler’s theorem is that this has a very nice form as an infinite product (which isn’t as terrible as it sounds), which is essentially the inverse of the Dedekind Eta function.[2]

Rectangles, rectangles.

Anyhow, as interesting as this is, it isn’t our main point. In order to introduce this, we need to take these ideas in a slightly different direction. If you want to count the divisors of a number, you can represent them in similar ways—in fact, this may be how you were introduced to multiplication in the first place!—that is, we look at how many different ways we can arrange boxes into rectangles. For example, for 4 we have the following three possibilities:

a rectangle of width 1, one of width 2, and one of width 4.

Note that we threw two of the previous five diagrams out since they did not form rectangles. These remaining ones correspond to the divisors 1, 2, and 4 (the widths of the rectangles[3]). So the number of divisors of some integer is just the number of ways of arranging that many boxes in a rectangle.

There are of course other things that you could do with this idea, the simplest of which is summing the divisors, which in this description just amounts to adding up the widths of the rectangles. That is,

1 + 2 + 4 = 7.

The function in question we call the “sum-of-divisors” function, and we write it as $\sigma(n)$ (a lower-case Greek ‘sigma’). So the above says that $\sigma(4) = 7$. It’s a fun function to play with, and it satisfies the interesting property that, some of the time (can you figure out when?[4]), $\sigma(a \cdot b) = \sigma(a) \cdot \sigma(b)$. Functions like this are rather rare, and so mathematicians’ ears tend to perk up when we find them.
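A few lines of code make it easy to experiment with $\sigma$ and hunt for the pattern yourself (this is just a throwaway sketch of the rectangle picture):

```python
def sigma(n):
    """Sum of the divisors of n: a width w counts exactly when w divides n."""
    return sum(w for w in range(1, n + 1) if n % w == 0)

assert sigma(4) == 1 + 2 + 4 == 7
# The same examples as in footnote [4]:
assert sigma(2) * sigma(3) == sigma(6) == 12
assert sigma(2) * sigma(4) == 21 and sigma(8) == 15
```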

Yet more rectangles!

This brings us back to Percy MacMahon, and a little bit more information about my previous post. What he studied was how many ways you can partition a number into a row of rectangles, all of which have different heights (within a single diagram, that is). If we say that we want to have exactly two different heights, then we see that for 4 total boxes we have the following two possibilities:

3 + 1 and 2 + 1 + 1,

while for 5 we have the following 5 possibilities:

4 + 1, 3 + 2, 3 + 1 + 1, 2 + 2 + 1, and 2 + 1 + 1 + 1.

As usual, there are a number of ways you can study these diagrams; you can count the number of such possibilities, or you can “add up their widths” (in a certain sense), which is what MacMahon did. In fact, he did a little bit more than that—in his paper (with the catchy title “Divisors of Numbers and their Continuations in the Theory of Partitions”) he studied a total of 8 variations of this theme, and played around with them and showed a bunch of different properties.

And now here is where I come in. In the paper that I discussed last time, two of these functions had shown up naturally in some of my other research. George Andrews and I were able to show that these functions were modular forms, and that was that. However…

As I said above, MacMahon had studied 8 variations on a theme and I could only show that two of them had some nice properties. I found this a little intellectually unsatisfying: what about the rest of them? There arose two natural questions to ask at this point.

1. Are the rest of them modular?
2. Given that the first two variations occurred in my research (as will be discussed in a later post), do the rest of them arise in a similar way?

The second question is still one that I have not answered, and would love to understand better. The first, however, was the topic of my next-to-most-recent paper, which I will say more about next time.

[1] This sequence goes 1, 2, 3, 5, 7, 11, 15, 22, and so on. As usual, see my bestie the OEIS. Not surprisingly, this is one of the first sequences to show up on there.

[2] For those that are curious, we write it as

$\displaystyle \frac{1}{(1-x)(1 -x^2)(1 -x^3) \cdots (1 - x^k)\cdots}$.

The reasoning isn’t too hard to follow, and is kind of fun to think about: it relies on one fact (that you probably learned in high school!) about geometric series, namely that

$\displaystyle \frac{1}{1-x} = 1 + x + x^2 + x^3 + x^4 + \cdots$
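One can check Euler’s product claim directly by multiplying out truncated geometric series; this is a quick verification sketch of my own, not how one would prove it:

```python
def mul_truncated(p, q, N):
    """Multiply two coefficient lists, discarding any degree above N."""
    r = [0] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= N:
                r[i + j] += a * b
    return r

N = 12
series = [1] + [0] * N
for k in range(1, N + 1):
    # the factor 1/(1 - x^k) = 1 + x^k + x^{2k} + ..., truncated at x^N
    geometric = [1 if m % k == 0 else 0 for m in range(N + 1)]
    series = mul_truncated(series, geometric, N)

# The coefficients are the partition numbers: 1, 1, 2, 3, 5, 7, 11, 15, 22, ...
assert series[1:9] == [1, 2, 3, 5, 7, 11, 15, 22]
assert series[12] == 77
```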

There’s even a more interesting case if we stack boxes in the corner in a 3-d setting (with the terrible name of Plane Partitions), where the generating function is the famous MacMahon function. A colleague of mine essentially got his PhD by working with a variant of this problem, and describes his work as “counting coloured boxes in the corner of a room”; this is a pretty good description.

[3] Or if you’re feeling particularly contrarian, you could use the heights instead.

[4] Here’s a hint: $\sigma(2 \cdot 4) = \sigma(8) = 15 \neq 21 = \sigma(2) \cdot \sigma(4)$, but $\sigma(2 \cdot 3) = 12 = \sigma(2) \cdot \sigma(3)$.


When Percy met Pafnuty

Math papers are an odd beast. They out of necessity contain a lot of technical details and can appear quite abstruse. They are filled with jargon, formulæ, and other bits that can easily scare the non-specialist reader.

At the same time, there is a lot of beauty, elegance, and interesting ideas floating about. I hope, over the next few blog posts, to go over some of my published papers and to express their content in a way that will appeal to the interested non-specialist reader (Hi John!).

A happy accident

My first paper came out almost accidentally. I was working on a part of my thesis, mucking around with some computations that I was trying to understand, when I happened across a curious connection between my thesis problem and some mathematical work studied by Percy MacMahon in 1921. I couldn’t explain it, and so it was recommended that I appeal to George Andrews who was far more experienced in this subject. After a quick “I find your question extremely interesting” response from him, a few days later we had put together a short little paper which proved to be invaluable in my thesis.

So let’s go over this paper. It’s not a particularly long one, and it has a couple important ideas that crop up all throughout my work, which we will introduce and discuss here. The overview of the paper is that we prove some surprising connections between some mathematical work of the wonderfully bearded Pafnuty Chebyshev and some other work of Percy MacMahon (whose moustache is pretty spectacular as well). This ends up having some pleasant consequences for my thesis, which will be discussed at a later point.

But what does that all mean? Well, to begin with,

Let’s start by assuming you have a collection of numbers that you find interesting. We could start with 1, 2, 3, 5, 7, 11, say. You can start with your favourite set of numbers (I recommend looking around on the Online Encyclopedia of Integer Sequences, for example); your list can be as long—even infinite!—as you like.

Briefly speaking, what we are interested in is a way to talk about all of these numbers together at once, while respecting their order—which, after all, is part of their structure. We want to be able to then manipulate the whole list in ways that illuminate and provide information about the nature of the underlying numbers, or that provide information about whatever our source of these numbers was.

Let’s look at an example. The Fibonacci numbers are the following:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

That is, we start with the numbers 0 and 1, and then the next number is the sum of the previous two numbers. So for example, we have that 5 + 8 = 13, 5 and 8 being the two numbers before 13.

[There is a fun aside about Fibonacci numbers: If you take the ratio of successive Fibonacci numbers, say 3/2, 5/3, …, 144/89, …, then these get closer and closer to the “golden ratio”, which is approximately 1.618. By pure coincidence, the ratio of a mile to a kilometre is 1.609, which is quite close to 1.618. So if you want to convert from miles to kilometres, you can just choose the next Fibonacci number to get a good approximation. For example, since the next Fibonacci number after 8 is 13, it follows that 8 miles is pretty close to 13 kilometres. In practice, you only need to memorize the first few Fibonacci numbers to get a pretty good guess for this.]

So what is the “generating function”  for the Fibonacci numbers? This is given by

$0x^0 + 1x^1 + 1x^2 + 2x^3 + 3x^4 + 5x^5 + 8x^6 + 13x^7 + 21x^8 + \cdots$

where you can see that we’ve put the Fibonacci numbers as the coefficients in front of powers of a variable x. To use an analogy due to Herbert Wilf, we can think of a generating function as hanging the Fibonacci numbers on a clothesline.

Figure 1: A clothesline

Anyhow, you could ask and answer a lot of questions about these numbers using this clothesline very quickly. For example, you can show that the 7-th Fibonacci number is the closest integer to

$\displaystyle \frac{1}{\sqrt{5}}\Big[\frac{1 + \sqrt{5}}{2}\Big]^7$

Figure 2: Math!

(Try it out! It’s kind of cool!) This of course allows you to compute other Fibonacci numbers without having to compute all of the ones before it.
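If you want to try it without a calculator, here is the check in a few lines of code (using ordinary floating point, which is plenty accurate for small n):

```python
import math

phi = (1 + math.sqrt(5)) / 2   # the golden ratio, about 1.618

def fib_closed(n):
    """The closest integer to phi**n / sqrt(5), with F_0 = 0, F_1 = 1."""
    return round(phi ** n / math.sqrt(5))

# Build the Fibonacci numbers by the usual recurrence for comparison.
fibs = [0, 1]
while len(fibs) < 13:
    fibs.append(fibs[-1] + fibs[-2])

assert fib_closed(7) == fibs[7] == 13
assert all(fib_closed(n) == fibs[n] for n in range(13))
```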

The moral of the story is that generating functions are tools that we can use in mathematics to help us understand sequences of numbers—and hence better understand whatever it is those numbers are counting.

So let’s look at a few particular examples of generating functions. Consider a regular lattice of points in a line, a plane, or any higher dimensional space as in the following image.

Figure 3: Colourful telephone poles

This is a one-dimensional lattice; you can think of it like the integers along a real line, or evenly spaced telephone poles along the road. I’ve marked one special one, which I call the “origin”, in red. What we want to do is count the number of points a fixed distance from this origin. For technical reasons we will see shortly, we count them not by their distance $d$, but by the square of the distance, $d^2$. For example, the blue dots are at distance 2 from the red dot, but the squared distance is $2^2 = 4$.

We will now write down a generating function so that the coefficient in front of $x^{d^2}$ is the number of points whose squared distance from the origin is $d^2$. For the case above, we obtain the expression

$1x^0 + 2x^1 + 0x^2 + 0x^3 + 2x^4 + 0x^5 + 0x^6 + 0x^7 + 0x^8 + 2x^9 + \cdots$

since there is exactly one point whose distance is zero from the red dot (the red dot itself), and then two points whose distance is one, two points whose distance is two (so the distance squared is four), two points whose distance is three (and hence distance squared is nine), and so forth. There are of course no points whose distance squared is 2, or 3, and so on. This expression is called the theta function of the lattice.
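This series is easy to generate by brute force; here is a small sketch of my own that counts the lattice points $n \in \mathbb{Z}$ by squared distance $n^2$:

```python
def theta_1d(N):
    """Coefficients of the theta function of the 1-D integer lattice:
    the k-th entry counts integers n with n * n == k."""
    coeffs = [0] * (N + 1)
    n = 0
    while n * n <= N:
        coeffs[n * n] += 1 if n == 0 else 2   # n and -n for n > 0
        n += 1
    return coeffs

# 1 + 2x + 2x^4 + 2x^9 + ...
assert theta_1d(10) == [1, 2, 0, 0, 2, 0, 0, 0, 0, 2, 0]
```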

Our next example is the following lattice.

Figure 4: Dotty dotty dots

This new example is a two-dimensional square lattice. The colours are chosen so that red is again the origin, and then each other colour represents all the points a fixed distance from the (red) origin. If we count them up, we end up with

$1x^0 + 4x^1 + 4x^2 + 4x^4 + 8x^5 + 4x^8 + \cdots$

As we discussed above, the powers of $x$ are the squared distances, and the coefficient is the count. So there are 8 orange dots, all of whose squared distance is 5 from the red dot (remember your Pythagorean theorem: $a^2 + b^2 = c^2$!).

Figure 5: 1 + 4 = 5, I promise

In particular, this is why we use the squared distance instead of the distance—this will always be a whole number.
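The same brute-force count works in two dimensions, looping over a window of lattice points (again just an illustrative sketch):

```python
def theta_square_lattice(N):
    """Count points (a, b) of the square lattice at each squared
    distance a*a + b*b up to N."""
    coeffs = [0] * (N + 1)
    r = int(N ** 0.5) + 1
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            k = a * a + b * b
            if k <= N:
                coeffs[k] += 1
    return coeffs

# 1 + 4x + 4x^2 + 4x^4 + 8x^5 + 4x^8 + ...
assert theta_square_lattice(8) == [1, 4, 4, 0, 4, 8, 0, 0, 4]
```

The 8 at squared distance 5 is exactly the eight orange dots $(\pm 1, \pm 2)$ and $(\pm 2, \pm 1)$.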

Having written these two generating functions down, we get to our point. If you start with a lattice (as always, there are a few technical conditions that I’m eliding over), then when you build a theta function as we did above, you will always get what we call a modular form. In particular, both

$1x^0 + 2x^1 + 2x^4 + 2x^9 + 2x^{16} + 2x^{25} + \cdots$

and

$1x^0 + 4x^1 + 4x^2 + 4x^4 + 8x^5 + 4x^8 + \cdots$

are modular forms.

So what are modular forms? Well, I’m the first to admit that I’m not an expert in their study. To me, they are simply a particular kind of generating function that interests other mathematicians. More glibly, they are generating functions whose coefficients are likely to show up on the OEIS. Of course, these theta functions are not the only things that are modular forms; many other generating functions—although a vanishingly small proportion of all generating functions—are modular. (The theta functions are, however, the only ones for which we have any real sense as to what their modularity means; everything else is a mysterious and lovely accident.)

In the end, the study of modular forms is a rich and classical subject that unites many seemingly disparate areas of mathematics. It ties together geometry and number theory (as above), analysis (studying the generating function itself), and, of late, even some branches of theoretical physics. They were, in fact, central to Sir Andrew Wiles’s 1994 proof of the famous Fermat’s Last Theorem.

A = B

A surprising amount of mathematics comes down to statements of the form A = B. While this seems not so interesting in some cases (1 + 1 = 2), in its most exciting versions we are showing that two seemingly different objects or quantities are equal to each other, or possibly that there are two ways of writing the same thing.

We now get to the main point of our paper. At its heart, it’s an A = B statement, namely that there are two ways of writing a certain generating function (of Chebyshev polynomials, the aforementioned work of Pafnuty Chebyshev). What do we mean by that in this case? Well, look at the following image.

Figure 6: Up Up Down Down Left Right…

If we look at this generating function from one side (so to speak), we get a generating function for Chebyshev polynomials. If we look at it from the other side, we get one for some functions studied by MacMahon. This in and of itself is a nice and non-obvious statement, and was not known by MacMahon. But the real point of the paper was to examine certain consequences of this fact.

The Chebyshev polynomials are given by the rows in the diagram above, and MacMahon’s functions are given by the columns. Due to the fact that they describe the same collection of data—together with what we know about Chebyshev’s polynomials—we can deduce that the columns are modular forms, a fact which was also not known by MacMahon!

To re-iterate, we find a novel way of writing a generating function for Chebyshev polynomials, which we then leverage to show that another type of function is a modular form.

What are those functions, you might ask? They themselves are generating functions, specifically those that count a certain geometric type of object. However, as they are the subject of two other papers that I have written, I’ll discuss them in more detail in a later blog post.


Tropical Geometry, part 4.

We’re finally at the point where we can provide the first definition of Tropical Geometry, and for the sake of personal historicity, it will be the one that I didn’t particularly like when I learned of it.

Remember that the point of algebraic geometry, as it is studied, is that we study a geometric object by studying the functions defined on that object. The two perspectives are equivalent, and you can use tools from one to study the other (and of course, vice versa).

As an aside, this is one way that you can understand what is meant by non-commutative geometry, at least coarsely. One fact about all of the geometric objects that we study is that the rings of functions defined on any of these things are what we call commutative. That is, $f(x)g(x) = g(x)f(x)$ for every pair of functions $f(x), g(x)$. If you think about this, this is really a reflection of the fact that at its heart, the functions take values in real or complex numbers (well…), and so since those satisfy $x \cdot y = y\cdot x$, it follows that functions into them do as well.

So where does non-commutative geometry come in? Well, what happens when you study non-commutative rings? What do they represent “geometrically”?

The point in this case is that if we generalize the left-hand side of the equivalence

commutative $k$-algebras $\iff$ geometric stuff

then this in some sense should provide something that is a generalization of the right-hand side as well. This is a well-trodden tactic in mathematics, and provides us with notions such as a stack (note: not the same as a stack in the computer science world!), or derived schemes, or even derived stacks (combine the two).

Anyhow! So I promised that I would talk about Tropical geometry, and how it fits into the picture. Well, here goes.

See, a ring is something that satisfies a collection of properties (or axioms) which state how we can multiply and add things together. These basically mean that it behaves like the familiar integers, real numbers, or whatever—it looks like the normal number systems that we’re all familiar with. It turns out that this list of properties is all we really need to build a phenomenally rich geometric world.

So for Tropical geometry, we look at a slightly different starting point. Consider the real numbers, but with the following funny rules for “addition” and “multiplication”:

$x \oplus y = \max\{x, y\}$

and

$x \otimes y = x + y$

Ok, what the hell is this. Multiplication is addition now? Addition is… the maximum? This seems very strange (and it is!), but it turns out that with this bizarre notion of addition and multiplication we still get surprisingly many of the properties that normal addition and multiplication have. For example, we still have that

$x \otimes (y \oplus z) = (x \otimes y) \oplus (x \otimes z)$

i.e. the distributive law. We also have a multiplicative identity ($x \otimes 0 = x$ for every $x$) and multiplicative inverses (since $x \otimes (-x) = 0$, the identity). We can even have an additive identity if we include $-\infty$ in the package. What we can’t have, though, is additive inverses, and hence there is no subtraction.
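These rules are easy to play with in code; here is a quick sketch (with names of my own choosing) of the tropical operations and the properties just listed:

```python
NEG_INF = float('-inf')   # plays the role of the additive identity

def t_add(x, y):
    """Tropical 'addition' is the maximum."""
    return max(x, y)

def t_mul(x, y):
    """Tropical 'multiplication' is ordinary addition."""
    return x + y

x, y, z = 2.0, 3.0, 4.0
# the distributive law: x ⊗ (y ⊕ z) = (x ⊗ y) ⊕ (x ⊗ z)
assert t_mul(x, t_add(y, z)) == t_add(t_mul(x, y), t_mul(x, z))
assert t_mul(x, 0) == x          # 0 is the multiplicative identity
assert t_mul(x, -x) == 0         # -x is the multiplicative inverse
assert t_add(x, NEG_INF) == x    # -infinity is the additive identity
```

Notice there is no value $w$ with $t\_add(x, w)$ equal to $-\infty$ unless $x$ itself is $-\infty$: that is the missing subtraction.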

So yeah, weird. Something which satisfies this collection of rules is called a semi-ring, and with this in mind, we do exactly what you should by now be expecting: Tropical geometry is geometry done using semi-rings.

Tropical Geometry, part 3.

So we have discussed the following idea. Given a geometric object $X$, we can study it by studying the functions that are defined on $X$, which we will write as $\mathcal{O}_X$ (I’m not actually sure what the $\mathcal{O}$ stands for, but this is in a certain I’m-slightly-lying-to-you way the standard way of writing this).

Now, functions are objects that we can add together ($f(x) + g(x)$), we can multiply them together ($f(x)g(x)$), and perhaps if we feel like it, we can also scale them by multiplying them by a real (or even complex) number ($\lambda \cdot f(x)$). They are, to use mathematical terminology, a ring or an algebra. So restated, as above, we can associate to every geometric thingy $X$ its associated ring/algebra $\mathcal{O}_X$.

One of the great shifts in the 20th century is that you can actually do the reverse to this as well. That is, to every ring $R$, there is a canonically associated geometric object (a scheme) which we denote as $\mathrm{Spec}\, R$. Moreover, these associations are inverse to each other. That is, we have (in a certain sense)

$\mathrm{Spec}\, \mathcal{O}_X = X$

and

$\mathcal{O}_{\mathrm{Spec}\, R} = R$.

(I should really stress again that I am slightly lying to you here. There is a context in which this is 100% true, but there are some subtleties to what I am saying. Caveat lector.)

Let’s go over a few examples just to ground ourselves here. The simplest non-trivial example is, in some sense, the following. If we write the ring of polynomials in one variable as $\mathbb{C}[x] = \{f(x) = a_0 + a_1x + \cdots + a_nx^n \mid a_i \in \mathbb{C}\}$, then this is certainly a ring (in fact, an algebra, because you can also multiply polynomials by real or complex numbers), since you can add and multiply polynomials together. So what is the corresponding geometric object? It is just the complex plane! The rough idea is that a polynomial is (up to scale) determined by its roots, and so we identify a polynomial $f(x)$ with its zero set. That is,

$f(x) \leftrightarrow \{z \in \mathbb{C} \mid f(z) = 0\}$
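For instance (a sketch; `poly_eval` is just an illustrative helper), Horner's rule lets us check membership in a zero set directly:

```python
def poly_eval(coeffs, z):
    """Evaluate a0 + a1*z + ... + an*z**n via Horner's rule.
    coeffs = [a0, a1, ..., an]."""
    result = 0
    for a in reversed(coeffs):
        result = result * z + a
    return result

# f(x) = x^2 + 1 has zero set {i, -i} in the complex plane
f = [1, 0, 1]
assert poly_eval(f, 1j) == 0
assert poly_eval(f, -1j) == 0
assert poly_eval(f, 1.0) == 2
```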

For another similar example, if we consider polynomials in two variables (for example, $f(x, y) = 4y^2 - 2xy + 11xy^2 - \pi x$) and write the ring/algebra of all of these as $\mathbb{C}[x,y]$, then we have that

$\mathrm{Spec}\, \mathbb{C}[x,y] = \mathbb{C}^2$

and you may be able to guess how this generalizes.

Finally, to tie ourselves into the previous post, consider the following example. Suppose that we define the ring $R$ to be the collection of all two-variable polynomials $f(x, y)$, where we identify any two of them if their difference is a multiple of $h(x, y) = x^2 + y^2 - 1$. You can check that this makes sense as a definition, and given that, we have that $\mathrm{Spec}\, R$ is (caveat lector, again) the circle!

So the tl;dr version of this post: up to some finicky details that can be dealt with, algebraic things like rings and algebras are the same as geometric things. This is a powerful, powerful tool.

Tropical Geometry, part 2.

So last post we went over the origin of the name “Tropical Geometry”, but not what it was. I would like to start to do that, but I think that in order to do so we have to take a few steps back and understand a little bit about algebraic geometry as a whole.

The idea of algebraic geometry is to study the geometry of objects defined by algebra. Let’s look at the simplest non-trivial example. As you may recall from high school mathematics, a circle of radius $R$ in the plane can be seen as the set of all solutions to the equation

$x^2 + y^2 - R^2 = 0$

although I have perhaps written it somewhat idiosyncratically, with all of the terms on the left-hand side of the equals sign. The point is that a circle can be defined by a polynomial equation, and these are the objects that interest us: those geometric figures that can be described by polynomial equations (this is the algebra part of algebraic geometry).

By contrast, if we consider the graph of the function $y = e^x$, then there is no algebraic equation that the coordinates of this graph will satisfy, and so it is not an object that we are interested in in this context.

So how does one study these? Well, it turns out that a major insight was that you can study objects (geometric or otherwise) by studying all of the functions that are defined on those objects. In our case since we are concerned with—for the time being—figures that are cut out by polynomials in the plane, we are also going to restrict ourselves to considering polynomial-type functions defined on these objects. So what are those?

Well, an obvious source of such a function is any polynomial in the variables $x, y$. Since our circle lies in the plane, any function defined on the plane a fortiori will define a function on our circle: just define the value of the function on the circle to be the value of the planar function at that point.

The problem with this approach is that you will typically get too many functions. There may be more than one function defined on the plane whose values on the circle are the same! For example, the two polynomials

$f(x, y) = x^2$

and

$g(x,y) = -y^2 + R^2$

will secretly yield the exact same function on our circle. The reason is that $f(x, y) - g(x, y) = x^2 + y^2 - R^2$—but this is the defining equation of our circle! So what we should do is say that any two functions on the plane are, for our purposes, the same function if they differ by the defining equation of our geometric figure. It turns out that if we do this, then we can get a meaningful way to talk about all of the functions on our figure.
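We can even check this numerically: on points of the circle $x^2 + y^2 = R^2$, the two polynomials agree. A quick sketch, using the standard parametrization:

```python
import math

R = 2.0

def f(x, y):
    return x**2

def g(x, y):
    return -y**2 + R**2

# Parametrize the circle of radius R and compare f and g pointwise;
# their difference is x^2 + y^2 - R^2, which vanishes on the circle.
for k in range(8):
    t = 2 * math.pi * k / 8
    x, y = R * math.cos(t), R * math.sin(t)
    assert math.isclose(f(x, y), g(x, y), abs_tol=1e-9)
```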

Moreover—and this shouldn’t necessarily be obvious—in a certain sense, one can show that if we do this, then the geometric figure is entirely equivalent to the so-obtained functions. That is, it is completely equivalent to study either the figure itself, or the functions as we have described them. This is a very powerful shift in perspective.

Tropical Geometry, part 1.

Tropical geometry is a funny one. When I first learned of it, I had two reactions: first, I hated the name, and second, I thought it was unmotivated and was really just generalization for the sake of generalization.

I was wrong, on both counts.

Let’s talk first about the name, before we get into what Tropical geometry is and why I was wrong about its motivation (or lack thereof). It is named in honour of Imre Simon, a Hungarian-born mathematician living in Brazil. Since he was one of the pioneers in this field, and since he lived in the tropics… whence the name.

I’ve actually heard someone say further that he lived and worked on opposite sides of the tropic of Capricorn, which was also part of the name. That said, I’ve only heard this part once, and so I’m not sure how much I believe it.

Anyhow, originally I disliked the name due to its frivolous nature. Perhaps part of this was due to my initial dislike of the subject, but either way I was bothered by how un-descriptive it was. By contrast, mathematical terms are typically named either after a mathematician or in some descriptive manner. Hilbert space. Sheaf. Étale. Gromov-Witten theory. Solvable. Space-filling curve. Markov process.

In particular, the descriptive naming is something that works very well. The name itself tells you something about what you’re studying, which helps a lot in remembering the ideas involved.

However, one problem that occurs frequently is that mathematicians as a group can be strikingly unimaginative in naming their objects, and so we end up with a proliferation of “normal” objects, or “regular” ones. And one of the most infamous examples, of course, is that it is perfectly reasonable to describe something as being both reduced and irreducible.

By contrast—or even hypocritically—I had always loved the whimsical nature of some of the names coming from physics. I love the name quark, and even more than that I really love their names—strange, charmed—although I would have preferred that they stuck with the “truth” and “beauty” quarks instead of the “top” and “bottom” quarks.

And of course, herein lies a problem. On the one hand, I disliked the term “tropical” for its irreverence; on the other, I lauded physicists for their whimsical name choices.

In the end, the name won out, at least for me.


Some thoughts on a provoking discussion

So I recently stumbled upon (via Izabella Laba) the discussion at Scott Aaronson’s blog that arose from the events surrounding the dismissal of Walter Lewin.

Amazingly enough, I actually read through all 593 comments. This was a surprisingly intelligent and civil discussion (on the internet!) between people who don’t completely see eye-to-eye about everything, and about sexism, no less!

The discussion is a little disheartening for the first (roughly) hundred comments or so. However, starting some time around the linked comment, things get a lot better—as a whole, the major players in the discussion actually listened and seemed to empathize with one another, if not perfectly all the time.

A few thoughts:

1. I think that Scott (and the main people in the discussion) did a great job of ignoring the more troll-ish posts. There are a number scattered throughout—towards the end, in particular, there is a post which calls for the ban of Amy (if only for a few days!), which thankfully is largely ignored. However…

Comments such as these are an interesting instance of Lewis’ Law. I do believe that Scott is a good person who—as much as possible—eschews overly sexist views. But it’s interesting that underscoring a discussion about the role of women in STEM fields there is—quite literally!—a constant low-level buzz of commentary that at the least borders on anti-feminist. So if you were someone reading this post who held views similar to those of Amy (which are not radical in the least), on one hand you would be heartened that the major discussion is civil and interesting. On the other hand, it’s also believable that you might feel like the room in which the discussion is happening is subtly hostile to you and your views. Is it surprising that women might be discouraged from self-advocacy in situations like this?

I really should stress that I think that Scott did a wonderful job in this discussion of staying on point, not engaging the trolls, etc. But the existence of these background comments really does suggest something, I think.

2. On that note, seriously? Amy is by no means a “radical feminist” in her postings. I would describe her as pretty middle-of-the-road (although that may say more about me than anything, I guess). She advocates for communication and being aware of the existence of structural imbalances. CRAZY AND RADICAL INDEED.
3. Reading through this sort of discussion really makes me think again about the difficulty of communication when we don’t define our terms—or in this case, when either the context is difficult to convey, or the terms themselves may not be easily definable. Many of the flare-ups that occurred throughout the discussion seemed to result from a misreading of what one of the other posters was trying to convey. Not all, certainly, since not everyone agreed on a variety of issues. But there were still many of them.

Anyhow, it was a surprisingly edifying read, although I can’t really say if I would recommend reading through all 593 comments, which will take quite a long time regardless. Still, I’m glad to see that civil discussion about sexism among people who do not agree can take place in this day and age. Kudos to Scott, Amy, Gil, Vijay, dorothy, and a few others.


Projectivity (continued)

So what does it mean for a variety to be projective? Well, that’s easy: A variety is projective if you can embed it in projective space.

That’s easy, but that’s not particularly helpful.

What are the benefits of being projective? Why is it something that we should care about?

The way I see it, the main advantage of projectivity is that any analytic projective variety is in fact algebraic, i.e. it can be described in terms of zero sets of polynomials, and not just analytic functions. This is essentially a loose paraphrase of Chow’s theorem.

So this explains why projectivity is a good thing, but it doesn’t tell us how to detect it. To help with this, let’s consider what we do get if a variety (we will only really care about tori, but for now we will be more general) is projective.

On $\mathbb{P}^N$, we have the line bundle $\mathcal{O}(1)$. Thus, given a morphism $f : X \to \mathbb{P}^N$, we can pull this line bundle back to obtain a line bundle $L := f^*\mathcal{O}(1)$. This is an ample bundle (at least when $f$ is an embedding); that is, if we take a sufficiently high power $L^{\otimes n}$, then sections of this new bundle will in fact yield an embedding into projective space of some dimension. More specifically, choose a basis $s_0, \ldots, s_N$ for $H^0(X, L^{\otimes n})$. Then as this line bundle is base-point free (it comes from a map into projective space), we can consider the map

$X \to \mathbb{P}H^0(X, L^{\otimes n}) \qquad x \mapsto (s_0(x) : \cdots : s_N(x))$

This map will then be an embedding.

Conversely, given such a pair $(X, L)$ consisting of a complex manifold $X$ together with an ample line bundle $L$, we see that $X$ must be projective. Such a pair is called a polarized variety*.

Now, many manifolds come with natural choices of polarizations; for example, every non-elliptic curve has an ample canonical bundle (genus at least 2) or an ample anti-canonical bundle (genus 0), and so these are naturally polarized. Elliptic curves are projective as well, but you can’t use their canonical bundle, since it is trivial.

The same is of course true with complex tori; their canonical bundles are trivial, and so these do not provide us with a projective embedding. So let’s see what else a polarization gives us.

Let’s consider the first Chern class of our line bundle. We have (since we are working over the complex numbers) the exponential sequence of sheaves

$0 \to \mathbb{Z} \to \mathcal{O} \to \mathcal{O}^\times \to 0$

which yields the long exact sequence some of whose low degree terms are

$\cdots \to H^1(X, \mathcal{O}) \to H^1(X, \mathcal{O}^\times) \to H^2(X, \mathbb{Z}) \to \cdots$

where $H^1(X, \mathcal{O}^\times)$ is the Picard group of $X$ (denoted $\mathrm{Pic}(X)$); that is, the group of line bundles on $X$. The map to $H^2(X, \mathbb{Z})$ is the map which takes a line bundle to its first Chern class $c_1(L)$. It is this that we use to understand what makes a manifold projective.

In the case of tori, we know very well what $H^2(X, \mathbb{Z})$ (henceforth we will omit the coefficient ring if it is the integers) is. In fact, due to the Künneth theorem and the fact that topologically, a complex torus is simply a product of circles, we have the isomorphisms

$H^2(X) \cong \Lambda^2 H^1(X) \cong \Lambda^2 H_1(X)^\vee \cong \big(\Lambda^2 H_1(X)\big)^\vee$

Exercise: Check these!

That is, an element of $H^2(X)$ can be thought of as an alternating bilinear form $E$ on the underlying lattice $H_1(X)$. In particular, the first Chern class of a polarization on a complex torus $X = \mathbb{C}^k / \Gamma$ is an alternating form on its underlying lattice $\Gamma = H_1(X)$.

Now, it is not too hard to see (and you should check this) that there is a bijective correspondence between alternating bilinear forms $E$ on a lattice $\Gamma \subset \mathbb{C}^k$ which satisfy

$E(iv, iw) = E(v,w)$

and Hermitian forms $H$ on $\mathbb{C}^k$ which satisfy $\mathfrak{Im}\, H(\Gamma, \Gamma) \subset \mathbb{Z}$; this is given by the bijection

$E(-,-) \qquad \iff \qquad E(i-,-) + iE(-,-)$

Another way to say this is that alternating bilinear forms on $\Gamma$ which are compatible with the complex structure on $\mathbb{C}^k$ are (essentially) the same as Hermitian forms on $\mathbb{C}^k$ whose imaginary parts take integer values on $\Gamma$.
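As a sanity check in the simplest case $k = 1$ with $\Gamma = \mathbb{Z} + i\mathbb{Z}$ (a sketch; the convention $H(v, w) = v\bar{w}$ is one of several in use):

```python
# One-dimensional check: on C with lattice Z + iZ, take the Hermitian form
# H(v, w) = v * conj(w), and let E be its imaginary part.
def H(v, w):
    return v * w.conjugate()

def E(v, w):
    return H(v, w).imag

lattice = [a + b * 1j for a in range(-2, 3) for b in range(-2, 3)]

for v in lattice:
    for w in lattice:
        # E is compatible with the complex structure: E(iv, iw) = E(v, w)
        assert E(1j * v, 1j * w) == E(v, w)
        # Im H takes integer values on the lattice
        assert E(v, w) == int(E(v, w))
        # H is recovered from E via H(v, w) = E(iv, w) + i E(v, w)
        assert H(v, w) == E(1j * v, w) + 1j * E(v, w)
```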

And the magic about this is that an element $E \in H^2(X)$ is the first Chern class of an ample line bundle if and only if this latter condition is satisfied and the corresponding Hermitian form $H$ is positive definite (these are the Riemann relations).

*Well, that’s not exactly correct. It isn’t the line bundle $L$ itself that is the polarization, but the class of the line bundle in the Néron-Severi group of $X$. But it’s close enough.
