Submarines Don't Swim: A Conversation with Mathematician Kevin Buzzard

I attended a lecture at Stanford University, entitled “Can AI do Mathematics?”, given by professor and research mathematician Kevin Buzzard. I was intrigued by his description of proof-making as containing both ‘mundane’ and ‘artistic’ processes, the former involving what we generally attribute to the domain of computers (pattern recognition, computation), and the latter involving the more nebulous qualities of insight, abstraction, and aesthetics. It occurred to me that the questions “Can AI do Mathematics?” and “Can AI write poetry?” may be explored simultaneously.

AUDREY ZHENG: According to Aristotle, the heart of poetry is metaphor-making, or seeking insight by drawing a relation between concepts and objects that might be perceived as unalike. It seems like this sentiment about metaphor can also be extended to math. I am reminded of how the proof of Fermat’s Last Theorem involves relating elliptic curves to modular forms. Maybe AI’s ability to see these relations in math could foretell its ability to make poignant metaphors in literature. What are AI’s present capabilities, or possible capabilities, in creating proofs by drawing relations between diverse concepts?

KEVIN BUZZARD: A-ha! So, AI is completely dumb, right? It learns from what it’s trained on. If you give AI a bunch of links in mathematics, who knows what it might do? It might suggest the same links again, or it might suggest links that are absolute nonsense, or it might get lucky and it might have a brilliant idea. It’s really hard to control what these things are doing. You want it to be brilliant, but it’s not capable of being brilliant because it’s not capable of having any insight at all. It’s just trying to follow patterns. There are two ways of fixing AI: one is to try to make a system that thinks and understands, which is extremely difficult, and the other is to give it more and more data.

AZ: I read an essay by Freeman Dyson where he described how mathematicians are either “birds” or “frogs”. “Birds” survey large vistas and “frogs” observe the flowers near them in minute detail. Do you think AI has a predilection towards being either a bird or a frog?

KB: My first reaction is to say that AI is like a space-alien, not like birds or frogs at all. AI is looking at the questions that humans are interested in, but from a completely different perspective that we’ve never encountered before. One thing I can say is that AI is very bad at being a frog. When you are being a frog, it is all about technical understanding and putting together all the pieces, working in some small area and just building. AI is bad at being a frog because it doesn’t understand a thing. It’s bad on detail. 

AZ: Do you consider yourself as being either a bird or a frog?

KB: Hmm, yeah, I’m very much a frog. One-hundred percent frog.

AZ: Oh, okay. I was hoping for some nominative determinism.

KB: [Laughs] Well, here’s the thing, you mentioned Fermat’s Last Theorem, and I’ve got a five-year research grant to try to teach this theorem to a computer, which involves a lot of, just, paying very close attention to detail. I’m doing a manual translation task, basically, for the next five years. Now, this is not AI. What I’m actually doing is making training data. I’m saying “this is what accurate mathematics looks like”, and hopefully the AI guys learn from it. Earlier in my career I did more bird-like things. But I’m currently in some froggy corner.

AZ: In both literature and math, I’ve read accounts of “annihilating the human” as liberating the intellectual process. For example, the novelist Balzac masturbated and caffeinated excessively as a means of disappearance, and Hilbert emphasized that the concepts of “point”, “line”, and “plane” could be entirely defamiliarized from our spatial understanding. Do you think AI can have an edge on humans in that it’s very good at functioning with pure formalism?

KB: I think insight gives humans an edge. When humans are manipulating axioms they have a strong geometric insight into what’s happening, whereas a computer has no insight and will quite happily think about dumb problems that we can see immediately are dumb because we have the full picture. Modern AI, machine learning, tries to teach itself human insight by looking at a huge amount of data. It can figure out the way that humans are thinking about points, lines and planes just because it’s seen so much stuff. With Hilbert’s axioms this is problematic because the computer doesn’t have enough data to get any insight, so it’s just going to be putting things together randomly. But on the other hand it’s very fast.

AZ: When poetry people talk about AI, I hear a common sentiment that a poem written by AI can never be considered a “good poem” because it lacks an inherent human quality. Do you think that a mathematical proof has some aesthetic quality related to the fact that it was thought of by a human? Or do you think that question is kind of silly? 

KB: No, that doesn’t sound silly at all. There are formalists who would say that a theorem is true if you’ve written down a logical series of deductions, if you’ve reduced everything to the axioms of mathematics and the rules of logic. But many mathematicians would argue that that’s not the point of a proof at all, that the point of a proof is to advance human understanding. Mathematics, as it’s done by humans, is a cultural thing, right? It’s a bit weird because we live in a universe that a lot of people don’t understand, and we have ideas about the universe but some of them are grimy and just get the job done, and some of them are inspiring and beautiful. That’s what you hope from a proof, to see some beauty in it.
The job of mathematics isn’t to tick off all these problems, but to start to fly, to really understand what’s going on and become the bird. One of the reasons why we can relate modular forms to elliptic curves is this brilliant insight by Barry Mazur. He saw that a technique that had been introduced to geometry, which is a continuous part of mathematics, could also be applied to arithmetic, which is a discrete part of mathematics. He was being very birdish at the time. I suspect that AI will sort of steamroller through some aspects of mathematics in the next ten years, and I do fear that it will start by “proving” theorems of interest to humans, but by giving incomprehensible, boring, long proofs.

AZ: I did read an article recently, I’m not too sure about the specifics, about an AI generating a result that was true, but it was impossible to tell how the AI had come by this result. Apparently it was very disappointing. 

KB: I think that computers have somehow never given us a nice picture, that’s too subtle… You can give AI a big table of numbers that you’ve worked out, and it can say “oh, actually I can see a pattern in those numbers that you haven’t spotted yet”, but it can’t really explain what that pattern is. There have also been instances when you give AI a question and AI is like “oh, actually, I’m not going to just spot a pattern, I’m going to prove your question and my proof is going to be one-thousand pages long.” And the proof will just be manipulation of the axioms of mathematics. Computers have never given us the details and the insights and everything, the kind of package that humans deliver sometimes. They sometimes say “well, maybe something like this should work” or sometimes they’ll say “oh, I’ve figured out all the details” and it’s an incomprehensible blob of logic, just literally a big string of symbols.

AZ: In your lecture you spoke of proof-making as containing both artistic and mundane processes. You gave some examples of how AI has progressed in its ability to “do mathematics”, with an emphasis on Google’s DeepMind scoring well on the 2024 IMO [International Math Olympiad]. You did mention, however, that this achievement only indicated DeepMind’s ability to apply “pre-undergraduate mathematics”. Do you think it initiated any “artistic processes” while working through the IMO or only mundane ones?

KB: It’s both mundane and extraordinary at the same time. If you’re making a system that solves IMO problems, you know before you even start that there’s a finite collection of techniques that you need. In the IMO there are rules about what is and isn’t allowed, you can never ask a question that relies on calculus, so the total amount of mathematics you need to know to solve these problems is sort of bounded and manageable. But you have this exponential issue. Let’s say you’ve got to make twenty moves, and twenty is a small number, right, and each move you can either go right or left or straight on, and there’s only one way to get to the end. The problem is that the number of ways you can go is three times three times three twenty times… and this is now in the billions. So the AI has done something magical, even if it only needed a limited number of techniques and a few moves. 
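A quick check of that figure, assuming the simple model Buzzard sketches here (twenty moves, three choices at each step): the number of possible move sequences is

$3^{20} = 3{,}486{,}784{,}401 \approx 3.5 \times 10^{9}$,

so a blind search really does face billions of candidate paths, even when each individual step draws on only a bounded toolkit of techniques.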

AZ: I suppose that although the students who took the IMO were solving the same problems, their process of solving them must have been very different from the AI’s. Like they must have started with some sort of artistic insight.

KB: What the humans are doing is art for sure. We don’t really have the right words to explain what the AI is doing, we can’t really call it art or science or anything. Asking if a computer can do art or science is like asking if a submarine can swim, which is the analogy that Dijkstra drew. Can a submarine swim? Well, it’s moving through the water. One of the AI solutions to one of the IMO questions starts by saying “let’s do induction on twelve”. Now, this is a completely meaningless thing to say. You do induction on x, you do induction on a variable. The step was meaningless, but the computer did it anyway. That to me is an indication that you can’t really say the system is smart.

AZ: I have one last question for you, depending on your answer, and that question is: what is your favorite book?

KB: This is a very hard question. Can I ask my partner? She might know. 

KB: [to someone off camera] Tam? What’s my favorite book? What was that book by the same guy that wrote Ulysses?

KB: Oh, Finnegans Wake! In the sense that it’s one of the few books I own. I bought that book forty years ago and occasionally pore over it. I don’t own many books. I own many, many math books, but I wouldn’t say they’re my favorites, they’re just tools for my job.

AZ: Have you ever read Alice in Wonderland? I heard that it's very popular with mathematicians.

KB: Oh, many times. I had Alice in Wonderland and Through the Looking Glass, both of them in one volume. And I read and re-read those books as a child, I was absolutely full of wonder. And then I went ahead and read Dodgson’s work on logic.

AZ: Do you think he was better at math or at writing children’s novels?

KB: Oh, the logic stuff was clever but very basic, whereas the Alice books taught me how to think outside the box.

