
Professor Ken Ono on Working with AI in Mathematics
ML4Sci
AI and Mathematics: Exploring Collaborative Potential
This chapter explores the influence of AI on mathematics, particularly focusing on the ABC conjecture and the development of benchmark problems for evaluating AI's reasoning skills. It highlights the collaborative relationship between mathematicians and AI, as they work together to enhance mathematical understanding and problem-solving capabilities.
Introduction
In this episode, I sit down with Ken Ono, professor of mathematics at the University of Virginia, to explore the evolving relationship between artificial intelligence and mathematical research. We discuss the cultural shifts in mathematics over the last 30 years, as well as what kinds of reasoning and creativity are uniquely human. Ken reflects on how AI models perform in mathematical benchmarks, the surprising ways they already assist with research tasks, and the real challenges of evaluating their capabilities.
We also talk about the Spirit of Ramanujan project, formal proof assistants like Lean, and why large language models might soon become the default lab partner for pure mathematicians. Here are three takeaways from our conversation:
1. Mathematics has shifted from solo practice to collaborative, cross-disciplinary research
In the past, mathematicians were often discouraged from collaborating. Today, partnerships are the norm, and fields like number theory now intersect with areas like physics, biology, and computer science. Cultural and institutional incentives, like NSF REU programs, have helped embed collaboration into how mathematics is practiced and taught.
2. AI is a powerful assistant in mathematical research, but it’s not yet creative
Large language models can quickly summarize unfamiliar areas of math, identify relevant literature, and even debug incorrect reasoning traces. While they outperform human researchers at scanning and synthesizing existing knowledge, they don’t yet generate fundamentally new ideas.
3. Benchmarking mathematical AI is harder than it looks, and is often misunderstood
Benchmarks can misrepresent how scientists work in practice; real research isn’t solving puzzles, it’s building ideas across messy, creative processes. The real challenge isn’t beating the model on a hard problem but rather designing a benchmark problem that is human-solvable, numerically checkable, and likely to remain unsolved by AI for 5–10 years. This effort is less about competition and more about probing model reasoning in a measurable, reproducible way.
Transcript
Charles
Ken, thanks for joining us.
Ken Ono
Charles, wonderful to be here.
Can you describe your research in combinatorics and number theory? (00:50)
Charles
Maybe we could start by talking broadly about your work. I know your background is in combinatorics and number theory. Some people might recognize those from undergrad courses, but could you give us a layman's explanation of the kind of research you do?
Ken Ono
I'm originally trained as a pure mathematician and have been working as a scientist for almost 30 years at various universities. The questions that inspired me early in my career were the famous problems in number theory and combinatorics. I started my PhD in the early 1990s at UCLA. Around that time, Andrew Wiles, who was then a professor at Princeton, announced a proof of the very famous problem known as Fermat's Last Theorem.
It's hard to believe it's been 30 years since that was proven. The theorem says that aⁿ plus bⁿ can never equal cⁿ for integers a, b, and c that are non-zero, and for n greater than 2. By contrast, the Pythagorean theorem tells us that a² plus b² can equal c², and there are lots of integer solutions like 3, 4, and 5. But Fermat's claim was that once you go beyond squares, no such solutions exist. Wiles famously proved that, although there was a hiccup early on.
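To make the contrast concrete, here is a small illustrative sketch (an editorial addition, not part of the conversation): a brute-force search over a modest range finds plenty of Pythagorean triples for n = 2, but no solutions at all for n = 3, consistent with Fermat's claim.

```python
# Illustrative sketch: contrast the Pythagorean case (n = 2)
# with Fermat's claim for exponents greater than 2.

def solutions(n, limit):
    """Return all (a, b, c) with 1 <= a <= b < c <= limit and a^n + b^n = c^n."""
    return [(a, b, c)
            for a in range(1, limit + 1)
            for b in range(a, limit + 1)
            for c in range(b + 1, limit + 1)
            if a**n + b**n == c**n]

print(solutions(2, 20))   # Pythagorean triples, e.g. (3, 4, 5)
print(solutions(3, 100))  # empty, consistent with Fermat's Last Theorem
```

Of course, no finite search proves anything; Wiles's proof is what rules out solutions for every n greater than 2.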
This kind of work falls into the realm of very abstract mathematics, often based on questions that go back centuries. Over the last 30 years, I've continued to work in pure mathematics. I’ve also gotten deeply involved in representation theory, which grows out of abstract algebra. I've thought a lot about the application of number theory to physics, especially string theory and the distribution of black holes.
More recently, my research has shifted a bit. I still write papers on number theory. I just wrote one on how to use partitions to detect prime numbers, which was recognized by the National Academy as a runner-up for the Cozzarelli Prize. But my research portfolio has also expanded to include data applications in athletics. I’ve worked with the U.S. Olympic swimming team, and more recently, the role of AI in mathematics as a partner in scientific discovery.
How much do mathematicians specialize versus work across different fields throughout their careers? (04:30)
Charles
Awesome. That's certainly a wide-ranging career. On that point, I’m curious: people like mathematician Terence Tao are known for spanning many fields. It sounds like you’ve also worked across a number of different areas, both academic and applied. How common is that in mathematics? Do most people move around between different fields, or do they tend to specialize?
Ken Ono
When I started graduate school in the late 1980s, it was typical for students and faculty to specialize. Even saying you were a number theorist was considered broad. You were expected to narrow further, like focusing on the Langlands program or multiplicative number theory.
Solo papers were the gold standard back then. Collaboration among students was actually discouraged. But that has changed dramatically. Today, most mathematics papers are the result of partnerships; three or four authors are common, and there are even large-scale collaborations. You may have heard of the Polymath Project, where some papers have two or three dozen authors.
So while many professors still specialize in a particular domain, it’s now much more common to see mathematicians using their expertise across fields. Terry is a good example, but so is my friend Jordan Ellenberg, who works in arithmetic geometry, AI, statistics, and more. There’s a growing appreciation for mathematicians who can span disciplines.
Charles
What do you think drove that change over the last 20 or 30 years?
Ken Ono
Part of it is cultural. About 20 years ago, we began encouraging undergraduates to do research, especially through programs like the NSF's REU program. Those programs were built around teamwork, with groups of undergraduates working alongside postdocs and faculty. That collaborative mindset carried forward.
We also see more institutional support for cross-disciplinary work. As the STEM advisor to the provost at UVA, part of my role is to identify opportunities for collaboration across departments. That kind of work is increasingly encouraged by university leaders, by academic societies, and by the science itself.
Look at the recent Nobel Prize in Chemistry, awarded for solving the protein folding problem using AI. That was a major chemistry question answered with tools from machine learning, which originated in electrical and computer engineering. A few years ago, it would have been hard to imagine that happening.
So this shift has both social and scientific roots. There's greater openness to partnership, and the problems we're trying to solve increasingly demand a broader range of tools. We’re in a new era of scientific discovery. I don’t think anyone doubts that AI, machine learning, and large language models are now central to research. That’s part of what makes your podcast so timely. It helps us navigate these changes, especially as we think about how to train the next generation of scientists.
Nobody wants to replace scientists. That’s not the goal. But we do have to think carefully about how to work alongside AI and how to make sure current scientists don’t feel threatened by it. These are big questions, and it’s good that we’re talking about them.
Charles
Yeah, that’s certainly part of the thesis of this podcast, that in some sense, everyone is an interdisciplinary scientist now, with AI as a paradigm large enough to potentially disrupt every field. That’s part of the goal here: to try to make sense of that.
Ken Ono
I wouldn’t go so far as to say every field. In fact, I don’t believe that large language models are anywhere close in many areas of mathematics. And it’s not due to a lack of interest in trying to apply AI to mathematics — a lot of people are thinking deeply about that. But at the cutting edge of mathematics, where progress depends on the invention and creation of entirely new ideas, I don’t think it’s clear yet whether AI will be useful.
I have a strong opinion that this is still the realm of the pure mathematician, whose main purpose is to generate entirely new ideas, not extensions or adaptations of existing ones. Most professional mathematicians probably don’t work at that level of abstraction, and I’d like to explain what I mean by that.
Throughout history, there have always been a small number of mathematicians — like Alexander Grothendieck, Peter Scholze, or Jean-Pierre Serre — whose ideas seem almost spiritual in origin. They give birth to entire new fields of mathematics, often from nowhere. And I’ve seen no evidence that AI can create ideas we haven’t already glimpsed in some form.
That said, AI absolutely has a role to play in certain areas of mathematics. It can be a useful partner, especially in the types of work most mathematicians actually do.
How would you describe what mathematicians do? (15:20)
Charles
Interesting. Notably, you’re making a distinction between different classes of mathematical work. For the majority of mathematicians, how would you conceptualize the work they do? That’s a relevant starting point if we want to understand how AI might integrate with or disrupt the field.
Ken Ono
This is important to clarify because many of the 700- or 800-word stories that pop up on social media every day don’t define the fields they’re talking about. It can be misleading, and I’ve been burned by that myself — nuance gets lost, and we end up with this kind of hysteria. The world is indeed changing quickly, but let’s take a deep breath and try to get it right.
So when we ask, “What do mathematicians do?” — the answer isn’t simple. Math is broad. You have applied mathematicians working on problems in engineering, such as designing bridges, spacecraft, or climate models. On the other end, you have pure mathematicians who are inventing new ideas in fields that may be so specialized that only two or three dozen people worldwide can understand them. These ideas may not have practical applications today, though they might in the future.
The vast majority of pure mathematicians, though, are working within their fields or adjacent ones, trying to answer open questions. Mathematics is a cumulative discipline, and we take pride in building on the knowledge that came before us.
In my case, I work in number theory, which is easier to describe: I study numbers. But other mathematicians might work on things like schemes, varieties, or topological spaces. Whether or not those are accessible to you depends on your training.
What do pure mathematicians do? We prove theorems. That’s our goal. Our aim is to confirm guesses as true mathematical facts, based on axioms and logical deduction.
Here’s a trivial example: every even number is divisible by two. You could define what “even” means in different ways, but it always comes back to the idea that dividing by two gives you a whole number.
But things get much more complex. You might ask: given a family of equations in variables x and y, what are all the solutions? For example, y = 5x + 3 is a line — everyone knows that. But make a small change to the equation, and it can go from high school algebra to territory no one understands.
Before AI, a pure mathematician might use a computer to look for examples and patterns. Then we’d build a theory and try to prove something like: every equation of this type has only solutions with certain properties.
Replace “equation” with “scheme” or “topological space” or “quantum invariant,” and the work is the same. What are the rules that govern the behavior of these objects? And can we confirm those rules with certainty? That’s what pure mathematicians strive to do. Not sometimes, but always: to establish truth.
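The "look for examples and patterns" step Ken describes can be sketched with a toy search (an editorial illustration; the curve chosen here is my example, not his): changing y = 5x + 3 to y² = x³ + 17 turns a high-school line into an elliptic curve, and a first move is simply to hunt for small integer points.

```python
import math

# Sketch of the pre-AI "get your hands dirty" step: search for small
# integer points on y^2 = x^3 + k, a curve one small change away from a line.

def integer_points(k, bound):
    """All integer (x, y) with y^2 = x^3 + k and |x|, |y| <= bound."""
    points = []
    for x in range(-bound, bound + 1):
        rhs = x**3 + k
        if rhs < 0:
            continue
        y = math.isqrt(rhs)
        if y * y == rhs and y <= bound:
            points.extend([(x, y), (x, -y)] if y else [(x, 0)])
    return points

print(integer_points(17, 100))  # includes (-2, 3), (-1, 4), (2, 5), (4, 9), (8, 23)
```

Spotting which points appear, and which stubbornly do not, is the kind of empirical foothold that precedes a theory and, eventually, a proof.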
Would you say most mathematicians are essentially selecting tools and strategies from the literature to tackle open problems? (21:15)
Charles
So I guess one model I have for pure mathematicians is that there’s a set of existing tools and theorems already present in the literature, and they’re essentially trying to find the right set of tools and chart a path toward some currently unproven object. Would you say that’s a fair high-level description of what most pure mathematicians do?
Ken Ono
Yes, every professional mathematician does a lot of that, but it would be a mistake to characterize pure mathematics as just that. When we talk about how models and AI can assist us, I’m going to come back to exactly what you just said. But let me break it down. What you described can be thought of in three major tasks.
First, a professional mathematician must know the literature and the main tools used in their field. For your listeners who’ve taken an introduction to proofs course, you’ve likely seen techniques like proof by induction or proof by contradiction, or assembling lemmas that together prove a theorem. These are foundational skills. As you progress and specialize, you move beyond basic techniques to very high-level theorems. You might be working under assumptions like the Riemann Hypothesis or using things like Hilbert’s Theorem 90. The level of depth increases, but the core idea is the same: understanding the landscape of results and how to combine them. That’s the first piece: knowing the literature and how to parse problems.
The second piece is strategizing. Given the enormous number of theorems and ideas produced by over two centuries of mathematics, it’s not enough to know what exists. You need to figure out how to creatively pull together ideas into a coherent proof. Many modern papers are over 100 pages long. Some major results involve hundreds of pages assembled over several years. This takes skill and subtlety.
The third skill is getting your hands dirty with examples. You have to test out low-hanging examples to get a feel for the landscape of a problem. You don’t just set out to solve a famous problem without a germ of an idea. You need some initial steps that give you a foothold, even if the final proof is hundreds of pages long.
Of course, for the deepest work — like the Local Langlands Correspondence, which made big news recently — even those three tasks are not enough. That kind of breakthrough involves inventing entirely new ideas. That’s the fourth task. It’s very rare, but some individuals like Grothendieck or Serre contribute in that way.
Most mathematicians don’t need to reach that level to be successful. You can be world-famous without ever having a groundbreaking idea that looks completely unlike anything seen before. That’s normal.
You can win a Nobel Prize in physics or chemistry without inventing a new kind of experiment. In science, we build on existing work. You might not even prove something yourself, but if you find a phenomenon that drives discovery for decades, that’s still hugely valuable.
Einstein is a great example. You can still win the Nobel Prize today for experimentally confirming a prediction of Einstein’s. That sort of contribution is common across the sciences, and it exists in math too. You might become famous for solving an old problem with ideas that escaped others. Or you might be the one who knows the literature better than anyone and finds a rare combination of arguments that works in exactly the right order. All of those are valid paths to success in mathematics.
How do entirely new mathematical ideas come about? (28:20)
Charles
Can you give us a better sense of how those novel ideas are born? For instance, I know the proof of Fermat’s Last Theorem involved ideas from very different areas. Is it usually something like unexpected connections or is it more like a new class of techniques coming out of nowhere?
Ken Ono
Great question. Let me give you two examples that reflect very different types of creativity.
First, Srinivasa Ramanujan. He was born in the late 19th century in South India and was a two-time college dropout. He was self-taught and kept thousands of formulas in three notebooks. He believed his ideas were revealed to him in dreams by a Hindu goddess. He didn’t value proof in the way professional mathematicians do because, to him, these ideas were given.
Fortunately, he wrote letters to professors in England, or he might have been lost to history. The formulas he recorded are still being explored today. We know how to prove them now, but we don’t always understand their full importance. Some of his formulas, written months before his death in 1920, are used in modern black hole physics, decades before black holes were even understood. His creativity was almost artistic, like a new era of painting. He saw beauty and meaning in expressions others overlooked. His reasons for caring about a formula often had no connection to why it would later become important.
Second, Andrew Wiles. Very different story. Wiles had formal training. He earned his PhD at Cambridge and worked in a field called Iwasawa theory, which deals with elliptic curves. In the early 1980s, German mathematician Gerhard Frey suggested a wild idea: if you could relate elliptic curves to modular forms — a bridge between two very different areas of math — you might be able to prove Fermat’s Last Theorem.
Japanese mathematician Yutaka Taniyama had earlier proposed what that bridge might look like. Frey speculated, “If someone builds this bridge, maybe we could solve Fermat’s problem.” At the time, nobody thought that was realistic.
But Wiles, working privately in his attic, actually did it. He built the tools and proved the theorem. The creativity of Taniyama and Frey to imagine the connection was remarkable. But what Wiles did was something else entirely. He had the vision and audacity to follow through and build the bridge himself. That kind of creativity is rare.
It was like watching someone climb Everest for the first time, or landing on the moon. The level of insight and perseverance Wiles and his student Richard Taylor showed was a once-in-a-lifetime achievement.
What are the main ways AI can be used in mathematics today? (37:15)
Charles
I know Terence Tao has given a few talks on AI and math, and he’s kind of laid out two or three approaches we can use. One is using machine learning and optimization to find counterexamples or generate connections between different examples. Another is using formal proof solvers like Lean to compile theorems and reasoning in a formal way. And then, of course, there's using large language models. Is that how you think about the different ways AI can be used in math? Are there particular approaches you've found especially promising or not so promising?
Ken Ono
Yeah, great question. Circling back to the Nobel Prize-winning work of Jumper and Hassabis on protein architecture, it's clear to many of us in mathematics — and likely to all scientists — that the use of AI in combinatorial construction problems is both undeniable and extraordinary. The computer's ability to explore possible constructions and mathematical architectures is inhuman. There are certain problems that humans really shouldn't be doing when a computer can do it better through clever brute-force strategies.
There are many examples of that. AlphaGeometry is one direction, but we don't need to go down that path right now. That work is very precise. It's similar to how AI discovered new openings in Go or chess, and those kinds of skills are useful beyond games, in drug discovery, and even in developing conjectures in pure mathematics.
Now, the work I've been most interested in lately is how AI could go beyond what we already expect computers to do. You mentioned Lean and the formalization of proof. Circling back to what I said earlier: pure mathematicians, not applied ones, rigorously prove theorems. That is the bread and butter of the field.
Kevin Buzzard, a friend and well-known number theorist, has been a strong advocate for using tools like Lean. He’s working toward a future where computers have a vast library of mathematics they can use to verify proofs. And that’s amazing. Just last week, there was a formalized proof of something called the ABC conjecture. It's a great piece of work in analytic number theory. We'd love to know if there are truly no counterexamples, but this Lean-based result tells us that any such exceptions are extremely rare. That’s a big deal. It shows how computers can help check our work, not do it for us, but that checking is incredibly valuable.
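For readers who have never seen a proof assistant, here is a toy Lean 4 statement (an editorial sketch, vastly simpler than the formalized work Ken mentions) showing the basic idea: the kernel mechanically checks every step of the proof.

```lean
-- A toy Lean 4 theorem: every number of the form 2 * n is even,
-- witnessed by exhibiting k = n. The formalized ABC-related work
-- operates at enormously greater scale, but the principle is the
-- same: the proof checker verifies each inference.
theorem two_mul_is_even (n : Nat) : ∃ k, 2 * n = 2 * k :=
  ⟨n, rfl⟩
```

Once a statement like this is accepted by Lean, there is no remaining doubt about it; that certainty is what makes a machine-verified library valuable for checking human work.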
I wish we could introduce something like Lean into federal government processes or airline operations. It’s that kind of precision.
Now, on the daily work of mathematicians: there's a lot of hype that AI will be everywhere, and I don’t disagree. AI can read MRIs, x-rays, and do many things better than people. But I haven’t yet seen AI come up with ideas that we haven’t at least glimpsed before.
That said, I’ve been working with a company called Epoch AI, which is developing benchmark problems to test mathematical reasoning in large models. Since late fall, I’ve been working with them and using cutting-edge models like Gemini 2.5 Pro and a high-end version of GPT-4, not the version most people have access to.
Our task has been to create a large set of benchmark math problems across fields — from topology to algebraic geometry to combinatorics to number theory — to test these models. These are not problems meant to mirror the research process of a mathematician. Instead, they are precisely defined problems where we know the answer in advance and can check them numerically. That’s key. These problems must have verifiable numerical answers, even if those numbers are a hundred digits long. And the bar for what counts as a good benchmark problem is very high.
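The grading side of such a benchmark can be very simple, which is the point. Here is a hypothetical sketch (the names and problem are mine, not Epoch AI's) of exact numerical checking:

```python
# Hypothetical sketch of benchmark scoring: each problem carries a
# precomputed exact answer, and grading is exact integer comparison,
# even when the answer runs to many digits.

BENCHMARK = {
    # problem id -> (statement, exact answer)
    "sum-of-cubes": ("Compute 1^3 + 2^3 + ... + 1000^3",
                     sum(k**3 for k in range(1, 1001))),
}

def grade(problem_id, model_answer):
    """Return True iff the model's answer matches the stored exact value."""
    _, expected = BENCHMARK[problem_id]
    return int(model_answer) == expected

print(grade("sum-of-cubes", "250500250000"))  # True
```

The hard part is not the verifier; it is authoring problems whose answers are exactly checkable yet whose solutions demand real mathematical reasoning.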
The media misunderstood this a bit. They thought we were struggling to find problems AI could solve. But the challenge wasn’t that: we can easily find problems AI can't solve. The goal isn't model versus mathematician. The goal is to understand what reasoning capabilities these models already have. We're trying to figure out how AI can be a partner, an assistant, a co-pilot in scientific discovery.
That said, I'm deeply impressed by where the best models are today. I don’t use AI in my research yet, but I wouldn't be surprised if I start within three or four months. It still makes lots of mistakes, but you have to look more deeply at what's going on.
When it comes to mathematics, LLMs are astonishing. It's far beyond where I thought we would be just six months ago. Using the best models, I can confidently say that I can type in a question in high-level mathematics in any field and, within a couple of minutes, the model will identify the relevant literature, name the leading researchers, and generate a three- or four-page summary of the topic with remarkable accuracy.
I’ve been quoted as saying these models outperform the best graduate students at top universities. I regret putting it that way. It would have been more accurate to say that, in some respects, they outperform my own abilities as a professional mathematician with 30 years of experience.
Because if you asked me questions from all over math, I’d have to admit there are only a few areas where I truly feel like an expert. In all the others, I couldn’t begin to tell you what’s currently considered cutting-edge, even if some of the world experts work down the hall from me. If someone were to talk nonsense in one of those fields but used the right terminology, I wouldn’t know how to tell the difference. What’s impressive is that a large language model, in every scientific discipline, can get you into the ballpark.
Where does it start to make mistakes? When the questions become very difficult or subtle. And some of my colleagues take comfort in seeing a model fail. But when those failures happen in areas outside my expertise, I don't find it comforting. I realize I would have gotten it wrong too. The model sounds like it knows what it’s talking about.
Models often resemble a strong beginning graduate student. They know the terminology, they know the statements of theorems, but they’re overconfident, and they miss the subtleties and the importance of details. But they’re still very good.
Now, I don’t want us to feel comfortable just because the models make mistakes. That can be misleading. You can trace the reasoning. You can see what the model is trying to do. And in fields with a saturated literature, the model has read the papers. It knows the theorems and even the lemmas and can follow the inner workings.
And when it makes a mistake, you can say, "I think you're wrong." It pauses, reflects — "the user thinks I’m wrong" — and then it will correct itself. It might say, "I accidentally made this mistake. Let me try again." If you approach the model like a fallible human partner, then you realize it can already function as a kind of research collaborator. Not as a black box that gives you an answer, but as a tool to access the accumulated wisdom of science and have a conversation with it. You can challenge it, and it will adjust.
People talk about writing good prompts to get the most out of these models. But as scientists, we can take it further. The model doesn’t just recognize patterns in the field you ask about; it brings in methods from related fields and applies them in new ways. That’s a skill I didn’t expect. I didn’t think, for example, that ideas from statistics could so quickly be brought into number theory. I assumed that in 2025, the models would only scrape directly relevant papers and get stumped by anything outside their niche. But they’ve already gone beyond that.
What makes benchmark problems for mathematical reasoning models so challenging to build? (55:15)
Charles
It sounds like the challenge in making benchmarks is creating problems that are machine-readable and can be incorporated into a benchmark suite. That’s hard, right? Not just finding open problems that AI can’t solve. There are plenty of those. It’s more that they wouldn’t make sense as benchmarks.
Ken Ono
Charles, that's right. At Epoch AI, Elliot Glazer, who leads the Frontier Math team, gave us a very specific challenge: we want the problems we create to remain unsolved by AI for five to ten years. That’s a high bar.
It’s not that I’m racing against a computer and can’t solve something by August. We’re looking for problems that are human-solvable today, that have verifiable numerical answers, but where the AI’s reasoning trace shows it's far off. We want to believe those problems will still be out of reach for models five to ten years from now.
That’s an incredibly hard challenge. And I wish the media would get that right. It’s not about me or the other mathematicians going head-to-head with large language models. It’s about that broader task. Though the media is right to ask, “Is this really where we are now—that it’s so hard to find problems like this?” And yes, it really is.
But this isn’t the typical work we do as mathematicians. That kind of framing is more like sport. Sure, we can come up with questions that the models will fail on, but it’s still remarkable how much they get right.
Charles
And I think that’s the other challenge with benchmarks. They try to measure model performance, but they don’t actually capture the full scope of scientific activity. People often point to AlphaFold as an endpoint, but really that’s just the beginning. How scientists actually use AlphaFold is a whole other question.
Ken Ono
Exactly. I don’t think of my AI work as being mostly about benchmarking. I don’t see myself as some kind of AI hero either. But I’m happy to say that I fully expect these models to become useful research assistants in the near future.
Google changed everything by making information easily searchable. Now, at a high level, all of science is at our fingertips. These models are like a supercharged Google, but with the added ability to compute, generate low-level examples, and understand strategies within specific fields.
One of the most impressive things I’ve encountered while working with these models is that sometimes we pose a problem that it can’t solve, even after significant nudging. It spins its wheels and gets lost. And when that happens, I like to end the session by asking: “So, what have you learned from your mistakes?”
And the model responds beautifully. It might say, “I learned that if I jump to conclusions under these hypotheses, I can make mistakes when given examples with certain properties.” It can thoughtfully summarize its missteps. I run these models in temporary mode, so they’re not being trained on these interactions. But if they were, they’d already be improving. That’s one reason I believe models like GPT-4 mini or GPT-5 could serve as real lab assistants, possibly by the end of this year.
How are you preparing your students to use AI in their research? (1:01)
Charles
So you mentioned you don’t currently use AI in your own research. But do your students? How do you think about preparing graduate students to work with these tools?
Ken Ono
I haven’t explicitly asked my graduate students and postdocs whether they’re using AI in their work. Maybe I should. But I’d be surprised if they are, mainly because these advanced models aren’t cheap, and our graduate students and postdocs aren’t exactly well paid. Most likely, they’re not using the same tools I’ve been testing. That might change. Even if prices stay high, capabilities will go up, so the value proposition could make it worthwhile.
I’d guess they’re not using these tools in their research, but maybe they’re using them for daily tasks like email writing or other routine things. I’d be surprised if they weren’t using them in some way.
Charles
Yeah, it would be an interesting survey, and maybe even something universities could offer as a benefit in the future. Imagine giving all graduate students access to a high-level model as part of their education.
Ken Ono
That’s a very controversial idea. Some schools will jump on it quickly, others will resist. But I agree with you, it’s going to be important.
How did your experience with the film The Man Who Knew Infinity lead to founding the Spirit of Ramanujan project? (1:03:00)
Charles
Yeah, we’ll see how it plays out. Maybe just one last question. You worked on the movie The Man Who Knew Infinity, about Ramanujan, and started the Spirit of Ramanujan project to identify emerging math talent around the world. How was that experience, and how do you think about cultivating talent in young people?
Ken Ono
Being part of The Man Who Knew Infinity was one of the most exciting things I’ve ever done. It just came out of the blue, and I’m so grateful it happened.
Hollywood decided to make a film about mathematics, and they chose Ramanujan’s story. When we screened the film in Silicon Valley, we had some well-known supporters. The Breakthrough Foundation helped fund it, and we screened it at Yuri Milner’s house. Dev Patel was there, Stephen Fry was there, and we did a Q&A afterward.
What stood out was how many audience members were involved in SETI: the search for extraterrestrial intelligence. I said, “You all are searching the stars, but I think the story of Ramanujan tells us we should also be searching planet Earth for terrestrial intelligence.”
That struck a chord. The president of the Templeton World Charity Foundation came up to me afterward and said, “You’re right. What could you do with $550,000?” That’s how the Spirit of Ramanujan project was born.
Since then, we’ve supported 125 young scholars from around the world, from about three dozen countries. The program is on pause right now due to funding issues, but through last year, it’s been amazing.
Charles
That’s awesome. How did you find the students? I know Ramanujan famously sent a letter to a professor in England.
Ken Ono
Yes, it’s funny. It’s an application process, but that story inspires it. Ramanujan’s letter to G.H. Hardy is what launched his career, and we like to think of our application process as a modern version of that. Students write us letters or essays that are very much in the spirit of Ramanujan’s message to Hardy. Of course, we also require recommendation letters and personal statements, which you’d expect these days, but it is about capturing that same spirit.
Charles
Awesome. Well, Ken, thanks so much for taking the time. This has been a fascinating conversation.
Ken Ono
Great. Have a good day, Charles.