
The Future of Education
From Education to Anthropic: Neerav Kingsland On the Impact AI Will Have
Diane Tavenner and I sat down with Neerav Kingsland, a longtime education leader who is now at Anthropic, to explore the evolving intersection of artificial intelligence and education. Neerav shares his journey from working in New Orleans’ public school reform to his current role at one of the leading AI companies. Our conversation covers the promise of AI tutors and teacher support tools, the key role of application “wrappers” for safe and effective student interaction with AI, and the need for humility and caution, especially with young learners. The episode also delves into the broader societal impacts of AI, the future evolution of schools, and the increasing importance of experimentation and risk-taking for students navigating an uncertain, tech-driven landscape.
Linked References:
“Machines of Loving Grace,” Dario Amodei, October 2024
Michael Horn
Hi, it’s Michael. What you’re about to hear is a conversation that Diane Tavenner and I had with Neerav Kingsland, a longtime leader in the education world who’s now at Anthropic, one of the major companies behind the large language models, Claude of course being theirs. I had several takeaways from this conversation, but I just wanted to highlight a few for you. First was Neerav’s humility in constantly saying we don’t know the answer to the full impact of AI on education, let alone society, and just how honest that felt. Second, I was struck by how much he sees AI tutors as a major use case for the technology; he referenced things like Amira or Ello as examples of where this could be going. Third, teacher support was something he named, whether for efficiency gains or to help with facilitation and the like. Fourth, I was struck by how he repeatedly emphasized the importance of caution when it comes to young children interacting directly with AI, particularly the large language models themselves, and his belief as a result that wrappers, essentially application layers, will be a critical part of how young people interact with AI, both to build in more content expertise and scaffolding, but also protection from the AI itself.
And then finally, the last thing I’ll leave you with: when we asked him what would be most valued in the years ahead for schools, he said something that is perhaps undervalued today, and that is risk-taking. And that’s something that certainly landed for me. So I hope you enjoy this conversation with Neerav Kingsland, and we’ll talk to you soon on Class Disrupted.
AI’s Role in Education Trends
Diane Tavenner
Hey, Michael.
Michael Horn
Hey, Diane. It is good to see you, and I'm excited to get into this conversation that we’ve been teasing our audience with since the opening episode around AI. We had a few weeks to get our guests lined up, and I think, as today’s conversation will show, it has been well worth the wait, I suspect. But there are a lot of developments, obviously, with the large AI companies constantly making exciting updates, rolling out new applications and features and the like. And so you and I have been constantly updating our own thinking, emailing back and forth a lot, and I think today is going to be a really exciting chance to continue to update our thinking.
Diane Tavenner
Yeah, I agree. I have conversations regularly with people who listen, who say, you know, this is the dialogue we want to have about AI and education. And honestly, I can’t think of a better person I’d like to be talking about this topic with. Our guest today is Neerav Kingsland, and Neerav is someone Michael and I have both known for many, many years. The reason why is he worked in New Orleans in the post-Katrina days, helping to build the nation’s first public school system where over 80 to 90% of the students attend charter schools. He served as the CEO of New Schools for New Orleans, then in a variety of philanthropic roles with the Arnold Foundation and Reed Hastings, and as a managing partner at the City Fund. And then Neerav made a big jump a few years ago and joined Anthropic, which is of course one of the handful of leading foundational AI companies, known for its large language model Claude, and he leads strategy there. So with education and AI covered, Neerav, it was hard for us to imagine someone better positioned to open this season and talk to us about the big picture of AI and education. So welcome. We’re really happy to have you here.
Neerav Kingsland
So thrilled to be here. Thanks, Diane.
Michael Horn
No, well, so, Neerav, I want to start with this because I’d love to just understand your pathway from education to Anthropic. And I’ll say up front, Diane may already know some of this, but I don’t. On your LinkedIn, it looks like you effectively left education and moved hook, line and sinker, if you will, into one of the leaders in AI. So I would love to just understand what is, you know, what led to the move. What does your day job look like these days? Is education still present in it?
Just help us understand the pathway.
Neerav Kingsland
Yeah, totally. So I had been following and reading about AI since my time in New Orleans. The book that really hooked me was The Singularity Is Near, the Ray Kurzweil book, which is about 20 years old now, but pretty prescient. I think he predicted AGI in like 2033 or something. And here we are. And so I think that opened my eyes to the possibility. I wasn’t technical enough to know how right he might be, but it was kind of big if true. After you read a book like that, then, you know, as a layperson, I just kept on reading, listening to podcasts, blogs and so forth. And then it was really when GPT-2 came out, so, kind of, you know, 2019 maybe.
Michael Horn
You were earlier than us.
Neerav Kingsland
Yeah, only because I was, like, trying to write poetry with it, and I was like, oh my gosh, this is pretty good. Like, we might be knocking on the door. And so, you know, I just started thinking these ideas and this technology could be the biggest thing to ever happen to humanity, and we might be getting pretty close. And so I started thinking very seriously about a career change there, though the transition was a little more gradual. I reached out to Open Philanthropy. I knew the leader, a guy named Holden, who ran that foundation, which is Dustin Moskovitz's foundation, and just asked if there was anything I could do. I knew they did a lot of AI safety work, and in a cool way, they had a lot of young founders, and I, at that point, was a little older and had scaled nonprofit and philanthropic work.
So I became an executive coach, just kind of an advisor to some AI safety founders, and did that on the side for about a year and a half. I got to know the field, got to know a lot of amazing people, and eventually paths crossed with the Anthropic folks. I was wowed by their mission and the team, and so joined about three years ago now. It was before ChatGPT, so it was really a small research org when I joined. And then, you know, the rest is history.
Michael Horn
It’s such an interesting trajectory. It’s such a cool example, frankly, of putting yourself in the middle of something. Right. To make that sort of a switch. How does it connect? Like, does it feel like you’re leaving education in some ways, or does this feel like some other way of framing it in terms of, you know, your own purpose, life, work, the arc of the things that you’ve done in terms of impact on humanity? I just love to get that insight.
Neerav Kingsland
Yeah, I’m still very involved in education. I’m on the board at the City Fund. There’s a new leader there, Marlon Marshall, who’s absolutely fantastic, so I stay connected through that. And then, my first couple years at Anthropic, we were mostly just trying to stay alive, and I didn’t have much to contribute on research, so I was doing sales, BD, fundraising, and did that for about two and a half years. So I went from an education nonprofit to, like, SaaS salesperson for two or three years, which was great. I learned a lot, and it’s, you know, very important, obviously, for a company to succeed. And then about a year ago, our CEO, Dario, wrote this piece called Machines of Loving Grace, which I’d highly recommend, and set forth kind of a positive vision for AI and society.
And at that point, we were a little more stable on revenue, and so I and a couple others kind of raised our hands to go create an org within Anthropic called Mission Labs. That’s actually where I sit now, where we incubate projects that can help AI do good in the world. I’ve done some education work, helped get our life sciences, kind of drug discovery, work going, and I’m working on cyber defense now. I can go into more detail on any of that. But through that, I just feel insanely fortunate to sit both at Anthropic and in the part of the org whose mission is to incubate projects to do good with AI.
Michael Horn
That’s fascinating. That’s really neat, and great of Anthropic to create a division that’s focused on all those questions as this emerges. And we’ll make sure to link to that letter in the show notes, because I think that’s important context for the audience to have. Just one more question before Diane jumps in, because I’m curious. We’re getting all these hot takes right now: that AI is going to radically transform education, that AI is going to be the worst thing to ever hit education, that it's maybe incremental at best, to, you know, that it actually obliterates the purpose of education itself in some pretty significant ways.
Give us sort of your headline of where you sit on that continuum, and you can provide the nuance; I just gave you the headlines to navigate.
Neerav Kingsland
Yeah, maybe fortunately, maybe unfortunately, any of those headlines could end up being true, and, you know, we can see what we can do to get to the good outcomes. Maybe let me start more at the micro of education as it exists today, and then we can zoom out a bit. In most ways, this is the most optimistic I’ve been about education in the 15 to 20 years I’ve been working in the field. The first of the two things that I am really, really thrilled about is AI tutors.
I have a four year old and a six year old. I experiment on them all the time, and I’ve just been wowed. My time in New Orleans was early days in the edtech space and the products were pretty nascent; it wasn’t a huge part of our strategy. But now I couldn’t imagine running a school where that wasn’t a pretty key part of what you were thinking about. And the AI tutors, specifically for teaching kids to read, programs like Amira and Ello, I think are very strong. For elementary math, I have my daughter in a program called Super Teacher, which I think is wonderful. And then as you go up into high school and college, there’s just more and more; there’s a group called StudyFetch that builds on top of us that we’re thrilled by.
AI Tutors and Teacher Support
Neerav Kingsland
So it just feels like the AI tutors are going to happen. They’ll likely be very impactful, and we’ll get fairly close to the dream of scaling high-quality, one-on-one instruction for at least an hour or two a day for every kid. The other thing I’m super excited about is AI teacher support, both in the efficiency sense of lesson planning, but more in classroom facilitation. So you guys might have seen Course Mojo, which Eric and Dacia founded, where you basically combine AI giving live feedback in a classroom, that information going back to the teacher, and the teacher then being able to modify their instruction and how they’re facilitating the class. That all just seems pretty magical to me, so I'm very excited about that as well. The thing I’m worried about is: you can cheat with AI. Obviously we’ve seen that happen.
So it can make you dumber. You know, Anthropic intentionally doesn’t serve kids directly; I think it’s actually against our terms of service to be a child and use our product. And so we really want there to be an app layer on top of us that is shaping the experience for a kid so we can push it in the right direction. And then zooming out, like, where is this all heading? You know, I think the greatest opportunity is that we have a chance to flourish. We can choose the jobs we want, the education paths we want, and you can imagine a much better world than the grind a lot of the world has to be in today. I do think there’s a real threat. There are intellectual pieces coming out on a phrase called gradual disempowerment, which I’d encourage your listeners to get familiar with. It’s basically the idea that the more you hand off to the AI, the more you might hand off of your intellectual and emotional maturity, and humans could get disempowered.
And so I think staying on the good side of that is obviously very, very important. So all that’s to say, I agree with all the headlines and the future is, you know, up to us in some way.
Diane Tavenner
That was awesome. Super helpful, and there’s, like, 10 different directions we could go with that right now. I think one of the things that Michael and I have noticed is that it feels like education gets used as sort of a use case and a case study for how AI will be applied far more than it normally is with new technology, and we’re not used to this. We’re not used to sort of being at the center of the conversation and what’s happening. And so that’s been a really interesting idea for us to grapple with. One of the things you said there, Neerav, and I think this might be helpful to dig into for people, is that you don’t expect young people, kind of under 18, K-12, to be engaging directly with Claude. You expect there to be sort of this app layer on top of it, and you named a variety of different programs.
And so I’d love to unpack that a little bit more, because I don’t think most people think about that. I think they think AI is literally this dialogue box, and you just go back and forth, back and forth. And we’re really trying to uncover, you know, when you put that app layer on top, how do people engage with that, and what does that do? Especially for young people who don’t have skills yet and don’t have experience and don’t have knowledge. It’s very different when the three of us are using a dialogue box than with someone who hasn’t really built their, you know, whatever they might be, analytical skills, argumentative skills, their expertise. So let’s dig in a little bit. Like, talk to us about who builds on top of you, and how does that happen, and what does that look like?
Neerav Kingsland
Yeah, I mean, just to start from a values perspective. We need to be careful with kids. Yeah. As we’ve seen with social media, gaming, whatever, whenever there’s new technology, you don’t know how it’ll affect kids and this technology, particularly when they’re basically talking to a human-like figure that is increasingly more and more intelligent. Yeah. Our brains weren’t hardwired for that and kids need to be supported in how they use AI. So I think that’s just like our starting point. It’s early days.
Let’s not make dumb mistakes that we’ll look back on and regret. At the same time, let’s figure out ways to give kids access to this technology so they can benefit from it. So, to start, maybe at the extreme example: if you talk to Claude or you talk to ChatGPT or Gemini and you’re four years old, you’re not going to learn how to read. Like, it’s not gonna happen. But if you use Ello or Amira, like, you know, with my daughter, when she was about six, I started using Ello with her, and I was pretty convinced that if she just did that for 20 minutes a day, she would learn how to read, which was just spectacular; I really didn’t think she needed much human tutoring to learn how to read, given that app. And you can imagine how many kids across the world that would just be game changing for. The only piece that was interesting, which I think gets into the future of schooling, is there was no way she was going to do 20 minutes of that app without me sitting beside her. So I think, historically, when I think back on my time in New Orleans, very often tech was used as kind of a babysitter to allow the teacher to do small group instruction. And so a big curiosity for me, and I know, Diane, you’re a pioneer here, is how to get these tools into the school in a way where the teacher feels accountable for what’s happening and the culture of the school is motivating the kid to get through it.
Transforming Education with Tech Innovation
Neerav Kingsland
And I know groups like Alpha School are thinking a lot about the cultural piece now. But yeah, just the idea that if school was set up to really maximize the interaction with the app layer, we could have, you know, amazing gains, I really do think. But yeah, the short of it is I don’t think typing into a box, for, you know, kids under 18, is just great pedagogy. There’s so much more you can do, and we’re thrilled to be doing it. So maybe one last thing on the app layer: when I took over this role in Mission Labs, I knew education was a place we could start. So I just did a sprint, and over probably two months met with 40 or 50 ed tech companies, philanthropists, and VCs to see what was out there. And then, kind of informally, we just started working with 10 or 15 of them, giving them the same technical support we’d give to, like, the Fortune 500, but, you know, more out of a mission perspective. And so through that, we’ve gotten to start building with a lot of the app layer companies, which has just been wild.
Diane Tavenner
That’s pretty awesome. What does that look like when you build or work with them? I mean, I, again, I think people have no idea what this would even be.
Michael Horn
You know, well, just to extend that, right, Diane? I think a lot of people say, well, like, why doesn’t Anthropic just do it all? Like, why do we even need the apps from third-party companies? Right?
Neerav Kingsland
Totally. You know, Michael, to your question: in most domains right now, to really understand the person on the other end, in this case children and their needs, you need domain expertise. You know, maybe one day Claude out of the box will know everything, but it doesn’t right now. It doesn’t know how to be a great teacher the way, you know, educators building apps would. And so we don’t feel it’s ready to do it all, particularly in education. And then, in terms of what we do, it’s kind of like forward-deployed engineering. So we take a technical person on our team who’s an expert at building on top of Claude, and we’ll do an intake meeting where we try to understand the overall mission of the org we’re working with, and then their product roadmap, what they want AI to be able to do, and where they’re struggling. Then we just dig in with them very tactically. It might be, like, a shared Slack channel, a weekly meeting, and we try to get whatever they’re building out to launch. We’ll stick with them until that happens.
Diane Tavenner
That’s awesome. Let’s shift to older young people, if you will. You know, I’m now really focused on the successful launch of young people post high school into whatever their postsecondary pathway is, and into their first foothold job and careers and life. And I think that your CEO has been one of the first and few people to be really honest about the short and medium-ish term impacts, potentially, on careers, especially for young people. I think we’re seeing some data and statistics suggesting that recent college graduates are struggling to find first jobs and AI might be a factor there, and clearly there are complicating factors around the economy and whatnot. But I think if we look back at history, it’s logical to assume that with such a seismic transformation we will see many jobs go away and new jobs created, but there might be some gaps in the timeline where that’s going to be a little bit rough.
Like, how do you think about that? How do you think we should be thinking about that? How does that influence what you think we should be focused on in high school and postsecondary, for those of us directly serving kids?
Neerav Kingsland
Yeah, you know, I think at Anthropic we just try to be open and honest about what we’re seeing and where the tech's going, and, you know, ultimately we’re not policymakers, so we want to inform the people, both citizens and government, who are making these decisions. Maybe, to your point, to zoom out a little bit: we’ve been through these transitions before. We have. And, exactly to what you said, they can be painful while they’re happening, even if you end up in a better place on the other side. The last big one we went through was farming to the Industrial Revolution. Coinciding with that was basically the falling of the monarchies across Europe, and then we went on a 150-year exploration to kind of get to capitalist, welfare-state, democratic systems, at least in Europe and the U.S.
And, you know, a couple of world wars in between. So it was an extremely tumultuous time. And whatever happens in this transition, I hope it happens much more peacefully. I think we have the lessons of history now in a way we didn’t back then. So all that’s to say, just setting the stage, I think you’re absolutely right, and big changes are likely afoot. In terms of what that means right now, I find that to be a very confusing question that I personally don’t feel like I have good answers to. And, you know, I find that I live kind of in two worlds.
One, when I show up at Anthropic every day, and then I go home and, like, teach my kids to read or whatever. And I don’t quite know how to put those two worlds together sometimes. So I think the short answer is I really don’t know. Like, if I was a kid in college, what would I do differently? It’s very hard to know. I’ll give, like, a take because I’m on a podcast, but this is low confidence. The things that I’ve been thinking about, you know, for my own kids on some level, are experimentation and risk-taking. I think those are probably already undervalued in school relative to just, like, grinding and taking a test and so forth. And so I think they’ll be even more important during a time of transition, because the paths will be less structured and we’ll just know less, and so, trying, failing.
You know, the more you can do that, the earlier in life, probably the better. Then another thing I’ve been curious about is whether the ability to manage AIs as basically small teams could be a very important thing. You know, managing teams is a very important skill, obviously, as we all grow in our careers, and when you look at business schools now, they’ve really restructured around doing work in teams. So I have been curious about what it means to have a team of AIs working for you, and how that should affect, like, high school, college, grad school, and early employment.
Diane Tavenner
It was just so fascinating to me as someone who has been pretty fanatical about leadership development and management development and tried to move, when we’re thinking about humans in that regard, to a much more sort of collaborative approach to leadership and management. Now I think about AIs, I’m like, well, I think we might be going back the other direction. I’m not sure you take that collaborative human approach right?
I think you take a more sort of classic management approach. So maybe what’s old will be new again.
Neerav Kingsland
I always say please when I’m asking Claude for things. Err on the side of staying on its good side.
Navigating AI in Education
Michael Horn
But I appreciate your honesty, Neerav, that there’s a lot we don’t know right now around this. I want to stay on the question of the here and now with the older side of the young people, as Diane phrased it, just because you mentioned cheating, for example, and you’re seeing a lot of professors return to the blue book, oral exams, things of that nature. And I guess on the one hand I get it, and on the other hand it feels to me like maybe we’re not asking people to do the right things. Like, we need an update on the purpose of what they’re actually doing in the work, so we can see how they use AI with the knowledge and skills that they’re building to do something more than they could have before. I’d just love for you to think through that puzzle out loud with us, about how you’re framing those two approaches.
Neerav Kingsland
Yeah, I mean, I was talking to a couple of education philanthropists, or it was over email, I think, and I said there’s never been a greater, more exciting time to be an educational entrepreneur and to go create a school, for these reasons. Whether that’s in higher ed, high school, or whatever, what an amazing time to go build a school. And so, for all your listeners, I hope there are people out there who will go do some of the best work in the world that you can do. More generally, and this is kind of what the ethos of New Orleans was, a lot of trial and error, trying to figure out what works and what doesn’t, just needs to happen broadly across the country right now. And so I think my short answer is I hope a lot of people try things and we learn. That being said, we obviously have existing institutions.
I think I’m pretty sympathetic to the bluebook thing; that’s probably what I would do for a certain type of exam. I wouldn’t want that to be all my exams, but I do think having kids write in class in this transition is probably a pretty good thing to do. I also think it raises a bunch of questions about how well we are doing on education if all these kids are just cheating.
Michael Horn
Do we have the incentive structure toward encouraging risk taking?
Neerav Kingsland
Yeah.
Michael Horn
Yeah.
Neerav Kingsland
Or, like, I mean, you know, they’re kids, and, you know, 18’s not totally kids. But it worries me that, for whatever reason, not necessarily the kid’s fault, they don’t value the learning in and of itself. And that could be because they’re getting taught the wrong things, or because it’s hard and we’re all lazy, or whatever. But cheating is also a sign of people not valuing the work. And so that does raise larger questions.
Diane Tavenner
Yeah. Two things are coming up for me in what you’re sharing, Neerav. The first is we’ve both spent a lot of our careers in the space of empowering families and parents to have choice and options and opportunities for their children. And you’re talking about teaching your own children to read and do math. So, I mean, it seems obvious that this is going to give more options, create more opportunity, more autonomy, especially intersecting with a lot of the policy changes that are happening. How are you thinking about that? You just said there's never been a better time to create a school, but how do you think about it from the family perspective, the sort of consumer perspective, if you will? What’s possible?
Neerav Kingsland
Yeah. One of the things that’s really exciting to me: a couple of months ago I had the chance to go to Rwanda and visit schools. There’s a great organization called Rising Academies that we’ve been working with there. Truly spectacular. And hopefully we’ll be doing more in Rwanda and other countries in Africa, and then also in India, over the coming years. But AI, relative to most historical education innovations, I think will decrease inequality, because it’s basically a cheap way to scale great teaching, and, you know, if you have to rely on an individual human, there are obviously limits to what you can scale; there’s scarcity there in a way there isn’t with AI. So I think, big picture, on the family and consumer side, for people in under-resourced schools, this should be a boon if we can get it right. That all makes me pretty optimistic. And I feel way more empowered as a parent to have these tools to use with my kids if they were falling behind or anything. So broadly, if you can, again, get it right, avoid the cheating, get the app layer right, and get parents involved, I think it should be amazing for families.
Diane Tavenner
Yeah. It makes me wonder if people are going to start looking to schools for different things, and maybe they already are, because they do tell us they care about the activities and the sports and the social interaction and the engagement. And, you know, if you’re learning to read at home and you have your personalized math tutor and whatnot, it does sort of beg the question of what school looks like. And I think one other place we haven’t touched yet: I think people’s minds go to the AI being really direct to the student, how it is teaching them or tutoring them. But I think sometimes the unsexy stuff might be some of the most powerful stuff, like how is it actually helping us to transform the master schedule, literally, you know, the way the bus schedule used to dictate schools. So do you see anything in that space, sort of the structural aspects of running big schools and systems, and what might be possible there, and how might we see that and feel that, you know, in the field?
Balanced Learning with AI Tutors
Neerav Kingsland
I definitely remember the pain of bus routes from launching schools in New Orleans post-Katrina. That was a gnarly bus route environment. You know, I’ll just riff a little bit, but again, I think great school entrepreneurs will build the future here. So, you know, my daughter, she’s in first grade, goes to the local public elementary school, which is wonderful; we're very happy with it. And I don’t begrudge them for not, you know, a year into the AI revolution or whatever, having restructured the school. But what I wish my daughter’s school looked like right now would be that she’d go to school. She’s six, and so maybe 60 to 90 minutes a day on screen is probably the max I’d want, with AI tutors that were doing reading and math.
And, like I said, with the teacher highly involved in her progression, human tutoring augmenting it as the data comes out on where she’s struggling or not, and a culture that incentivizes completion, my guess is she would be moving much faster, in obviously a more individualized way, if it was structured like that. You could get a lot of the core content there, and then I think the rest of the day would supplement that. There are some things you want whole-group discussion around, a book or things like that, and so I think there’d still be a lot of room for teachers to guide learning in discussion-based formats, and then for the experimentation and risk-taking, which is the projects, whatever they might be, doing things with other kids. So some version of that, where core content’s delivered in an hour or two a day, then it’s supplemented with teacher instruction, and then you have more time for exploration.
Diane Tavenner
Yeah.
Michael Horn
Neerav, I’m struck by, like, you’ve said it several times now, the AI tutor, the power of that. Right. And the responsiveness to an individual, particularly if you build it in with the experience and insight that good educators and learning science bring to the table to create a good scaffolded experience. I’d love to get your take on this, because the AI tutor seems to be one of the flashpoints where you get a lot of skeptics coming out who’ll say the results aren’t nearly as good as you think. You talked about engagement, and you can solve that with the teacher, but they sort of feel like it’s very procedural, I think, would be the word they would use, and maybe not getting at the depth of the learning. And so I’d love your take on what they’re missing that you’re seeing about how these work fundamentally right now, and where they can go.
Neerav Kingsland
Yeah, well, to be clear, they might be right. So again, I think we just.
Michael Horn
I love your humility in all this, it’s so refreshing, by the way.
Neerav Kingsland
Yeah. Any vision I’m putting out, I think, needs to be subject to reality-based experimentation. So there’s a lot to figure out, though I think we’ll head in this direction. Maybe another way to say it: while I think there’s never been a better time to be a school entrepreneur, plausibly my hope would be that there’s never been a better time to be a teacher over the coming years. So I don’t want these tools to be dehumanizing for schools, teaching, or kids. You know, my wife was a high school math teacher in New Orleans, and so I’ve been, not as front and center as she was, but fairly close to front and center of how hard it is to be a teacher, and I obviously worked with hundreds of teachers in New Orleans during my time there. And it’s an extremely demanding and grueling job.
And I think most teachers would tell you they’re not spending their time the way they want to be spending it. And so, to Diane’s point, if we can get more efficiency in, and if we can get some of the more routinized parts of teaching offloaded to AI, I think the teacher’s job can be a lot more creative and wonderful as well. And maybe that’s where some of the depth that plausibly could be missing right now could come from. So we’ve got a lot of arrows in our quiver. AI is one of them. But the teacher is just going to be absolutely necessary. Obviously I would not want to send my six-year-old to a school where she’s on a screen for 10 hours a day. I’m excited to see the role of the teacher evolve as well.
And I imagine a lot of depth will come from that.
Future Potential of AI Models
Michael Horn
I am struck by how you are in this very moderate position, though. Right? Because we’re seeing tons of legislation right now starting to move toward getting rid of all digital screen time, and then there’s the flip side of not wanting it to be sort of the zombie apocalypse, if you will. So maybe as we wrap up, let me ask this broader question. Zoom back out, away from education, to the larger set of tools and applications you’re working on.
You’re seeing all sorts of different things that Anthropic and Claude, not just you, all the other LLM foundation models, are starting to tackle. And I’m curious: what are folks like me and Diane, and others in education, sort of discounting or not understanding that these models are capable of doing today, or that’s right around the corner, that we may not be seeing?
Neerav Kingsland
Yeah, it is hard. Things are moving exponentially, and our brains don’t think exponentially. One thing to do is go play with GPT-2. I think that was three or four years ago now. And then go talk to, you know, GPT-5 or Claude or whatever. I think visceral ways to feel how fast things are moving help you understand where we might be five years from now, because imagine if we make the jump like we did then over another five years.
And so I think, again, Anthropic’s just trying to be vocal that we, as the people who are closest to the technology, do think things are happening very, very fast, and there’s opportunity there, but there’s also a bunch of risk. In terms of where the models are heading, one way to think about it comes from an AI safety group called METR. One of the charts they put out that I think is great shows how long a model can do autonomous work, in this case in coding, at like 60% accuracy, I think is their bar or something. And a couple years ago it was like 30 seconds, and I think the latest was like four to eight hours. And so I think AI being able to do knowledge work in 24- to 48-hour or maybe week-long chunks over the coming years might be one way to wrap your head around it. I think that’s coming, and that’ll be a pretty big jump in technology.
Reflecting Growth Over Time
Diane Tavenner
I love this suggestion of going to play with GPT-2. I don’t know if you remember, but I had the good luck of, we were in a conversation right before the big models were announced, and you showed me, I guess, an early version of.
Neerav Kingsland
I remember that. Yeah, Claude in Slack was, I mean, and.
Diane Tavenner
And I must admit, I really didn’t get it. I was like, wait, am I just googling something? I don’t really understand exactly what’s happening. You certainly saw much more than I did at that moment. It took me a little bit to wrap my head around it. But I think about that moment, which I remember so clearly having with you, and totally not getting it, and quite frankly not being terribly impressed. And now, I mean, it’s just so dramatic, you know, my learning curve and my arc, and I’m a novice and a layperson. And so I love this idea of, can we sort of set markers for ourselves where we kind of document or record what we thought or believed in that moment, or how we experienced it, and then look back and reflect on those as things progress? Because it is.
I mean, I almost feel out of breath some days. Like it goes so fast.
Neerav Kingsland
Well, you shouldn’t feel too bad. As somebody who was a part of leading our Series C six months later, I can tell you maybe dozens of investors also were not too impressed with Anthropic at the time, but here we are.
Diane Tavenner
Well, by then I was, so maybe.
Neerav Kingsland
There you go.
Diane Tavenner
This has been awesome. Thank you so much for joining us. Before we let you go, Michael and I have a tradition of sharing with each other something we’ve been reading, listening to, or watching. We really try to keep it outside of our day jobs, but we fail at that quite often. And so we’d love to invite you to join in that tradition. Anything fun, intriguing, or interesting you’ve been consuming?
Neerav Kingsland
Yeah. Two things for you. One is maybe a little window into our world over here: the podcast everyone listens to at all the AI labs is Dwarkesh Patel’s. And so if you want to go deep, I’d recommend listening to that. A lot of our CEOs have been on it, and a lot of the researchers, and I always learn a ton there. And then the book I’ve been reading lately is really a wild one. It’s called Blitzed, a history of drug use in the Third Reich, which might be the best title for a book ever.
And you know, it’s probably fairly obvious what the book is about, but there was a lot of speed going on, particularly in the later years of the war. And not that that was monocausal in the fall of the Third Reich, but it played a role. And so it’s just an interesting aha of, why did historians miss that? And what might be going on in our own time that is non-obvious, that is pushing history in one direction or another, whether it be drugs or something else. But that’s a fun read. Yeah.
Diane Tavenner
Yeah, that one’s.
Michael Horn
I was gonna say it sounds like you knew that one, Diane.
Diane Tavenner
It’s on our shelf as well. The title and the cover are very fitting, for sure.
Michael Horn
Diane, what about you? What’s been on your playlist or bedside table recently?
Diane Tavenner
Well, I’ve gotten pretty obsessed with a lot of what Scott Galloway is talking about, and he is on a lot of podcasts, so he talks about it all over the place. I’ve really been listening to the Lost Boys podcast series, which is focused on bringing light to what he would describe as a crisis among our young men in America. And there are a number of stats that suggest that these young folks are in crisis. And for me, I think I went down this path as a mom of two sort of young men. And what I find is when I talk about some of the challenges or worries I have, there are lots of moms who come to me sort of quietly, in sort of whispered tones, and they’re feeling the same thing, experiencing the same thing, worried about the same thing. And so I do.
I think that it’s interesting and important, and I don’t know exactly what to do about it yet, but I feel compelled. So that’s where I’m spending some time.
How about you?
Michael Horn
That’s good.
Yeah. We had Richard Reeves on our Future U podcast last year around this, which was a great conversation. And Jeff Selingo is obsessed with Scott Galloway. I think it’s okay that I say that here. So those both resonate as well. Mine: I finished a book by Scott Anthony, who was an early collaborator with Clay Christensen, called Epic Disruptions, which is about disruptive innovation throughout history, some of which I don’t know if I would qualify as disruptive innovations myself, but they were all moments that changed things in pretty significant ways, and sort of the establishment’s reaction, or struggle, if you will, to get their heads around what was coming and how that would change things. And so it’s some pretty interesting flashpoints told in entertaining ways. So that’s been on my list, but we’ll wrap it there. Neerav, just a huge thank you. This has been a great conversation and stretched, I think, both of our thinking. And for all of you listening, please, please, please keep writing in with comments, questions, and lines of inquiry you want us to follow.
It’s been a real inspiration to me and Diane, and it has directed us as we thought about the season. And so we look forward to more, and we’ll see you next time on Class Disrupted.
