
Faster, Please! — The Podcast

Latest episodes

Jan 24, 2025 • 28min

🤖 My chat (+transcript) with journalist Nicole Kobie on why the future of tech still hasn’t arrived

Nicole Kobie, a prominent science and technology journalist known for her work with Teen Vogue and Wired, delves into the frustrations surrounding the slow pace of technological advancement. She discusses regulatory hurdles that stunt innovation and critiques the media's hype around technologies like AI and driverless cars. Kobie highlights the historical patterns of risk aversion and how they affect current developments. The conversation also touches on the expected technological landscape by 2035, emphasizing the need for balanced progress amidst pressing global challenges.
Jan 16, 2025 • 30min

⚡ My chat (+transcript) with Virginia Postrel on promoting a culture of dynamism

Big changes are happening: space, energy, and, of course, artificial intelligence. The difference between sustainable, pro-growth change and a retreat back into stagnation may lie in how we implement that change. Today on Faster, Please! — The Podcast, I talk with Virginia Postrel about the pitfalls of taking a top-down approach to innovation, versus allowing a bottom-up style of dynamism to flourish.

Postrel is an author, columnist, and speaker whose scholarly interests range from emerging technology to history and culture. She has authored four books, including The Future and Its Enemies (1998) and her most recent, The Fabric of Civilization: How Textiles Made the World (2020). Postrel is a contributing editor for Works in Progress magazine and has her own Substack.

In This Episode

* Technocrats vs. dynamists (1:29)
* Today’s deregulation movement (6:12)
* What to make of Musk (13:37)
* On electric cars (16:21)
* Thinking about California (25:56)

Below is a lightly edited transcript of our conversation.

Technocrats vs. dynamists (1:29)

I think it is a real thing, I think it is in both parties, and its enemies are in both parties, too, that there are real factional disagreements.

Pethokoukis: There is this group of Silicon Valley founders and venture capitalists who supported President Trump because they felt his policies were pro-builder, pro-abundance, pro-disruption, whatever name you want to use.

And then you have this group on the center-left who seemed to discover that 50 years of regulations make it hard to build EV chargers in the United States. Ezra Klein is one of these people, and maybe it's limited to center-left pundits, but do you think there's something going on? Do you think we're experiencing a dynamism kind of vibe shift? I would like to think we are.

Postrel: I think there is something going on. I think there is a real progress and abundance movement. “Abundance” tends to be the word used by people who are more Democrat-oriented, and “progress” is the word used by people who are more — I don't know if they're exactly Republican, but more on the right . . . They have disagreements, but they represent distinct Up Wing (to put it in your words) factions within their respective parties. And actually, Up Wing is a good way of thinking about it, because it includes both people that, in The Future and Its Enemies, I would classify as technocrats (Ezra Klein read the book and says, “I am a technocrat”), who want top-down direction in the pursuit of what they see as progress, and people that I would classify as dynamists, who are more bottom-up and more about decentralized decision-making, price signals, markets, et cetera.

They share a sense that they would like to see the possibility of getting stuff done, of increasing abundance, of more scientific and technological progress, all of those kinds of things. I think it is a real thing, I think it is in both parties, and its enemies are in both parties, too, that there are real factional disagreements. In many ways, it reminds me of the kind of cross-party search for new answers that we experienced in the late ’70s and early ’80s, where . . .
the economy was problematic in the ’70s.

Highly problematic.

And there was a lot of thinking about what the problems were and what could be done better. One thing that came out of that was a lot of the deregulation efforts that figure in the many paeans to Jimmy Carter. He's not my favorite president, but there was a lot of good stuff that happened through a sort of left-right alliance in that period toward opening up markets.

So you had people like Ralph Nader and free-market economists saying, “We really don't need to have all these regulations on trucking, and on airlines, and these are anti-consumer, and let's free things up.” And we reaped enormous benefits from that, and it's very hard to believe how prescriptive those kinds of regulations were back before the late ’70s.

The progress and abundance movement has had its greatest success — although it still has a lot to go — on housing, and that's where you see people who are saying, “Why do we have so many rules about how much parking you can have?” I mean, yes, a lot of people want parking, but if they want parking, they'll demand it in the marketplace. We don't need to say, “You can't have tandem parking.” Every place I've lived in LA would be illegal to build nowadays because of the parking, just to take one example.

Today’s deregulation movement (6:12)

. . . you've got grassroots kind of Trump supporters who supported him because they're sick of regulation. Maybe they’re small business owners, they just don't like being told what to do . . . and it's a coalition, and it's going to be interesting to see what happens.

You mentioned some of the deregulation in the Carter years; that's a real tangible achievement. Then you also had a lot more Democrats thinking about technology, what they called the “Atari Democrats,” who looked at Japan, so there was a lot of that kind of tumult and thinking. But do you think this is more than a moment, just a brief fad? Or do you think it can turn into something where you can look back in five or 10 years, like wow, there was a shift, big things actually happened?

I don't think it's just a fad, I think it’s a real movement. Now, movements are not always successful. And we'll see; we already saw an early blowup over immigration.

That's kind of what I was thinking of; it's hardly straightforward.

Within the Trump coalition, you've got people who are what I, in The Future and Its Enemies, would call reactionaries. That is, people who idealize an idea of an unchanging America someplace in the past. There are different versions of that even within the Trump coalition, and those people are very hostile to the kinds of changes that come with bottom-up innovation and those sorts of things.

But then you've also got people, and not just people from Silicon Valley: you've got grassroots kind of Trump supporters who supported him because they're sick of regulation. Maybe they’re small business owners, they just don't like being told what to do. So you've got those kinds of people too, and it's a coalition, and it's going to be interesting to see what happens.

It's not just immigration. It's also that if you want to have a big technological future in the US, some of the materials you need to build come from other countries. I think some of them come from Canada, and probably we're not going to annex it, and if you put big tariffs on those things, it's going to hamper people's ability to do things.
This is more of a Biden thing, but there's the whole Nippon Steel can't buy US Steel and invest huge amounts of money in US plants because, “Oh no, they're Japanese!” I mean, it's like back to the ’80s.

Virginia, what if we wake up one morning and they've moved the entire plant to Tokyo? We can't let them do that!

There’s one thing about steel plants: they're very localized investments. And we have a lot of experience with Japanese investment in the US, by the way — lots of auto plants and other kinds of things. That sort of backward thinking, which, in this case, was a Biden administration thing (but Trump agrees, or has agreed), is not good. And it's not even politically smart, and it's not even pro the workers, because the workers who actually work at the relevant plant want this investment; it will improve their jobs. Instead we get monopoly creation: If things go the way it looks like they will, there will be a monopoly US steel supplier, and that's not good for the auto industry or anybody else who uses steel.

I think if we look back in 2030 at what's happened since 2025, at whether this has turned out to be a durable kind of pro-progress, pro-growth, pro-abundance moment, I'll look at how we have reacted to advances in artificial intelligence: Did we freak out and start worrying about job loss and regulate it to death? And will we look back and say, “Wow, it became a lot easier to build a nuclear power plant or anything energy”? Has it become significantly easier over the past five years? How deep is the stasis part of America, and how big is the dynamist part of America, really?

Yeah, I think it's a big question. It's a big question both because we're at this moment of what looks like big political change, and we're not sure what that change is going to look like because the Trump coalition and Trump himself are such a weird grab bag of impulses, and also because, as you mentioned, artificial intelligence is on the cusp of amazing things, it looks like.

And then you throw in the energy issues, which are related to climate, but they're also related to AI because AI requires a lot of energy. Are we going to build a lot of nuclear power plants? It's conceivable we will, both because of new technological designs for them, but also because of this growing sense — what I see is a lot of elite consensus (and elites are bad now!) — that we made a wrong move when we turned against nuclear power. There are still aging Boomer and older environmentalist types who react badly to the idea of nuclear power, but if you talk to younger people, they are more open-minded because they're more concerned with the climate, and if we're going to electrify everything, the electricity's got to come from someplace. Solar and wind don't get you there.

To me, not only is this turnaround in nuclear stunning, but consider that we had one of the most severe accidents only about 10 years ago in Japan. If you had asked anybody back then, they would have said, “That's the death knell. No more nuclear renaissance in these countries. Japan's done. It's done everywhere.” Yet here we are.

And yet, part of that may even be because of that accident: because it was bad, and yet the long-run bad effects were negligible in terms of actual deaths or other things that you might point to.
It's not like suddenly you had lots of babies being born with two heads or something.

What to make of Musk (13:37)

I’m glad the world has an Elon Musk, I'm glad we don't have too many of them, and I worry a little bit about someone of that temperament being close to political power.

What do you make of Elon Musk?

Well, I reviewed Walter Isaacson's biography of him.

Whatever your opinion was after you read the biography, has it changed?

No, it hasn't. I think he is somebody who has poor impulse control, and some of his impulses are very good. His engineering and entrepreneurial genius are best focused in the world of building things — that is, working with materials, physically thinking about properties of materials and how you could do spaceships, or cars, or things differently. He's a mixed bag, as a lot of these kinds of people are, but I'd say he compares well.

What do people expect that guy to be like?

Compared to Henry Ford, I'd prefer Elon Musk. I’m glad the world has an Elon Musk, I'm glad we don't have too many of them, and I worry a little bit about someone of that temperament being close to political power. He can be a helpful corrective to some of the regulatory impulses, because he does have this very strong builder impulse, but I don't think he's a particularly thoughtful person about his limitations or about political concerns.

Aside from his particular strange personality, there is a general problem among the tech elite, which is that they overemphasize how much they know. Smart people are always prone to the problem of thinking they know everything because they're smart, or that they can learn everything because they're smart, or that they're better than people because they're smart, and it's just one characteristic. Even the smartest person on earth can't know everything, because there's more knowledge than any one person can have. That's why I don't like the technocratic impulse, because the technocratic impulse says smart people should run the world and tell you exactly how to do it.

To take a phrase that Ruxandra Teslo uses on her Substack, I think weird nerds are really important to the progress of the world, but weird nerds also need to realize that our goal should be to create a world in which they have a place and can do great things, but not a world in which they run everything, because they're not the only people who are valuable and important.

On electric cars (16:21)

If you look at the statistics, the people who buy electric cars tend to be people who don't actually drive that much, and they're skewed way toward high incomes.

You were talking about electrification a little earlier, and you've written a little bit about electric cars. Why did you choose to write about electric cars? And it seems like there's a vibe shift on electric cars as well in this country.

This is the funny thing, because this January interview was actually scheduled because of a July post I had written on Substack called “Don't Talk About Electric Cars!”

It’s as timely as today's headlines.

The headline was inspired by a talk I heard Celinda Lake, the Democratic pollster (been around forever), give at a Breakthrough Institute conference back in June. Breakthrough Institute is part of this sort of Up Wing, pro-progress coalition, but they have a distinct Democrat tilt.
And at this conference, there was a panel about how to talk about these issues, specifically if you want Democrats to win.

She gave this talk where she showed all these polling results where you would say, “The Biden administration is great because of X,” and then people would agree or disagree. And the thing that polled the worst, in fact the only thing that actually made people more likely to vote Republican, was saying that they had supported building all these electric charging stations. Celinda Lake's analysis, digging into the numbers, was that people don't like electric cars, and especially women don't like electric cars, because of concerns about range. Women are terrified of being stranded; that was her take. I don't know if that's true, but that was her take. But women love hybrids, and I think people love hybrids. I think hybrids are very popular, and in fact, I inherited my mother's hybrid because she stopped driving. So I now have a 2018 Prius, which I used to take a very long road trip in the summer, where I drove from LA to a conference in Wichita, then to Red Cloud, Nebraska, and then back to Wichita for a second conference.

The reason people don't like electric cars is really a combination of the fact that they tend to cost more than equivalent gasoline vehicles and the fact that they have limited range, so you have to worry about things like charging them and how long charging is going to take.

If you look at the statistics, the people who buy electric cars tend to be people who don't actually drive that much, and they're skewed way toward high incomes. I live in a neighborhood in West LA that used to be full of Priuses — there are still a lot of Priuses — but now it's full of Teslas, and it is not typical. The people in LA who are driving many, many miles are people who have jobs where they're gardeners, or they're contractors, or they're insurance adjusters; they have to drive all around, and they don't drive electric cars. They might very well drive hybrids, because you get better gas mileage, but they're not people who have a lot of time to be sitting around in charging stations.

I think what's happened is that some groups of people see this as a problem to be solved, but a lot of people see it as more symbolic than not, and they let their ideal, perfect world prevent improvements. So instead of saying, “We should switch from coal to natural gas,” they say, “We should outlaw fossil fuels.” Instead of saying, “Hybrids are a great thing, a great invention, way lower emissions,” they say, “We must have all electric vehicles.” California has this rule, this law, that you're not going to be able to sell [non-]electric vehicles in the state after, I think, 2035, and what's going to happen is totally predictable: People will just keep their gasoline cars longer.
We’re going to end up like Cuba, with a bunch of old cars.

I swear, every report I get from a think tank, or a consultancy, or a Wall Street bank has for years talked about electric cars and the energy transition as if they were an absolutely done deal. Maybe it is a done deal over some longer period of time, I don't know, but to me it gets to your point about the top-down technocratic impulse: It seems to be failing.

And I think that electric cars are a good example of that, because there are a lot of people who think electric cars are really cool. They're kind of an Up Wing thing, if you will: a new technology, there’ve been big advances, and exciting entrepreneurs . . . and I think a lot of people who like the idea of technological progress like electric cars. In fact, the adoption of electric cars by people who maybe don't drive a whole lot but have a lot of money is not just about environmentalism, or coolness, or even status; it's partly techno-lust, especially with Teslas.

A lot of people who bought Teslas are just people who like technology. But the top-down proclamation that you must have an electric vehicle, and that we're going to use a combination of subsidies and bans to force everybody to have one, really doesn't acknowledge the diversity of transportation needs that people have.

One way of looking at electric cars, but also at the effort to build all these chargers, which has been a failure, and the effort to start creating broadband connectivity in all these rural areas, which isn't working very well: There was a lesson learned by people on the center-left, including Ezra Klein, that there was this wild overreaction, perhaps, to environmental problems in the ’60s and ’70s, and the unintended consequence is that, one, the biggest environmental problem, climate change, may be worse because we don't have nuclear power, and two, now we can't really solve any problems. So it took them 50 years, but they learned a lesson.

My concern, looking at what's going on with some of the various Biden initiatives, which are taking forever to implement and may be wildly unpopular: Will they learn the risk of this top-down technocratic approach, or will they just memory-hole that and move on to their next technocratic approach? Will there be a learning?

No, I'm skeptical that there will be. I think that the learning that has taken place — and by the way, I hate that, “a learning,” that kind of thing . . .

That's why I said it, because it’s kind of delightfully annoying.

The “learning,” gerund, that has taken place is that we shouldn't put so much process in the way of government doing things. And I more or less agree with that; in particular, there are too many veto points, and it is too easy for a very small group of objectors to hold up not just private but also public initiatives that are providing public goods.

I think the reason we got all of these process things that keep things from being done was because of things like urban renewal in the 1960s. And no, it was not just Robert Moses — he just got the big book written about him — this took place every place, where neighborhoods were completely torn down and hideous, brutalist structures were built for public buildings, or public housing, and these kinds of things, and people eventually rebelled against that.

I think that, yes, there are some people on the center-left who will learn. I do not think Ezra Klein is one of them. But price signals are actually useful things.
They convey knowledge, and if you're going to go from one regulatory regime to another, you'll get different results, but if you don't have something that surfaces that bottom-up knowledge and takes it seriously, eventually it's going to break down. It's either going to break down politically, or it's just going to waste a lot of money . . . You have your own technocratic streak.

But listen, they'd be the good technocrats.

Thinking about California (25:56)

Everybody uses California fires as an excuse to grind whatever axe they have.

Final question: As we're speaking, as we're doing this interview, huge fires are raging north of Los Angeles. How do you feel about the future of California? You live in California. California is extraordinarily important, both to the American economy and to the world, as a place of culture, as a place of technology. How do you feel about the state?

The state has done a lot of shooting itself in the foot over the last . . . I moved here in 1986, and over that time, particularly in the first decade I was here, things were going great, but the state was kind of stupid. I think if California solves its housing problem and actually allows significant amounts of housing to be built, so that people can move here, people can stay here, and young people don't have to leave the state, that will go a long way. It has made some positive movement in that direction. I think that's the biggest single obstacle.

Fires are a problem, and I just recirculated on my Substack something I wrote about understanding the causes of California fires and what would need to be done to stop them.

You’ve got to rake that underbrush.

I wrote this in 2019, but it's still true: Everybody uses California fires as an excuse to grind whatever axe they have.

Some of the Twitter commentary has been less than generous toward the people of California and its governor.

One of the forms of progress that we take for granted is that cities don't burn regularly. Throughout most of human history, regular urban fires were a huge deal, and one of the things that city governments feared most was fire and how to prevent it. There was the London fire, and the Chicago fire, and (I just looked this up yesterday) there was a huge fire in Atlanta in 1917, which was when my grandparents were children there. I remember my grandparents talking about that fire. Cities used to regularly burn; now they don't. They call it the “urban wildlife,” I forget what it's called [the wildland-urban interface], but there's a place where the city meets up against the natural environment, and that's where we have fires now, so people like me who live in the concrete are not threatened. It's the people who live closer to nature, or who have more money and a big lot of land.

It's kind of understood what would be needed to prevent such fires. It's hard to do because it costs a lot of money in some cases, but it's not like, “Let's forget civilization. Let's not build anything. Let's just let nature take its course.” One of the problems in the 20th century was that people had the false idea — again, bad technocrats — that you needed to prevent forest fires, that forest fires were always bad, and that is a complete misunderstanding of how the natural world works.

California has a great future if it fixes its housing problem. If it doesn't fix its housing problem, it can write off the future.
It will be all old people who already have houses.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

▶ Business

* Google Thinks It Has the Best AI Tech. Now It Needs More Users. - WSJ
* Anduril Picks Ohio for Military Drone Factory Employing 4,000 - Bberg
* A lesson for oligarchs: politics can be deadly - FT Opinion
* EU Needs Deregulation to Keep Up with Trump, Ericsson CEO Says - Bberg

▶ Policy/Politics

* Europe’s ‘super-regulator’ role is under threat - FT Opinion
* Biden’s AI Data Center and Climate Contradiction - WSJ Opinion
* After Net Neutrality: The Return of the States - AEI
* China Has a $1 Trillion Head Start in Any Tariff Fight - WSJ

▶ AI/Digital

* She Is in Love With ChatGPT - NYT
* Meta AI creates speech-to-speech translator that works in dozens of languages - Nature
* AI-designed proteins tackle century-old problem — making snake antivenoms - Nature
* Meta takes us a step closer to Star Trek’s universal translator - Ars

▶ Clean Energy/Climate

* Chris Wright backs aggressive build-out of the US power grid - EEN
* We Have to Stop Underwriting People Who Move to Climate Danger Zones - NYT Opinion
* Has China already reached peak oil? - FT
* Molten salt nuclear reactor in Wyoming hits key milestone - New Atlas

▶ Space/Transportation

* SpaceX catches Super Heavy booster on Starship Flight 7 test but loses upper stage - Space
* Blue Origin reaches orbit on first flight of its titanic New Glenn rocket - Ars
* Jeff Bezos’ New Glenn Rocket Lifts Off on First Flight - NYT
* Blue Origin’s New Glenn rocket reaches orbit in first test - WaPo
* Blue Ghost, a Private U.S. Lunar Lander, Launches to the Moon - SciAm
* Human exploration of Mars is coming, says former NASA chief scientist - NS

▶ Substacks/Newsletters

* TikTok is just the beginning - Noahpinion
* Unstable Diffusion - Hyperdimensional
* Progress's First Principles - Risk & Progress
* How Trump, China & Trade Wars Will Affect the Global AI Landscape in 2025 - AI Supremacy
* After the Green New Deal - Slow Boring
* Washington Must Prioritize Mineral Supply Results Over Political Point Scoring - Breakthrough Journal
Dec 19, 2024 • 27min

🌐 My chat (+transcript) with chaos theorist Doyne Farmer on our interconnected economy

Doyne Farmer, a professor at Oxford's Institute for New Economic Thinking and a pioneer in chaos theory, discusses the fascinating world of complexity economics. He explains how dynamic models can reveal internal economic cycles instead of relying solely on external shocks. Farmer highlights the role of agent-based simulations in addressing crises like COVID-19 and explores innovative solutions for climate change. He also delves into the future of energy, weighing the pros and cons of nuclear power versus renewables in driving economic growth.
Dec 13, 2024 • 27min

🎨 My chat (+transcript) with innovation expert Duncan Wardle on practical tips for corporate creativity

The future will be built on the big ideas we dare to conjure up today. We know that the most groundbreaking ideas often seemed ludicrous or simply impossible when first dreamed up, from the telephone, to human flight, to artificial intelligence. The key was a willingness to be creative and test the limits.

While many of us might not consider ourselves creative people, Duncan Wardle assures us that we can take our ideas and brainstorms to the next level, no matter who we are or what we do. Today on Faster, Please! — The Podcast, Wardle and I explore some concrete tools for breaking down our own barriers to innovation and accessing the genius within all of us.

Wardle is the former Head of Innovation and Creativity at Disney and founder of ID8. He has delivered multiple TED Talks and teaches innovation master classes at Yale, Harvard, and the University of Edinburgh. His interactive book, The Imagination Emporium: Creative Recipes for Innovation, has just been released.

In This Episode

* Creativity is learnable (1:37)
* Building a career of creativity (8:09)
* Tools for unlocking innovation (13:50)
* Expansionist vs. reductionist tools (18:39)
* Gamifying learning (25:20)

Below is a lightly edited transcript of our conversation.

Creativity is learnable (1:37)

I believe we're all born creative with an imagination. We're all born curious. We're all born with intuition. We're all born with empathy. They may not have been the most employable skills of our entire careers. They are now.

Pethokoukis: One of my favorite economists, Paul Romer, loves to use recipes as a metaphor to explain how innovation works in an economy. Like cooking recipes, innovation and ideas can be used repeatedly without being used up, and you can combine different ideas as ingredients and create something new. I love that idea, and I love the way you present the book as kind of a recipe book you can dip in and out of to help you be more creative and innovative.

How should someone use this book, and who is it broadly for?

Wardle: Me. Seriously. When I say me, I mean the busy, normal, hardworking person who says 10 times a day, “I don't have time to think.” It's often considered the number one barrier to innovation and creativity: “I don't have time to think.” And I thought, “Okay, when you walk into a business office and you look around, where's the book?” It's on the bookshelf, it's on the coffee table — nobody reads them. I thought, “Well, that's a waste of their money.” So I thought, “What book have I ever read — nonfiction — that I could read one page of, know exactly what I need to do, and not have to read the rest of the book that day?” I thought, “My mom's cookbook! You want shepherd's pie? You go to page 67.” So I've designed the contents page the same way. It says, “Have you ever been to a brainstorm where nothing ever happened? Go to page 14. Fed up with your boss shooting your ideas down? Go to page 12.”

So it is designed to be hop in and hop out, but I also designed the principles around: take the intimidation out of innovation, make creativity tangible for people who are uncomfortable with ambiguity and gray, and, far more importantly, make it fun. Give people tools they choose to use when you and I are not around. I also designed it around this principle, and I'll see if this works: Close your eyes for me for a second. How many days are there in September?

31?

Well, we'll pretend it's 30.

Or 30! That's the one thing I always confuse, the 30 and the 31.

Close your eyes for a second.
Just think about how you might have known there were 30 days in September. How might you have remembered? What might you have learned, or what can you see with your eyes closed?

Well, if I were a more melodic, musical person who loved a good rhyme, I might've used that very famous rhyme, which apparently I don't know very well . . .

That's okay, neither do I, but I'll attempt it. About 30 percent of people go, “30 days has September, blah, blah, blah, and November.” They've just told me they're an auditory learner. That's their preferred learning style. They probably read a lot. How do I know that? Because when they learned it, they were six, and when I asked the question, they remembered it because they'd heard it.

I'm sure you've seen somebody at some point in your life count their knuckles: January, February, March, April, May, June, July, et cetera. You may not remember this because you might not be a kinesthetic learner. Those are the people who learn by doing. Again, how do we know this? They learned it when they were six. How did they remember it? By doing it.

And then 40 percent of an audience will just go, “No, no, I could just see a calendar with the number 30.” They're your visual learners. So I've designed the book to appeal to all three learning styles. It has a QR code in each chapter with a Spotify playlist for the auditory learners, and animated videos where Duncan is now an animated character (who knew?) who pops out with a bunch of characters to tell you how to use the tools. And then, hopefully as of next Tuesday, the QR code on the back for kinesthetic learners will allow you to engage with the book and learn kinesthetically through artificial intelligence and ChatGPT, and actually ask the book questions.

The fundamental conceit of the book, though, is that being innovative, being creative, can be learned. You can get better at it. Some people say, “I'm not a math person,” which I also don't believe. They'll say, “I'm not a super creative person. I'm not super innovative.” One, I'm assuming you think that's wrong; and two, you mentioned AI, and if people are worried about robots doing more repetitive kinds of tasks, then having the tools to bring out or enhance that imagination seems more important now than ever.

There's one thing I firmly believe in: We were all born human, shockingly enough, and when you were given a gift for a holiday, perhaps, it came in an enormous box, and it took you ages to take the toy out of the box because the box was the same height as you were. What did you spend the rest of the week playing with?

I love a good box.

Right? It was your castle, it was your rocket.

Love a good box. Oh man, that box can be a time machine, anything.

It was anything you wanted it to be, until you went to the number one killer of creativity and imagination, western education, and the first thing you were told to do was, “Don't forget to color in between the lines.” Children are very curious. They ask, “Why, why, why, why?” again because they're after the insight for innovation. The insight for innovation comes on the sixth or seventh why, not the first one.

If I were to survey you and ask you, “Why do you go to Disney on holiday?” people would say they go for the new attractions.
But that's not strictly true, is it?

So if you say, “Well, why do you go for the new attractions?”

“Well, no, I like the classics.”

“Well, why do you like the classics?” Why?

“I like It's a Small World.”

“Well, why do you like It’s a Small World?”

“I remember the music.”

“Why the music?”

“Well, that's my mom's favorite ride. We used to go every summer.”

“Why is that important to you 25 years later?”

“Oh, I take my daughter now.”

There's your insight for innovation. It has nothing to do with the capital investment strategy whatsoever and everything to do with that person's personal memory and nostalgia. But then we go to the number one killer of curiosity, western education, and the next thing our teacher tells us to do is stop asking “why,” because there's only one right answer.

We know when somebody is staring at the back of our head. When you've stared at the back of the head of somebody that you think is really hot, a stranger, they turn around and look at you, and you have to look away really quickly. It's okay, we've all done it. We have 120 billion neurons in our first brain and 120 million neurons in our second brain, the brain with which we say we make lots of our decisions, when we say “with our gut.” We are all empathetic.

I believe we're all born creative with an imagination. We're all born curious. We're all born with intuition. We're all born with empathy. They may not have been the most employable skills of our entire careers. They are now. Why? Because I've been working with Google on DeepMind with their chief programmer — this is the AI program — and I asked her, “How the hell am I going to compete with this? How will any of us compete with this?” She said, “Well, by developing the things which will be the hardest for her to program into AI.” And I asked her what they were. She said, “The ones with which you were born: creativity, imagination, curiosity, empathy, and intuition.”

Will they be programmed one day? Interestingly enough, she said intuition will go first. I was like, oh, that hurt. So I said, “Why intuition?” She said, “It's built on experience, and we could build an algorithm that will give them experience.” I was like, oh. So will they be programmed one day? Perhaps. Anytime in the short term? No.

Building a career of creativity (8:09)

Your subconscious brain is 87 percent of the capacity. Every innovation you've ever seen, every creative problem you've ever solved, is back here to work as unrelated stimulus, but when the door is shut, you can't access it. So what do I do? I'm playful. I'm deliberately playful.

In a moment, I want to briefly roll through the book, but first I want to ask about your job as the former Head of Innovation and Creativity at Disney, which sounds like a fake job. It sounds like the kind of job someone would dream up and wish there was such a job. It sounds like a dream job, but that was a real job. And what did you do there? Because it sounds fairly awesome.

I finished as Head of Innovation — I didn't start that way. I started as a coffee boy in the London office. In 1986, I used to go and get my boss six cappuccinos a day from Bar Italia, and about three weeks into the role, I was told I would be the character coordinator, the person that looks after the walk-around characters, at the Royal Premiere of Who Framed Roger Rabbit in the presence of the Princess of Wales, Diana.
I was like, “What do I do?” They said, “Well, you just stand at the bottom of the stairs, Roger Rabbit will come down the stairs, the princess will come in on the receiving line, she'll greet him or blow him off and move into the auditorium.” How could you possibly screw that up? Well, I could. That was the day I found out what a contingency plan was, because I didn't have one.

A contingency plan would tell you that if you're going to bring a very tall rabbit with very long feet down a very large staircase towards the Princess of Wales, one might want to measure the width of the steps first, before Roger trips on the top stair and is hurtling like a bullet, head over feet at torpedo speed, directly down the stairs towards Diana's head, whereupon he was taken out by two royal protection officers. There’s a very famous picture of Roger being taken out on the stairs, with a 21-year-old PR guy from Disney in the background. “Oh s**t, I'm fired.” I got a call from somebody called a CMO — I didn't know who that was, and I thought they were going to tell me I was fired. He goes, “That was great publicity.” I was like, “Wow, I can make a career out of this.”

So for the first 20 years I had some of the more mad, audacious, outrageous ideas for Disney, and then Disney purchased Pixar, then they purchased Marvel, then they purchased Lucasfilm, and we found that we all had different definitions of creativity and different innovation models. I tried four models of innovation.

Number one, I hired an outside consultant and said, “Make me look good.” They were very good at what they did, but they weren't around for execution, and they weren't going to show us how they did what they did. They were worried we wouldn't hire them again.

Model number two, an innovation team: Duncan will be in charge. What could possibly go wrong? Well, when you have a legal team, nobody outside of legal does legal. When you have a sales team . . . So when you have an innovation team, the subliminal message you've sent to the rest of the organization is: You are off the hook, we've got an innovation team.

The third model was an accelerator program where we would bring in some young tech startups and take a 50-50 stake in their business. They could help us bring things to market much quicker than we could, and we could help them scale. But we had failed in the overall goal that Bob Iger had set for us: How might we embed a culture of innovation and creativity into everybody's DNA? So I set out to create a toolkit. A toolkit that takes the intimidation out of innovation, makes creativity tangible, and makes the process fun. And essentially, that's what the book is. It's not a book, it's a toolkit. Why? Because I want you to use it. It's broken up into creative behaviors and innovation tools, and I think if you don't get the creative behaviors right, the tools won't matter; people will just be oblivious to them. I think the creative behaviors are the engine, and I'll explain what I mean by that.

Let me ask you a question. Close your eyes if you would.

I've done very poorly on the questions. Very poorly, but I will continue to answer them.

Where are you usually, and what are you doing, when you get your best ideas?

I would say either on walks or, as I think a lot of people say, in the shower. One of the two.

There we go. Alright. But here's the thing: I've done this with 20,000 people in an audience. Do you know how many people say “at work”? Nobody ever says at work. Why do we never have our best ideas at work?

Well, think about that last argument you were in.
You turn to walk away from that argument; now you're still a bit angry, but you're beginning to relax. You're 10 seconds away, 20 seconds, and what pops into your brain? The killer one-liner, that one perfect line you wish you'd used during the argument, but you didn't, did you? No. Why? Because when you are in an argument, your brain is moving at a thousand miles an hour defending yourself.

When you're in the office, you're doing emails, reports, quarterly results, and meetings. And I hear myself say, “I don't have time to think.” When you don't have time to think, the door between your conscious and subconscious brain is firmly closed. You're in the brain state called beta (that's 90 percent of your working day; you can look this up), working only with your conscious brain, which is 13 percent of the capacity of your brain. Your subconscious brain is 87 percent of the capacity. Every innovation you've ever seen, every creative problem you've ever solved, is back here to work as unrelated stimulus, but when the door is shut, you can't access it. So what do I do? I'm playful. I'm deliberately playful. There's a chapter of energizers in the book. They’re 60-second exercises. What are they for? To make you laugh: laughter with purpose.

What's an example of one of those?

Okay, I'll tell you what, then: You are the world's leading designer of parachutes for elephants. I will now interview you about your job. So, question: “How did you get into this industry in the first place?”

I was actually interviewing for a different job, I walked in the wrong door, and I ended up interviewing for that job.

Okay, and do you have to use different material for the parachutes? What are the parachutes made of? How big are they? Do you have to make bigger ones for elephants with smaller ears and smaller ones for elephants with big ears, the African and Indian elephants?

Thankfully, the kind of material is changing all the time. A lot of advances: graphene, nanotechnology materials. So the kind of material is changing, which actually gives us a lot more flexibility on the material and the sizes, depending, of course, on the size of the elephants and perhaps even their ears, and tails, and tusks.

So we'll stop there. You do that in a room full of people and you'll hear laughter. And the moment I hear laughter, I've opened the door between your conscious and subconscious brain and placed you, metaphorically, back in the shower, where you are when you have your best ideas. I don't expect people to be playful every minute of every day. I do expect, particularly leaders, to be playful when they're trying to get other people to open up their brains and have big ideas.

Tools for unlocking innovation (13:50)

If you like breaking rules, this tool is for you. It's about breaking rules metaphorically. So step one, you list the rules of your challenge. Step two, you take one and ask the most audacious question. Step three, you land a big idea.

In the book, you create these three animated characters: there's Spark, who represents creative behaviors; Nova, innovation tools; and then Zing for these energizing exercises. But you sort of need all three of those?

You do, but you don't have to know them all at the same time, and that's the beauty of the book. But here's the thing: I originally created a character called Archie. Archie was a direct descendant of Archimedes, because when I ask people where they are when they get their best ideas, they say the shower. Archimedes was in the bath.
And my daughter, who’s about 25, walks in the room and she goes, “Dad, he's an old white guy. You are an old white guy. You can't do that s**t anymore.” So I created three new characters: Spark, male, introduces creative behaviors; Zing, gender-neutral, introduces the energizers; and Nova, the brains of the organization, introduces innovation tools. The tools are split between what I call expansionist tools and reductionist tools. The more expertise and the more experience we have, the more reasons we know why the new idea won't work.

But here's the challenge: Up until 2020, we pretty much got away with doing what we did, and then came a global pandemic, enormous climate change, Generation Z entering the workplace who don't want to work for us, and here comes AI. We don't get to think the way we thought four years ago. So the tools are designed specifically to stop you thinking the way you always do and give you permission to think differently.

I'll give you an example of one; it's called “What If.” A lot of people will say, “Oh, but we work in a very heavily regulated industry.” If you like breaking rules, this tool is for you. It's about breaking rules metaphorically. Step one, you list the rules of your challenge. Step two, you take one and ask the most audacious question. Step three, you land a big idea. It was created by Walt, but that's in the book; I won't go through the whole Walt Disney story, because I want people to understand that this tool can work for them too.

There was a very tiny company in Great Britain in the late ’60s, before the days of mass automation, that used to make glasses that we drink out of, and they found too much breakage and not enough production when the glasses were being packaged and shipped. So they went down to the shop floor, observed the process for eight hours, and just wrote down the rules. Don't think about them, because then you'll think of all the reasons you can't break them; just write them down. So they wrote them down: 26 employees, a conveyor belt, a cardboard box, six glasses on the top, six on the bottom, separated by corrugated cardboard, glasses wrapped in newspaper, employees reading the newspaper. Then somebody asked this somewhat provocative “what if” question: “What if we poke their eyes out?” Well, that's against the law and it's not very nice, but because they had the courage to ask the most audacious “what if” question of all, the lady sitting next to them immediately got out of her river of thinking — her expertise and experience — and said, “Well, hang on a minute, why don't we just hire blind people?” So they did. Production up 26 percent, breakage down 42 percent, and the British government gave them a 50 percent salary subsidy for hiring people with disabilities. Simple, powerful, fun.

You just mentioned this notion of the river of thinking, which is sort of your thoughts and the assumptions that come from your lifetime of experience. When people evaluate ideas, they really value their own personal experience. You could have a hundred studies saying this will work, but if something about their personal experience says it won't, they won't listen. Now, I believe experience is important, it helps you make judgments, but sometimes, I think you're right, it's an absolute trap that leads us to say no when we should say yes, and yes when we should say no.

So that was one of the expansionist tools. One of the reductionist tools is for ideas.
Ideas are the most subjective thing on the planet. You like pink, I like green, our boss likes yellow; there's a very good chance we're going to be doing the yellow idea. Well, wait a minute: Was that the right one, targeted at our consumer? Was it aligned with our brand? So there's a tool called stargazer. I borrowed it with pride from Richard Branson of Virgin. Virgin is the most elastic brand on the planet, right? They've done condoms, they've done space travel, and everything in between. Disney is a non-elastic brand: They do family magical experiences. So how does Virgin decide, of all these ideas they get pitched, which ones to bring to market?

They have a tool, I call it stargazer, and it looks like a starfish. It's got five prongs on it (you'll see it in the book), each one has three criteria, and you can make up your own criteria at the beginning of the project. Let's say: Is this a strategic brand fit, aligned with what we stand for as a brand? Is this embedded in consumer truth, relevant to our consumer? Can I get this into the market in the next 18 to 24 months? Is it going to hit my financial goals? And is it socially engaging, is it going to get people excited? All you do with each of your ideas at the end is go around those five criteria and ask: Does this do a poor job, a good job, or an outstanding job of being aligned with our brand? A poor job, a good job, or an outstanding job of being targeted at our consumer, relevant to our consumer? And then, guess what: With a different color for each idea, you join the dots just as you did when you were a kid, and one idea will rise to the top as meeting your criteria and objectives the most, not the one you like the best.

Expansionist vs. reductionist tools (18:39)

I define creativity as the ability to have an idea. We all have hundreds a day. I define innovation as the ability to get it done. That's the hard part, and that's what the tools are designed to help you with.

Do you think the book and your approach are most helpful in helping people be more creative and come up with ideas, or in helping people judge ideas, being open to the good ones and closed to the wrong ones?

I think people use confusing terms just to make themselves sound more intelligent. The number of times I've been in a meeting where somebody used an acronym nobody knows, but nobody's going to put their hand up . . . I call the tools expansionist and reductionist; the official names are divergent and convergent, but who cares? Expansionist tools are the ones that help you get out of your river of thinking and help you think differently, and the reductionist tools are: okay, now we've got all of these ideas, which one goes to market, how do we take it to market, how do we actually get it done?

A lot of people say, as you said at the beginning, “I'm not creative.” Well, if you define “creative” as being a musician or an artist, then guess what? I'm not creative either. I define creativity as the ability to have an idea. We all have hundreds a day. I define innovation as the ability to get it done. That's the hard part, and that's what the tools are designed to help you with.

If you're running a business and you're like, “I want to implement this,” how do you . . . I'm sure you would love this: buy everybody the book, buy everybody three copies of the book. How do you implement it? I mean, I'm just curious how you do that job.

How do I do the job?
Or how does the business?

How would someone do that job if they're like, I'm trying to make my workforce more creative, I'm trying to make sure that we are open to good ideas. How do you institute that at an existing business?

Here's a tool that can change a culture overnight. Now, you and I have been tasked with coming up with an idea for a birthday party. We've been given $100,000, which is a reasonable budget for a birthday party. The theme could be Star Wars or Harry Potter. What would you like it to be?

I'd probably go with Star Wars.

Okay, so I'm going to come at you with some amazing ideas for a Star Wars birthday. I'd like you to start each and every response with the words “no, because” (they'll be the first two words you use in each response), and then you'll tell me why not.

So I was thinking of coming to your house, painting your kitchen dark, turning it into the Death Star canteen, and we'll have a food and wine festival from Hoth and Naboo and Tatooine.

No, no, no. We can't do that, because I like the way it looks now. I'm worried about repainting it and matching those colors. That's too significant of a change.

What if, then, we just turn the lights out and do a glow-in-the-dark lightsaber fight full of our favorite alcoholic liquid?

Well, that sounds like a better idea. Am I still supposed to say “no, because”?

Stay on the “no, because.”

No, can't do it. Listen, I worry about those lightsabers breaking, I'll be honest with you, and that alcohol flying all over the place. Also, there are going to be kids there, and I just worry about the alcohol aspect. Because I’m an American, and we're very tight.

So perhaps, if there are kids there, we could do a cosplay party, and all the tall people could come as Vader and all the little people could come as Ewoks.

No, because I think some of the tall people would like to be the good guy, and I think some of the people who are not quite as tall might feel we were infantilizing them by turning them into Ewoks.

I’ll tell you what, then: We'll do a movie marathon and show all seven films back-to-back with some popcorn and Coke. What do you say?

No, because that would be a really long event. I think people would be super sick of even watching their favorite movies after about two movies, so can't do it.

Alright, so we'll stop there. When somebody's constantly saying “no, because” to you, how does that make you feel?

Like I really don't feel like coming up with any more ideas, and like they will just never get to “yes.”

And we started there with a food and wine festival, and we ended up with just showing the movies. Would you say the idea was getting bigger as we were going, or was it getting smaller? Which direction was it?

It was getting progressively smaller and less imaginative.

So let's try that again. Can we do Harry Potter?

Well, I don't know as much, but I'll do my best.

Okay, so have you seen a couple of the films?

Kind of?

You pick the theme, then. What do you want?

Marvel. A beautifully licensed property. Yes, Marvel.

I'm going to come at you with some ideas for a Marvel party.
I'd like you to start each and every response this time with the words “yes, and,” and we'll just build it together, okay?

I tell you what, we could do a Spider-Man party where everybody gets those little web things that they can shoot out of their hands, but they're actually made out of cotton candy, so we could eat them — we could eat the webs.

Oh yes, and perhaps we could have villain-themed targets to shoot at?

Oh yes, and we could have a room full of superheroes and a room full of villains, and we could have a cosplay party, and there'll even be a make-your-own Iron Man suit!

Yes, and we can have an Iron Man suit, obviously, and we can have the other costumes, and perhaps some of their other tools, like Thor’s hammer; those could somehow also be candy-related.

Oh yes, and we could actually invite the stars of the films. We could have Chris Hemsworth, Robert Downey, Jr., and Chris Pratt, and Rocket, and Groot.

Yes. Love the idea. And perhaps if that's not quite possible —

— That was a “no, because”!

Oh, that sounded like a “no.”

Come on, come on.

We've reached the limits of my creativity.

We'll stop there. A couple of observations: a lot more laughter, a lot more energy.

Bigger or smaller?

We're taking our steps into an ever-wider world!

We work in big organizations, we work in small organizations; we have colleagues, we have constituencies, we have bosses, we have local regulators, et cetera, to bring on board with our ideas. By the time we finished building that idea together, whose idea was it?

That is lost to the fog of history. It is now a collaborative idea that we both can take credit for when it's a huge success.

Ours. Two very simple words from the world of improv have the power to turn a small idea into a big one really quickly. You can always value-engineer a big idea back down again, but it's much harder to turn a small idea into a big one. Far more importantly, “yes, and” transfers the power of “my idea,” which we know never goes anywhere in an organization, to “our idea,” and accelerates its opportunity to get done.

For people listening today, I'll give you one word of advice to take away: Don't let the words “no, because” be the first two words you use when somebody comes bouncing into your office with an idea you were not thinking of. They may have genius two seconds from now, two weeks from now — they ain't coming back.

Just remind yourselves: I know you have responsibilities, I know you've got deadlines, I know you've got quarterly results. We are not green-lighting this idea for execution today; we are merely green-housing it together using “yes, and.”

Gamifying learning (25:20)

Gaming is the future of education, there's no question.

So now I have one more question. I think that's super valuable advice, actually. You were talking about western education squashing creativity . . . Do you have any thoughts about how to change that, keeping the best of what we do?

Gamify. Gamify everything. Gaming is the future of education, there's no question. Universities will fall. Why will universities fall? That's a fairly outrageous statement. Well, let me think: Blue-collar workers, the white-collar workers laughed at them because they didn't go to university. Let me think — people who use their hands? Artificial intelligence is probably not taking them out anytime soon. White-collar workers? Not so much. Goodbye.
Not quite, that's a slight exaggeration, but universities are teaching the same things that we learned.
So I walk into a classroom and a professor says, "In the year 44 BC, Brutus stabbed Julius Caesar in the back on the steps of the Senate of Rome." Okay, well, I'm asleep already. However, if I could walk into the Senate in Rome, in virtual reality, or in Apple Vision Pro (hello, thank you very much), walk right up to Julius Caesar and Brutus debating with the senators and say, "Hey Julius, look behind you!"
I tell you for why: My son sat down at the breakfast table many years ago, he was probably about 13 or 14 at the time, and he said, "Do you know the Doge's Palace in Venice was built in 14 . . ." And he went on this whole diatribe. I was like, "Where the hell did you learn that?" He goes, "Oh, Assassin's Creed." Gaming will annihilate it.
See, when you say "online training," the first words out of somebody's mouth are, "Boring!" So what I aim to develop within a year from today is to gamify the Imagination Emporium and actually help people, train them how to be more imaginative using gaming.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
* AI and the Future of Work: Opportunity or Threat? - St. Louis Fed
* Industrial policies and innovation in the electrification of the global automobile industry - CEPR
▶ Business
* What Is Venture Capital Now Anyway? - NYT
* When IBM Built a War Room for Executives - IEEE
▶ Policy/Politics
* How U.S. Firms Battled a Government Crackdown to Keep Tech Sales to China - NYT
* Was mocking Musk a mistake? Democrats think about warmer relationship with the billionaire - Politico
* Recent Immigration Surge Has Been Largest in U.S. History - NYT
* The DOJ's Misguided Overreach With Google Is An Opportunity for Trump - AEI
* Harding, Coolidge and the Forerunner of DOGE - WSJ Opinion
* We Are All Mercantilists Now - WSJ Opinion
* Exclusive: Trump transition recommends scrapping car-crash reporting requirement opposed by Tesla - Reuters
* Trump's Treasury Pick Is Poised to Test 'Three Arrows' Economic Strategy - NYT
* This Might Be the Last Chance for Permitting Reform - Heatmap
▶ AI/Digital
* Are LLMs capable of non-verbal reasoning? - Ars
* Google's new Project Astra could be generative AI's killer app - MIT
* The Mystery of Why ChatGPT Couldn't Say the Name 'David Mayer' - WSJ
* OpenAI's ChatGPT Will Respond to Video Feeds in Real Time - Bberg
* Google and Samsung's first AI face computer to arrive next year - Wapo
* Why AI must learn to admit ignorance and say 'I don't know' - NS
* AI Pioneer Fei-Fei Li Has a Vision for Computer Vision - IEEE
* Broadcom soars to $1tn as chipmaker projects 'massive' AI growth - FT
* Chip Cities Rise in Japan's Fields of Dreams - Bberg Opinion
* Tetlock on Testing Grand Theories with AI - MR
* The mysterious promise of the quantum future - FT Opinion
▶ Biotech/Health
* RFK Jr.'s Lawyer Has Asked the FDA to Revoke Polio Vaccine Approval - NYT
* Designer Babies Are Teenagers Now—and Some of Them Need Therapy Because of It - Wired
* The long shot - Science
▶ Clean Energy/Climate
* What has four stomachs and could change the world? - The Economist
* Germany Sees Huge Jump in Power Prices on Low Wind Generation - Bberg
▶ Space/Transportation
* NASA's boss-to-be proclaims we're about to enter an "age of experimentation" - Ars
* Superflares once per Century - MPI
* Gwynne Shotwell, the woman making SpaceX's moonshot a reality - FT Opinion
▶ Substacks/Newsletters
* The Changing US Labor Market - Conversable Economist
* How we'll know if Trump is going to sell America out to China - Noahpinion
* Can RFK Kneecap American Agriculture? - Breakthrough Journal
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Nov 26, 2024 • 28min

✨ My chat (+transcript) with tech policy expert Neil Chilson on regulating GenAI

Washington’s initial thinking about AI regulation has evolved from a knee-jerk fear response to a more nuanced appreciation of its capabilities and potential risks. Today on Faster, Please! — The Podcast, I talk with technology policy expert Neil Chilson about national competition, defense, and federal vs. state regulation in this brave new world of artificial intelligence.
Chilson is the head of AI policy at the Abundance Institute. He is a lawyer, computer scientist, and former chief technologist at the Federal Trade Commission. He is also the author of “Getting Out of Control: Emergent Leadership in a Complex World.”
In This Episode
* The AI risk-benefit assessment (1:18)
* AI under the new Trump Administration (6:31)
* An AGI Manhattan Project (12:18)
* State-level overregulation (15:17)
* Potential impact on immigration (21:15)
* AI companies as national champions (23:00)
Below is a lightly edited transcript of our conversation.
The AI risk-benefit assessment (1:18)
Pethokoukis: We're going to talk a bit about AI regulation, the future of regulation, so let me start with this: Last summer, the Biden administration put out a big executive order on AI. I assume the Trump administration will repeal that and do their own thing. Any idea what that thing will be?
We have a lead on the tech, we have the best companies in the world. I think a Trump administration is really going to amp up that rhetoric, and I would expect the executive order to reflect the need to keep the US in the lead on AI technology.
Chilson: The Biden executive order, repealing it is actually part of the GOP platform, which does not say a lot about AI, but it does say that it's definitely going to get rid of the Biden executive order. I think that's the first order of business. The repeal and replace process . . . the previous Trump administration actually had a couple of executive orders on AI, and they were very big-picture. They were not nearly as pro-regulatory as the Biden executive order, and they saw a lot of the potential.
I'd expect a shift back towards a vision of AI as a force for good, and I'd expect a shift towards the international dynamics here, that we need to keep ahead of China in AI. We have a lead on the tech, we have the best companies in the world. I think a Trump administration is really going to amp up that rhetoric, and I would expect the executive order to reflect the need to keep the US in the lead on AI technology.
That emphasis differs from the Biden emphasis in what way?
The Biden emphasis, when you read the executive order, it has some nice language up top about how this is a great new technology, it's very powerful, but overwhelmingly the Biden executive order is directed at the risks of AI and, in particular, not existential risk, but the more traditional risks that academics who have talked about the internet have raised for a long time: risks of bias, or risks to privacy, or risks to safety, or deepfakes. And to be honest, there are risks to all of these technologies, but the Biden executive order really pounded that home; the emphasis was very much on what are the problems that this tech could cause and what do we as the federal government need to do to get in here and make sure it's safe for everybody?
I would expect that would be a big change. I don't see, especially on the bias front, I don't see a Trump administration emphasizing that as a primary thing that the federal government needs to fix about AI.
In fact, with people like Elon Musk having the ear of the president, I would expect maybe to go in the opposite direction: that these ideas around bias are inflated, that these risks aren't really real, and, to the extent that they are, that it's no business of the federal government to step in and tell companies how to bias or de-bias their products.
One thing that sort of confuses me on the Elon Musk angle is that it seemed that he was — at least used to be — very concerned about these somewhat science-fictional existential risks of AI. I guess my concern is that we'll get that version of Musk again talking to the White House, and maybe he says, “I'm not worried about bias, but I'm still worried about it killing us all.” Is there any concern that that theme, which seems to have faded a little bit from the public conversation (maybe I'm wrong), will reemerge?
I agree with you that I think that theme has faded. The early Senate hearings were very much in that vein, they were about the existential risk, and some of that was the people who were up there talking. This is something that's been on the mind of some of the leaders at the cutting edge of the tech space, and it's part of the reason why they got into it. There's always been a tension there. There is some sort of dynamic here where they're like, “This stuff is super dangerous and super powerful, so I need to be the one creating it and controlling it.” I think Musk still kind of falls in that bucket, so I share a little bit of that concern, but I think you're right that Congress has said, “Oh, those things seem really farfetched. That's not how we're going to focus our time.” I would expect that to continue even with a Musk-influenced administration.
I actually don't think that there is necessarily a big tension between that and a pushback against the sort of red-tape regulatory approach to AI that was kind of the more traditional pessimistic, precautionary approach to technology, generally. I think Musk is a guy who hates red tape. I think he's seen it in his own businesses, how it's slowed down launches of all sorts. I think you can hate red tape and be worried about this existential risk. It's not necessarily in tension, but it'll be interesting to see how those play out, how Musk influences the policy of the Trump administration on AI.
AI under the new Trump Administration (6:31)
One issue that seemed to be coming up over and over again is differing opinions among technologists, venture capitalists, about the open-source issue. How does that play out heading into a Trump administration? When I listen to the Andreessen Horowitz podcast, those guys seem very concerned.
They're going to get this software. They're going to develop it themselves. We can't out-China China. We should lean into what we're really good at, and that is a dynamic software-development environment, of which open source is a key component.
So there are a lot of disagreements about how open source plays out. Open source, it should be pointed out first, is a core technology across everything that people who develop software use. Most websites run on open source software. Most development tools have a huge open source component, and one of the best ways to develop and test technology is by sharing it with people and having people build on it.
I do think it is a really important technology in the AI space.
We've seen that already: people are building smaller models, doing new things in open source that cost a lot of money to do in the first instance, maybe behind closed source.
The concern that people raise is that, especially in the national security space or in national competition, this sort of exposes our best research to other countries. I think there are a couple of responses to that.
The first one is that closed source is no guarantee that those people don't have that technology as well. In fact, most of these models fit on a thumb drive. Most of these AI labs are not run like nuclear facilities, and it's much easier to smuggle a thumb drive out than it is to smuggle a gram of plutonium or something like that. They're going to get this software. They're going to develop it themselves. We can't out-China China. We should lean into what we're really good at, and that is a dynamic software-development environment, of which open source is a key component.
It also offers, in many ways, an alternative to centralized sources of artificial intelligence models, which can offer a bunch of user interface-based benefits. They're just easier to use. It's much easier to log into OpenAI and use their ChatGPT than it is to download and build your own model, but it is really nice as a competitive gap filler to have thousands and thousands of other models that might do something specific, or have a specific orientation, which you can train on your own. And those exist because of the open source ecosystem. So I think it solves a lot of problems, probably a lot more than it creates.
So what would you expect — let's focus on the federal level — for this Congress, for the Trump administration, to do other than broadly affirm that we love AI, we hope it continues? Will there be any sort of regulatory rule, any sort of guidance, that would in any way constrain or direct this technology? Maybe it's in the area of the frontier models, I don't know.
I think we're likely to see a lot of action at the use level: What are the various uses of various applications and how does AI change that? So in transportation and healthcare . . . this is a general purpose technology, and so it's going to be deployed in lots of spaces, and a lot of these spaces already have a lot of regulatory frameworks in place, and so I think we'll see lots of agencies looking to see, “Hey, this new technology, does it really change anything about how we regulate medical devices? If it does, how do we need to accommodate that? What are the unique risks? What are the unique opportunities that maybe the current framework doesn't really allow for?”
I think we'll see a lot of that. I think, once you get up to the abstract model level, it's much harder to figure out both what problem we are trying to solve at the model level and whether we have the capability to solve it at the model level. If we're worried about people developing bioweapons with this technology, is making sure the model doesn't allow that useful? Is it even possible? Or should we focus that attention further down, on making sure people can't secure the components that they need to execute a biohazard attack? Would that be a more productive place? I don't see a lot of action, honestly, at the model level.
Maybe there'll be some reporting requirements or training requirements. The executive order had those, although they used something called the Defense Production Act — I think probably unconstitutionally, how they used it. But that's going to go away.
If that gets filled in by Congress — if there's some sort of reporting regime — maybe that's possible, but Congress doesn't seem to be able to get those types of really high-level tech regulations across the line. They haven't done it with privacy legislation for a long time, and everybody seems to think that would be a good idea.
I think we'll continue to see efforts at the agency level. One thing Congress might do is spend some money in this space, so maybe there will be some new investment, or maybe the national laboratories will get some money to do additional AI research. That has its own challenges, but most of them are financial challenges; they're not so much about whether or not it's going to impede the industry. So that's kind of how I think it'll likely play out at the federal level.
An AGI Manhattan Project (12:18)
A report just came out (yesterday, as we're recording this) from the outside advisory group on US-China relations that advises the government, and they're calling for a Manhattan Project to get to an artificial general intelligence, I assume before China or anybody else.
Is that a good idea? Do you think we'll do that? What do you make of that recommendation, which caused quite a stir when it came out?
For the most part, artificial general intelligence, I don't understand what the appeal of that is, frankly . . . Why not train something that could do something specific really well?
Yeah, it's a really interesting report. If you read through the body of the report, it's pretty standard international competitiveness analysis that says, “What are the supply chains for chips? How does it look? How do we compare on talent development? How do we compare on the industry backing investment?” Things like that. And we compare very well, overall, the report says.
But then, all of a sudden at the top level, the first recommendation talks about artificial general intelligence. This is the kind of AI that doesn't exist yet, but it's the kind that could basically do everything a human can do intellectually, at the level a human can do it. It's interesting because that recommendation doesn't seem to be founded on anything that's actually in the report. There's no other discussion in the report about artificial general intelligence, or how important it is strategically, or anything like that, and yet they want to spend Manhattan Project-level amounts of money — I think in today's dollars, that'd be something like $30 billion — to create this artificial general intelligence. I don't know what to make of that, and, more than that, I think it's very unlikely to move the policy discussion. Maybe it moves the Overton window, so people are talking like, “Yeah, we need a Manhattan Project,” but I don't think that it's likely to do anything.
For the most part, artificial general intelligence, I don't understand what the appeal of that is, frankly. It has a sort of theoretical appeal, that we could have a computer that could do all the things that a person could do, but in the modern economy, it's actually better to have things that are really good at doing a specific set of things rather than having a generalist that you can deploy lots of different places, especially if you're talking about software. Why not train something that could do something specific really well? I think that would slot into our economy better. I think it's much more likely to be the most productive use of the intense computation time and money that it takes to train these types of models.
So it seems like a strange thing to emphasize in our federal spending, even if we're talking about the national security implications. It would seem like it'd be much better to train a model that's specifically built for some type of drone warfare or something, rather than trying to make it good at everything and then say, “Oh, now we're going to use you to fly drones.” That doesn't seem to make a ton of sense.
State-level overregulation (15:17)
We talked about the federal level. Certainly — and not that the states seem to need a nudge, but if they see Washington doing less, I'm sure there'll be plenty of state governments saying, “Well then we need to do more. We need to fill up the gap with our state regulation.” That already seems to be happening. Will that continue to happen, and can the Trump administration stop that?
I think it will continue to happen; the question is what kind of gap is left by the Trump administration. I would say what the Biden administration left was a vision gap. They didn't really have an overarching vision for how the US was going to engage with this technology at the federal level, unlike the Clinton administration, which set out a pretty clear vision for how the federal government planned to engage on the early version of the internet. What it said was, for some really good reasons, we're going to let the commercial sector lead on development here.
I think sending a signal like that could have a sort of bully-pulpit effect, especially in redder states. You'll still see states like California and New York, they're listening to Europe on how to do stuff in this space.
Still? Are we still listening to . . . Who are the people out there who think, “They've got it figured out”? I understand that maybe that's your initial impulse when you have a new technology and you're like, “I don't know what to do, so who is doing something on it?” But we've had a little bit of time, and I just don't get anybody who would default to be like, “Man, we're just going to look at a couple of EU white papers and off to the races here in our state.”
I think we're starting to see . . . the shopping of bills that look a lot like the way privacy has worked across the states, and in some cases are being pushed by the same organizations that represent compliance companies saying, “Hey, yeah, we need to do all this algorithmic bias auditing, or safety auditing, and states should require it.”
I think a lot of this is a hangover of the social media fights. AI, if you poll it just at that level, if you're like, “Hey, do you think AI is going to be good or bad for your job or for the economy?” Americans are somewhat skeptical. It's because they think of AI in the cultural context that includes Terminator, and automation, and so they think of it that way. They don't think about the thousands of applications on their phones that use artificial intelligence.
So I think there's a political moment here around this. The Europeans jumped in and said, “Hey, we're the first to regulate in this space comprehensively.” I think they're dialing that back, since some of their member states are like, “Hey, this is killing our own homegrown AI industry.” But for some reason, you're right, California and New York seem to be embracing that, and I think they probably will continue to.
At the very local level, at the state level, there are just weird incentives to do something, and then you don't really pay a lot of consequences down the road.
Having said that, there was a controversial bill that was very aggressively pushed, SB 1047, in California over the summer, and it got killed. It got canned by Gavin Newsom in the end. And I think that's sort of a unique artifact of California's “go along to get along” legislative process, where even people who don't support bills vote for them, kind of knowing that Gavin, or that the governor, will bring down the veto when it doesn't make political sense.
All of this is to say, California's going to California. I think we're starting to see, and what concerns me is, we're starting to see the shopping of bills that look a lot like the way privacy has worked across the states, and in some cases are being pushed by the same organizations that represent compliance companies saying, “Hey, yeah, we need to do all this algorithmic bias auditing, or safety auditing, and states should require it.”
There's a Texas draft bill that has been floated right now, and you wouldn't think that Texas would be on the frontier of banning differential effects in bias from AI. It doesn't really sound particularly red-state-y, but these things are getting shopped around, and if it moves in Texas, it'll move other places too. I worry about that level of red tape coming at the state level, and that's just going to be ground warfare on the legislative front at the state level.
So federal preemption: what is that, and how would that work? And is that possible?
It's really hard in this space because the technology is so general. Congress could, of course, write something that was very broad and preempted certain types of regulation of models, and maybe that's a good idea; I've seen some draft language around that.
On the other hand, I do believe in federalism, and these aren't quite the same sort of network-based technologies that only make sense in a national sweep. So maybe there's an argument that we should let states suffer the consequences of their own regulatory approaches. That hurts my heart a little bit just to think about the future, because there are a lot of talented people in those states who are going to find out it's the lawyers who are their main constraint. Those types of transaction costs will slow us down.
If it looks like we're falling behind in the US because we can't get out of our own way regulatorily, I think there will be more impulse to fix things.
There are some other creative solutions, such as interstate compacts, to try to get people to level up across multiple states on how they're going to treat AI and allow innovation to flourish, and so I think we'll see more of those experiments. But it is really hard at the federal level to preempt, just because there are so many state-based interests who are going to push back against that sort of thing.
Potential impact on immigration (21:15)
As far as AI influencing what we do elsewhere — one thing you wrote about recently in a really great essay, which I've already drawn upon in some of these questions, is immigration and AI talent coming to the United States. Given what I think is now a widely accepted understanding that this is an important technology and that we certainly want to be the leader in it, does that change how we think about immigration, at least very high-skilled immigration?
We should be wanting the most talented people to come here and stay here.
I think it should. Frankly, we should have changed our minds about some of this stuff a long time ago. We should be wanting the most talented people to come here and stay here. The most talented people in the world already come here for school often. When I was in computer science grad school, it was full of people who really desperately wanted to stay in the US and build companies and build products, and some of them struggled really hard to figure out a way to do it legally.
I think that making it easier for those people to stay is essential to keeping not just our lead in the world — I don't want to say it that way; I mean, that's important, I think national competitiveness is sort of underrated, I think that is valuable — but those people are the most productive in the US system, where they can get access to venture capital that's unlike any other part of the planet. They can get access to networks of talent that are unavailable in other parts of the planet. Keeping them here is good for the US, but I think it's good overall for technological development, and we should really, really, really focus on how to make that easier and more attractive.
AI companies as national champions (23:00)
This isn't necessarily a specific AI issue, but again, as you said earlier, it seems like a lot of the debate, initially, is really a holdover from the social media debates about moderation, and bias, and all that, and a lot of those sorts of people, in many cases, and frameworks just got glommed onto AI.
Another aspect is antitrust, and now we’re worried about these big companies owning these platforms, and they're biased.
Does it change how we look at our big companies, which have been leading in AI and doing a lot of R&D — does the politics around Big Tech change if we begin to see them as our vanguard companies that will keep us ahead of China?
. . . in contrast to the Biden sort of “big-is-bad” rhetoric that they sort of leaned into entirely, I think a Trump administration is going to bring more nuance to that in some ways. And I do think that there will be more of a look towards our innovative companies as being the vanguard of what we do in the US.
I think it already has, honestly. You saw early on, the Senate hearings around AI were totally inflected with the language of social media and that network-effects type of ecosystem. AI does not work like that.
It doesn't work the same way. In fact, the feedback loops are so much faster with these models. We saw things like Google Gemini, which had ahistorical renderings of the founding fathers, and that got so much shouting on X and lots of other places that Google very quickly adjusted, tweaked its path. I think we're seeing the toning down of that rhetoric and the recognition that these companies are creating a lot of powerful, useful products, and that they are sort of national champions.
Trump, on the campaign trail, when asked about breaking up Google over an ongoing antitrust litigation, was like, “Hold on guys, breaking up these companies might not be in our best interest. There might be other ways we can solve these types of problems.” In contrast to the Biden sort of “big-is-bad” rhetoric that they sort of leaned into entirely, I think a Trump administration is going to bring more nuance to that in some ways. And I do think that there will be more of a look towards our innovative companies as being the vanguard of what we do in the US.
Now, having said that, obviously I think there's tons of AI development that is not inside of these largest companies: in the open source space, and especially in the application layer, building on top of some of these foundation models, and so I think that ecosystem is also extremely important. Things that sort of preference the big companies over the small ones, I would have a lot of concerns about, and there have been regulatory regimes proposed that, even while opposed by some of the bigger companies, would certainly be possible for them to comply with in a way that small companies would struggle to comply with, and open-source developers just don't have any sort of nexus with which to comply, since there is no actual business model propping that type of approach up. So I'd want to keep it pretty neutral between the big companies, the small companies, and open source, while having the cultural recognition that big companies are extremely valuable to the US innovation ecosystem.
If you had some time with, I don’t know, somebody — the president, the vice president, the Secretary of Commerce, someone in an elevator going from the first to the 10th floor — and you had to quickly say, “Here's what you need to be keeping in mind about AI over the next two to four years,” what would you say?
I think the number one thing I would say is that, at the state level, we're wrapping a lot of red tape around innovative companies and individuals, and that we need to find a way to clear that thicket or stop it from growing any further. That's the number one challenge that I see facing this.
Secondary to that, I would say the US government needs to figure out how to take advantage of these tools. The federal government is slow to adopt new technologies, but this technology has a lot of applications to the types of government work that hundreds of thousands of federal employees do every day, and so finding ways to streamline, using AI to do the job better, I think is really valuable, and I think it would be worth some investment at the federal level to think about how to do that well.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
* Productivity During and Since the Pandemic - San Francisco Fed
* The Effect of COVID-19 Immigration Restrictions on Post-Pandemic Labor Market Tightness - St. Louis Fed
* Trump Plans Tariffs on Canada, China and Mexico That Could Cripple Trade - NYT
▶ Business
* Nvidia’s new AI audio model can synthesize sounds that have never existed - Ars
* Europe’s Mistral expands in Silicon Valley in hunt for AI staff - FT
▶ Policy/Politics
* Musk Wants $2 Trillion of Spending Cuts. Here’s Why That’s Hard. - WSJ
* AI Governance: From Fears and Fearmongering to Risks and Rewards - AEI
* Newsom says California to offer EV subsidies if Trump kills federal tax credit - Wapo
▶ AI/Digital
* A new golden age of discovery - AI Policy Perspectives
* How Do You Get to Artificial General Intelligence? Think Lighter - Wired
* Is Creativity Dead? - NYT Opinion
* The way we measure progress in AI is terrible - MIT
* AI's scientific path to trust - Axios
* AI Dash Cams Give Wake-Up Calls to Drowsy Drivers - Spectrum
▶ Biotech/Health
* Combining AI and Crispr Will Be Transformational - Wired
* Neuralink Plans to Test Whether Its Brain Implant Can Control a Robotic Arm - Wired
* Scientists are learning why ultra-processed foods are bad for you - Economist
▶ Clean Energy/Climate
* Taxing Farm Animals’ Farts and Burps? Denmark Gives It a Try. - NYT
* These batteries could harness the wind and sun to replace coal and gas - Wapo
▶ Robotics/AVs
* On the Wings of War - NYT
▶ Up Wing/Down Wing
* ‘Genesis’ Review: Rise of the New Machines - WSJ
* The Myth of the Loneliness Epidemic - Asterisk
▶ Substacks/Newsletters
* The Middle Income Trap - Conversable Economist
* America's Productivity Boom - Apricitas Economics
* The Rise of Anthropic powered by AWS - AI Supremacy
* Data to start your week - Exponential View
* Trump's economic team is on a collision course with reality - Slow Boring
* Five Unmanned SpaceX Starships to Mars in 2026 with Thousands of Teslabots - next BIG future
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Nov 15, 2024 • 28min

🏘️ My chat (+transcript) with economist Bryan Caplan on density and housing deregulation

In this engaging discussion, economist Bryan Caplan, a professor at George Mason University and author of "Build, Baby, Build," dives into the housing crisis in the U.S. He highlights how government regulations have exacerbated affordability issues and stifled mobility. Caplan discusses the rise of the YIMBY movement and the impact of local regulations in places like Texas versus the Bay Area. He further explores the challenges of building new cities and the innovative ideas needed to reshape housing policies for a brighter future.
Oct 31, 2024 • 27min

✨⏩ My chat (+transcript) with ... economist Robin Hanson on AI, innovation, and economic reality

In this episode of Faster, Please! — The Podcast, I talk with economist Robin Hanson about a) how much technological change our society will undergo in the foreseeable future, b) what form we want that change to take, and c) how much we can ever reasonably predict.
Hanson is an associate professor of economics at George Mason University. He was formerly a research associate at the Future of Humanity Institute at Oxford, and is the author of the Overcoming Bias Substack. In addition, he is the author of the 2017 book, The Elephant in the Brain: Hidden Motives in Everyday Life, as well as the 2016 book, The Age of Em: Work, Love, and Life When Robots Rule the Earth.
In This Episode
* Innovation is clumpy (1:21)
* A history of AI advancement (3:25)
* The tendency to control new tech (9:28)
* The fallibility of forecasts (11:52)
* The risks of fertility-rate decline (14:54)
* Window of opportunity for space (18:49)
* Public prediction markets (21:22)
* A culture of calculated risk (23:39)
Below is a lightly edited transcript of our conversation.
Innovation is clumpy (1:21)
Do you think that the tech advances of recent years — obviously in AI, and what we're seeing with reusable rockets, or CRISPR, or different energy advances, fusion, perhaps, even Ozempic — do you think that the collective cluster of these technologies has put humanity on a different path than perhaps it was on 10 years ago?
. . . most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years.
That’s a pretty high standard. As you know, the world has been growing exponentially for a very long time, and new technologies have been appearing for a very long time, and the economy doubles roughly every 15 or 20 years, and that can't happen without a whole lot of technological change, so most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years. So to say that we're going more than that is really a high standard here. I don't think it meets that standard. Maybe the standard it meets is to say people were worried about maybe a stagnation or slowdown a decade or two ago, and I think this might weaken your concerns about that. I think you might say, well, we're still on target.
Innovation's clumpy. It doesn't just come out entirely smooth . . . There are some lumpy ones once in a while, lumpier innovations than usual, and those sometimes boost higher than expected, sometimes lower than expected, and maybe in the last ten years we've had a higher-than-expected clump. The main thing that does is make you not doubt as much as you did when you had the lower-than-expected clump in the previous 10 or 20 years, because people had seen this long-term history and they thought, “Lately we're not seeing so much. I wonder if this is done. I wonder if we're running out.” I think the last 10 years tells you: well, no, we're kind of still on target. We're still having big important advances, as we have for two centuries.
A history of AI advancement (3:25)
People who are especially enthusiastic about the recent advances with AI, would you tell them their baseline should probably be informed by economic history rather than science fiction?
[Y]es, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.
By technical history! We have 70-odd years of history of AI. I was an AI researcher full-time from ’84 to ’93.
If you look at the long sweep of AI history, we've had some pretty big advances. We couldn't be where we are now without a lot of pretty big advances all along the way. You just think about the very first digital computer in 1950 or something and all the things we've seen: we have made large advances — and they haven't been completely smooth, they've come somewhat in clumps.
I was enticed into the field in 1984 because of a recent set of clumps then, and for a century, roughly every 30 years, we've had a burst of concern about automation and AI, and we've had big concern in the sense people said, “Are we almost there? Are we about to have pretty much all jobs automated?” They said that in the 1930s, they said it in the 1960s — there was a presidential commission in the 1960s: “What if all the jobs get automated?” I jumped in in the late ’80s when there was a big burst there, and I as a young graduate student said, “Gee, if I don't get in now, it'll all be over soon,” because I heard, “All the jobs are going to be automated soon!”
And now, in the last decade or so, we've had another big burst, and I think for people who haven't seen that history, it feels to them like it felt to me in 1984: “Wow, unprecedented advances! Everybody's really excited! Maybe we're almost there. Maybe if I jump in now, I'll be part of the big push over the line to just automate everything.” That was exciting, it was tempting, I was naïve, and I was sucked in, and we're now in another era like that. Yes, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.
I like that you mentioned the automation scare of the ’60s. Just going back and looking at that, it really surprised me how prevalent and widespread it was and how seriously people took it. I mean, you can find speeches by Martin Luther King talking about how our society is going to deal with the computerization of everything. So it does seem to be a recurrent fear. What would you need to see to think it is different this time?
The obvious relevant parameter to be tracking is the percentage of world income that goes to automation, and that has been creeping up over the decades, but it's still less than five percent.
What is that statistic?
If you look at the percentage of the economy that goes to computer hardware and software, or other mechanisms of automation, you're still looking at less than five percent of the world economy. So it's been creeping up — maybe decades ago it was three percent, even one percent in 1960 — but it's creeping up slowly, and obviously, when that gets to be 80 percent, game over, the economy has been replaced. But that number is creeping up slowly, and you can track it, so when you start seeing that number going up much faster or becoming a large number, then that's the time to say, “Okay, looks like we're close. Maybe automation will, in fact, take over most jobs, when it's getting most of world income.”
If you're looking at economic statistics, and you're looking at different forecasts, whether by the Fed or CBO or Wall Street banks, and the forecasts are, “Well, we expect, maybe because of AI, productivity growth to be 0.4 percentage points higher over this kind of time. . .” Those kinds of numbers, where we're talking about a tenth of a point here, that's not the kind of singularity-emergent world that some people think or hope or expect that we're on.
Absolutely.
If you've got young enthusiastic tech people, et cetera — and they're exaggerating. The AI companies, even they're trying to push as big and dramatic an image as they can. And then all the stodgy conservative old folks, they're afraid of seeming behind the times, and not up with things, and not getting it — that was the big phrase in the Internet Boom: Who “gets it” that this is a new thing?
I'm proud to be a human, to have been part of the civilization to have done this . . . but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.
Now it would be #teamgetsit.
Exactly, something like that. They're trying to lean into it, they're trying to give it the best spin they can, but they have some self-respect, so they're going to give you, “Wow, 0.4 percent!” They'll say, “That's huge! Wow, this is a really big thing, everybody should be into this!” But they can't go above 0.4 percent because they've got some common sense here. But we've even seen management consulting firms over the last decade or so make predictions that 10 years in the future, half of all jobs would be automated. So we've seen this long history of these really crazy extreme predictions a decade out, and none of those remotely happened, of course. But people do want to be in with the latest thing, and this is obviously the latest round of technology, it's impressive. I'm proud to be a human, to have been part of the civilization to have done this, and I’d like to try them out, and see what I can do with them, and think of where they could go. That's all exciting and fun, but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.
The tendency to control new tech (9:28)
Not to talk just about AI, but do you think AI is important enough that policymakers need to somehow guide the technology to a certain outcome? Daron Acemoglu, one of the Nobel Prize winners, has for quite some time, and certainly recently, said that this technology needs to be guided by policymakers so that it helps people, it helps workers, it creates new tasks, it creates new things for them to do, rather than automating away their jobs or automating a bunch of tasks.
Do you think that there's something special about this technology that we need to guide it to some sort of outcome?
I think those sorts of people would say that about any new technology that seemed like it was going to be important. They are not actually distinguishing AI from other technologies. This is just what they say about everything.
It could be “technology X”: we must guide it to the outcome that I have already determined.
As long as you've said, “X is new, X is exciting, a lot of things seem to depend on X,” then their answer would be, “We need to guide it.” It wouldn't really matter what the details of X were. That's just how they think about society and technology. I don't see anything distinctive about this, per se, in that sense, other than the fact that — look, in the long run, it's huge.
Space, in the long run, is huge, because obviously in the long run almost everything will be in space, so clearly, eventually, space will be the vast majority of everything. That doesn't mean we need to guide space now or to do anything different about it, per se. At the moment, space is pretty small, and it's pretty pedestrian, but it's exciting, and the same for AI.
At the moment, AI is pretty small, minor; AI is not remotely threatening to cause harm in our world today. If you look at harmful technologies, this is way down the scale. Demonstrated harms of AI in the last 10 years are minuscule compared to things like construction equipment, or drugs, or even television, really. This is small.
Ladders for climbing up on your roof to clean out the gutters, that's a very dangerous technology.
Yeah, somebody should be looking into that. We should be guiding the ladder industry to make sure they don't cause harm in the world.
The fallibility of forecasts (11:52)
I'm not sure how much confidence we should ever have in long-term economic forecasts, but have you seen any reason to think that they might be less reliable than they always have been? That we might be approaching some sort of change? That those 50-year forecasts of entitlement spending might be all wrong because the economy's going to be growing so much faster, or longevity is going to be increasing so much faster?
Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.
It was just a little over two centuries ago when the world saw this enormous revolution. Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.
So you might say we can't trust these trends to continue maybe more than 10 doublings, and then who knows what might happen? You could just say — that's 200 years, say, if you double every 20 years — we just can't trust these forecasts more than 200 years out. Look at what's happened in the past: after that many doublings, big changes happened, and you might say, therefore, expect, on that sort of timescale, something else big to happen. That's not crazy to say. That's not very specific.
And then if you say, well, what is the thing people most often speculate could be the cause of a big change? They do say AI, and then we actually have a concrete reason to think AI would change the growth rate of the economy: That is the fact that, at the moment, we make most stuff in factories, and factories typically push out from the factory as much value as the factory itself embodies, in economic terms, in a few months.
If you could have factories make factories, the economy could double every few months. The reason we can't now is we have humans in the factories, and factories don't double them. But if you could make AIs in factories, and the AIs made factories that made more AIs, that could double every few months. So the world economy could plausibly double every few months when AI has dominated the economy.
That's the magnitude: doubling every few months versus doubling every 20 years. That's a magnitude similar to the magnitude we saw before from farming to industry, and so that fits together as saying, sometime in the next few centuries, expect a transition that might increase the growth rate of the economy by a factor of 100.
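To make the arithmetic behind those factors explicit (a back-of-the-envelope sketch in standard notation, not anything Hanson states beyond the numbers above): a doubling time $T$ corresponds to a growth rate $g = \ln 2 / T$, so the ratio of two growth rates is just the inverse ratio of the doubling times:
$$\frac{g_{\text{industry}}}{g_{\text{farming}}} \approx \frac{1000 \text{ yr}}{15\text{–}20 \text{ yr}} \approx 60, \qquad \frac{g_{\text{AI}}}{g_{\text{industry}}} \approx \frac{20 \text{ yr}}{\text{a few months}} \approx \frac{240 \text{ mo}}{2\text{–}3 \text{ mo}} \approx 100.$$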
Now that's an abstract thing in the long frame; it's not in the next 10 years, or 20 years, or something. It's saying that economic modes only last so long, something should come up eventually, and this is our best guess of a thing that could come up, so it's not crazy.
The risks of fertility-rate decline (14:54)
Are you a fertility-rate worrier?
If the population falls, the best models say innovation rates would fall even faster.
I am, and in fact, I think we have a limited deadline to develop human-level AI, after which we won't for a long pause, because falling fertility really threatens innovation rates. This is something we economists understand that I think most other people don't: You might've thought that a falling population could be easily compensated for by a growing economy, and that we would still have rapid innovation because we would just have a bigger economy with a lower population, but apparently that's not true.
If the population falls, the best models say innovation rates would fall even faster. So say the population is roughly predicted to peak in three decades and then start to fall. If it falls, it would fall by roughly a factor of two every generation or two, depending on which populations dominate, and then if it fell by a factor of 10, the innovation rate would fall by more than a factor of 10, and that means just a slower rate of new technologies, and, of course, also a reduction in the scale of the world economy.
And I think that plausibly also has the side effect of a loss in liberality. I don't think people realize how much it was innovation and competition that drove much of the world to become liberal, because the winning nations in the world were liberal and the rest were afraid of falling too far behind. But when innovation goes away, they won't be so eager to be liberal in order to be innovative, because innovation just won't be a thing, and so much of the world will just become a lot less liberal.
There's also the risk that — basically, computers are a very durable technology, in principle. Typically we don't make them that durable, because every two years they get twice as good, but when innovation goes away, they won't get good very fast, and then you'll be much more tempted to just make very durable computers. And once a generation makes very durable computers that last hundreds of years, the next generation won't want to buy new computers; they'll just use the old durable ones as the economy is shrinking, and then the computer industry might just go away. And then it could be a long time before people felt a need to rediscover those technologies.
I think the larger-scale story is there's no obvious process that would prevent this continued decline, because there's no level at which, when you get there, some process kicks in and makes us say, “Oh, we need to increase the population.” But the most likely scenario is just that the Amish and [Hutterites] and other insular, fertile subgroups who have been doubling every 20 years for a century will just keep doing that and then come to dominate the world, much like Christians took over the Roman Empire: They took it over by doubling every 20 years for three centuries.
That's my default future, and then if we don't get AI or colonize space before this decline — which I've estimated would allow roughly 70 years’ worth more of progress at previous rates — then we don't get it again until the Amish not only take over the world, but rediscover a taste for technology and economic growth, and then eventually all of the great stuff could happen, but that could be many centuries later.
This does not sound like an issue that can be fundamentally altered by tweaking the tax code.
You would have to make a large —
— Large turn of the dial, really turn that dial.
People are uncomfortable with larger-than-small tweaks, of course, but we're not in an era that's at all eager for vast changes in policy; we are in a pretty conservative era that just wants to tweak things. Tweaks won't do it.
Window of opportunity for space (18:49)
We can't do things like Daylight Savings Time, which some people want to change. You mentioned this window — Elon Musk has talked about a window for expansion into space, and, this was a couple of years ago, he said, “The window has closed before. It's open now. Don't assume it will always be open.”
Is that right? Why would it close? Is it because of higher interest rates? Because the Amish don't want to go to space? Why would the window close?
I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy.
There's a demand for space stuff, mostly at the moment, to service Earth — like the internet circling the earth, say, as Elon's big project to fund his spaceships. And there's also demand for satellites to do surveillance of the earth, et cetera. As the earth economy shrinks, the demand for that stuff will shrink. At some point, they won't be able to afford the fixed costs.
A big question is about marginal cost versus fixed costs. How much is the fixed cost just to have this capacity to send stuff into space, versus the marginal cost of adding each new rocket? If it's dominated by marginal costs and they make the rockets cheaper, okay, they can just do fewer rockets less often, and they can still send satellites up into space. But if you're thinking of something where there's a key scale that you need to get past even to support this industry, then it's a different thing.
So thinking about a Mars economy, or even a moon economy, or a solar system economy, you're looking at a scale thing. That thing needs to be big enough to be self-sustaining and economically cost-effective, or it's just not going to work. So I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy. A space economy needs to be big enough just to support itself, et cetera, and that's a problem because it's the same humans up in space as down here on earth, who are going to have the same fertility problems up there unless they somehow figure out a way to make a very different culture.
A lot of people just assume, “Oh, you could have a very different culture on Mars, and so they could solve our cultural problems just by being different,” but I'm not seeing that.
I think they would just have a very strong interconnection with earth culture, because they're going to have rapid, high-bandwidth exchange back and forth, and their fertility culture and all sorts of other culture will be tied closely to earth culture, so I'm not seeing how a Mars colony really solves earth cultural problems.
Public prediction markets (21:22)
The average person is aware that these things — whether it's betting markets or these online consensus prediction markets — exist, that you can bet on presidential races, and you can make predictions about a superconductor breakthrough, or something like that, or about when we're going to get AGI.
To me, it seems like they have, to some degree, broken through the filter, and people are aware that they're out there. Have they come of age?
. . . the big value here isn't going to be betting on elections, it's going to be organizations using them to make organizational decisions, and that process is being explored.
In this presidential election, there's a lot of discussion that points to them. And people were pretty open to that until Trump started to be favored, and people said, “No, no, that can't be right. There must be a lot of whales out there manipulating, because it couldn't be that Trump's winning.” So the openness to these things often depends on what their message is.
But honestly, the big value here isn't going to be betting on elections, it's going to be organizations using them to make organizational decisions, and that process is being explored. Twenty-five years ago, I invented this concept of decision markets for use in organizations, and now, in the last year, I've actually seen substantial experimentation with them, and so I'm excited to see where that goes, and I'm hopeful there, but that's not so much about the presidential markets.
Roughly a century ago there was more money bet in presidential betting markets than in stock markets at the time. Betting markets were very big then, and then they declined, primarily because scientific polling was declared a more scientific approach to estimating elections than betting markets, and all the respectable people wanted to report on scientific polls. And then of course the stock market became much, much bigger. The interest in presidential markets will wax and wane, but there's actually not that much social value in having a better estimate of who's going to win an election. That doesn't really tell you who to vote for, so there are other markets that would be much more socially valuable, like predicting the consequences of who's elected as president. We don't really have many markets on those, but maybe we will next time around. But there is a lot of experimentation going on in organizational prediction markets at the moment, compared to, say, 10 years ago, and I'm excited about those experiments.
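Hanson doesn't walk through the mechanics here, but the automated market maker most often used to run such organizational markets is his own logarithmic market scoring rule (LMSR). Below is a minimal sketch; the class, the example questions, and the liquidity parameter b are illustrative assumptions, not anything from the conversation:
```python
import math

class LMSRMarket:
    """Hanson's logarithmic market scoring rule: a subsidized market maker
    that always quotes prices, so traders never need a matched counterparty."""

    def __init__(self, outcomes, b=100.0):
        self.b = b  # liquidity parameter: larger b means deeper, slower-moving prices
        self.q = {o: 0.0 for o in outcomes}  # net shares sold of each outcome

    def _cost(self):
        # Cost function C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in self.q.values()))

    def price(self, outcome):
        # Instantaneous price of an outcome; prices sum to 1 and read as probabilities
        denom = sum(math.exp(qi / self.b) for qi in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function for the shares bought
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

# Hypothetical internal question: will the project ship on time?
market = LMSRMarket(["ships on time", "ships late"])
print(round(market.price("ships on time"), 3))  # 0.5 to start
paid = market.buy("ships on time", 50)          # an employee bets on the on-time outcome
print(round(paid, 2), round(market.price("ships on time"), 3))  # price moves toward 0.62
```
The properties that make this a good fit for internal corporate markets: prices always sum to one and can be read directly as probability estimates, and the sponsor's maximum subsidy is bounded by b times the log of the number of outcomes.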
A culture of calculated risk (23:39)
I want a culture where, when one of these new nuclear reactors, or these nuclear reactors that are restarting, or these new small modular reactors has some sort of leak, or when a new SpaceX Starship fails and some astronaut gets killed, we just don't collapse as a society. That we're like, well, things happen, we're going to keep moving forward. Do you think we have that kind of culture? And if not, how do we get it, if at all? Is that possible?
That's the question: Why has our society become so much more safety-oriented in the last half-century? Certainly one huge sign of it is the way we overregulated nuclear energy, but we've also now been overregulating even kids going to school. Apparently they can't just take their bikes to school anymore; they have to go on a bus because that's safer. In a whole bunch of ways, we are just vastly more safety-oriented, and that seems to be a pretty broad cultural trend. It's not just in particular areas, and it's not just in particular countries.
I've been thinking a lot about long-term cultural trends and trying to understand them. The basic story, I think, is that we don't have a good reason to believe long-term cultural trends are actually healthy when they are trends in norms and status markers that everybody shares. Cultural things that can vary within cultures, like different technologies and firm cultures, those we're doing great on. We have great evolution of those things, and that's why we're having all these great technologies. But something like safetyism is more of a shared cultural norm, and we just don't have good reasons to think those changes are healthy, and they don't fix themselves, so this is just another example of something that's going wrong.
They don't fix themselves because if you have a strong, very widely shared cultural norm and someone has a different idea, they need to be prepared to pay a price, and most of us aren't prepared to pay that price.
If we had a healthy cultural evolution competition, even among nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces conformity on everybody.
Right. If, for example, our 200 countries were actually independent experiments, with different cultures going in different directions, then I'd feel great: the cultures that choose too much safety would lose out to the others, and eventually safetyism would be worn away. If we had a healthy cultural evolution competition, even among nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces conformity on everybody.
At the beginning of Covid, all the usual public health authorities said all the usual things, and then world elites got together and talked about it, and a month later they said, "No, that's all wrong. We have a whole different thing to do. Travel restrictions are good, masks are good, distancing is good." And then the entire world did it the same way, and there was strong pressure on any deviation, even on Sweden, which dared to deviate from the global consensus.
If you look at many kinds of regulation, there's very little deviation worldwide. We don't have 200, or even 100, independent policy experiments; we basically have one main global civilization that does it the same way, and maybe one or two deviants that are allowed to have somewhat different behavior, but they pay a price for it.
On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Faster, Please! is a reader-supported publication.
To receive new posts and support my work, consider becoming a free or paid subscriber.
Micro Reads
▶ Economics
* The Next President Inherits a Remarkable Economy - WSJ
* The surprising barrier that keeps us from building the housing we need - MIT
* Trump’s tariffs, explained - Wapo
* Watts and Bots: The Energy Implications of AI Adoption - SSRN
* The Changing Nature of Technology Shocks - SSRN
* AI Regulation and Entrepreneurship - SSRN
▶ Business
* Microsoft reports big profits amid massive AI investments - Ars
* Meta’s Next Llama AI Models Are Training on a GPU Cluster ‘Bigger Than Anything’ Else - Wired
* Apple’s AI and Vision Pro Products Don’t Meet Its Standards - Bberg Opinion
* Uber revenues surge amid robust US consumer spending - FT
* Elon Musk in funding talks with Middle East investors to value xAI at $45bn - FT
▶ Policy/Politics
* Researchers ‘in a state of panic’ after Robert F. Kennedy Jr. says Trump will hand him health agencies - Science
* Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target - Wired
* US Efforts to Contain Xi’s Push for Tech Supremacy Are Faltering - Bberg
* The Politics of Debt in the Era of Rising Rates - SSRN
▶ AI/Digital
* Alexa, where’s my Star Trek Computer? - The Verge
* Toyota, NTT to Invest $3.3 Billion in AI, Autonomous Driving - Bberg
* Are we really ready for genuine communication with animals through AI? - NS
* Alexa’s New AI Brain Is Stuck in the Lab - Bberg
* This AI system makes human tutors better at teaching children math - MIT
* Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games - Arxiv
▶ Biotech/Health
* Obesity Drug Shows Promise in Easing Knee Osteoarthritis Pain - NYT
* Peak Beef Could Already Be Here - Bberg Opinion
▶ Clean Energy/Climate
* Chinese EVs leave other carmakers with only bad options - FT Opinion
* Inside a fusion energy facility - MIT
* Why aren't we driving hydrogen powered cars yet? There's a reason EVs won. - Popular Science
* America Can’t Do Without Fracking - WSJ Opinion
▶ Robotics/AVs
* American Drone Startup Notches Rare Victory in Ukraine - WSJ
* How Wayve’s driverless cars will meet one of their biggest challenges yet - MIT
▶ Space/Transportation
* Mars could have lived, even without a magnetic field - Big Think
▶ Up Wing/Down Wing
* The new face of European illiberalism - FT
* How to recover when a climate disaster destroys your city - Nature
▶ Substacks/Newsletters
* Thinking about "temporary hardship" - Noahpinion
* Hold My Beer, California - Hyperdimensional
* Robert Moses's ideas were weird and bad - Slow Boring
* Trading Places? No Thanks. - The Dispatch
* The Case For Small Reactors - Breakthrough Journal
* The Fourth Industrial Revolution and the Future of Work - Conversable Economist
This is a public episode. If you’d like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Oct 23, 2024 • 27min

🚀 My chat (+transcript) with space journalist Eric Berger on SpaceX and America's New Space Age

In this conversation with Eric Berger, the senior space editor at Ars Technica, the discussion revolves around SpaceX's groundbreaking achievements, especially the successful launch and mid-air capture of the Starship rocket's booster. They dive into the implications of these advancements for future space travel. Berger highlights the commercial potential of SpaceX's missions, especially with Starlink, and reflects on the challenges and opportunities in lunar and Mars exploration, emphasizing how innovative engineering can revolutionize our path to becoming a multi-planetary species.
Oct 10, 2024 • 38min

💥 My chat (+transcript) with economist Eli Dourado on creating a fantastic future

Eli Dourado, chief economist at the Abundance Institute and a voice in economic innovation, discusses his vision for overcoming stagnation. He believes we're on the brink of a productivity boom, spurred by AI and creative disruption. Dourado delves into the evolving job market and the importance of embracing change. He also explores future energy solutions, including fusion and the role of nuclear power. Political reform, like NEPA, is vital for progress, while a pro-abundance mindset can fuel innovation and inclusive growth.
Sep 26, 2024 • 32min

☀️ My chat (+transcript) with economist Noah Smith on technological progress

Noah Smith, an economist known for his insights on technological progress, dives deep into the impact of advancements like generative AI and energy technologies on society. He discusses how geopolitical tensions can actually spur innovation and the benefits of a fragmented industrial policy. Smith also reflects on Japan’s economic history, drawing parallels that may soon apply to other nations, emphasizing the importance of well-thought-out regulations for AI and the need for Europe to embrace local innovation strategies.
