
Faster, Please! — The Podcast

Latest episodes

Mar 14, 2025 • 31min

📈 Back to the Nineties? My chat (+transcript) with economist Skanda Amarnath on the 2020s productivity outlook

The American economy is growing, and, in many ways, it’s looking a lot like the 1990s. Upward trends in productivity growth and employment paired with downward trends in inflation are cause for optimism. The question is whether we will maintain this trajectory or be derailed by this emerging era of uncertainty.

Today on Faster, Please! — The Podcast, I talk with Skanda Amarnath about trade policy, fiscal and monetary policy, AI advancement, demographic trends, and how all of this bodes for the US economy.

Amarnath is the Executive Director of Employ America, a macroeconomic policy research and advocacy organization. He was previously vice president at MKP Capital Management, as well as an analyst at the Federal Reserve Bank of New York.

In This Episode

* The boomy ’90s (1:24)
* Drivers of growth (7:24)
* The boomy ’20s? (11:38)
* Full employment and the Fed (22:03)
* Demographics in the data (25:37)
* Policies for productivity (27:55)

Below is a lightly edited transcript of our conversation.

The boomy ’90s (1:24)

The ’90s stand out as a high productivity growth, low inflation, high employment economy, especially if we look at the years 1996 to the year 2000.

Pethokoukis: What got me really excited about all the great work that Employ America puts out was one particular report that I think came out late last year called “The Dream of the 90’s is Alive in 2024,” and hopefully it's still alive in 2025. By ’90s of course you mean the 1990s.

Let me start off by asking you: What was so awesome about the 1990s that it is worth writing about a dream of its return?

Amarnath: The 1990s — if you're a macroeconomist, at least — had pitch-perfect conditions. Employment was reasonably high; we achieved the highest levels of prime-age employment relative to the population.
We had low and declining inflation, and that variable that we use to say, this is the driver of welfare over time, productivity outcomes, the amount of output we can spin up from finite inputs, was also growing at a very strong rate, and one that we haven't really seen replicated since or really in the decades before.

The ’90s stand out as a high productivity growth, low inflation, high employment economy, especially if we look at the years 1996 to the year 2000. We'd had high productivity maybe even afterwards . . . but that was also a period where a lot of that productivity was gained from the recession. When employment falls really quickly, productivity can go up for illusory reasons, but it's really that ’90s sweet spot where everything was kind of moving in the right direction.

Obviously, over the last several years, we've seen a lot of those different challenges flare up, whether it was employment during Covid, but then also inflation over the last few years. So . . . a model to build towards, in some ways.

Some of us — not me, and I don't think you — remember the very boomy immediate post-war decades. Probably many more of us remember the go-go 1990s. One thing I always find interesting is how gloomy people were in those years right before the takeoff, which is a wonderful contrarian indicator: we had this period [when] we appeared to have won the Cold War but we had a nasty recession early in the decade, kind of a choppy recovery, and there was plenty of gloom that the days of fast growth were over. And just as we sort of reached the nadir in our attitudes, boy, things took off. So maybe that's a good omen for right now.

If we're a contrarian, and if the past can be present, maybe that is a positive indicator to consider. In some ways, it's a bit surprising how much you hear the talk about growth [being] stuck in a very low-growth environment. Over the last two years, we have seen above-trend real GDP growth, above-trend productivity growth.
We're going to get some productivity data revisions tomorrow. Again, this measure of productivity is output per hour, so it's basically, to a first approximation, real GDP divided by hours worked. We've seen that the labor market has, largely speaking, held itself up over the last few years, and yet, at the same time, real output has accelerated.

So that's at least something that suggests better things are possible. It's a sign that productivity can accelerate, and with the benefit of revisions tomorrow, we are likely to see at least . . . I'd say if you take a fair reading of the pre-pandemic trend on productivity growth, so five to 15 years, maybe you want to include the financial crisis and what happened before, maybe you don't, but you end up with something like 1.4 percent is what we were seeing. 1.4, maybe 1.45, that's a pretty generous view of pre-pandemic productivity growth.

I would like to do better than that going forward.

I would too. And since 2019 Q4, with the benefit of data revisions, until now, we're likely to see something like 1.9 percent — 50 basis points higher, 0.5 percent higher. We could ideally like to do even better than that. But it's 0.5 percent better over a five-year horizon in which whatever labor market weirdness spanned Covid, we've largely recovered from that. Obviously, there are a lot of different things that have changed between now and five years ago, but at least the data distortion issues should hopefully have been filtered out at this point. And yet, we probably are posting much better real output outcomes.

So through a lot of this turbulence, through a lot of the dynamism that's kind of transpired over the last few years, especially in terms of business formation activity, there was a high labor turnover environment in ’21 and ’22. That churn has come down in more recent quarters, but we have seen better productivity outcomes.

Now, can they sustain? There's a lot of things that probably go into that.
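As a rough sketch of the arithmetic being described here — productivity as output per hour, and its annualized growth since 2019 Q4 — the computation looks like this. The index levels below are made-up illustrative numbers, not actual BLS data:

```python
# Labor productivity, to a first approximation: real GDP divided by hours worked.
# Index levels below are hypothetical (2019 Q4 = 100), chosen only to illustrate
# the ~1.9 percent annualized pace discussed above.

def productivity(real_gdp: float, hours: float) -> float:
    """Output per hour worked."""
    return real_gdp / hours

def annualized_growth(start: float, end: float, years: float) -> float:
    """Compound annual growth rate between two index levels."""
    return (end / start) ** (1 / years) - 1

gdp_2019, hours_2019 = 100.0, 100.0
gdp_2024, hours_2024 = 114.0, 103.6   # output grew faster than hours worked

p_start = productivity(gdp_2019, hours_2019)
p_end = productivity(gdp_2024, hours_2024)
growth = annualized_growth(p_start, p_end, years=5.0)
print(f"{growth:.1%}")  # ~1.9% per year with these illustrative inputs
```

The point of the comparison in the text is just that 1.9 percent annualized is about 0.5 percentage points above the generous 1.4 percent pre-pandemic trend.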
There are some new potential risks and shocks on the horizon, but at least it tells you better things are possible in a way that — I'm sure you've had these discussions throughout the previous decade, in the 2010s, when people made a lot of claims about why productivity growth was destined to be stuck, that we were either not innovating enough, or we were not able to capture that into GDP, or else there are just some secular reasons — and so I think it's an instructive moment. If people are actually looking at the data, the last two years, real output and productivity growth has been very impressive, objectively. And it's not just about, “Hey, we're reverting to the pre-pandemic trend and nothing more.” I think there are signs that this is something at least a little different from what an honest forecast pre-pandemic would've suggested.

Drivers of growth (7:24)

The three-legged stool is one where you want to have a labor market that's strong, fixed investment that's growing (ideally faster than usual), and on the third leg it's the set of things that you can do to control really salient costs that everyone's paying.

Let's talk about those signs, but first let's take a quick step back. When you look at what drove growth, and productivity growth specifically, in the ’90s, give me the factors that drove growth and then why those factors give us lessons for policymaking today.

I think there are three drivers I can point to that are a little bit independent of each other.

One is we had — I don't want to say a tight labor market, but especially a fully employed labor market is helpful insofar as, and we see this now over multiple episodes, especially when you're at high levels of prime-age employment, that's typically a point when there's a lot of human capital that's accumulated. People who have been employed for a while, they've been trained up, there's a little more returns to scale, they can scale revenue, they can scale output better.
You don't need to add an additional worker to add an additional unit of GDP.

In the more tangible sense, it's that people are trained up, they have more tangible experience, productive experience. You're able to see output gains without necessarily having to add hours worked. We generally saw over the late ‘90s: Hours worked slowed down, but real GDP growth held up very well.

The labor market wasn't contracting by any stretch, it was just, largely speaking, finding an equilibrium in which employment levels were high, job growth was solid if not always spectacular, but we were still seeing that real GDP growth could still be scaled up in a lot of ways. So there is a labor market dynamic to this.

There is a fixed investment dynamic. Fixed investment growth was very strong in the late ’90s. That was about information processing equipment, IT, software. We did telecommunications deregulation in 1996, which was meant to really expand and accelerate the rollout of things. That became the fiber boom. We saw a lot of construction that went into those sectors, and so we saw it really touch construction, we saw it touch equipment, and we also saw it affect intellectual property.

An investment to prevent the millennium bug?

There was probably a lot of overinvestment that also was born of some of that deregulation, but at least in terms of it adding to our welfare, making it easier for us to use the internet and the long-term benefits of that, a lot of that was built in the late ’90s. You could probably point to some stuff in policy, obviously interacting with technology, that was very favorable.

The third thing I would say is also probably underrated: inflation fell over that whole period. While some of that inflation falling would've been some fortuitous dynamics, especially in the late ’90s around food and energy prices falling, the Asian financial crisis, there were also things that were very important for creating space for the consumer to spend more. Things like HMOs.
Healthcare inflation really fell throughout the ’90s.

Now, HMOs became more unpopular for a lot of reasons. These health management organizations were meant to control costs and did a pretty good job of it. This is something that Janet Yellen actually wrote about a long time ago, talking about the ’90s and how the healthcare dynamic was very underrated. In the 2000s, healthcare inflation really picked up again and a lot of the cost-control measures in the private sector were less effective, but you could see evidence that that was also creating space in terms of price stability, the ability for the consumer to spend more on other types of goods and services. That also allows for both more demand to be available but also for it to be supplied.

I think with all these stories there's a demand- and a supply-side aspect to them. I think you kind of need both for it to be successful. The three-legged stool is one where you want to have a labor market that's strong, fixed investment that's growing (ideally faster than usual), and on the third leg it's the set of things that you can do to control really salient costs that everyone's paying. Like healthcare: obviously there's a lot of cost bloat, and thinking about ways to really curb expenditure without curbing quality or real consumption itself, but there's obviously a lot of room for reforms in that area.

The boomy ’20s? (11:38)

Right now, you have still an increasing number of people who have had meaningful work experience over the last one, two, three years. That human capital should accumulate and be more relevant for GDP growth going forward . . .

So you've identified what, in your view, is a very successful mix of these very critical factors. So if you want to be bullish about the rest of this decade, which of those factors — maybe all of them — are at play right now? Or maybe none of them!

Right now, the labor market is still holding up rather well.
While we may not be seeing quite the level of labor market dynamism we saw earlier in this expansion, at the same time, that was also a period of great turbulence and high inflation. Right now, you have still an increasing number of people who have had meaningful work experience over the last one, two, three years. That human capital should accumulate and be more relevant for GDP growth going forward, assuming we don't have a recession in the next year or two or whatever.

If we do, I think it obviously would mean a lot of people are probably likely to not be as employed, and if that's the case, their marketable and productive skills may atrophy and depreciate. That's the risk there, but, all things considered, right now, non-farm payroll growth has been roughly speaking 160,000 per month. Employment rates adjusted for demographics are a little higher than they were before the pandemic. It's pretty historically high. That's not a bad outcome to start with, and those initial conditions should hopefully bode well for the labor market's contribution to productivity growth.

The challenge is in terms of real GDP growth. It's also a function of a lot of other factors: What are we going to see in terms of cost stability? I would generally say there's obviously a lot of turbulence right now, but what's going to happen to a lot of these key costs? On one hand, commodity prices should hopefully be stable; there are a lot of signs of, let's say, OPEC increasing production.

On the other hand, we also have tariffs that are pretty significant threats on the table, and I think you could also be equally concerned about how much this could matter. We've already had a bigger run-through of this with a lot of this supply chain turbulence, pandemic-era stimulus, and how that stuff interacted. That was quite turbulent.
Even if tariffs aren't quite as turbulent as that, it could still be something that detracts from productivity growth.

We saw, actually, in the first two quarters of 2022 when inflation exploded, there were a compounding number of shocks on the supply side with the demand side, and it did have a depressing effect on productivity in the short run. And so you can think, if we see things on the cost side blow out, it will also restrict output. If you have to mark up the price of a lot of things to reflect different costs and risks, it's going to have some output-throttling effect, and a productivity-throttling effect. That's one side of things to be concerned about.

And then the other side of it, in terms of fixed investment: I think there's a lot of reasons for optimism on fixed investment. If we just took the start of the year, there's clearly a lot of investment tied to the artificial intelligence boom: data centers, all of the expenditures on software that should change, expenditures on hardware that should be upgraded, and there's a whole set of industrial infrastructure that's also tied to this where you should see capital deepening really emerge. You should see that there should be more room to scale up in capital formation relative to labor. You can probably point to some pockets of it right now, but it hadn't shown up in the GDP data yet. That was the optimistic case coming into this year, and I think it's still there. The challenge is there's now other headwinds.

The tariffs make me less optimistic. I really worry about the uncertainty freezing business investment and hiring, for that matter.

I share your sentiment there. I think we learned in 2018 and ’19, when there were tariffs being implemented but on much smaller scale and scope, that even those had a pretty meaningful or identifiable impact on the manufacturing sector, leave aside even the other sectors that use manufactured inputs from imports or otherwise.
So these are going to be likely headwinds. If you're any kind of company that exports at any point in time across borders, you have to now incorporate higher costs, more uncertainty. We don't know how long this is supposed to stick. Are you supposed to assume this is going to be a transition period, as Treasury Secretary Bessent said, or is this something that is just like a little negotiation tactic, where you get a win and then we move on?

I don't think anyone's quite sure how this is supposed to play out, and I worry for the manufacturing sector itself because, contrary to the popular conception of it, we still export a lot of things. We still export, and the most competitive industries are exporting industries, and so that's a concern. Whether you're manufacturing construction machinery and you're Caterpillar, or agricultural machinery and you're John Deere, you have to start to think about this stuff more and the risk that's attached to it. The hurdle rates to investment go up, not down.

And on the other side of the ledger, there are the sectors that use manufactured inputs. Transformers are really important for building out the energy infrastructure if we're going to have load growth that's driven by AI or whatever else. We're kind of entering more uncertainty on that side as well, and it's not really clear what the full strategy is. It strikes me as going to be very challenging.

And then on the monetary policy [side], and this is the difference: you had in the ’90s a Federal Reserve which seems to have defeated the Great Inflation Monster of the 1970s, while the Fed today is battling inflation. What do you make of that as far as setting the stage for a productivity boom, a Fed which is quite active and still quite concerned about that inflation surge, and perhaps tariffs further playing into it going forward?

I think the Fed's stuck in a hard spot here.
If you think about a trade shock as likely being some mix of — well, it could be output throttling. Maybe the output throttling and the effects in the labor market are more outsized than the inflation effects? That was what we saw in 2018 and ’19, but it's not a given that that's going to be the case this time. The scale of the threats is much bigger and much wider, and especially coming through a period now where there's higher inflation, maybe there's more willingness to raise prices in response to these shocks. So these things are a little different.

The Fed has basically said, “We don't know exactly how this is going to play out and we're going to need to watch the data, keep an open mind, be pretty risk-averse about how we're going to adjust interest rate policy.” We've seen evidence of inflation expectations going up. That will not give the Fed a lot of confidence about cutting interest rates in the absence of other things getting worse. What the Fed’s supposed to do in response to a supply shock is almost a philosophical question, because you obviously don't want to break things if there's really just a supply shock that is a one-off that you can see through, but if it starts to have longer-term consequences, create bigger pain points in terms of inflation, it's just a tough spot.

When I try to square the circle here — and this will be no surprise to the listeners — I can't help but think, boy, it would be really fantastic if all the most techno-optimist dreams about AI came true, and this is not just an important technology, but an unbelievably important technology that diffuses through the economy in record time. That would be a wonderful factor to add into that mix.

If there are ways for that to be a bigger tailwind — and there could be, I wouldn't be too pessimistic about how that could filter through even the GDP data amidst a lot of these trade policy headwinds — we're expected to see a grand buildout of data centers, for example.
There's an energy infrastructure layer to that.

But even beyond the investment side, [there's AI] actually being used, improving total factor productivity. Super hard to predict, and no one wants to do a budget forecast under the assumption we're going to be doubling productivity growth, but it would be nice to have.

Sure would.

I will say, about one of the things on the inflation side, especially with the Fed: we've come through a period now where the Fed has kept restrictive interest rate policies, but only more recently have we seen a little bit more of that show up in financial markets, for example. So the stock market over the last two years has run up quite a bit, historically, and only now have we seen some signs of maybe some pricing of risk and some of the issues around the Fed.

Inflation data itself coming into this year, relative to the Fed's target on the Fed's gauges, was right now about 2.6, 2.7 percent. Most of that reflects a lot of lags of the past, I would say. If you look through the details, you see a lot of it in how inflation is measured for housing rent. How inflation is measured for financial services really tracks the stock market, and then there's obviously some other idiosyncratic stuff around where they're using wages as the measure of prices in PCE, which is the Fed's inflation gauge. If you take that stuff out, we still have a little bit of inflation work to do in terms of getting inflation down, but it would sound pretty manageable. If I told you, actually, if you take away those lags, you probably get to only 2.2 percent, that seems like we're almost there.

Let's take away a little more, then we get to two percent. We can just keep cutting things out.

And there would probably be conditions for a lot. But if we can give the benefit of the time and do no harm, there's probably a positive story to be told. The challenge is, we may not be doing no harm here. There may be new things that rear up, to your point.
If you start just deducting stuff because you think it lags, but you don't think about forward-looking risks, which there are, then you start to get into a more challenged view of how things improve on the inflation side.

I think that's a big dilemma for the Fed, which is, they have to be forward-looking. They can't just say, well, this stuff is lagging, we can ignore it. That doesn't wash when you have forward-looking risks. But if we do see that maybe some of these trade policy risks go away, if there's a change of heart, a change of mind, I think you can possibly tell yourself a more positive story about how maybe interest rates can come down a bit more and financial conditions can be more supportive of investment over time. So I think that that is the optimistic case there.

Full employment and the Fed (22:03)

Taking people away from their job and then trying to just bring them back in several years later, don't expect the productivity dividends to be quite the same.

For someone who cares about full employment, how would you rate the Fed's performance after the global financial crisis? Too tight?

It was too tight, and it was also an environment in which the Fed, at various points from 2010, maybe 2009, through to 2015, was very eager to try and get interest rates up before the economy was giving the hard signal that it was time to raise interest rates. Inflation hadn't really reared its head, nor had we seen evidence of really strong labor markets. We were seeing a recovery that was very gentle, and slow, and maybe we were slowly getting out of it, but it was a slow grind. GDP growth was not particularly stellar over that period. That's pretty disappointing, right? We don't want to do that again.
Obviously, there are things like maybe fiscal policy could have been done differently, as well as monetary policy on some level, but I think the Fed was very eager to get off of zero to the point where they weren't looking at the data; they just didn't like the fact they were at zero.

Coming out of it, now it's like that recovery is a lot of wasted output. We lost a lot of output out of that. We lost a lot of employment out of that. It's kind of just a big economic waste. Obviously, this past recovery has been very different, and Covid was a different type of shock relative to the global financial crisis.

The thing that worries me is actually, when we start to look at the global financial crisis and we look at, say, even the recession from the dot com boom, or even the recession, to your point, in the early ’90s, prime-age employment rates took a long time to recover, and it's not ideal from a productivity perspective to have people out of the labor force for long periods of time, people out of employment for an extended number of years —

Also not good for social cohesion.

The social fabric, yeah. There's a lot of stuff it's not great for. We don't want hysteresis of that kind. We don't want to have people who are, “Oh, because I lost my job, I'm not going to be able to get a new job in the foreseeable future.” A lot of skills, general intangible knowledge, that's kind of part of how people become more productive and how firms become more productive. You want that stuff to keep going on some level. That's also probably why even Covid was very turbulent. It's a lot of things that we kind of have in motion; we just switched it off and then switched it back on. Even that over a short horizon can be very disruptive.
There was a reason, on some level, to do it, but it is also something to learn from: Taking people away from their job and then trying to just bring them back in several years later, don't expect the productivity dividends to be quite the same.

So I look at those three recessions at least to say, if we're going to have slow recoveries out of those, it's going to cause problems. So it's a balance of Fed and fiscal policy, I'd say, because there are certain things — in 2001, ’02, ’03, there were attempts to lower taxes at the same time. That actually may have been the key catalyst, more so than the Fed cutting rates. But when you think about how the Fed is sometimes antsy to get off of low rates when the economy is depressed, that's not great. Right now the Fed has a very different set of trade-offs. Thankfully, on some level, for full employment especially, [we’re] not in that world. We're now more trying to defend full employment, protect full employment, ideally not have a recession now, which would be great.

Demographics in the data (25:37)

Population growth has a twofold dynamic: the periods of high population growth are also the periods where you tend to see both strong investment and inflation risk.

I would love to avoid that. That’s the last thing we need.

I have two questions. One, how much do demographics matter? There’s been a lot of talk about falling fertility rates; is that something you think about much?

I think demographics play a lot of tricks on the data itself. Population growth has a twofold dynamic: the periods of high population growth are also the periods where you tend to see both strong investment and inflation risk. Obviously, when you know that there's a bigger base of people who you can sell your goods and services to, you might be more inclined to go forward with a longer-dated investment with some confidence that there will be growth to validate it.
On the other hand, it's also because there's more spending that's happening in the economy, that's higher growth; there might be more inflation risk.

I think that those background conditions then filter in various ways. You can kind of see how Japan and Europe have, generally speaking, at least maybe prior to this pandemic-era episode of inflation, seen lower inflation rates, but lower growth rates, too. So lower real growth, lower inflation. Real per capita outcomes are always hard to square: Japan's population is declining, but is Japan's real GDP declining more or less? These things are very hard to identify going forward.

I think it's going to just muddy a lot of different math as far as what counts as strong investment. We've gotten used to a world of non-farm payroll growth every month in the jobs report where, if it's like 150,000 to 200,000, that's pretty solid and great. Do we need to change our expectations so that 100,000 is good enough because we're not actually expanding the working-age population as much? Those things are going to have an effect on the macroeconomic data and how we evaluate it in real time. Even just this year: for some people's assessments of what counts as strong payroll growth, there was a sense that payroll employment was strong in ’23 and ’24 because of immigration. I'm a little bit more skeptical than most of those claims, but if it's true, which I think is still possibly true, then right now, if we do see less immigration, the breakeven, the place where what counts as healthy employment growth sits, might be a lot lower because of it.

Policies for productivity (27:55)

Healthcare cost growth and managing it will be important both in terms of what people see in the budgetary outcomes, but also inflation outcomes.

My last question for you; I'll give you a choice of what to answer.
If you were to recommend a pro-productivity piece of public policy, either give me your favorite one or the least-obvious one that you would recommend.

Right now, I'd say the thing that worries me the most on productivity, and it's on the table, is trade policy. This stuff has adverse impacts on prices and investment, and it may have impacts on employment, too, over time, if the tariffs stick. We're talking about really high, sizable numbers here, in terms of what's threatened now. Maybe it's all bark and no bite, but I would say this is what's on the table right now. I don't know what else is on the table at the very moment, but I'd say that's a place where you have to wonder what the merits of any of this stuff are, and I'm not seeing them.

I am more intellectually flexible than most about how sometimes some very specific, targeted, narrow trade barriers have a lot of sense in them, either because they're solving a particular externality or over-capacity kind of problem that might exist. There are some intellectualized reasons you can offer if it's narrow and targeted. If you're doing stuff at a really broad-based level, the way it's currently being evaluated, then I have to ask, what are we doing here? I am not sure this is good for investment, and investment is also part of how we are able to unlock a lot of general-purpose technologies, able to actually see total factor productivity grow and increase over time. So I worry about that. That's top of mind.

A thing that's kind of underrated, that I think is really important over time, both for people who are thinking about efficiency and for thinking about where there's room for public policy to support productivity growth: I'd say healthcare is a really prominent place right now. Healthcare cost growth and managing it will be important both in terms of what people see in the budgetary outcomes, but also inflation outcomes.
There's just a lot of expenditure there where there's not a lot of incentive for rationalization, and that needs to be brought. And there's a way to do it equitably. There's a lot of low-hanging fruit out there in terms of ways we can reform the healthcare system, site-neutral payments being one easy example to point to.

The federal government itself and private insurers, both of them, in terms of how they pay for healthcare and actually ensure cost control in that process: if we're able to do that well, I think the space for productivity is pretty underrated and could be quite sizable. That's also, I'd say, an underrated reason why the 2000s became far less productive. Healthcare services inflation, healthcare cost growth really exploded over that period, and we did not get a good handle on it, and we kind of exited the ’90s productivity boom phase. That was more obvious towards the latter half of the 2000s as a result.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Mar 6, 2025 • 28min

🚀 My chat (+transcript) with economist Matt Weinzierl on the growing business of space

In this insightful discussion, Matt Weinzierl, a Harvard Business School senior associate dean and co-author of "Space to Grow," explores the rapid evolution of the space industry. He highlights how newcomers, along with giants like SpaceX and Blue Origin, are redefining space with innovations in asteroid mining and satellite technology. The conversation dives into the importance of decentralized space efforts, the role of NASA's Artemis program, and the pressing need for sustainability amidst intriguing economic opportunities.
Jan 31, 2025 • 25min

🌍 My chat (+transcript) with researcher Toby Ord on existential risk

The 2020s have so far been marked by pandemic, war, and startling technological breakthroughs. Conversations around climate disaster, great-power conflict, and malicious AI are seemingly everywhere. It’s enough to make anyone feel like the end might be near. Toby Ord has made it his mission to figure out just how close we are to catastrophe — and maybe not close at all!

Ord is the author of the 2020 book, The Precipice: Existential Risk and the Future of Humanity. Back then, I interviewed Ord on the American Enterprise Institute’s Political Economy podcast, and you can listen to that episode here. In 2024, he delivered his talk, The Precipice Revisited, in which he reassessed his outlook on the biggest threats facing humanity.

Today on Faster, Please! — The Podcast, Ord and I address the lessons of Covid, our risk of nuclear war, potential pathways for AI, and much more.

Ord is a senior researcher at Oxford University. He has previously advised the UN, World Health Organization, World Economic Forum, and the office of the UK Prime Minister.

In This Episode

* Climate change (1:30)
* Nuclear energy (6:14)
* Nuclear war (8:00)
* Pandemic (10:19)
* Killer AI (15:07)
* Artificial General Intelligence (21:01)

Below is a lightly edited transcript of our conversation.

Climate change (1:30)

. . . the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.

Pethokoukis: Let's just start out by taking a brief tour through the existential landscape and how you see it now versus when you first wrote the book The Precipice, which I've mentioned frequently in my writings. I love that book, love to see a sequel at some point, maybe one's in the works . . . but let's start with the existential risk which has dominated many people's thinking for the past quarter-century: climate change. My sense is that many people, not just you, are somewhat less worried than they were five or 10 years ago. Perhaps they see at least the most extreme outcomes as less likely. How do you see it?

Ord: I would agree with that. I'm not sure that everyone sees it that way, but there were two really big and good pieces of news on climate that were rarely reported in the media. One of them is the question about how many emissions there'll be. We don't know how much carbon humanity will emit into the atmosphere before we get it under control, and there are these different emissions pathways, these RCP 4.5 and things like this you'll have heard of. And often, when people would give a sketch of how bad things could be, they would talk about RCP 8.5, which is the worst of these pathways, and we're very clearly not on that, and we're also, I think pretty clearly now, not on RCP 6, either. So the two worst pathways, we're pretty clearly not on, and so that's pretty good news that we're kind of headed more towards one of the better pathways in terms of the emissions that we'll put out there.

What are we doing right?

Ultimately, some of those pathways were based on business-as-usual ideas in which climate change wouldn't be one of the biggest issues in the international political sphere over decades. So ultimately, nations have been switching over to renewables and low-carbon forms of power, which is good news. They could be doing much more of it, but it's still good news. Back when we initially created these pathways, I think we would've been surprised and happy to find out that we were going to end up among the better two instead of the worst ones.

The other big one is that, as well as how much we'll emit, there's the question of how bad it is to have a certain amount of carbon in the atmosphere.
In particular, how much warming does it produce? And this is something on which there's been massive uncertainty. The general idea is that we're trying to predict, if we were to double the amount of carbon in the atmosphere compared to pre-industrial times, how many degrees of warming would there be? The best guess since the year I was born, 1979, has been three degrees of warming, but the uncertainty has been somewhere between one and a half degrees and four and a half.

Is that Celsius or Fahrenheit, by the way?

This is all Celsius. The climate community kept the same uncertainty from 1979 all the way up to 2020, and it’s a wild level of uncertainty: Four and a half degrees of warming is three times one and a half degrees of warming, so the range is up to triple these levels of warming based on this amount of carbon. So massive uncertainty that hadn't changed over many decades.

Now they've actually revised that and have brought in the range of uncertainty. Now they're pretty sure that it's somewhere between two and a half and four degrees, and this is based on better understanding of climate feedbacks. This is good news if you're concerned about worst-case climate change. It's saying it's closer to the central estimate than we'd previously thought, whereas previously we thought that there was a pretty high chance that it could even be higher than four and a half degrees of warming.

When you hear these targets of one and a half degrees of warming or two degrees of warming, they sound quite precise, but in reality, we were just so uncertain of how much warming would follow from any particular amount of emissions that it was very hard to know. And that could mean that things are better than we'd thought, but it could also mean things could be much worse. And if you are concerned about existential risks from climate change, then it's those kinds of tail events, where things are much worse than we would've thought, that really matter, and we're now pretty sure that we're not on one of those extreme emissions pathways and also that we're not in a world where the temperature is extremely sensitive to those emissions.

Nuclear energy (6:14)

Ultimately, when it comes to the deaths caused by different power sources, coal . . . killed many more people than nuclear does — much, much more . . .

What do you make of this emerging nuclear power revival you're seeing across Europe, Asia, and in the United States? At least in the United States it’s partially being driven by the need for more power for these AI data centers. How does it change your perception of risk in a world where many rich countries, or maybe even not-so-rich countries, start re-embracing nuclear energy?

In terms of the local risks with the power plants, so risks of meltdown or other types of harmful radiation leak, I'm not too concerned about that. Ultimately, when it comes to the deaths caused by different power sources, coal, even setting aside global warming, just through the particulates produced in the soot, killed many more people than nuclear does — much, much more — and so nuclear is a pretty safe form of energy production as it happens, contrary to popular perception. So I'm in favor of that. But as for the proliferation concerns, if it is countries that didn't already have nuclear power, then the possibility that they would be able to use it to start a weapons program would be concerning.

And as sort of a mechanism for more clean energy, do you view nuclear as clean energy?

Yes, I think so. It's certainly not carbon-producing energy. I think that it has various downsides, including the difficulty of knowing exactly what to do with the fuel, which will be a very long-lasting problem.
But I think it's become clear that the problems caused by other forms of energy are much larger, and we should switch to the thing that has fewer problems, rather than more problems.

Nuclear war (8:00)

I do think that the Ukraine war, in particular, has created a lot of possible flashpoints.

I recently finished a book called Nuclear War: A Scenario, which is kind of a minute-by-minute look at how a nuclear war could break out. If you read the book, the book is terrifying because it really goes into a lot of detail — and I live near Washington, DC, so when it gives its various scenarios, certainly my house is included in the blast zone — so really a frightening book. But when it tried to explain how a war would start, I didn't find it particularly compelling. The scenarios for actually starting a conflict, I didn't think sounded particularly realistic.

Do you feel — and obviously we have Russia invading Ukraine and loose talk by Vladimir Putin about nuclear weapons — do you feel more or less confident that we'll avoid a nuclear war than you did when you wrote the book?

Much less confident, actually. I guess I should say, when I wrote the book, it came out in 2020, I finished the writing in 2019, and ultimately we were in a time of relatively low nuclear risk, and I feel that the risk has risen. That said, I was trying to provide estimates for the risk over the next hundred years, and so I wasn't assuming that the low-risk period would continue indefinitely, but it was quite a shock to end up so quickly back in this period of heightened tensions and threats of nuclear escalation, the type of thing I thought was really from my parents' generation. So yes, I do think that the Ukraine war, in particular, has created a lot of possible flashpoints. That said, the temperature has come down in the conversation over the last year, so that's something.

Of course, the conversation might heat right back up if we see a Chinese invasion of Taiwan.
I've been very bullish about the US economy and world economy over the rest of this decade, but the exception is a war with China, from an economic point of view, but certainly also a nuclear point of view. Two nuclear-armed powers in conflict? That would not be an insignificant event from the existential-risk perspective.

It is good that China has a smaller nuclear arsenal than the US or Russia, but there could easily be a great tragedy.

Pandemic (10:19)

Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either.

The book comes out during the pandemic. Did our response to the pandemic make you more or less confident in our ability and willingness to confront that kind of outbreak? The worst one we'd seen in a hundred years?

Yeah, overall, it made me much less confident. There'd been a general thought by those who look at these large catastrophic risks that when the chips are down and the threat is imminent, people will see it and will band together and put a lot of effort into it; that once you see the asteroid in your telescope and it's headed for you, then things will really come together — a bit like in the action movies or what have you.

That's where I take my cue from, exactly.

And with Covid, it was kind of staring us in the face. Those of us who followed these things closely were quite alarmed a long time before the national authorities were. Overall, a lot of countries really just muddled through not very well, and the large institutions that were supposed to protect us from these things, like the CDC and the WHO, didn't do a great job either. That said, scientists, particularly those developing RNA vaccines, did better than I expected.

In the years leading up to the pandemic, certainly we'd seen other outbreaks, we'd had the avian flu outbreak, and you know as well as I do, there were . . . how many white papers or scenario-planning exercises for just this sort of event. I think I recall a story where, in 2018, Bill Gates had a conversation with President Trump during his first term about the risk of just such an outbreak. So it's not as if this thing came out of the blue. In many ways we saw the asteroid, it was just pretty far away. But to me, that says something again about our ability, as humans, to deal with severe, but infrequent, risks.

And obviously, not having had a truly nasty global outbreak in a hundred years, where should we focus our efforts? On preparation? Making sure we have enough ventilators? Or on our ability to respond? Because it seems like the preparation route will only go so far, and the reason it wasn't a much worse outbreak is because we have a really strong ability to respond.

I'm not sure it's the same across all risks as to whether preparation or ability to respond is better. For some risks, there are also other possibilities, like avoiding an accidental outbreak happening at all, or avoiding a nuclear war starting and not needing to respond at all. I'm not sure there's an overall rule as to which one is better.

Do you have an opinion on the origin of Covid?

I don't know whether it was a lab leak. I think it's a very plausible hypothesis, but plausible doesn't mean it's proven.

And does the post-Covid reaction, at least in the United States, to vaccines make you more or less confident in our ability to deal with . . . the kind of societal cohesion and confidence to tackle a big problem, to have enough trust?
Maybe our leaders don't deserve that trust, but what do you make of this kind of pushback against vaccines and — at least in the United States — our medical authorities?

When Covid was first really striking Europe and America, it was generally thought that, while China was locking down the Wuhan area, Western countries wouldn't be able to lock down, that it wasn't something that we could really do, but then various governments did order lockdowns. That said, if you look at the data on movement of citizens, it turns out that citizens stopped moving around prior to the lockdowns, so the lockdown announcements were more kind of like the tail, rather than the dog.

But over time, citizens wanted to get back out and interact more, and the rules were preventing them, and if a large fraction of the citizens are under something like house arrest for the better part of a year, would that lead to some fairly extreme resentment and some backlash, some of which was fairly irrational? Yeah, that is actually exactly the kind of thing that you would expect. It was very difficult to get a whole lot of people to row together and take the kind of coordinated response we needed to prevent the spread, and pushing for that had some of these bad consequences, which are also going to make it harder for next time. We haven't exactly learned the right lessons.

Killer AI (15:07)

If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.

We're more than halfway through our chat and now we're going to get to the topic probably most people would like to hear about: After the robots take our jobs, are they going to kill us? What do you think? What is your concern about AI risk?

I'm quite concerned about it.
Ultimately, when I wrote my book, I put AI risk as the biggest existential risk, albeit the most uncertain as well, and I would still say that. That said, some things have gotten better since then.

I would assume what makes you less confident is, one, what seems to be the rapid advance — not just the rapid advance of the technology, but you have the two leading countries in a geopolitical rivalry also being the leaders in the technology and not wanting to slow it down. I would imagine that would make you more worried that we will move too quickly. What would make you more confident that we would avoid any serious existential downsides?

I agree with your supposition that the attempts by the US and China to turn this into some kind of arms race are quite concerning. But here are a few things: Back when I was writing the book, the leading AI systems were things like AlphaGo, if you remember that, or the Atari game-playing systems.

Quaint. Quite quaint.

It was very zero-sum, reinforcement-learning-based game playing, where these systems were learning directly to behave adversarially to other systems, and they could only understand some limited aspect of the world: struggle, and overcoming your adversary. That was really all they could do, and the idea of teaching them about ethics, or how to treat people, and the diversity of human values seemed almost impossible: How do you tell a chess program about that?

But then what we've ended up with is systems that are not inherently agents; they're not inherently trying to maximize something. Rather, you ask them questions and they blurt out some answers. These systems have read more books on ethics and moral philosophy than I have, and they've read all kinds of books about the human condition.
Almost all novels that have ever been published, and pretty much every page of every novel involves people judging the actions of other people and having some kind of opinion about them, and so there's a huge amount of data about human values, and how we think about each other, and what's inappropriate behavior. And if you ask the systems about these things, they're pretty good at judging whether something's inappropriate behavior, if you describe it.

The real challenge remaining is to get them to care about that, but at least the knowledge is in the system, and that's something that previously seemed extremely difficult to do. Also, there are versions of these systems that do reasoning and that spend longer with a private text stream where they think; it's kind of like sub-vocalizing thoughts to themselves before they answer. When they do that, these systems are thinking in plain English, and that's something that we really didn't expect. If you look at all of the weights of a neural network, it's quite inscrutable, famously difficult to know what it's doing, but somehow we've ended up with systems that are actually thinking in English, where that could be inspected by some oversight process. There are a number of ways in which things are better than I’d feared.

So what does your actual existential risk scenario look like? What are you most concerned about happening with AI?

I think it's quite hard to be all that concrete on it at the moment, partly because things change so quickly. I don't think that there's going to be some kind of existential catastrophe from AI in the next couple of years, partly because the current systems require so much compute in order to run them that they can only be run at very specialized and large places, of which there are only a few in the world. So that means the possibility that they break out and copy themselves into other systems is not really there, in which case the possibility of turning them off remains as well.

Also, they're not yet intelligent enough to be able to execute a lengthy plan. If you have some kind of complex task for them that requires, say, 10 steps — for example, booking a flight on the internet by clicking through all of the appropriate pages, and finding out when the times are, and managing to book your ticket, and filling in the special codes they sent to your email, and things like that. That's a somewhat laborious task, and the systems can't do things like that yet. It's still the case that, even if they've got a, say, 90 percent chance of completing any particular step, the 10 percent chances of failure add up, and eventually the system is likely to fail somewhere along the line and not be able to recover. They'll probably get better at that, but at the moment, the inability to actually execute complex plans does provide some safety.

Ultimately, the concern is that, at a more abstract level, we're building systems which are smarter than us at many things, and we're attempting to make them much more general and smarter than us across the board. If you know that one player is a better chess player than another, suppose Magnus Carlsen's playing me at chess, I can't predict exactly how he's going to beat me, but I can know with quite high likelihood that he will end up beating me. I'll end up in checkmate, even though I don't know what moves will happen in between here and there, and I think that it's similar with AI systems.
If we make things that are smarter than us and are not inherently able to control their values or give them moral rules to work within, then we should expect them to ultimately be calling the shots.

Artificial General Intelligence (21:01)

Ultimately, existential risks are global public goods problems.

I frequently check out the Metaculus online prediction platform, and I think currently on that platform it's 2027 for what they would call “weak AGI,” artificial general intelligence, a date which has moved up two months in the past week as we're recording this, and then I think 2031 also has accelerated for “strong AGI.” So this is pretty soon, 2027 or 2031, quite soon. Is that kind of what you're assuming is going to happen, that we're going to have to deal with very powerful technologies quite quickly?

Yeah, I think that those are good numbers for the typical case, what you should be expecting. I think that a lot of people wouldn't be shocked if it turns out that there is some kind of obstacle that slows down progress and takes longer before it gets overcome, but it also wouldn't be surprising at this point if there are no more big obstacles and it's just a matter of scaling things up and doing fairly simple processes to get it to work.

It’s now a multi-billion-dollar industry, so there's a lot of money focused on ironing out any kinks or overcoming any obstacles on the way. So I expect it to move pretty quickly, and those timelines sound very realistic. Maybe even sooner.

When you wrote the book, what did you put as the risk to human existence over the next hundred years, and what is it now?

When I wrote the book, I thought it was about one in six.

So it's still one in six . . . ?

Yeah, I think that's still about right, and I would say that most of that is coming from AI.

This isn't, I guess, a specific risk, but, to the extent that being positive about our future means also being positive on our ability to work together, countries working together, what do you make of society going in the other direction, where we seem more suspicious of other countries, or even, in the United States, more suspicious of our allies, more suspicious of international agreements, whether they're trade deals or military alliances? To me, I would think that the Age of Globalization would've, on net, lowered that risk to one in six, and if we're going to have less globalization, that would tend to increase that risk.

That could be right. Certainly increased suspicion, to the point of paranoia or cynicism, about other nations and their ability to form deals on these things is not going to be helpful at all. Ultimately, existential risks are global public goods problems. The continued functioning of human civilization is a global public good, and existential risk is the opposite. One way to look at it is that the US has about four percent of the world's people, so one in 25 people live in the US, and so an existential risk hits 25 times as many people as live in the US alone. So if every country is just interested in itself, it'll undervalue the risk by a factor of 25 or so, and countries need to work together in order to overcome that kind of problem. Ultimately, if one of us falls victim to these risks, then we all do, and so it definitely does call out for international cooperation. And I think that there is a strong basis for international cooperation. It is in all of our interests.
There are also verification possibilities and so on, and I'm actually quite optimistic about treaties and other ways to move forward.

Micro Reads

▶ Economics
* Tech tycoons have got the economics of AI wrong - Economist
* Progress in Artificial Intelligence and its Determinants - Arxiv
* The role of personality traits in shaping economic returns amid technological change - CEPR

▶ Business
* Tech CEOs try to reassure Wall Street after DeepSeek shock - Wapo
* DeepSeek Calls for Deep Breaths From Big Tech Over Earnings - Bberg Opinion
* Apple’s AI Moment Is Still a Ways Off - WSJ
* Bill Gates Isn’t Like Those Other Tech Billionaires - NYT
* OpenAI’s Sam Altman and SoftBank’s Masayoshi Son Are AI’s New Power Couple - WSJ
* SoftBank Said to Be in Talks to Invest as Much as $25 Billion in OpenAI - NYT
* Microsoft sheds $200bn in market value after cloud sales disappoint - FT

▶ Policy/Politics
* ‘High anxiety moment’: Biden’s NIH chief talks Trump 2.0 and the future of US science - Nature
* Government Tech Workers Forced to Defend Projects to Random Elon Musk Bros - Wired
* EXCLUSIVE: NSF starts vetting all grants to comply with Trump’s orders - Science
* Milei, Modi, Trump: an anti-red-tape revolution is under way - Economist
* FDA Deregulation of E-Cigarettes Saved Lives and Spurred Innovation - Marginal Revolution
* Donald Trump revives ideas of a Star Wars-like missile shield - Economist

▶ AI/Digital
* Is DeepSeek Really a Threat? - PS
* ChatGPT vs. Claude vs. DeepSeek: The Battle to Be My AI Work Assistant - WSJ
* OpenAI teases “new era” of AI in US, deepens ties with government - Ars
* AI's Power Requirements Under Exponential Growth - Rand
* How DeepSeek Took a Chunk Out of Big AI - Bberg
* DeepSeek poses a challenge to Beijing as much as to Silicon Valley - Economist

▶ Biotech/Health
* Creatine shows promise for treating depression - NS
* FDA approves new, non-opioid painkiller Journavx - Wapo

▶ Clean Energy/Climate
* Another Boffo Energy Forecast, Just in Time for DeepSeek - Heatmap News
* Column: Nuclear revival puts uranium back in the critical spotlight - Mining
* A Michigan nuclear plant is slated to restart, but Trump could complicate things - Grist

▶ Robotics/AVs
* AIs and Robots Should Sound Robotic - IEEE Spectrum
* Robot beauticians touch down in California - FT Opinion

▶ Space/Transportation
* A Flag on Mars? Maybe Not So Soon. - NYT
* Asteroid triggers global defence plan amid chance of collision with Earth in 2032 - The Guardian
* Lurking Inside an Asteroid: Life’s Ingredients - NYT

▶ Up Wing/Down Wing
* An Ancient 'Lost City' Is Uncovered in Mexico - NYT
* Reflecting on Rome, London and Chicago after the Los Angeles fires - Wapo Opinion

▶ Substacks/Newsletters
* I spent two days testing DeepSeek R1 - Understanding AI
* China's Technological Advantage - overlapping tech-industrial ecosystems - AI Supremacy
* The state of decarbonization in five charts - Exponential View
* The mistake of the century - Slow Boring
* The Child Penalty: An International View - Conversable Economist
* Deep Deepseek History and Impact on the Future of AI - next BIG future
Jan 24, 2025 • 28min

🤖 My chat (+transcript) with journalist Nicole Kobie on why the future of tech still hasn’t arrived

Nicole Kobie, a prominent science and technology journalist known for her work with Teen Vogue and Wired, delves into the frustrations surrounding the slow pace of technological advancement. She discusses regulatory hurdles that stunt innovation and critiques the media's hype around technologies like AI and driverless cars. Kobie highlights the historical patterns of risk aversion and how they affect current developments. The conversation also touches on the expected technological landscape by 2035, emphasizing the need for balanced progress amidst pressing global challenges.
Jan 16, 2025 • 30min

⚡ My chat (+transcript) with Virginia Postrel on promoting a culture of dynamism

Big changes are happening: space; energy; and, of course, artificial intelligence. The difference between sustainable, pro-growth change and a retreat back into stagnation may lie in how we implement that change. Today on Faster, Please! — The Podcast, I talk with Virginia Postrel about the pitfalls of taking a top-down approach to innovation, versus allowing a bottom-up style of dynamism to flourish.

Postrel is an author, columnist, and speaker whose scholarly interests range from emerging technology to history and culture. She has authored four books, including The Future and Its Enemies (1998) and her most recent, The Fabric of Civilization: How Textiles Made the World (2020). Postrel is a contributing editor for Works in Progress magazine and has her own Substack.

In This Episode

* Technocrats vs. dynamists (1:29)
* Today’s deregulation movement (6:12)
* What to make of Musk (13:37)
* On electric cars (16:21)
* Thinking about California (25:56)

Below is a lightly edited transcript of our conversation.

Technocrats vs. dynamists (1:29)

I think it is a real thing, I think it is in both parties, and its enemies are in both parties, too, that there are real factional disagreements.

Pethokoukis: There is this group of Silicon Valley founders and venture capitalists who supported President Trump because they felt his policies were pro-builder, pro-abundance, pro-disruption, whatever name you want to use. And then you have this group on the center-left who seem to have discovered that 50 years of regulations make it hard to build EV chargers in the United States. Ezra Klein is one of these people, maybe it's limited to center-left pundits, but do you think there's something going on? Do you think we're experiencing a dynamism kind of vibe shift? I would like to think we are.

Postrel: I think there is something going on. I think there is a real progress and abundance movement. “Abundance” tends to be the word that people who are more Democrat-oriented use, and “progress” is the word used by people who are more — I don't know if they're exactly Republican, but more on the right. They have disagreements, but they represent distinct Up Wing (to put it in your words) factions within their respective parties. And actually, the Up Wing framing is a good way of thinking about it because it includes both people that, in The Future and Its Enemies, I would classify as technocrats (Ezra Klein read the book and says, “I am a technocrat”), who want top-down direction in the pursuit of what they see as progress, and people that I would classify as dynamists, who are more bottom-up and more about decentralized decision-making, price signals, markets, et cetera.

They share a sense that they would like to see the possibility of getting stuff done, of increasing abundance, of more scientific and technological progress, all of those kinds of things. I think it is a real thing, I think it is in both parties, and its enemies are in both parties, too, that there are real factional disagreements. In many ways, it reminds me of the kind of cross-party search for new answers that we experienced in the late ’70s and early ’80s, where . . . the economy was problematic in the ’70s.

Highly problematic.

And there was a lot of thinking about what the problems were and what could be done better, and one thing that came out of that was a lot of the sort of deregulation efforts that come up in the many paeans to Jimmy Carter, who's not my favorite president, but there was a lot of good stuff that happened through a sort of left-right alliance in that period toward opening up markets.

So you had people like Ralph Nader and free-market economists saying, “We really don't need to have all these regulations on trucking, and on airlines, and these are anti-consumer, and let's free things up.” And we reaped enormous benefits from that, and it's very hard to believe how prescriptive those kinds of regulations were back before the late ’70s.

The progress and abundance movement has had its greatest success — although it still has a lot to go — on housing, and that's where you see people who are saying, “Why do we have so many rules about how much parking you can have?” I mean, yes, a lot of people want parking, but if they want parking, they'll demand it in the marketplace. We don't need to say, “You can't have tandem parking.” Every place I've lived in LA would be illegal to build nowadays because of the parking requirements, just to take one example.

Today’s deregulation movement (6:12)

. . . you've got grassroots kind of Trump supporters who supported him because they're sick of regulation. Maybe they’re small business owners, they just don't like being told what to do . . . and it's a coalition, and it's going to be interesting to see what happens.

You mentioned some of the deregulation in the Carter years; that's a real tangible achievement.
Then you also had a lot more Democrats thinking about technology, what they called the “Atari Democrats” who looked at Japan, so there was a lot of that kind of tumult and thinking — but do you think this is more than a moment? Is it kind of a brief fad, or do you think it can turn into something where you can look back in five and 10 years, like wow, there was a shift, big things actually happened?I don't think it's just a fad, I think it’s a real movement. Now, movements are not always successful. And we'll see — we already saw an early blowup over immigration.That's kind of what I was thinking of, it's hardly straightforward.Within the Trump coalition, you've got people who are what I in The Future and Its Enemies would call reactionaries. That is, people who idealize an idea of an unchanging America someplace in the past. There are different versions of that even within the Trump coalition, and those people are very hostile to the kinds of changes that come with bottom-up innovation and those sorts of things.But then you've also got people, and not just people from Silicon Valley, you've got grassroots kind of Trump supporters who supported him because they're sick of regulation. Maybe they’re small business owners, they just don't like being told what to do, so you've got those kinds of people too, and it's a coalition, and it's going to be interesting to see what happens.It's not just immigration, it's also if you wanted to have a big technological future in the US, some of the materials you need to build come from other countries. I think some of them come from Canada, and probably we're not going to annex it, and if you put big tariffs on those things, it's going to hamper people's ability to do things.
This is more of a Biden thing, but the whole Nippon Steel can't buy US Steel and invest huge amounts of money in US plants because, “Oh no, they're Japanese!” I mean it's like back to the ’80s.Virginia, what if we wake up one morning and they've moved the entire plant to Tokyo? We can't let them do that!There’s one thing about steel plants: they're very localized investments. And we have a lot of experience with Japanese investment in the US, by the way, lots of auto plants and other kinds of things. It’s that sort of backward thinking, which, in this case, was a Biden administration thing, but Trump agrees, or has agreed, is not good. And it's not even politically smart, and it's not even pro the workers because the workers who actually work at the relevant plant want this investment because it will improve their jobs, but instead we're creating a monopoly. If things go the way it looks like they will, there will be a monopoly US Steel supplier, and that's not good for the auto industry or anybody else who uses steel.I think if we look back in 2030 at what's happened since 2025, whether this has turned out to be a durable kind of pro-progress, pro-growth, pro-abundance moment, I'll look at how we have reacted to advances in artificial intelligence: Did we freak out and start worrying about job loss and regulate it to death? And will we look back and say, “Wow, it became a lot easier to build a nuclear power plant or anything energy.” Has it become significantly easier over the past five years? How deep is the stasis part of America, and how big is the dynamist part of America, really?Yeah, I think it's a big question.
It's a big question both because we're at this moment of what looks like big political change, we're not sure what that change is going to look like because the Trump coalition and Trump himself are such a weird grab bag of impulses, and also because, as you mentioned, artificial intelligence is on the cusp of amazing things, it looks like.And then you throw in the energy issues, which are related to climate, but they're also related to AI because AI requires a lot of energy. Are we going to build a lot of nuclear power plants? It's conceivable we will, both because of new technological designs for them, but also because of this growing sense — what I see is a lot of elite consensus (and elites are bad now!) that we made a wrong move when we turned against nuclear power. There are still aging Boomer and older environmentalist types who still react badly to the idea of nuclear power, but if you talk to younger people, they are more open-minded because they're more concerned with the climate, and if we're going to electrify everything, the electricity's got to come from someplace. Solar and wind don't get you there.To me, not only is the turnaround in nuclear stunning, but we had one of the most severe accidents only about 10 years ago in Japan, and if you would have asked anybody back then, they're like, “That's the death knell. No more nuclear renaissance in these countries. Japan's done. It's done everywhere.” Yet here we are.And yet, part of that may even be because of that accident, because it was bad, and yet, the long-run bad effects were negligible in terms of actual deaths or other things that you might point to.
It's not like suddenly you had lots of babies being born with two heads or something.What to make of Musk (13:37)I’m glad the world has an Elon Musk, I'm glad we don't have too many of them, and I worry a little bit about someone of that temperament being close to political power.What do you make of Elon Musk?Well, I reviewed Walter Isaacson's biography of him.Whatever your opinion was after you read the biography, has it changed?No, it hasn't. I think he is somebody who has poor impulse control, and some of his impulses are very good. His engineering and entrepreneurial genius are best focused in the world of building things — that is, working with materials, physically thinking about properties of materials and how you could do spaceships, or cars, or things differently. He's a mixed bag, as a lot of these kinds of people are — I'd say he compares well.What do people expect that guy to be like?Compared to Henry Ford, I'd prefer Elon Musk. I’m glad the world has an Elon Musk, I'm glad we don't have too many of them, and I worry a little bit about someone of that temperament being close to political power. It can be a helpful corrective to some of the regulatory impulses because he does have this very strong builder impulse, but I don't think he's a particularly thoughtful person about his limitations or about political concerns.Aside from his particular strange personality, there is a general problem among the tech elite, which is that they overemphasize how much they know. Smart people are always prone to the problem of thinking they know everything because they're smart, or that they can learn everything because they're smart, or that they're better than people because they're smart, and it's just like one characteristic. Even the smartest person on earth can't know everything because there's more knowledge than any one person can have.
That's why I don't like the technocratic impulse, because the technocratic impulse is like, smart people should run the world and they tell you exactly how to do it.To take a phrase that Ruxandra Teslo uses on her Substack, I think weird nerds are really important to the progress of the world, but weird nerds also need to realize that our goal should be to create a world in which they have a place and can do great things, but not a world in which they run everything, because they're not the only people who are valuable and important.On electric cars (16:21)If you look at the statistics, the people who buy electric cars tend to be people who don't actually drive that much, and they're skewed way to high incomes.You were talking about electrification a little earlier, and you've written a little bit about electric cars. Why did you choose to write about electric cars? And it seems like there's a vibe shift on electric cars as well in this country.This is the funny thing, because this January interview was actually scheduled because of a July post I had written on Substack called “Don't Talk About Electric Cars!”It’s as timely as today's headlines.The headline was inspired by a talk that I heard Celinda Lake, the Democratic pollster (been around forever) give at a Breakthrough Institute conference back in June. Breakthrough Institute is part of this sort of Up Wing, pro-progress coalition, but they have a distinct Democrat tilt. And at this conference, there was a panel that was about how to talk about these issues, specifically if you want Democrats to win.She gave this talk where she showed all these polling results where you would say, “The Biden administration is great because of X,” and then people would agree or disagree. And the thing that polled the worst, and in fact the only thing that actually made people more likely to vote Republican, was saying that they had supported building all these electric charging stations.
Celinda Lake's opinion, her analysis of that, digging into the numbers, was that people don't like electric cars, and especially women don't like electric cars, because of concerns about range. Women are terrified of being stranded, that was her take. I don't know if that's true, but that was her take. But women love hybrids, and I think people love hybrids. I think hybrids are very popular, and in fact, I inherited my mother's hybrid because she stopped driving. So I now have a 2018 Prius, which I used to take this very long road trip in the summer where I drove from LA to a conference in Wichita, and then to Red Cloud, Nebraska, and then back to Wichita for a second conference.The reason people don't like electric cars is really a combination of the fact that they tend to cost more than equivalent gasoline vehicles and because they have limited range and you have to worry about things like charging them and how long charging them is going to take.If you look at the statistics, the people who buy electric cars tend to be people who don't actually drive that much, and they're skewed way to high incomes. So I live in this neighborhood in West LA, and it is full of Priuses — I mean it used to be full of Priuses, there's still a lot of Priuses, but it's full of Teslas and it is not typical. And the people in LA who are driving many, many miles are people who have jobs like they’re gardeners, or they're contractors, or they're insurance adjusters and they have to drive all around and they don't drive electric cars. They might very well drive hybrids because you get better gas mileage, but they're not people who have a lot of time to be sitting around in charging stations.I think what's happened is there are some groups of people who see this as a problem to be solved, but then there are a lot of people who see it as more symbolic than not. And they let their ideal, perfect world prevent improvements.
So instead of saying, “We should switch from coal to natural gas,” they say, “We should outlaw fossil fuels.” Instead of saying, “Hybrids are a great thing, great invention, way lower emissions,” they say, “We must have all electric vehicles.” And what will happen, California has this rule, it has this law, that you're not going to be able to sell [non-]electric vehicles in the state after, I think it's 2035, and it's totally predictable what's going to happen: People just keep their gasoline cars longer. We’re going to end up like Cuba with a bunch of old cars.I swear, every report I get from a think tank, or a consultancy, or a Wall Street bank, for years has talked about electric cars, the energy transition, as if it was an absolutely done deal, and maybe it is a done deal over some longer period of time, I don't know, but to me it sort of gets to your point about top-down technocratic impulse — it seems to be failing.And I think that electric cars are a good example of that because there are a lot of people who think electric cars are really cool, they're kind of an Up Wing thing, if you will. It's like a new technology, there’ve been big advances, and exciting entrepreneurs . . . 
and I think a lot of people who like the idea of technological progress like electric cars, and in fact, the adoption of electric cars by people who maybe don't drive a whole lot but have a lot of money, it's not just environmental, cool, or even status, it's partly techno-lust, especially with Teslas.A lot of people who bought Teslas, they're just like people who like technology, but the top-down proclamation that you must have an electric vehicle, and we're going to use a combination of subsidies and bans to force everybody to have an electric vehicle, really doesn't acknowledge the diversity of transportation needs that people have.One way of looking at electric cars, but also the effort to build all these chargers, which has been a failure, the effort to start creating broadband connectivity to all these rural areas — which isn't working very well — there was this lesson learned by people on the center-left, and Ezra Klein, that there was this wild overreaction, perhaps, to environmental problems in the ’60s and ’70s, and the unintended consequence here is that, one, the biggest environmental problem, climate change, may be worse because we don't have nuclear power, and two, now we can't really solve any problems. So it took them 50 years, but they learned a lesson.My concern is to look at what's going on with some of the various Biden initiatives which are taking forever to implement, may be wildly unpopular — will they learn the risk of this top-down technocratic approach, or will they just memory-hole that and move on to their next technocratic approach? Will there be a learning?No, I'm skeptical that there will be. I think that the learning that has taken place — and by the way, I hate that: “a learning,” that kind of thing. . .That's why I said it, because it’s kind of delightfully annoying.The “learning,” gerund, that has taken place is that we shouldn't put so much process in the way of government doing things.
And while I more or less agree with that, in particular, there are too many veto points and it is too easy for a very small group of objectors to hold up, not just private, but also public initiatives that are providing public goods.I think that the reason we got all of these process things that keep things from being done was because of things like urban renewal in the 1960s. And no, it was not just Robert Moses, he just got the big book written about him, but this took place every place where neighborhoods were completely torn down and hideous, brutalist structures were built for public buildings, or public housing, and these kinds of things, and people eventually rebelled against that.I think that yes, there are some people on the center-left who will learn. I do not think Ezra Klein is one of them, but price signals are actually useful things. They convey knowledge, and if you're going to go from one regulatory regime to another, you'll get different results, but if you don't have something that surfaces that bottom-up knowledge and takes it seriously, eventually it's going to break down. It's either going to break down politically or it's just going to waste a lot of money. . . You have your own technocratic streak.Thinking about California (25:56)Everybody uses California fires as an excuse to grind whatever axe they have.But listen, they'd be the good technocrats.Final question: As we're speaking, as we're doing this interview, huge fires are raging sort of north of Los Angeles — how do you feel about the future of California? You live in California. California is extraordinarily important, both to the American economy and to the world as a place of culture, as a place of technology. How do you feel about the state?The state has done a lot of shooting itself in the foot over the last . . . I moved here in 1986, and over that time, particularly in the first decade I was there, things were going great, the state was kind of stupid.
I think if California solves its housing problem and actually allows significant amounts of housing to be built so that people can move here, people can stay here, young people don't have to leave the state, I think that will go a long way. It has made some positive movement in that direction. I think that's the biggest single obstacle.Fires are a problem, and I just recirculated on my Substack something I wrote about understanding the causes of California fires and what would need to be done to stop them.You’ve got to rake that underbrush.I wrote this in 2019, but it's still true: Everybody uses California fires as an excuse to grind whatever axe they have.Some of the Twitter commentary has been less-than-generous toward the people of California and its governor.One of the forms of progress that we take for granted is that cities don't burn regularly. Throughout most of human history, regular urban fires were a huge deal, and one of the things that city governments feared the most was fire and how were they prevented. There's the London fire, and the Chicago fires, and I remember, I just looked up yesterday, there was a huge fire in Atlanta in 1917, which was when my grandparents were children there. I remember my grandparents talking about that fire. Cities used to regularly burn — now they don't, where you have, they call it the “urban wildlife,” I forget what it's called, but there's a place where the city meets up against the natural environment, and that's where we have fires now, so that people like me who live in the concrete are not threatened. It's the people who live closer to nature, or they have more money, have a big lot of land.It's kind of understood what would be needed to prevent such fires. It's hard to do because it costs a lot of money in some cases, but it's not like, “Let's forget civilization. Let's not build anything. 
Let's just let nature take its course.” And one of the problems was that in the 20th century people had the false idea — again, bad technocrats — that you needed to prevent forest fires, that forest fires were always bad, and that is a complete misunderstanding of how the natural world works.California has a great future if it fixes this housing problem. If it doesn't fix its housing problem, it can write off the future. It will be all old people who already have houses.On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
▶ Business
* Google Thinks It Has the Best AI Tech. Now It Needs More Users. - WSJ
* Anduril Picks Ohio for Military Drone Factory Employing 4,000 - Bberg
* A lesson for oligarchs: politics can be deadly - FT Opinion
* EU Needs Deregulation to Keep Up with Trump, Ericsson CEO Says - Bberg
▶ Policy/Politics
* Europe’s ‘super-regulator’ role is under threat - FT Opinion
* Biden’s AI Data Center and Climate Contradiction - WSJ Opinion
* After Net Neutrality: The Return of the States - AEI
* China Has a $1 Trillion Head Start in Any Tariff Fight - WSJ
▶ AI/Digital
* She Is in Love With ChatGPT - NYT
* Meta AI creates speech-to-speech translator that works in dozens of languages - Nature
* AI-designed proteins tackle century-old problem — making snake antivenoms - Nature
* Meta takes us a step closer to Star Trek’s universal translator - Ars
▶ Clean Energy/Climate
* Chris Wright backs aggressive build-out of the US power grid - EEN
* We Have to Stop Underwriting People Who Move to Climate Danger Zones - NYT Opinion
* Has China already reached peak oil? - FT
* Molten salt nuclear reactor in Wyoming hits key milestone - New Atlas
▶ Space/Transportation
* SpaceX catches Super Heavy booster on Starship Flight 7 test but loses upper stage - Space
* Blue Origin reaches orbit on first flight of its titanic New Glenn rocket - Ars
* Jeff Bezos’ New Glenn Rocket Lifts Off on First Flight - NYT
* Blue Origin’s New Glenn rocket reaches orbit in first test - WaPo
* Blue Ghost, a Private U.S. Lunar Lander, Launches to the Moon - SciAm
* Human exploration of Mars is coming, says former NASA chief scientist - NS
▶ Substacks/Newsletters
* TikTok is just the beginning - Noahpinion
* Unstable Diffusion - Hyperdimensional
* Progress's First Principles - Risk & Progress
* How Trump, China & Trade Wars Will Affect the Global AI Landscape in 2025 - AI Supremacy
* After the Green New Deal - Slow Boring
* Washington Must Prioritize Mineral Supply Results Over Political Point Scoring - Breakthrough Journal
Dec 19, 2024 • 27min

🌐 My chat (+transcript) with chaos theorist Doyne Farmer on our interconnected economy

Doyne Farmer, a professor at Oxford's Institute for New Economic Thinking and a pioneer in chaos theory, discusses the fascinating world of complexity economics. He explains how dynamic models can reveal internal economic cycles instead of relying solely on external shocks. Farmer highlights the role of agent-based simulations in addressing crises like COVID-19 and explores innovative solutions for climate change. He also delves into the future of energy, weighing the pros and cons of nuclear power versus renewables in driving economic growth.
Dec 13, 2024 • 27min

🎨 My chat (+transcript) with innovation expert Duncan Wardle on practical tips for corporate creativity

The future will be built on the big ideas we dare to conjure up today. We know that the most groundbreaking ideas often seemed ludicrous or simply impossible when first dreamed up, from the telephone, to human flight, to artificial intelligence. The key was a willingness to be creative and test the limits.While many of us might not consider ourselves creative people, Duncan Wardle assures us that we can take our ideas and brainstorms to the next level, no matter who we are or what we do. Today on Faster, Please! — The Podcast, Wardle and I explore some concrete tools for breaking down our own barriers to innovation and accessing the genius within all of us.Wardle is the former Head of Innovation and Creativity at Disney and founder of ID8. He has delivered multiple TED Talks and teaches innovation Master Classes at Yale, Harvard, and the University of Edinburgh. His interactive book, The Imagination Emporium: Creative Recipes for Innovation, has just been released.In This Episode* Creativity is learnable (1:37)* Building a career of creativity (8:09)* Tools for unlocking innovation (13:50)* Expansionist vs. reductionist tools (18:39)* Gamifying learning (25:20)Below is a lightly edited transcript of our conversation. Creativity is learnable (1:37)I believe we're all born creative with an imagination. We're all born curious. We're all born with intuition. We're all born with empathy. They may not have been the most employable skill of our entire careers. They are now.Pethokoukis: One of my favorite economists, Paul Romer, loves to use recipes as a metaphor to explain how innovation works in an economy. Like cooking recipes, innovation and ideas can be used repeatedly without being used up, you can combine different ideas as ingredients and create something new.
I love that idea, and I love the way you present the book as kind of a recipe book you can sort of dip in and out of to help you be more creative and innovative.How should someone use this book, and who is it broadly for?Wardle: Me. Seriously. When I say me, I mean the busy, normal, hardworking person who says 10 times a day, “I don't have time to think.” And often considered the number one barrier to innovation and creativity: “I don't have time to think.” And I thought, “Okay, when you walk into a business office and you will look around, where's the book?” It's on the bookshelf, it's on the coffee table — nobody reads them. I thought, “Well, that's a waste of their money.” So I thought, “What book have I ever read — nonfiction — that I could read one page, know exactly what I need to do, and don't have to read the rest of the book today?” I thought, “My mom's cookbook! You want shepherd's pie? You go to page 67.” So I've designed the contents page the same way. It says, “Have you ever been to a brainstorm where nothing ever happened? Go to page 14. Fed up with your boss, shooting your ideas down? Go to page 12.”So it is designed to be hop in and hop out, but I also designed the principles around: take the intimidation out of innovation, make creativity tangible for people who are uncomfortable with ambiguity and gray, far more importantly, make it fun, give people tools they choose to use when you and I are not around. I also designed it around this principle and I'll see if this works: Close your eyes for me for a second. How many days are there in September?31?Well, we'll pretend it's 30.Or 30! That's the one thing I always confuse, which is the 30 and the 31.Close your eyes for a second. Just think about how you might have known there were 30 days in September. How might you have remembered? 
What might you have learned or what can you see with your eyes closed?Well, if I was a more melodic, musical person, loved a good rhyme, I might've used that very famous rhyme, which apparently I don't know very well . . .That's okay, neither do I, but I'll attempt it. About 30 percent of people go, “30 days has September, blah, blah, blah, and November.” They've just told me they're an auditory learner. That's their preferred learning style. They probably read a lot. How do I know that? Because when they learned it, they were six. When I asked the question, they learned it because they'd heard it.I'm sure you've seen somebody at some point in your life count their knuckles: January, February, March, April, May, June, July, et cetera. You may not remember this because you might not be a kinesthetic learner. Those are the people who learn by doing. Again, how do we know this? They learned it when they were six. How did they remember it? By doing it.And then 40 percent of an audience would just go, “No, no, I could just see a calendar with a number 30.” They're your visual learners. So I've designed the book to appeal to all three learning styles. It has a QR code in each chapter with a Spotify playlist for the auditory learners, animated videos where Duncan is now an animated character (who knew?) who pops out with a bunch of characters to tell you how to use the tools. And then hopefully, as of next Tuesday, the QR code on the back for kinesthetic learners will allow you to engage with the book and learn kinesthetically through artificial intelligence and ChatGPT and actually ask the book questions.The fundamental conceit of the book, though, is that being innovative, being creative, that can be learned. You can get better at it. Some people say, “I'm not a math person,” which I also don't believe. They'll say, “I'm not a super creative person.
I'm not super innovative.” One, I'm assuming you think that's wrong; and two, you mentioned AI, if people are worried about robots doing more repetitive kinds of tasks, then having the tools to bring out or enhance that imagination seems more important now than ever.There's one thing I firmly believe in: We were all born a human, shockingly enough, and when you were given a gift for a holiday, perhaps, it came in an enormous box and it took you ages of time to take the toy out of the box because the box was the same height as you were. What do you spend the rest of the week playing with?I love a good box.Right? It was your castle, it was your rocket.Love a good box. Oh man, that box can be a time machine, anything.It was anything you wanted it to be until you went to the number one killer of creativity and imagination: western education, and the first thing you were told to do was, “Don't forget to color in between the lines.” Children are very curious. They ask, “Why, why, why, why?” Again, because they're after the insight for innovation. The insight for innovation comes on the sixth or seventh why, not the first one.If I were to survey you and ask you, “Why do you go to Disney on holiday?” people would say they go for the new attractions. But that's not strictly true, is it?So if you say, “Well, why do you go for the new attractions?”“Well, no, I like the classics.”“Well, why do you like the classics?” Why?“I like It's a Small World.”“Well, why do you like It’s a Small World?”“I remember the music.”“Why the music?”“Well, that's my mom's favorite ride. We used to go every summer.”“Why is that important to you 25 years later?”“Oh, I take my daughter now.”There's your insight for innovation. It has nothing to do with the capital investment strategy whatsoever and everything to do with that person's personal memory and nostalgia. But then we go to the number one killer of curiosity: western education.
And the next thing our teacher tells us to do is stop asking “why,” because there's only one right answer.We know when somebody is staring at the back of our head. When you've stared at the back of the head of somebody that you think is really hot, a stranger, they turn around and look at you. You have to look away really quickly. It's okay, we've all done it. We have 120 billion neurons in our first brain and 120 million neurons in our second brain, the brain with which we say we make lots of our decisions, when we say “with our gut.” We are all empathetic.I believe we're all born creative with an imagination. We're all born curious. We're all born with intuition. We're all born with empathy. They may not have been the most employable skill of our entire careers. They are now. Why? Because I've been working with Google on DeepMind with their chief programmer — this is the AI program — and I asked her, “How the hell am I going to compete with this? How will any of us compete with this?” She said, “Well, by developing the things which will be the hardest for her to program into AI.” And I asked her what they were. She said, “The ones with which you were born: creativity, imagination, curiosity, empathy, and intuition.”Will they be programmed one day? Interestingly enough, she said intuition will go first. I was like, oh, that hurt. So I said, “Why intuition?” She said, “It's built on experience and we could build an algorithm that will give them experience.” I'm like, oh, so will they be programmed one day? Perhaps. Anytime in the short term? No.Building a career of creativity (8:09)Your subconscious brain is 87 percent of the capacity. Every innovation you've ever seen, every creative problem you've ever solved, is back here to work as unrelated stimulus, but when the door is shut, you can't access it. So what do I do? I'm playful. I'm deliberately playful.
In a moment, I want to briefly roll through the book, but first I want to ask about your job as the former head of innovation and creativity at Disney, which sounds like a fake job. It sounds like the kind of job someone would dream up and they wish there was such a job. It sounds like a dream job, but that was a real job. And what did you do there? Because it sounds fairly awesome.I finished as Head of Innovation — I didn't start that way. I started as a coffee boy in the London office. In 1986, I used to go and get my boss six cappuccinos a day from Bar Italia, and about three weeks into the role, I was told I would be the character coordinator, the person that looks after the walk-around characters at the Royal Premiere of Who Framed Roger Rabbit in the presence of the Princess of Wales, Diana. I was like, “What do I do?” They said, “Well, you just stand at the bottom of the stairs, Roger Rabbit will come down the stairs, the princess will come in on the receiving line, she'll greet him or blow him off and move into the auditorium.” How could you possibly screw that up? Well, I could. That was the day when I found out what a contingency plan was, because I didn't have one.A contingency plan would tell you, if you're going to bring a very tall rabbit with very long feet down a very large staircase towards the Princess of Wales, one might want to measure the width of the steps first before Roger trips on the top stair, is now hurtling like a bullet, head over feet at torpedo speed directly down the stairs towards Diana's head, whereupon he was taken out by two royal protection officers. There’s a very famous picture of Roger being taken out on the stairs and a 21-year-old PR guy in the background from Disney. “Oh s**t, I'm fired.” I got a call from somebody called a CMO — didn't know who that was, I thought he was going to tell me I was fired.
He goes, “That was great publicity.” I was like, “Wow, I can make a career out of this.”
So for the first 20 years I had some of the more mad, audacious, outrageous ideas for Disney, and then Disney purchased Pixar, then they purchased Marvel, then they purchased Lucasfilm, and we found that we all had different definitions of creativity and different innovation models. I tried four models of innovation.
Number one, I hired an outside consultant and said, “Make me look good.” They were very good at what they did, but they weren't around for execution and they weren't going to show us how they did what they did. They were worried we wouldn't hire them again.
Model number two, innovation team. Duncan will be in charge. What could possibly go wrong? Well, when you have a legal team, nobody outside of legal does legal. When you have a sales team . . . So when you have an innovation team, the subliminal message you've sent to the rest of the organization is: You are off the hook, we've got an innovation team.
Third model was an accelerator program where we were bringing in some young tech startups and taking a 50-50 stake in their business. They could help us bring it to market much quicker than we could. We could help them scale it. But we had failed in the overall goal that Bob Iger had set for us: How might we embed a culture of innovation and creativity into everybody's DNA? So I set out to create a toolkit. A toolkit that takes the intimidation out of innovation, makes creativity tangible, and the process fun. And essentially, that's what the book is. It's not a book, it's a toolkit. Why? Because I want you to use it. It's broken up into creative behaviors, because I think if you don't get the creative behaviors right, the tools won't matter. I think the creative behaviors are the engine, and I'll explain what I mean by that.
Let me ask you a question. Close your eyes if you would?
I've done very poorly on the questions.
Very poorly, but I will continue to answer them.
Where are you usually, and what are you doing when you get your best ideas?
I would say either on walks or, I think a lot of people say, in the shower, one of the two.
There we go. Alright. But here's the thing. I've done it with 20,000 people in the audience. Do you know how many people say at work? Nobody ever says at work. Why do we never have our best ideas at work?
Well, think about that last argument you were in. You turn to walk away from that argument, now you're still a bit angry, but you're beginning to relax, you're 10 seconds away, 20 seconds, and what pops into your brain? The killer one-liner, that one perfect line you wish you'd used during the argument, but you didn't, did you? No. Why? Because when you are in an argument, your brain is moving at a thousand miles an hour defending yourself.
When you're in the office, you're doing emails, reports, quarterly results, and meetings. And I hear myself say, “I don't have time to think.” When you don't have time to think, the door between your conscious and subconscious brain is firmly closed. You're in the brain state called beta, and you're only working with your conscious brain 90 percent of your working day. You can look this up: your conscious brain is 13 percent of the capacity of your brain. Your subconscious brain is 87 percent of the capacity. Every innovation you've ever seen, every creative problem you've ever solved, is back here to work as unrelated stimulus, but when the door is shut, you can't access it. So what do I do? I'm playful. I'm deliberately playful. There's a chapter of energizers in the book. They're 60-second exercises. What are they for? To make you laugh, laughter with purpose.
What's an example of one of those?
Okay, I'll tell you what then, you are the world's leading designer of parachutes for elephants. I will now interview you about your job.
So question, “How did you get into this industry in the first place?”
I was actually interviewing for a different job, I walked in the wrong door, and I ended up interviewing for that job.
Okay, and do you have to use different material for the parachutes? What are the parachutes made of? How big are they? Do you have to make bigger ones for elephants with smaller ears and smaller ones for elephants with big ears, the African and Indian elephants?
Thankfully the kind of material is changing all the time. A lot of advances: graphene, nanotechnology materials. So the kind of material is changing, which actually gives us a lot more flexibility for the kind of material and the sizes, depending, of course, on the size of the elephants and perhaps even their ears, and tails, and tusks.
So we'll stop there. You do that in a room full of people and you'll hear laughter. And the moment I hear laughter, I've opened the door between your conscious and subconscious brain and placed you metaphorically back in the shower where you are when you have your best idea. I don't expect people to be playful every minute of every day. I do expect, particularly leaders, to be playful when they're trying to get other people to open up their brains and have big ideas.
Tools for unlocking innovation (13:50)
If you like breaking rules, this tool is for you. It's about breaking rules metaphorically. So step one, you list the rules of your challenge. Step two, you take one and ask the most audacious question. Step three, you land a big idea.
In the book, you sort of create these three animated characters representing . . . there's Spark who represents creative behaviors; Nova, innovation tools; and then Zing for these energizing exercises. But you sort of need all three of those?
You do, but you don't have to know them all at the same time, and that's the beauty of the book. But here's the thing: I created a character called Archie.
Archie was a direct descendant of Archimedes, because when I ask people where they are when they get their best ideas, they say the shower. Archimedes was in the bath. And my daughter, who’s about 25, walks in the room and she goes, “Dad, he's an old white guy. You are an old white guy. You can't do that s**t anymore.” So I created three new characters. Spark is male, introduces creative behaviors; Zing, gender-neutral, introduces the energizers; and Nova, the brains of the organization, introduces innovation tools. The tools are split between what I call expansionist tools and reductionist tools. The more expertise and the more experience we have, the more reasons we know why the new idea won't work.
But here's the challenge: Up until 2020, we pretty much got away with doing what we did, and then came a global pandemic, enormous climate change, Generation Z entering the workplace who don't want to work for us, and here comes AI. We don't get to think the way we thought four years ago. So the tools are designed specifically to stop you thinking the way you always do and give you permission to think differently.
I'll give you an example of one, it's called “What If.” A lot of people will say, “Oh, but we work in a very heavily regulated industry.” If you like breaking rules, this tool is for you. It's about breaking rules metaphorically. So step one, you list the rules of your challenge. Step two, you take one and ask the most audacious question. Step three, you land a big idea. So for example, it was created by Walt, but that's in the book, I won't go through the whole Walt Disney story because I want people to understand that this tool can work for them too.
There was a very tiny company in Great Britain in the late ’60s, before the days of mass automation, that used to make glasses that we drink out of, and they found too much breakage and not enough production when the glasses were being packaged and shipped.
So they went down to the shop floor, observed the process for eight hours, and just wrote down the rules. Don't think about them, because then you'll think of all the reasons you can't break them, just write them down. So they wrote them down: 26 employees, a conveyor belt, a cardboard box, six glasses on the top, six on the bottom, separated by corrugated cardboard, glasses wrapped in newspaper, employees reading the newspaper. So somebody asked this somewhat provocative “what if” question: “What if we poke their eyes out?” Well, that's against the law and it's not very nice, but because they had the courage to ask the most audacious “what if” question of all, the lady sitting next to them immediately got out of her river of thinking — her expertise and experience — and said, “Well, hang on a minute, why don't we just hire blind people?” So they did. Production up 26 percent, breakage down 42 percent, and the British government gave them a 50 percent salary subsidy for hiring people with disabilities. Simple, powerful, fun.
You just mentioned briefly this notion of the river of thinking, which is sort of your thoughts and the assumptions that come from your lifetime of experience. People, when evaluating ideas, really value their own personal experience. You could have a hundred studies saying this will work, but if something about their personal experience says it won't, they won't listen to it. Now, I believe experience is important, it helps you make judgments, but sometimes I think you're right, that it's an absolute trap that leads us to say no when we should say yes, and yes when we should say no.
So that was one of the expansionist tools. One of the reductionist tools deals with ideas. Ideas are the most subjective thing on the planet. You like pink, I like green, our boss likes yellow, there's a very good chance we're going to be doing the yellow idea. Well, wait a minute, was that the right one targeted for our consumer?
Was it aligned with our brand? So there's a tool called stargazer. I borrowed it with pride from Richard Branson of Virgin. Virgin is the most elastic brand on the planet, right? They've done condoms, they've done space travel, and everything in between. Disney is a non-elastic brand. They do family magical experiences. So how does Virgin decide, of all these ideas they get pitched, which ones to bring to market?
They have a tool, I call it stargazer, it looks like a starfish, it's got five prongs on it, you'll see it in the book, and each one has three criteria, and you can make up your own criteria at the beginning of the project. Let's say, is this a strategic brand fit? Is this aligned with who we stand for as a brand? Is this embedded in consumer truth? Is it relevant to our consumer? Can I get this into the market in the next 18 to 24 months? Is it going to hit my financial goals? And is it socially engaging? Is it going to get people excited? And all you do with all of your ideas at the end is go around those five criteria and ask, does this do a poor job, a good job, or an outstanding job of being aligned with our brand; a poor job, a good job, or an outstanding job of being targeted at our consumer, relevant to our consumer? And then guess what? With different colors for each idea, you join the dots just as you did when you were a kid. And one idea will rise to the top as meeting your criteria and objectives the most, not the one you like the best.
Expansionist vs. reductionist tools (18:39)
I define creativity as the ability to have an idea. We all have hundreds a day. I define innovation as the ability to get it done.
That's the hard part, and that's what the tools are designed to help you with.
Do you think that the book and your approach is most helpful in helping people be more creative and come up with ideas, or in helping other people judge ideas, being open to the right ideas and closed to the wrong ideas?
I think people use confusing terms just to make themselves seem more intelligent. The number of times I've been in a meeting and somebody used an acronym, nobody knows what it is, but nobody's going to put their hand up. I call it expansionist and reductionist, the official name is divergent and convergent, who cares? Expansionist tools are the ones that help you get out of your river of thinking and help you think differently, and the reductionist tools are, okay, now we've got all of these ideas, which one goes to market, how do we take it to market, how do we actually get it done?
A lot of people say, as you said at the beginning, “I'm not creative.” Well, if you define creativity as being a musician or an artist, then guess what? I'm not creative either. I define creativity as the ability to have an idea. We all have hundreds a day. I define innovation as the ability to get it done. That's the hard part, and that's what the tools are designed to help you with.
If you're running a business and you're like, “I want to implement this,” how do you . . . I'm sure you would love this, buy everybody the book, buy everybody three copies of the book. How do you implement it? I mean, I'm just curious how you do that job.
How do I do the job? Or how does the business?
How would someone do that job if they're like, I'm trying to make my workforce more creative, I'm trying to make sure that we are open to good ideas. How do you institute that at an existing business?
Here's a tool that can change a culture overnight: Now you and I have been tasked with coming up with an idea for a birthday party. We've been given $100,000, which is a reasonable budget for a birthday party.
The theme could be Star Wars or Harry Potter. What would you like it to be?
I'd probably go with Star Wars.
Okay, so I'm going to come at you with some amazing ideas for a Star Wars birthday. I'd like you to start each and every response with the words “No, because.” They'll be the first two words you use in each response, and then you'll tell me why not.
So I was thinking of coming to your house, painting your kitchen dark, turning it into the Death Star canteen, and we'll have a food and wine festival from Hoth and Naboo and Tatooine.
No, no, no. We can't do that because I like the way it looks now, I'm worried about repainting it and matching those colors. That's too significant of a change.
What if, then, we just turn the lights out, we do a glow-in-the-dark lightsaber fight full of our favorite alcoholic liquid?
Well, that sounds like a better idea. Am I still supposed to say “no, because”?
“No, because.” Stay on the “no, because.”
No, can't do it. Listen, I worry about those lightsabers breaking, I'll be honest with you, and that alcohol flying all over the place. Also, there are going to be kids there, and I just worry about the alcohol aspect. Because I'm an American, and we're very tight.
So perhaps if there's kids there, we could do a cosplay party, and all the tall people could come as Vader and all the little people could come as Ewoks.
No, because I think some of the tall people would like to be the good guy, and I think some of the people who are not quite as tall might feel we were infantilizing them by turning them into Ewoks.
I'll tell you what, then, we'll do a movie marathon and we'll show all seven films back-to-back with some popcorn and Coke. What do you say?
No, because that would be a really long event. I think people would be super sick of even watching their favorite movies after about two movies, so can't do it.
Alright, so we'll stop there.
When somebody's constantly saying “no, because” to you, how does that make you feel?
Like I really don't feel like coming up with any more ideas, and like they will just not get to “yes.”
And we started there with a food and wine festival and we ended up with showing the movies. Would you say the idea was getting bigger as we were going, or was it getting smaller? Which direction was it?
It was getting progressively smaller and less imaginative.
So let's try that again. Can we do Harry Potter?
Well, I don't know as much, but I'll do my best.
Okay, so have you seen a couple of the films?
Kind of?
You pick the theme, then. What do you want?
Marvel. A beautifully licensed property. Yes, Marvel.
I'm going to come at you with some ideas for a Marvel party. I'd like you to start each and every response this time with the words “yes, and,” and we'll just build it together, okay?
I tell you what, we could do a Spider-Man party where everybody gets those little web things that they could shoot out of their hands, but are actually made out of cotton candy, so we could eat it, we could eat the webs.
Oh yes, and perhaps we could have villain-themed targets to shoot at?
Oh, yes, and we could have a room full of superheroes and a room full of villains, and we have a cosplay party and there'll even be a make-your-own Iron Man suit!
Yes, we can have an Iron Man suit, obviously, and we can have the other costumes, and perhaps some of their other tools, like Thor's hammer, those could somehow also be candy-related.
Oh yes, and we could actually invite the stars of the film, we could have Chris Hemsworth, Robert Downey, Jr., and Chris Pratt, and Rocket, and Groot.
Yes. Love the idea. And perhaps if that's not quite possible —
— That was a “no, because”!
Oh, that sounded like a “no.”
Come on, come on.
We've reached the limits of my creativity.
We'll stop there.
A couple of observations: a lot more laughter, a lot more energy.
Bigger or smaller?
We're taking our steps into an ever-wider world!
We work in big organizations, we work in small organizations, we have colleagues, we have constituencies, we have bosses, we have local regulators, et cetera, to bring on board with our ideas. By the time we just finished building that idea together, whose idea was it by the time we'd finished?
That is lost to the fog of history. It is now a collaborative idea that we both can take credit for when it's a huge success.
Ours. Two very simple words from the world of improv that have the power to turn a small idea into a big one really quickly. You can always value-engineer a big idea back down again, but you can't turn a small idea into a big idea. Far more importantly, it transfers the power of “my idea,” which we know never goes anywhere inside an organization, to “our idea” and accelerates its opportunity to get done.
For people listening today, I'll give you one word of advice to take away: Don't let the words “no, because” be the first two words you use when somebody comes bouncing into your office with an idea you are not thinking of. They may have genius two seconds from now, two weeks from now — they ain't coming back.
Just remind yourselves: I know you have responsibilities, I know you've got deadlines, I know you've got quarterly results. We are not green-lighting this idea for execution today, we are mainly green-housing it together using “yes, and.”
Gamifying learning (25:20)
Gaming is the future of education, there's no question.
That's super valuable advice, actually. Now I have one more question. As you were talking about Western education squashing creativity . . . Do you have any thoughts about how to change that, keeping the best of what we do?
Gamify. Gamify everything. Gaming is the future of education, there's no question. Universities will fall, but why will universities fall?
That's a fairly outrageous statement. Well, let me think. Blue-collar workers, the white-collar workers laughed at them because they didn't go to university. Let me think: people who use their hands, artificial intelligence is probably not taking them out anytime soon. White-collar workers, not so much. Goodbye. Not quite, that's a slight exaggeration, but universities are teaching the same thing that we learned.
So I walk into a classroom, a professor says, “In the year 3 AD, Brutus stabbed Julius Caesar in the back on the steps of the Senate of Rome.” Okay, well, I'm asleep already. However, if I could walk into the Senate in Rome, in virtual reality, or in Apple Vision Pro — hello, thank you very much — walk right up to Julius Caesar and Brutus debating with the senators and say, “Hey Julius, look behind you!”
I tell you for why: My son sat down at the breakfast table many years ago, he was probably about 13 or 14 at the time, and he said, “Do you know the Doge's Palace in Venice was built in 14 . . .” And he went on this whole diatribe. I was like, where the hell did you learn that? He goes, “Oh, Assassin's Creed.” Gaming will annihilate.
See, when you say online training, the first words out of somebody's mouth are, “Boring!” So, what I aim to develop within a year from today is to gamify the Imagination Emporium and actually help people, train them how to be more imaginative using gaming.
On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Micro Reads
▶ Economics
* AI and the Future of Work: Opportunity or Threat? - St. Louis Fed
* Industrial policies and innovation in the electrification of the global automobile industry - CEPR
▶ Business
* What Is Venture Capital Now Anyway? - NYT
* When IBM Built a War Room for Executives - IEEE
▶ Policy/Politics
* How U.S. Firms Battled a Government Crackdown to Keep Tech Sales to China - NYT
* Was mocking Musk a mistake? Democrats think about warmer relationship with the billionaire - Politico
* Recent Immigration Surge Has Been Largest in U.S. History - NYT
* The DOJ’s Misguided Overreach With Google Is An Opportunity for Trump - AEI
* Harding, Coolidge and the Forerunner of DOGE - WSJ Opinion
* We Are All Mercantilists Now - WSJ Opinion
* Exclusive: Trump transition recommends scrapping car-crash reporting requirement opposed by Tesla - Reuters
* Trump’s Treasury Pick Is Poised to Test ‘Three Arrows’ Economic Strategy - NYT
* This Might Be the Last Chance for Permitting Reform - Heatmap
▶ AI/Digital
* Are LLMs capable of non-verbal reasoning? - Ars
* Google’s new Project Astra could be generative AI’s killer app - MIT
* The Mystery of Why ChatGPT Couldn’t Say the Name ‘David Mayer’ - WSJ
* OpenAI’s ChatGPT Will Respond to Video Feeds in Real Time - Bberg
* Google and Samsung’s first AI face computer to arrive next year - Wapo
* Why AI must learn to admit ignorance and say 'I don't know' - NS
* AI Pioneer Fei-Fei Li Has a Vision for Computer Vision - IEEE
* Broadcom soars to $1tn as chipmaker projects ‘massive’ AI growth - FT
* Chip Cities Rise in Japan’s Fields of Dreams - Bberg Opinion
* Tetlock on Testing Grand Theories with AI - MR
* The mysterious promise of the quantum future - FT Opinion
▶ Biotech/Health
* RFK Jr.’s Lawyer Has Asked the FDA to Revoke Polio Vaccine Approval - NYT
* Designer Babies Are Teenagers Now—and Some of Them Need Therapy Because of It - Wired
* The long shot - Science
▶ Clean Energy/Climate
* What has four stomachs and could change the world? - The Economist
* Germany Sees Huge Jump in Power Prices on Low Wind Generation - Bberg
▶ Space/Transportation
* NASA’s boss-to-be proclaims we’re about to enter an “age of experimentation” - Ars
* Superflares once per Century - MPI
* Gwynne Shotwell, the woman making SpaceX’s moonshot a reality - FT Opinion
▶ Substacks/Newsletters
* The Changing US Labor Market - Conversable Economist
* How we'll know if Trump is going to sell America out to China - Noahpinion
* Can RFK Kneecap American Agriculture? - Breakthrough Journal
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Nov 26, 2024 • 28min

✨ My chat (+transcript) with tech policy expert Neil Chilson on regulating GenAI

Washington’s initial thinking about AI regulation has evolved from a knee-jerk fear response to a more nuanced appreciation of its capabilities and potential risks. Today on Faster, Please! — The Podcast, I talk with technology policy expert Neil Chilson about national competition, defense, and federal vs. state regulation in this brave new world of artificial intelligence.
Chilson is the head of AI policy at the Abundance Institute. He is a lawyer, computer scientist, and former chief technologist at the Federal Trade Commission. He is also the author of “Getting Out of Control: Emergent Leadership in a Complex World.”
In This Episode
* The AI risk-benefit assessment (1:18)
* AI under the new Trump Administration (6:31)
* An AGI Manhattan Project (12:18)
* State-level overregulation (15:17)
* Potential impact on immigration (21:15)
* AI companies as national champions (23:00)
Below is a lightly edited transcript of our conversation.
The AI risk-benefit assessment (1:18)
Pethokoukis: We're going to talk a bit about AI regulation, the future of regulation, so let me start with this: Last summer, the Biden administration put out a big executive order on AI. I assume the Trump administration will repeal that and do their own thing. Any idea what that thing will be?
We have a lead on the tech, we have the best companies in the world. I think a Trump administration is really going to amp up that rhetoric, and I would expect the executive order to reflect the need to keep the US in the lead on AI technology.
Chilson: The Biden executive order, repealing it is actually part of the GOP platform, which does not say a lot about AI, but it does say that it's definitely going to get rid of the Biden executive order. I think that's the first order of business. The repeal and replace process . . . the previous Trump administration actually had a couple of executive orders on AI, and they were very big-picture.
They were not nearly as pro-regulatory as the Biden executive order, and they saw a lot of the potential.
I'd expect a shift back towards a vision of AI as a force for good, I'd expect a shift towards the international dynamics here, that we need to keep ahead of China in AI. We have a lead on the tech, we have the best companies in the world. I think a Trump administration is really going to amp up that rhetoric, and I would expect the executive order to reflect the need to keep the US in the lead on AI technology.
That emphasis differs from the Biden emphasis in what way?
The Biden emphasis, when you read the executive order, it has some nice language up top about how this is a great new technology, it's very powerful, but overwhelmingly the Biden executive order is directed at the risks of AI and, in particular, not existential risk, more the traditional risks that academics who have talked about the internet have raised for a long time: these risks of bias, or risks to privacy, or risks to safety, or deepfakes. And to be honest, there are risks to all of these technologies, but the Biden executive order really pounded that home; the emphasis was very much on what are the problems that this tech could cause and what do we as the federal government need to do to get in here and make sure it's safe for everybody?
I would expect that would be a big change. I don't see, especially on the bias front, I don't see a Trump administration emphasizing that as a primary thing that the federal government needs to fix about AI.
In fact, with people like Elon Musk having the ear of the president, I would expect maybe to go in the opposite direction: that these ideas around bias are inflated, that these risks aren't really real, and, to the extent that they are, that it's no business of the federal government to step in and tell companies how to bias or de-bias their products.
One thing that sort of confuses me on the Elon Musk angle is that it seemed that he was — at least used to be — very concerned about these somewhat science-fictional existential risks of AI. I guess my concern is that we'll get that version of Musk again talking to the White House, and maybe he says, “I'm not worried about bias, but I'm still worried about it killing us all.” Is there any concern there, that that theme, which seems to have faded a little bit from the public conversation (maybe I'm wrong), will reemerge?
I agree with you that I think that theme has faded. The early Senate hearings were very much in that vein, they were about the existential risk, and some of that was the people who were up there talking. This is something that's been on the mind of some of the leaders at the cutting edge of the tech space, and it's part of the reason why they got into it. There's always been a tension there. There is some sort of dynamic here where they're like, “This stuff is super dangerous and super powerful, so I need to be the one creating it and controlling it.” I think Musk still kind of falls in that bucket, so I share a little bit of that concern, but I think you're right that Congress has said, “Oh, those things seem really farfetched.
That's not how we're going to focus our time.” I would expect that to continue even with a Musk-influenced administration.
I actually don't think that there is necessarily a big tension between that and a pushback against the sort of red-tape regulatory approach to AI that was kind of the more traditional pessimistic, precautionary approach to technology, generally. I think Musk is a guy who hates red tape. I think he's seen it in his own businesses, how it's slowed down launches of all sorts. I think you can hate red tape and be worried about this existential risk. It's not necessarily in tension, but it'll be interesting to see how those play out, how Musk influences the policy of the Trump administration on AI.
AI under the new Trump Administration (6:31)
One issue that seemed to be coming up over and over again is differing opinions among technologists, venture capitalists, about the open-source issue. How does that play out heading into a Trump administration? When I listen to the Andreessen Horowitz podcast, those guys seem very concerned.
They're going to get this software. They're going to develop it themselves. We can't out-China China. We should lean into what we're really good at, and that is a dynamic software-development environment, of which open source is a key component.
So there's a lot of disagreement about how open source plays out. Open source, it should be pointed out first, is a core technology across everything that people who develop software use. Most websites run on open source software. Most development tools have a huge open source component, and one of the best ways to develop and test technology is by sharing it with people and having people build on it.
I do think it is a really important technology in the AI space.
We've seen that already: people are building smaller models, doing new things in open source that cost a lot of money to do in the first instance in closed source.
The concern that people raise is that this, especially in the national security space or the national competition, sort of exposes our best research to other countries. I think there's a couple of responses to that.
The first one is that closed source is no guarantee that those people don't have that technology as well. In fact, most of these models fit on a thumb drive. Most of these AI labs are not run like nuclear facilities, and it's much easier to smuggle a thumb drive out than it is to smuggle a gram of plutonium or something like that. They're going to get this software. They're going to develop it themselves. We can't out-China China. We should lean into what we're really good at, and that is a dynamic software-development environment, of which open source is a key component.
It also offers, in many ways, an alternative to centralized sources of artificial intelligence models, which can offer a bunch of user interface-based benefits. They're just easier to use. It's much easier to log into OpenAI and use their ChatGPT than it is to download and build your own model, but it is really nice as a competitive gap filler to have thousands and thousands of other models that might do something specific, or have a specific orientation, which you can train on your own. And those exist because of the open source ecosystem. So I think it solves a lot of problems, probably a lot more than it creates.
So what would you expect — let's focus on the federal level — for this Congress, for the Trump administration, to do other than broadly affirm that we love AI, we hope it continues? Will there be any sort of regulatory rule, any sort of guidance, that would in any way constrain or direct this technology?
Maybe it's in the area of the frontier models, I don't know.

I think we're likely to see a lot of action at the use level: What are the various uses of various applications and how does AI change that? So in transportation and healthcare . . . this is a general-purpose technology, and so it's going to be deployed in lots of spaces, and a lot of these spaces already have a lot of regulatory frameworks in place, and so I think we'll see lots of agencies looking to see, “Hey, this new technology, does it really change anything about how we regulate medical devices? If it does, how do we need to accommodate that? What are the unique risks? What are the unique opportunities that maybe the current framework doesn't really allow for?” I think we'll see a lot of that.

I think, once you get up to the abstract model level, it's much harder to figure out both what problem we're trying to solve at the model level and whether we have the capability to solve it at the model level. If we're worried about people developing bioweapons with this technology, is making sure the model doesn't allow that useful? Is it even possible? Or should we focus that attention on making sure people can't secure the components that they need to execute a biohazard? Would that be a more productive place? I don't see a lot of action, honestly, at the model level.

Maybe there'll be some reporting requirements or training requirements. The executive order had those, although they used something called the Defense Production Act — I think probably unconstitutionally, how they used that. But that's going to go away. If that gets filled in by Congress, that there's some sort of reporting regime — maybe that's possible, but Congress doesn't seem to be able to get those types of really high-level tech regulations across the line. They haven't done it with privacy legislation for a long time and everybody seems to think that would be a good idea.

I think we'll continue to see efforts at the agency level.
One thing Congress might do is spend some money in this space, so maybe there will be some new investment, or maybe the national laboratories will get some money to do additional AI research. That has its own challenges, but most of them are financial challenges; they're not so much about whether or not it's going to impede the industry. So that's how I think it'll likely play out at the federal level.

An AGI Manhattan Project (12:18)

A report just came out (yesterday, as we're recording this) from the outside advisory group on US-China relations that advises the government, and they're calling for a Manhattan Project to get to an artificial general intelligence, I assume before China or anybody else. Is that a good idea? Do you think we'll do that? What do you make of that recommendation, which caused quite a stir when it came out?

For the most part, artificial general intelligence, I don't understand what the appeal of that is, frankly . . . Why not train something that could do something specific really well?

Yeah, it's a really interesting report. If you read through the body of the report, it's pretty standard international competitiveness analysis that says, “What are the supply chains for chips? How does it look? How do we compare on talent development? How do we compare on the industry backing investment?” Things like that. And we compare very well, overall, the report says.

But then, all of a sudden at the top level, the first recommendation talks about artificial general intelligence. This is the kind of AI that doesn't exist yet, but it's the kind that could basically do everything a human could do at the intellectual level that a human could do it. It's interesting because that recommendation doesn't seem to be founded on anything that's actually in the report.
There's no other discussion in the report about artificial general intelligence, or how important it is strategically, or anything like that, and yet they want to spend Manhattan Project-level amounts of money — I think in today's dollars, that'd be like $30 billion — to create this artificial general intelligence. I don't know what to make of that, and, more than that, I think it's very unlikely to move the policy discussion. Maybe it moves the Overton window, so people are talking like, “Yeah, we need a Manhattan Project,” but I don't think that it's likely to do anything.

For the most part, artificial general intelligence, I don't understand what the appeal of that is, frankly. It has a sort of theoretical appeal, that we could have a computer that could do all the things that a person could do, but in the modern economy, it's actually better to have things that are really good at doing a specific set of things rather than having a generalist that you can deploy lots of different places, especially if you're talking about software. Why not train something that could do something specific really well? I think that would slot into our economy better. I think it's much more likely to be the most productive use of the intense computation time and money that it takes to train these types of models.

So it seems like a strange thing to emphasize in our federal spending, even if we're talking about the national security implications. It would seem like it'd be much better to train a model that's specifically built for some type of drone warfare or something rather than trying to make it good at everything and then say, “Oh, now we're going to use you to fly drones.” That doesn't seem to make a ton of sense.

State-level overregulation (15:17)

We talked about the federal level. Certainly — and not that the states seem to need a nudge, but if they see Washington doing less, I'm sure there'll be plenty of state governments saying, “Well then we need to do more.
We need to fill up the gap with our state regulation.” That already seems to be happening. Will that continue to happen, and can the Trump administration stop that?

I think it will continue to happen; the question is what kind of gap is left by the Trump administration. I would say what the Biden administration left was a vision gap. They didn't really have an overarching vision for how the US was going to engage with this technology at the federal level, unlike the Clinton administration, which set out a pretty clear vision for how the federal government planned to engage on the early version of the internet. What it said was, for some really good reasons, we're going to let the commercial sector lead on development here.

I think sending a signal like that could have a sort of bully-pulpit effect, especially in redder states. You'll still see states like California and New York, they're listening to Europe on how to do stuff in this space.

Still? Are we still listening to . . . Who are the people out there who think, “They've got it figured out”? I understand that maybe that's your initial impulse when you have a new technology and you're like, “I don't know what to do, so who is doing something on it?” But we've had a little bit of time, and I just don't get anybody who would default to be like, “Man, we're just going to look at a couple of EU white papers and off to the races here in our state.”

I think we're starting to see . . . the shopping of bills that look a lot like the way privacy has worked across the states, and in some cases are being pushed by the same organizations that represent compliance companies saying, “Hey, yeah, we need to do all this algorithmic bias auditing, or safety auditing, and states should require it.”

I think a lot of this is a hangover of the social media fights. AI, if you poll it just at that level, if you're like, “Hey, do you think AI is going to be good or bad for your job or for the economy?” Americans are somewhat skeptical.
It's because they think of AI in the cultural context that includes Terminator, and automation, and so they think of it that way. They don't think about the thousands of applications on their phones that use artificial intelligence.

So I think there's a political moment here around this. The Europeans jumped in and said, “Hey, we're the first to regulate in this space comprehensively.” I think they're dialing that back since some of their member states are like, “Hey, this is killing our own homegrown AI industry.” But for some reason, you're right, California and New York seem to be embracing that, and I think they probably will continue to. At the very local level, at the state level, there are just weird incentives to do something, and then you don't really pay a lot of consequences down the road.

Having said that, there was a controversial bill that was very aggressively pushed, SB 1047, in California over the summer, and it got killed. It got canned by Gavin Newsom in the end. And I think that's a sort of a unique artifact of California's “go along to get along” legislative process, where even people who don't support bills vote for them, kind of knowing that the governor will bring down the veto when it doesn't make political sense.

All of this is to say, California's going to California. What concerns me is, we're starting to see the shopping of bills that look a lot like the way privacy has worked across the states, and in some cases they're being pushed by the same organizations that represent compliance companies saying, “Hey, yeah, we need to do all this algorithmic bias auditing, or safety auditing, and states should require it.”

There's a Texas draft bill that has been floated right now, and you wouldn't think that Texas would be on the frontier of banning differential effects in bias from AI.
It doesn't really sound particularly red-state-y, but these things are getting shopped around, and if it moves in Texas, it'll move other places too. I worry about that level of red tape coming at the state level, and that's just going to be ground warfare on the legislative front at the state level.

So federal preemption: What is that, and how would that work? And is that possible?

It's really hard in this space because the technology is so general. Congress could, of course, write something that was very broad and preempted certain types of regulation of models, and maybe that's a good idea; I've seen some draft language around that.

On the other hand, I do believe in federalism, and these aren't quite the same sort of network-based technologies that only make sense in a national sweep. So maybe there's an argument that we should let states suffer the consequences of their own regulatory approaches. That hurts my heart a little bit just to think about the future, because there are a lot of talented people in those states who are going to find out it's the lawyers who are their main constraint. Those types of transaction costs will slow us down.
I think if it looks like we're falling behind in the US because we can't get out of our own way regulatorily, there will be more impulse to fix things.

There are some other creative solutions, such as interstate compacts, to try to get people to level up across multiple states about how they're going to treat AI and allow innovation to flourish, and so I think we'll see more of those experiments. But it is really hard at the federal level to preempt, just because there are so many state-based interests who are going to push back against that sort of thing.

Potential impact on immigration (21:15)

As far as AI influencing what we do elsewhere — one thing you wrote about recently in a really great essay, which I've already drawn upon in some of these questions, is thinking about immigration and AI talent coming to the United States. Given what I think is now a widely accepted understanding that this is an important technology and we certainly want to be the leader in it — does that change how we think about immigration, at least very high-skilled immigration?

We should be wanting the most talented people to come here and stay here.

I think it should. Frankly, we should have changed our minds about some of this stuff a long time ago. We should be wanting the most talented people to come here and stay here. The most talented people in the world already come here for school often.
When I was in computer science grad school, it was full of people who really desperately wanted to stay in the US and build companies and build products, and some of them struggled really hard to figure out a way to do it legally.

I think that making it easier for those people to stay is essential to keeping not just our lead in the world — I don't want to say it that way; I mean, that's important, I think national competitiveness is sort of underrated, I think that is valuable — but those people are the most productive in the US system, where they can get access to venture capital that's unlike any other part of the planet. They can get access to networks of talent that are unavailable in other parts of the planet. Keeping them here is good for the US, but I think it's good overall for technological development, and we should really, really, really focus on how to make that easier and more attractive.

AI companies as national champions (23:00)

This isn't necessarily a specific AI issue, but again, as you said earlier, it seems like a lot of the debate, initially, is really a holdover from the social media debates about moderation, and bias, and all that, and a lot of those sorts of people, in many cases, and frameworks just got glommed onto AI.

Another aspect is the antitrust, and now we're worried about these big companies owning these platforms, and they're biased.

Do we begin to change how we look at our big companies, who have been leading in AI and doing a lot of R&D — does the politics around Big Tech change if we begin to see them as our vanguard companies that will keep us ahead of China?

. . . in contrast to the Biden sort of “big-is-bad” rhetoric that they sort of leaned into entirely, I think a Trump administration is going to bring more nuance to that in some ways. And I do think that there will be more of a look towards our innovative companies as being the vanguard of what we do in the US.

I think it already has, honestly.
You saw early on, the Senate hearings around AI were totally inflected with the language of social media and that network-effects type of ecosystem. AI does not work like that. It doesn't work the same way. In fact, the feedback loops are so much faster from these models. We saw things like Google Gemini, which had ahistorical renderings of the founding fathers, and that got so much shouting on X and lots of other places that Google very quickly adjusted, tweaked its path. I think we're seeing the toning down of that rhetoric and the recognition that these companies are creating a lot of powerful, useful products, and that they are sort of national champions.

Trump, on the campaign trail, when asked about breaking up Google over ongoing antitrust litigation, was like, “Hold on guys, breaking up these companies might not be in our best interest. There might be other ways we can solve these types of problems.” In contrast to the Biden sort of “big-is-bad” rhetoric that they leaned into entirely, I think a Trump administration is going to bring more nuance to that in some ways. And I do think that there will be more of a look towards our innovative companies as being the vanguard of what we do in the US.

Now, having said that, obviously I think there's tons of AI development that is not inside of these largest companies — in the open-source space, and especially in the application layer, building on top of some of these foundation models — and so I think that ecosystem is also extremely important.
Things that sort of preference the big companies over the small ones, I would have a lot of concerns about. There have been regulatory regimes proposed that, even while opposed by some of the bigger companies, would certainly be possible for them to comply with in a way that small companies would struggle with, and open-source developers just don't have any sort of nexus with which to comply, since there is no actual business model propping that type of approach up. So I'd want to keep it pretty neutral between the big companies, the small companies, and open source, while having the cultural recognition that big companies are extremely valuable to the US innovation ecosystem.

If you had some time with, I don’t know, somebody — the president, the vice president, the Secretary of Commerce — in an elevator going from the first to the 10th floor, and you had to quickly say, “Here's what you need to be keeping in mind about AI over the next two to four years,” what would you say?

I think the number one thing I would say is that, at the state level, we're wrapping a lot of red tape around innovative companies and individuals, and that we need to find a way to clear that thicket or stop it from growing any further. That's the number one challenge that I see facing this.

Secondary to that, I would say the US government needs to figure out how to take advantage of these tools.
The federal government is slow to adopt new technologies, but this technology has a lot of applications to the types of government work that hundreds of thousands of federal employees do every day, and so finding ways to streamline using AI to do the job better I think is really valuable, and I think it would be worth some investment at the federal level to think about how to do that well.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics
* Productivity During and Since the Pandemic - San Francisco Fed
* The Effect of COVID-19 Immigration Restrictions on Post-Pandemic Labor Market Tightness - St. Louis Fed
* Trump Plans Tariffs on Canada, China and Mexico That Could Cripple Trade - NYT

▶ Business
* Nvidia’s new AI audio model can synthesize sounds that have never existed - Ars
* Europe’s Mistral expands in Silicon Valley in hunt for AI staff - FT

▶ Policy/Politics
* Musk Wants $2 Trillion of Spending Cuts. Here’s Why That’s Hard. - WSJ
* AI Governance: From Fears and Fearmongering to Risks and Rewards - AEI
* Newsom says California to offer EV subsidies if Trump kills federal tax credit - WaPo

▶ AI/Digital
* A new golden age of discovery - AI Policy Perspectives
* How Do You Get to Artificial General Intelligence? Think Lighter - Wired
* Is Creativity Dead? - NYT Opinion
* The way we measure progress in AI is terrible - MIT
* AI's scientific path to trust - Axios
* AI Dash Cams Give Wake-Up Calls to Drowsy Drivers - Spectrum

▶ Biotech/Health
* Combining AI and Crispr Will Be Transformational - Wired
* Neuralink Plans to Test Whether Its Brain Implant Can Control a Robotic Arm - Wired
* Scientists are learning why ultra-processed foods are bad for you - Economist

▶ Clean Energy/Climate
* Taxing Farm Animals’ Farts and Burps? Denmark Gives It a Try. - NYT
* These batteries could harness the wind and sun to replace coal and gas - WaPo

▶ Robotics/AVs
* On the Wings of War - NYT

▶ Up Wing/Down Wing
* ‘Genesis’ Review: Rise of the New Machines - WSJ
* The Myth of the Loneliness Epidemic - Asterisk

▶ Substacks/Newsletters
* The Middle Income Trap - Conversable Economist
* America's Productivity Boom - Apricitas Economics
* The Rise of Anthropic powered by AWS - AI Supremacy
* Data to start your week - Exponential View
* Trump's economic team is on a collision course with reality - Slow Boring
* Five Unmanned SpaceX Starships to Mars in 2026 with Thousands of Teslabots - Next Big Future

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Nov 15, 2024 • 28min

🏘️ My chat (+transcript) with economist Bryan Caplan on density and housing deregulation

In this engaging discussion, economist Bryan Caplan, a professor at George Mason University and author of "Build, Baby, Build," dives into the housing crisis in the U.S. He highlights how government regulations have exacerbated affordability issues and stifled mobility. Caplan discusses the rise of the YIMBY movement and the impact of local regulations in places like Texas versus the Bay Area. He further explores the challenges of building new cities and the innovative ideas needed to reshape housing policies for a brighter future.
Oct 31, 2024 • 27min

✨⏩ My chat (+transcript) with ... economist Robin Hanson on AI, innovation, and economic reality

In this episode of Faster, Please! — The Podcast, I talk with economist Robin Hanson about a) how much technological change our society will undergo in the foreseeable future, b) what form we want that change to take, and c) how much we can ever reasonably predict.

Hanson is an associate professor of economics at George Mason University. He was formerly a research associate at the Future of Humanity Institute at Oxford, and is the author of the Overcoming Bias Substack. In addition, he is the author of the 2017 book, The Elephant in the Brain: Hidden Motives in Everyday Life, as well as the 2016 book, The Age of Em: Work, Love, and Life When Robots Rule the Earth.

In This Episode
* Innovation is clumpy (1:21)
* A history of AI advancement (3:25)
* The tendency to control new tech (9:28)
* The fallibility of forecasts (11:52)
* The risks of fertility-rate decline (14:54)
* Window of opportunity for space (18:49)
* Public prediction markets (21:22)
* A culture of calculated risk (23:39)

Below is a lightly edited transcript of our conversation.

Innovation is clumpy (1:21)

Do you think that the tech advances of recent years — obviously in AI, and what we're seeing with reusable rockets, or CRISPR, or different energy advances, fusion, perhaps, even Ozempic — do you think that the collective cluster of these technologies has put humanity on a different path than perhaps it was on 10 years ago?

. . . most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years.

That’s a pretty big standard. As you know, the world has been growing exponentially for a very long time, and new technologies have been appearing for a very long time, and the economy doubles roughly every 15 or 20 years, and that can't happen without a whole lot of technological change, so most people don't notice just how much stuff is changing behind the scenes in order for the economy to double every 15 or 20 years.
So to say that we're going more than that is really a high standard here. I don't think it meets that standard. Maybe the standard it meets is to say people were worried about maybe a stagnation or slowdown a decade or two ago, and I think this might weaken your concerns about that. I think you might say, well, we're still on target.

Innovation's clumpy. It doesn't just come out entirely smooth . . . There are some lumpy ones once in a while, lumpier innovations than usual, and those boost higher than expected sometimes, lower than expected sometimes, and maybe in the last ten years we've had a higher-than-expected clump. The main thing that does is make you not doubt as much as you did when you had the lower-than-expected clump in the previous 10 years or 20 years, because people had seen this long-term history and they thought, “Lately we're not seeing so much. I wonder if this is done. I wonder if we're running out.” I think the last 10 years tells you: well, no, we're kind of still on target. We're still having big important advances, as we have for two centuries.

A history of AI advancement (3:25)

People who are especially enthusiastic about the recent advances with AI, would you tell them their baseline should probably be informed by economic history rather than science fiction?

[Y]es, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.

By technical history! We have 70-odd years of history of AI. I was an AI researcher full-time from ’84 to ’93. If you look at the long sweep of AI history, we've had some pretty big advances. We couldn't be where we are now without a lot of pretty big advances all along the way.
You just think about the very first digital computer in 1950 or something and all the things we've seen since: we have made large advances — and they haven't been completely smooth, they've come in a bit of clumps.

I was enticed into the field in 1984 because of a recent set of clumps then, and for a century, roughly every 30 years, we've had a burst of concern about automation and AI. We've had big concern in the sense people said, “Are we almost there? Are we about to have pretty much all jobs automated?” They said that in the 1930s, they said it in the 1960s — there was a presidential commission in the 1960s: “What if all the jobs get automated?” I jumped in in the late ’80s when there was a big burst there, and I, as a young graduate student, said, “Gee, if I don't get in now, it'll all be over soon,” because I heard, “All the jobs are going to be automated soon!”

And now, in the last decade or so, we've had another big burst, and I think for people who haven't seen that history, it feels to them like it felt to me in 1984: “Wow, unprecedented advances! Everybody's really excited! Maybe we're almost there. Maybe if I jump in now, I'll be part of the big push over the line to just automate everything.” That was exciting, it was tempting, I was naïve, and I was sucked in, and we're now in another era like that. Yes, if you're young, and you haven't seen the world for decades, you might well believe that we are almost there, we're just about to automate everything — but we're not.

I like that you mentioned the automation scare of the ’60s. Just going back and looking at that, it really surprised me how prevalent and widespread it was and how seriously people took it. I mean, you can find speeches by Martin Luther King talking about how our society is going to deal with the computerization of everything. So it does seem to be a recurrent fear.
What would you need to see to think it is different this time?

The obvious relevant parameter to be tracking is the percentage of world income that goes to automation, and that has been creeping up over the decades, but it's still less than five percent.

What is that statistic?

If you look at the percentage of the economy that goes to computer hardware and software, or other mechanisms of automation, you're still looking at less than five percent of the world economy. So it's been creeping up — maybe decades ago it was three percent, even one percent in 1960 — but it's creeping up slowly, and obviously, when that gets to be 80 percent, game over, the economy has been replaced. But that number is creeping up slowly, and you can track it, so when you start seeing that number going up much faster or becoming a large number, then that's the time to say, “Okay, looks like we're close. Maybe automation will, in fact, take over most jobs, when it's getting most of world income.”

If you're looking at economic statistics, and you're looking at different forecasts, whether by the Fed or CBO or Wall Street banks, and the forecasts are, “Well, we expect, maybe because of AI, productivity growth to be 0.4 percentage points higher over this kind of time. . .” Those kinds of numbers, where we're talking about a tenth of a point here, that's not the kind of singularity-emergent world that some people think or hope or expect that we're on.

Absolutely. You've got young, enthusiastic tech people, et cetera, and they're exaggerating. The AI companies, even they're trying to push as big and dramatic an image as they can. And then all the stodgy conservative old folks, they're afraid of seeming behind the times, and not up with things, and not getting it — that was the big phrase in the Internet Boom: Who “gets it” that this is a new thing?

I'm proud to be a human, to have been part of the civilization to have done this . . .
but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.

Now it would be #teamgetsit.

Exactly, something like that. They're trying to lean into it, they're trying to give it the best spin they can, but they have some self-respect, so they're going to give you, “Wow, 0.4 percent!” They'll say, “That's huge! Wow, this is a really big thing, everybody should be into this!” But they can't go above 0.4 percent because they've got some common sense here. But we've even seen management consulting firms over the last decade or so make predictions that 10 years in the future, half of all jobs would be automated. So we've seen this long history of these really crazy, extreme predictions out a decade, and none of those remotely happened, of course.

But people do want to be in with the latest thing, and this is obviously the latest round of technology; it's impressive. I'm proud to be a human, to have been part of the civilization to have done this, and I’d like to try them out, and see what I can do with them, and think of where they could go. That's all exciting and fun, but we've seen that for 70 years: new technologies, we get excited, we try them out, we try to apply them, and that's part of what progress is.

The tendency to control new tech (9:28)

Not to talk just about AI, but do you think AI is important enough that policymakers need to somehow guide the technology to a certain outcome?
Daron Acemoglu, one of the Nobel Prize winners, has for quite some time, and certainly recently, said that this technology needs to be guided by policymakers so that it helps people, it helps workers, it creates new tasks, it creates new things for them to do, not automate away their jobs or automate a bunch of tasks. Do you think that there's something special about this technology that we need to guide it to some sort of outcome?

I think those sorts of people would say that about any new technology that seemed like it was going to be important. They are not actually distinguishing AI from other technologies. This is just what they say about everything.

It could be “technology X”; we must guide it to the outcome that I have already determined.

As long as you've said, “X is new, X is exciting, a lot of things seem to depend on X,” then their answer would be, “We need to guide it.” It wouldn't really matter what the details of X were. That's just how they think about society and technology. I don't see anything distinctive about this, per se, in that sense, other than the fact that — look, in the long run, it's huge.

Space, in the long run, is huge, because obviously in the long run almost everything will be in space, so clearly, eventually, space will be the vast majority of everything. That doesn't mean we need to guide space now or do anything different about it, per se. At the moment, space is pretty small, and it's pretty pedestrian, but it's exciting, and the same for AI. At the moment, AI is pretty small, minor. AI is not remotely threatening to cause harm in our world today. If you look at harmful technologies, this is way down the scale. Demonstrated harms of AI in the last 10 years are minuscule compared to things like construction equipment, or drugs, or even television, really. This is small.

Ladders for climbing up on your roof to clean out the gutters, that's a very dangerous technology.

Yeah, somebody should be looking into that.
We should be guiding the ladder industry to make sure they don't cause harm in the world.

The fallibility of forecasts (11:52)

I'm not sure how much confidence we should ever have in long-term economic forecasts, but have you seen any reason to think that they might be less reliable than they always have been? That we might be approaching some sort of change? That those 50-year forecasts of entitlement spending might be all wrong because the economy's going to be growing so much faster, or longevity is going to be increasing so much faster?

Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.

It was just a little over two centuries ago when the world saw this enormous revolution. Previously, the world had been doubling roughly every thousand years, and that had been going on for maybe 10,000 years, and then, within the space of a century, we switched to doubling roughly every 15 or 20 years. That's a factor of 60 increase in the growth rate, and it happened after a previous transition from foraging to farming, roughly 10 doublings before.

So you might say we can't trust these trends to continue maybe more than 10 doublings, and then who knows what might happen? You could just say — that's 200 years, say, if you double every 20 years — we just can't trust these forecasts more than 200 years out. Look at what's happened in the past after that many doublings: big changes happened, and you might say, therefore, expect, on that sort of timescale, something else big to happen. That's not crazy to say. That's not very specific.

And then if you say, well, what is the thing people most often speculate could be the cause of a big change?
They do say AI, and then we actually have a concrete reason to think AI would change the growth rate of the economy: That is the fact that, at the moment, we make most stuff in factories, and factories typically push out from the factory as much value as the factory itself embodies, in economic terms, in a few months.

If you could have factories make factories, the economy could double every few months. The reason we can't now is we have humans in the factories, and factories don't double them. But if you could make AIs in factories, and the AIs made factories, that made more AIs, that could double every few months. So the world economy could plausibly double every few months when AI has dominated the economy.

That's the magnitude: doubling every few months versus doubling every 20 years. That's a magnitude similar to the magnitude we saw before from farming to industry, and so that fits together as saying, sometime in the next few centuries, expect a transition that might increase the growth rate of the economy by a factor of 100. Now that's an abstract thing in the long frame, it's not in the next 10 years, or 20 years, or something. It's saying that economic modes only last so long, something should come up eventually, and this is our best guess of a thing that could come up, so it's not crazy.

The risks of fertility-rate decline (14:54)

Are you a fertility-rate worrier?

If the population falls, the best models say innovation rates would fall even faster.

I am, and in fact, I think we have a limited deadline to develop human-level AI, after which we won't for a long pause, because falling fertility really threatens innovation rates.
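One crude way to see why innovation could fall faster than population, a toy illustration supplied here rather than a model from the conversation, is that a shrinking population cuts both the number of potential innovators and the size of the market that rewards a new idea, so innovation plausibly scales super-linearly with population. The quadratic exponent below is purely an assumption standing in for that double effect:

```python
def innovation_rate(population, k=1.0, exponent=2.0):
    """Toy super-linear scaling: innovators (~N) times market reward (~N)."""
    return k * population ** exponent

base = innovation_rate(1.0)
shrunk = innovation_rate(0.1)  # population falls by a factor of 10
print(base / shrunk)           # innovation falls by a factor of ~100
```

Any exponent above 1 gives the qualitative result in the quote: a factor-of-10 population decline produces a more-than-factor-of-10 decline in innovation.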
This is something we economists understand that I think most other people don't: You might've thought that a falling population could be easily compensated by a growing economy and that we would still have rapid innovation because we would just have a bigger economy with a lower population, but apparently that's not true. If the population falls, the best models say innovation rates would fall even faster. So say the population is roughly predicted to peak in three decades and then start to fall, and if it falls, it would fall roughly a factor of two every generation or two, depending on which populations dominate, and then if it fell by a factor of 10, the innovation rate would fall by more than a factor of 10, and that means just a slower rate of new technologies, and, of course, also a reduction in the scale of the world economy.

And I think that plausibly also has the side effect of a loss in liberality. I don't think people realize how much it was innovation and competition that drove much of the world to become liberal, because the winning nations in the world were liberal and the rest were afraid of falling too far behind. But when innovation goes away, they won't be so eager to be liberal to be innovative, because innovation just won't be a thing, and so much of the world will just become a lot less liberal.

There's also the risk that — basically, computers are a very durable technology, in principle. Typically we don't make them that durable, because every two years they get twice as good, but when innovation goes away, they won't get good very fast, and then you'll be much more tempted to just make very durable computers. And once the first generation makes very durable computers that last hundreds of years, the next generation won't want to buy new computers, they'll just use the old durable ones as the economy is shrinking, and then the industry that makes computers might just go away.
And then it could be a long time before people felt a need to rediscover those technologies.

I think the larger-scale story is there's no obvious process that would prevent this continued decline, because there's no level at which some process kicks in and makes us say, “Oh, we need to increase the population.” But the most likely scenario is just that the Amish and [Hutterites] and other insular, fertile subgroups who have been doubling every 20 years for a century will just keep doing that and then come to dominate the world, much like Christians took over the Roman Empire: They took it over by doubling every 20 years for three centuries. That's my default future, and then if we don't get AI or colonize space before this decline, which I've estimated would be roughly 70 years’ worth more of progress at previous rates, then we don't get it again until the Amish not only just take over the world, but rediscover a taste for technology and economic growth, and then eventually all of the great stuff could happen, but that could be many centuries later.

This does not sound like an issue that can be fundamentally altered by tweaking the tax code.

You would have to make a large ——

— Large turn of the dial, really turn that dial.

People are uncomfortable with larger-than-small tweaks, of course, but we're not in an era that's at all eager for vast changes in policy; we are in a pretty conservative era that just wants to tweak things. Tweaks won't do it.

Window of opportunity for space (18:49)

We can't even do things like Daylight Saving Time, which some people want to change. You mentioned this window — Elon Musk has talked about a window for expansion into space, and, this is a couple of years ago, he said, “The window has closed before. It's open now. Don't assume it will always be open.”

Is that right? Why would it close? Is it because of higher interest rates? Because the Amish don't want to go to space?
Why would the window close?

I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy.

There's a demand for space stuff, mostly, at the moment, to service Earth, like the internet circling the earth, say, which is Elon's big project to fund his spaceships. And there's also demand for satellites to do surveillance of the earth, et cetera. As the earth economy shrinks, the demand for that stuff will shrink. At some point, they won't be able to afford the fixed costs.

A big question is about marginal costs versus fixed costs. How much is the fixed cost just to have this capacity to send stuff into space, versus the marginal cost of adding each new rocket? If it's dominated by marginal costs and they make the rockets cheaper, okay, they can just do fewer rockets less often, and they can still send satellites up into space. But if there's a key scale that you need to get past even to support this industry, then it's a different thing.

So thinking about a Mars economy, or even a moon economy, or a solar-system economy, you're looking at a scale thing. That thing needs to be big enough to be self-sustaining and economically cost-effective, or it's just not going to work. So I think, unfortunately, we've got a limited window to try to jumpstart a space economy before the earth economy shrinks and isn't getting much value from a space economy. A space economy needs to be big enough just to support itself, et cetera, and that's a problem, because it's the same humans in space who are down here on earth, who are going to have the same fertility problems up there unless they somehow figure out a way to make a very different culture.

A lot of people just assume, “Oh, you could have a very different culture on Mars, and so they could solve our cultural problems just by being different,” but I'm not seeing that.
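The fixed-versus-marginal-cost question above can be framed as a simple break-even calculation. A minimal sketch, where every dollar figure is invented purely for illustration:

```python
import math

def breakeven_launches(fixed_cost, price_per_launch, marginal_cost):
    """Smallest launch count whose margins cover the fixed cost of capacity."""
    margin = price_per_launch - marginal_cost
    if margin <= 0:
        raise ValueError("price must exceed marginal cost")
    # smallest integer n with n * margin >= fixed_cost
    return math.ceil(fixed_cost / margin)

# Hypothetical numbers: $2B/year to keep launch capacity alive,
# $80M revenue and $50M marginal cost per launch.
print(breakeven_launches(2_000e6, 80e6, 50e6))  # 67 launches/year to break even
```

If demand falls below that threshold, the capacity itself stops being affordable, which is the sense in which a shrinking earth economy could close the window even when each marginal rocket is cheap.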
I think they would just have a very strong interconnection with earth culture, because they're going to have rapid, high-bandwidth exchange back and forth, and their fertility culture and all sorts of other culture will be tied closely to earth culture, so I'm not seeing how a Mars colony really solves earth cultural problems.

Public prediction markets (21:22)

The average person is aware that these things, whether it's betting markets or these online consensus prediction markets, that they exist, that you can bet on presidential races, and you can make predictions about a superconductor breakthrough, or something like that, or about when we're going to get AGI. To me, it seems like they have, to some degree, broken through the filter, and people are aware that they're out there. Have they come of age?

. . . the big value here isn't going to be betting on elections, it's going to be organizations using them to make organizational decisions, and that process is being explored.

In this presidential election, there's a lot of discussion that points to them. And people were pretty open to that until Trump started to be favored, and people said, “No, no, that can't be right. There must be a lot of whales out there manipulating, because it couldn't be that Trump's winning.” So the openness to these things often depends on what their message is.

But honestly, the big value here isn't going to be betting on elections, it's going to be organizations using them to make organizational decisions, and that process is being explored. Twenty-five years ago, I invented this concept of using decision markets in organizations, and now, in the last year, I've actually seen substantial experimentation with them, and so I'm excited to see where that goes, and I'm hopeful there, but that's not so much about the presidential markets.

Roughly a century ago, there was more money bet in presidential betting markets than in stock markets at the time.
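Many of these organizational markets run on Hanson's own logarithmic market scoring rule (LMSR), an automated market maker whose prices always sum to one and move with each trade. A minimal sketch; the liquidity parameter b and the trade size are arbitrary choices for illustration:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function: C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices: the softmax of outstanding shares; sums to 1."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def buy(q, outcome, shares, b=100.0):
    """A trader pays C(q') - C(q) to move the market; returns (new q, cost)."""
    q_new = list(q)
    q_new[outcome] += shares
    return q_new, lmsr_cost(q_new, b) - lmsr_cost(q, b)

# A fresh two-outcome market starts at even odds.
q = [0.0, 0.0]
print(lmsr_prices(q))               # [0.5, 0.5]
q, paid = buy(q, 0, 100.0)          # buying outcome 0 pushes its price up
print(round(lmsr_prices(q)[0], 2))  # 0.73
```

The b parameter sets how much money it takes to move the price; a decision market of the kind described here would run one such market per candidate decision and compare the conditional prices.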
Betting markets were very big then, and then they declined, primarily because scientific polling was declared a more scientific approach to estimating elections than betting markets, and all the respectable people wanted to report on scientific polls. And then, of course, the stock market became much, much bigger. The interest in presidential markets will wax and wane, but there's actually not that much social value in having a better estimate of who's going to win an election. That doesn't really tell you who to vote for, so there are other markets that would be much more socially valuable, like predicting the consequences of who's elected as president. We don't really have many markets on those, but maybe we will next time around. But there is a lot of experimentation going on in organizational prediction markets at the moment, compared to, say, 10 years ago, and I'm excited about those experiments.

A culture of calculated risk (23:39)

I want a culture that, when one of these new nuclear reactors, or these nuclear reactors that are restarting, or these new small modular reactors, when there's some sort of leak, or when, on a new SpaceX Starship, some astronaut gets killed, that we just don't collapse as a society. That we're like, well, things happen, we're going to keep moving forward.

Do you think we have that kind of culture? And if not, how do we get it, if at all? Is that possible?

That's the question: Why has our society become so much more safety-oriented in the last half-century? Certainly one huge sign of it is the way we way overregulated nuclear energy, but we've also now been overregulating even kids going to school. Apparently they can't just take their bikes to school anymore, they have to go on a bus because that's safer, and in a whole bunch of ways, we are just vastly more safety-oriented, and that seems to be a pretty broad cultural trend.
It's not just in particular areas, and it's not just in particular countries.

I've been thinking a lot about long-term cultural trends and trying to understand them. The basic story, I think, is we don't have a good reason to believe long-term cultural trends are actually healthy when they are shared trends of norms and status markers that everybody shares. Cultural things that can vary within cultures, like different technologies and firm cultures, those we're doing great on. We have great evolution of those things, and that's why we're having all these great technologies. But things like safetyism are more of a shared cultural norm, and we just don't have good reasons to think those changes are healthy, and they don't fix themselves, so this is just another example of something that's going wrong.

They don't fix themselves because if you have a strong, very widely shared cultural norm, and someone has a different idea, they need to be prepared to pay a price, and most of us aren't prepared to pay that price.

If we had a healthy cultural evolution competition among even nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces everybody.

Right. If, for example, we have 200 countries, if they were actually independent experiments and just had different cultures going different directions, then I'd feel great; then, okay, the cultures that choose too much safety, they'll lose out to the others, and eventually it'll be weeded out. If we had a healthy cultural evolution competition among even nations, this would be fine. The problem is we have this global culture, a monoculture, really, that enforces everybody.

At the beginning of Covid, all the usual public health experts said all the usual things, and then world elites got together and talked about it, and a month later they said, “No, that's all wrong. We have a whole different thing to do.
Travel restrictions are good, masks are good, distancing is good.” And then the entire world did it the same way, and there was strong pressure on any deviation, even on Sweden, which dared to deviate from the global consensus.

If you look at many kinds of regulation, there's very little deviation worldwide. We don't have 200, or even 100, independent policy experiments; we basically have a main global civilization that does it the same, and maybe one or two deviants that are allowed to have somewhat different behavior, but pay a price for it.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Micro Reads

▶ Economics

* The Next President Inherits a Remarkable Economy - WSJ
* The surprising barrier that keeps us from building the housing we need - MIT
* Trump’s tariffs, explained - Wapo
* Watts and Bots: The Energy Implications of AI Adoption - SSRN
* The Changing Nature of Technology Shocks - SSRN
* AI Regulation and Entrepreneurship - SSRN

▶ Business

* Microsoft reports big profits amid massive AI investments - Ars
* Meta’s Next Llama AI Models Are Training on a GPU Cluster ‘Bigger Than Anything’ Else - Wired
* Apple’s AI and Vision Pro Products Don’t Meet Its Standards - Bberg Opinion
* Uber revenues surge amid robust US consumer spending - FT
* Elon Musk in funding talks with Middle East investors to value xAI at $45bn - FT

▶ Policy/Politics

* Researchers ‘in a state of panic’ after Robert F. Kennedy Jr. says Trump will hand him health agencies - Science
* Elon Musk’s Criticism of ‘Woke AI’ Suggests ChatGPT Could Be a Trump Administration Target - Wired
* US Efforts to Contain Xi’s Push for Tech Supremacy Are Faltering - Bberg
* The Politics of Debt in the Era of Rising Rates - SSRN

▶ AI/Digital

* Alexa, where’s my Star Trek Computer? - The Verge
* Toyota, NTT to Invest $3.3 Billion in AI, Autonomous Driving - Bberg
* Are we really ready for genuine communication with animals through AI? - NS
* Alexa’s New AI Brain Is Stuck in the Lab - Bberg
* This AI system makes human tutors better at teaching children math - MIT
* Can Machines Think Like Humans? A Behavioral Evaluation of LLM-Agents in Dictator Games - Arxiv

▶ Biotech/Health

* Obesity Drug Shows Promise in Easing Knee Osteoarthritis Pain - NYT
* Peak Beef Could Already Be Here - Bberg Opinion

▶ Clean Energy/Climate

* Chinese EVs leave other carmakers with only bad options - FT Opinion
* Inside a fusion energy facility - MIT
* Why aren't we driving hydrogen powered cars yet? There's a reason EVs won. - Popular Science
* America Can’t Do Without Fracking - WSJ Opinion

▶ Robotics/AVs

* American Drone Startup Notches Rare Victory in Ukraine - WSJ
* How Wayve’s driverless cars will meet one of their biggest challenges yet - MIT

▶ Space/Transportation

* Mars could have lived, even without a magnetic field - Big Think

▶ Up Wing/Down Wing

* The new face of European illiberalism - FT
* How to recover when a climate disaster destroys your city - Nature

▶ Substacks/Newsletters

* Thinking about "temporary hardship" - Noahpinion
* Hold My Beer, California - Hyperdimensional
* Robert Moses's ideas were weird and bad - Slow Boring
* Trading Places? No Thanks. - The Dispatch
* The Case For Small Reactors - Breakthrough Journal
* The Fourth Industrial Revolution and the Future of Work - Conversable Economist

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
