
Amplifying Cognition
Lee Rainie on being human in 2035, expert predictions, the impact of AI on cognition and social skills, and insights from generalists (AC Ep85)
Episode guests
Podcast summary created with Snipd AI
Quick takeaways
- The evolution of job roles necessitates adapting to new skill sets as AI transforms traditional professions, emphasizing continuous learning and emotional intelligence.
- Expert predictions indicate that while AI may threaten social skills, it will enhance uniquely human attributes like creativity and curiosity, highlighting their future importance.
Deep dives
The Evolution of Jobs and Skills
Jobs are increasingly seen as bundles of skills that evolve over time, reflecting significant changes brought by technology. The historical context of job roles has shifted dramatically, with modern demands transforming the responsibilities associated with traditional titles. For instance, the functions of a nurse today involve advanced technology and applications that were absent a few decades ago. This evolution underscores the need to recognize and adapt to the new skill sets required in various professions as they innovate with AI and other technological advancements.
Expert Predictions and Methodology
The dialogue emphasizes the importance of expert predictions regarding the future of technology and society. Research by Rainie's center has shown that while predictions around digital technology have generally been accurate, they often address imminent trends rather than long-term impacts. A methodology combining quantitative and qualitative insights highlights how expert opinions can reflect a collective knowledge pool that may effectively forecast upcoming trends. This approach reveals that generalists—those familiar with various perspectives—tend to provide more nuanced and insightful predictions than specialists with a narrow focus.
Insights on Being Human in 2035
The report on Being Human in 2035 explores anticipated changes in human traits and skills amidst the rise of AI. While experts project notable negative impacts on social and emotional skills, they also foresee positive effects on creativity, curiosity, and decision-making. These insights underscore the notion that as machines take over technical tasks, uniquely human skills will become even more critical. The conversation highlights the significance of fostering these human traits early in education systems to prepare individuals for future challenges.
Future of Work and Human Identity
As AI capabilities grow, concerns arise about the future of human work and identity, especially regarding the potential for decreased engagement with meaningful tasks. A historical perspective indicates that while technological advancements disrupt job markets, they ultimately lead to societal wealth and adaptability. However, the psychological impact of such transitions can lead to resignation among individuals who feel obsolete. The conversation stresses the need for proactive approaches to ensure that humans can thrive alongside technology, emphasizing the value of continuous learning and emotional intelligence.
“We could become obsolete by our own will—at least a portion of humanity just sort of giving up… But humans want to be valuable, want to be seen, want to be understood, want to be heard, want to think that their life matters. And this raises all sorts of questions about that.”
– Lee Rainie

About Lee Rainie
Lee Rainie is Director of the Imagining the Digital Future Center at Elon University. He joined in 2023 after 24 years of directing the Pew Research Center’s Pew Internet Project, where his team produced more than 850 reports about the impact of major technology revolutions. Lee is co-author of five books about the future of the internet, including “Networked: The New Social Operating System”.
What you will learn
- Imagining the digital future through expert insights
- Reflecting on past predictions about technology and society
- Understanding the human traits most at risk from AI
- Exploring the impact of AI on jobs and identity
- Identifying creativity and curiosity as human advantages
- Confronting the danger of overreliance on machines
- Redefining leadership in a tech-driven world
Transcript
Ross Dawson: Lee, it’s a delight to have you on the show.
Lee Rainie: Thanks so much, Ross. I’m looking forward to it.
Ross: So you are director of the Imagining the Digital Future Center at Elon University. So that sounds like a wonderful initiative. Can you please tell us about it?
Lee: It is a wonderful initiative, and I feel very fortunate to be here studying this subject at this moment. It’s a center at Elon University of North Carolina that grew out of a partnership that I had with Elon in my previous job, when I worked for the Pew Research Center.
There were some interesting, enthusiastic, ambitious professors here who were interested in the digital future, and they basically rolled out the red carpet to me and offered a lot of labor, a lot of brainpower, and a lot of assistance in interviewing experts about the future.
One of the things that happened when I went to Pew in the first place, just at the turn of the millennium, was we were measuring adoption of technology—first the internet, then home broadband, and then a bunch of other things.
But whenever I went out to speak about our findings, the first question from the audience was, “Well, that’s all well and good. You’re looking at the here and now, and fine, dandy, but what’s the next big thing?” Because that’s always the urgent question when you’re thinking about digital technologies.
So I began to work with the professors at Elon to see if experts really had a decent track record in looking at the future.
The first project we did was looking at predictions about the rise of the internet and what it would do in social, political, and economic terms. We found 4,400 predictions that were made between 1990 and 1995 about the internet. And experts were largely on the mark, partly because it wasn’t really so much future questions that they were looking at.
They just knew what was coming out of the labs. They knew what they were working on. They knew what competitors were working on. And so it wasn’t hard to really anticipate the future if you talk to the right people.
So we built a database of experts, and it’s a convenience database. There’s no—this is not a representative sample of all expertise about digital technology. It’s pioneers of the technology, it’s builders of the technology, it’s analysts. A lot of academics are in our database.
And we just started asking, in 2004, about things over the horizon. And it was a wonderful methodology, just to give us insight into the things that were around the corner.
We’re not pretending that it’s quantitatively, scientifically accurate. We marry the methodologies of quantitative and qualitative work. And so it’s basically smart people riffing on the future.
Ross: So I wanted to get to that. Whenever I use the word expert, I always use quotation marks, because who’s an expert? I love what Marshall McLuhan said, to the effect that the expert is the person who stays put, whereas the explorer is the one who continues to explore.
But having said that, of course, some people do know more about particular topics, and if we’re looking into the future, we draw on that. So have you looked back on the reports you’ve done during that period, in terms of the degree to which they reflected what did happen?
Lee: We don’t have a bad track record of predicting things. Often things happen sooner than the time frame we were suggesting to experts.
Sometimes we were criticized for asking questions about—this is happening now, why are you thinking about this as a future issue?
But they predicted the rise of the dominance of mobile connectivity about 15 years before it happened. They predicted the rise of violence-prone extremist groups enabled by digital technologies. They predicted the ways in which the boundary between work and leisure, work and home, work and studies would melt, and some of the consequences of that.
They were also pretty good about looking at the downstream ill effects of social media before they became really evident to the world, starting in the mid-2010s. So it wasn’t bad.
There have been some clunkers in there. And we—there were—we’ve, a couple of times, gone back and we’ve talked to the experts who saw things correctly and said, what were you thinking at the time? Or how did you know?
And we’ve done one specific report on that, but often we just sort of amuse ourselves by doing that.
And actually, to the point you were just making about experts, some of the best predictors here are foxes rather than hedgehogs in the Isaiah Berlin formulation. They are interesting generalists. They have a purchase on any number of angles into these questions, and they’re not wedded to a single worldview or single ideology or a single even frame of mind about whether it’s going to end up well or end up awfully.
And so the foxes are looking good in these surveys.
But again, I think there are interesting limits that we try to be careful about as we release these findings. It’s a convenience sample of experts.
So our database is built on people who make public pronouncements and people who increasingly are in public forums where technology is discussed, or conferences and things like that.
But usually only between 10 and 15% of those we invite answer our questionnaires. It’s totally self-selecting.
It probably skews more heavily towards the academic analysts, who tend to be critics, than towards the tech enthusiasts and the builders.
And it’s—the northern hemisphere is heavily represented here. The global south is not. English speakers, obviously, may find it easier to be dealing with us than others.
So there are all sorts of ways this is not universal. This is not diverse in interesting respects.
At the same time, we do have a diversity of folks who are builders and analysts and people who have long histories with this stuff, and people who are relatively new and critics almost from day one on this stuff.
So we try to be clear about that. But it’s not representative, and it’s not scientific by any stretch of the imagination.
Ross: Yeah, well, it can’t be. When you look at the future, the premise of foresight is that we can’t know. And so every methodology has limited validity. And obviously, this one is valuable here.
One of the points is, for each of your studies, I believe you always try to have one consolidating question, where you have to sort of find yourself on one side or the other. And so essentially, it becomes statistical.
So it’s never, of course, 100% of the experts believe one thing. There is some balance. And so I suppose you are looking for where there are substantial majorities of experts.
And I suppose teasing into the detail of those—and in fact, I think one of the wonderful things about all the reports is you have the full, everything which is said by all of the experts in your report. So you can actually go to the detail, not just the statistical summaries.
But this comes back to this sort of balance of what is meaningful. Is it when more than 60 or 70% of experts lean in a particular way? Is that an indicator that we should be taking into account?
Where does the statistical balance of experts start to be a real guide to what we should be looking for?
Lee: We don’t have any firm rules of thumb about those things. It tends to be that if two-thirds or more of our experts say one thing rather than the other, we treat that as a notable finding.
But the way that we have framed a lot of the findings in the past is as split verdicts. And particularly as we’ve gotten more heavily into analysis of qualitative answers—the essays, basically, or the open-ended answers that people are giving us—they themselves often can have smart things to say on both sides of the question.
And so a lot of times where we find ourselves is trying to say this seems like it’s the more prevalent view among the people that we’re talking to, but there are a lot of nuances and caveats to sound, and just ways in which even the positive stuff can break bad or is moderated by worse kinds of findings.
So there’s a sort of intentional even-handedness to this. Although, as you say, we ask a foundational question, which, in a way, is a wonderful independent piece of analysis for us.
So people who give the more positive answer—we sort, we say, here’s what they’ve said. And those who have given the more negative answer—we say, here’s what they’ve said.
But again, there’s often sort of really interesting interplay between the negative things that positive people feel and the positive things that negative people feel. So we try to summarize all of that, as well as just give voice to a lot of their really smart answers.
Ross: So the moment—I want to get to your fascinating new report, Being Human in 2035, which is a very, I think, relevant topic today.
But first I just want to go back, because for the last 10 years I have been referencing a report the Pew Internet Project ran in 2014, called AI, Robotics, and the Future of Jobs. And I kept on quoting it because, essentially, the defining question was: will there be more jobs or fewer jobs?
And 52% said there were going to be more jobs, and 48% said there would be fewer jobs. I’ve framed that as: positive view of the future of jobs, negative view of the future of jobs.
And in fact, the negative ones were sometimes extraordinarily negative—as in, there’ll be complete devastation of employment. And the positive ones—there were a few sort of saying, “Oh, I’ll be dancing around with the flowers.” More of them would just say, “On balance, it will be good.”
Now it’s now 2025, and we can pretty clearly say that the ones who were on the positive side—the 52% saying we would have more jobs—were right.
And this goes to a time frame issue, of course. Well, maybe all the ones who were extremely negative were right, except that they were 10 years different in horizons.
So we could ask exactly the same question now, with the very same intent. So just love to hear your reflections back now to 2025, since you were on that survey in 2014.
Lee: It’s almost a perfect example of what we were talking about before. It’s one of those beautiful kind of split verdicts that gave voice to both sides of the dynamics that might have occurred.
And in that report, those who were positive—who thought more jobs would be created than destroyed—said, “Look at history.” There have been any number of enormous disruptions in labor forces and basic economies over time, the grandest of which was the Industrial Revolution before the Information Revolution occurred.
And yes, there’s disruption. Yes, there’s pain. A lot of people get hurt in the process, and a lot of jobs—specific jobs—are lost in the process.
But history teaches us you get a wealthier society out of it. The prices of commodities come down, especially the essential stuff that people use, which makes it more affordable, which means more of it can be made to make a profit. And so history just constantly reminds us of the adaptability of human beings and resilience, and that change eventually gets absorbed in interesting ways.
The negative folks—the folks who said history isn’t the good teacher here—basically said a number of things.
First of all, this is different. And I think, arguably, the rise of intelligence of any kind—particularly heading towards artificial general intelligence or even superintelligence—is different from just having information and media change direction or new forms coming into being.
And the other thing that they pointed out, which is still sort of really interesting, although we can’t see the interplay yet as clearly as they were arguing it: there’s never been this much change, this fast, on so many fronts in human history.
So you add the informatics revolutions—and AI being part of that—to the cognitive revolution (we know so much more about the brain, so much faster than we ever used to), the nanotechnology revolution, the genomics revolution.
And so it’s certainly at the level of absorption and being able to manage things well—one of the very cautionary notes they were sounding is, we don’t know how to do this stuff this fast, and create the guardrails and the cautions and the fixes that are going to be necessary as these things play through society.
So, for the moment, yes, more jobs than not.
And what I would do differently now, if I were going to field the same survey, is to talk about job functions rather than jobs themselves.
One of the most striking things that’s happened is that technology has been baked into jobs. And so the thing that used to be called a clerk is different from what a clerk does now. The thing that is called a nurse now is radically different from what a nurse used to be.
And so, if you think about jobs as bundles of skills that earn pay, the bundles of skills inside jobs that have the same name now as they used to have are considerably different in many interesting ways.
Ross: So let’s step forward to the Being Human in 2035 report, which is so fascinating and deeply relevant, very much of the moment in the Zeitgeist and discussion. It’s essentially looking not at jobs, but at what it is to be a human being 10 years from now.
And I suppose the very short summary was that there’s going to be a lot of change, and roughly half of the experts believe the change will be positive and negative in fairly equal measure.
So we would like to dig into some of the specifics, but just like to get your reflections on the top-level findings from the report.
Lee: I’m so glad you’re asking this, particularly in the context of that 2014 report about the state of jobs.
One of the things that we captured in that survey and then got amplified in future AI-related things was the beginning of arguments about, well, how are humans going to survive this onslaught if it turns out not to be good? How are we going to save ourselves, basically?
And I think Erik Brynjolfsson, the great labor economist and now technology integrator, was one of the contributors to this. He, among others, was starting to make the case then that yes, AI will come aboard, and it will show higher levels of intelligence than at least some forms of human intelligence.
And so the way to prepare for that—and the way to make sure people have some meaning out of life and have some work for pay in this life—is to think about what, in the good old days, used to be called soft skills.
So as machines got better and better at coding and math and sort of basic levels of logic, and potentially surpassed human capacity there, the special secret sauce of human beings became things like social and emotional intelligence, and critical thinking, and empathy, and fluid thinking—that sort of adjusting on the fly—and leadership writ large.
You know, it’s hard to think that machines will ever lead humans in any particular way.
So there are things to start stressing now and inculcating—and particularly in institutional connections: K to 12 education, but especially in higher education—that’s the kind of soft skill stuff you should be teaching.
And in a way, we’ve come full circle in the new survey we did, because we put that to the test with our experts.
We sort of said these seem to be—we listed 12 things that are critical human traits and skills and not necessarily replicable by machines, at least at the moment. And how do these experts think now that humans—those 12 traits—will survive and be influenced by AI as it continues to improve in the next decades?
Ross: And yeah, I want to dig into some of those—those 12 specific cognitive and social traits—in a moment.
But again, it comes back, of course, to the balance. On balance, nine of the traits are expected to be more negatively than positively impacted. And interestingly, there are three where the experts believe there’ll be positive impact rather than negative.
And for some traits there are quite large disparities towards the negative. But all of this still depends, of course, on what we do—individually, institutionally, and as a society.
So perhaps these can be warning signals where we can respond so that we mitigate some of the negatives and accentuate the positives.
Lee: Absolutely. I mean, in a way, that was the spirit of this inquiry—was to sort of sound the warnings that experts had, or give voice to the warnings that experts have.
And there’s a pretty strong sense that this isn’t a settled issue yet, that things aren’t inevitable, and humans have enormous capacity for change and plasticity and adaptability. Maybe highlighting the things that they were highlighting would encourage institutions of higher learning and anybody who’s thinking about this to care about it.
So it was interesting to see that there were nine areas where people said that the outcome would be more negative than positive. Let me focus for a moment on the three things where they were more positive than negative, which were creativity, curiosity, and decision making.
Ross: And what about metacognition?
Lee: Metacognition was on the borderline as a negative. But it was the one at the bottom of the negative list, closest—where the delta between the mostly negative and mostly positive folks was the least pronounced. And so I think there’s interesting things to say about that in general.
And even if you add metacognition to the list of positives, what seemed to be the organizing pattern of those positives was a thing that we didn’t ask in the survey. We didn’t ask about leadership, which is on a lot of lists of special human traits that can save our species or make our species still sort of unique and valuable in the world.
And we partly didn’t ask it because it was a hard thing to ask in the context of versus machines—it just didn’t feel like the right thing on our list.
But if you look at those now four things—and I’ll take your point that metacognition is maybe an outlier case—that’s the secret sauce of leadership.
If you’re curious and you are creative, and if you have the capacity to make decisions, especially in environments where you don’t have complete data and you have to sort of weigh a variety of factors and things…
And now metacognition—if you can think about your thinking: Where are my blind spots here? Who else do I have to consult to fill in gaps in knowledge that I have? Crowdsourcing a decision is probably a good thing to do, and that’s a sort of hack for metacognition. Just thinking about how well you think and where things are is kind of represented there.
So in a way, what I think these experts told us, without our specifically asking it, is that great human leadership—in this sort of new sense of it, where it’s inclusive, it’s diverse, it’s deeply crowdsourced, you’re drawing on every capacity of human, social, and emotional intelligence, as well as just informational accuracy—might be this way that we pull ourselves out of whatever the problems are on those other dimensions.
Ross: That is a fantastic and fascinating distillation, which I didn’t—I’ve got to say—I haven’t read every word of the report. It’s pretty long. I didn’t see that point made. And I think that’s really important.
Lee: Well, it’s only dawned on me as I’ve—in talking to you—just sort of, what are the patterns here between the nine negatives and the three positives?
And the three positives are sort of very oriented towards action. You’re doing something, you’re creating something, you’re exploring something.
And the negatives are more—not withdrawn, in a way—it’s sort of internal calculations about social and emotional intelligence, and about empathy, and about critical thinking. Those seem a little bit more abstract and a little bit more—not necessarily of the moment.
And you don’t have any pressure to make a decision. So those are the longer-term human traits that serve them incredibly well. If you’re empathic and have great social intelligence, you’re going to just do yourself and your community a world of good.
But in a way, that’s a little bit—you don’t necessarily go into a decision thinking, what is the empathic response here that in the long term is going to do me good?
I’ve got to make a decision here—creativity, curiosity are going to serve me really well in the moment as I’m doing that. So it’s external, it’s action-oriented in an interesting way.
Ross: So one of the things, which I think is fairly intuitive, is that among the more positive traits are curiosity and the capacity to learn.
And of course, these are extraordinary learning tools—the large language models. And the curiosity is that, well, you can ask anything you want. You can get a half-decent answer.
But the single most negative response—and something there’s a lot of debate about at the moment—is the capacity and willingness to think deeply about complex concepts.
And this is something which goes to something I often say, which is the greatest risk is overreliance, where we start to say, Oh, well, it can do all of our complex thinking for us. We don’t need to do that.
And so it’s just to get your reflections on particularly those most negative aspects that you highlighted.
Lee: I think you’re right in the center of gravity of the expert respondents who gave us their answers. That is the sort of overarching concern that they express when you ask about particular dimensions of human traits.
They just think that some portion of humanity is going to give up or default to the machines because they seem so smart.
And over the time, as I’ve studied technology, there are just always people—people who don’t feel on top of it, and feel daunted by it, or feel like satisficing is a good enough answer.
You know, I don’t necessarily have to take this to the bank and build my life around it, but that seems okay enough.
And so there’s this broad sense, across these 12 dimensions of special human traits, that we could become obsolete by our own will—at least a portion of humanity just sort of giving up.
If you remember the movie—the Pixar movie Wall-E—you know, the civilization up there was fat and happy and didn’t care about things, because all problems were solved, and everything seemed to be humming along just in a nice way.
And no matter what you asked—about social-emotional intelligence, empathy, trust in broad human norms and things like that—there’s this very strong sense that you well articulate: about people giving up or people feeling that they aren’t up to the job of being the sort of co-intelligence that can work with artificial intelligence.
Ross: So one of the really nice things about the report is that, as well as the statistical balance, it highlights the very different and interesting opinions that come out from a number of people. I’m interested in anything that really struck you in the thinking and the ideas presented.
Lee: One of the fun things to do when you get all these expert answers back is to find little gems, little nuggets. And my rule of thumb in highlighting them is: did it make me think, or did it change my sense of what’s possible here? Or was it just brand spanking new, something I’d never heard before?
So we gathered about two dozen of these nuggets, and to sort of pick any number of them that are interesting:
One really fabulous futurist, Paul Saffo, who used to run the Institute for the Future, talked about the first multi-trillion dollar corporation that employs no human workers except legally required executives and a board. It has no offices. It owns no property—physical property. It’s basically run entirely through AI.
It’s a bit fanciful. Who knows whether it’ll be in the multi-trillion dollar level. But you hear now about companies that are basically saying, stop hiring people, start using AI. And so this is sort of, you know, a way in which the future could play out in dramatic form.
Another one of these respondents talked about AI religions and AI affinity formulations that are sort of brand new in the human condition. And so there are ways in which—this respondent talked about deity avatars that get followings and look a lot like cults, and actually speak to the same thing you were just asking about, where the AI dominates the relationship and so deeply understands humans that it can ethically override them and make moral decisions for them. And humans are, you know, outsourcing that kind of stuff.
The final one came from Vint Cerf, a creator and godfather of the internet itself, who wrote the internet protocols with some colleagues.
His prediction was that soon enough, it might be necessary for us to prove in interactions that we’re human. There are going to be so many bots and so many agents representing human beings—looking incredibly like human beings—that there’s going to have to be some scheme for us to prove that we’re the living, breathing wetware that we are, rather than the avatars that are going to be so ubiquitous.
I mean, a lot of people said there are going to be more digital agents operating in the world than there will be human agents. And Vint was speaking to that possibility—that, yeah, we’re just—proof of humanity is going to be one of the things that is going to be part of our interactions in the world.
Ross: Yeah? Well, the thing is, a lot of us will have not just digital twins, but digital triplets and quadruplets. Which one of us is the original, as opposed to all of the copies of us that are manifest?
So I suppose 2025 is a time for asking this question of what it is to be human. I think the reality is that what it is to be human will be different 10 years from now, and even more so beyond that.
The report didn’t cover issues like synthetic biology so much; it focused simply on the impact of AI, and its impact on our cognition. That’s the heart of it.
And that’s so extraordinarily appropriate for interviewing you on the Amplifying Cognition podcast, because that’s exactly what this is about: understanding the impact of these technologies and, where possible, using them as tools to amplify our capabilities.
I think that for each of those nine negatives, if we choose to and take the right approach, we could use these tools to enhance our cognition or social skills.
And I think there’s many people that do find that they are able to, in fact, use tools which they perceive to be enhancing their social relationships, for example.
Lee: Yeah, sort of my favorite edge example of that is the great mystery of consciousness itself. And you can imagine just innumerable ways that brilliant AIs, combined with brilliant, creative explorers of that territory—I mean, maybe we’re going to solve that great mystery about what it is and where it comes from, and its meaning, especially for us as a species.
But throughout the universe, what does that maybe look like?
Then there are sort of lower-order, glorious things to be thinking about. I mean, one of the strong predictions we’ve gotten over the years about the future of AI is the scientific breakthroughs that are going to come from it.
And even at the level of popular consciousness—just general population—there’s such great expectations about medical breakthroughs, and just general provision of medical care. The Global South, among others, might be the biggest beneficiary, potentially, of all of this.
But up and down the healthcare stack—at the diagnostic level, at the treatment level, at the understanding of population dynamics and things like that—it’s interesting that people will separate that out: that we’re just taking care of our wellness, potentially in a magnificent way.
And yet, the other thing that they worry about, almost in the same breath, is how we’re going to find purpose in a world where we’re not paid for our work, or where the meaning of life has to come from other than the traditional sources that a lot of people have built their lives around.
I mean, Americans in particular—their identity is their job, and their purpose in life and meaning in life is their job.
So if the bad outcome eventually comes—that lots of jobs get so changed and so overtaken by AI skills and intelligences—well, humans are smart and creative, so there will be a lot of humans who can figure out how to live their lives wonderfully, with a lot more time to spend on the things that matter and to create the things that have meaning.
But a lot of people are going to potentially fall into that category of becoming complacent and eventually deciding, “Well, I’m obsolete.”
We have a very dramatic set of examples in that—in the deaths of despair in America—where manufacturing jobs have left particular regions of the country, and the suicide rates have risen substantially, the addiction rates have risen substantially, the measures of well-being more generally have declined.
And so we’re now seeing examples, particularly among older white men, of the longevity data going down for the first time in history, after just this amazing story of the past 120, 130, 140 years.
That now, all of a sudden, the slope of the curve has turned on us, and it’s just—it’s a testament to: wow. Humans want to be valuable, want to be seen, want to be understood, want to be heard, want to think that their life matters. And this raises all sorts of questions about that.
Ross: Yeah, these are deep, deep issues. So what is the approximate cadence of your reports? These are big undertakings, of course, so you can’t get them out all the time.
Lee: We do one of these a year now, because it is, you know, it’s a special effort. Plus, we don’t want to wear our experts out.
We’re asking them to think metaphysically and existentially a lot, and they give us a lot of their time and effort, but asking them to do it multiple times a year would be overload.
So our cadence is one a year on these big issues.
But then right now, our immediate plan is to ask the same questions about the same traits and what’s going to happen with AI of the general population.
We’re going to do a real scientific survey of American adults, just to see, in its own terms—that’s going to be interesting—how regular folks think about this.
But there’s always interesting comparative analysis to do about how the elite community—the expert community—sees the world in the future differently from the way regular people do.
And there are just sort of first-order questions that are relatively simple to do research on about what’s going on in this world. Who’s using this stuff? What are they getting out of it? How do they feel about it?
What parts of their life do they feel like they’re becoming dependent on it? Where do they think it’s serving them negatively, or things like that?
So this is the gift that keeps on giving. And there are a lot of very fresh research areas now to apply this to. And we’re not going to do them all, but we do a bunch.
Ross: So where can people find the research reports from this Imagining the Digital Future Center?
Lee: If they search for the Imagining the Digital Future Center—and if they need to narrow it down, they can add Elon University—they can find it there.
And it’s been interesting to try to make our material—we’re a web publisher like everybody else—and so in this new age, we want to get attention for our work, and we want citations of our work, and we want to grow the footprint of the reputation of the center.
And it’s way harder than it used to be now that AI systems are becoming essentially the go-to search functions for a lot of people, and there are hallucinations in the citations. And so sometimes we’re cited well and accurately, and sometimes we’re not.
So it’s an interesting world to be living in—in the promulgation of our information as well as its creation.
Ross: Well, I’m delighted to be able to share the findings with my audience, because I think they’re very important. The reports from both Pew Internet Research and Elon University have always been wonderful, and I always make a point of looking at them.
So next time you do a major report, I’d love to get you back on.
Lee: Thank you, Ross. It’s a wonderful kind thing to say.
The post Lee Rainie on being human in 2035, expert predictions, the impact of AI on cognition and social skills, and insights from generalists (AC Ep85) appeared first on Amplifying Cognition.