

AI & The Future of Humanity: Artificial Intelligence, Technology, VR, Algorithm, Automation, ChatGPT, Robotics, Augmented Reality, Big Data, IoT, Social Media, CGI, Generative-AI, Innovation, Nanotechnology, Science, Quantum Computing: The Creative Process Interviews
The Creative Process Original Series: Artificial Intelligence, Technology, Innovation, Engineering, Robotics & Internet of Things
What are the dangers, risks, and opportunities of AI? What role can we play in designing the future we want to live in? With the rise of automation, what is the future of work? We talk to experts about the roles government, organizations, and individuals can play to make sure powerful technologies truly make the world a better place, for everyone.
Conversations with futurists, philosophers, AI experts, scientists, humanists, activists, technologists, policymakers, engineers, science fiction authors, lawyers, designers, artists, among others.
The interviews are hosted by founder and creative educator Mia Funk with the participation of students, universities, and collaborators from around the world.
Episodes

Jun 29, 2024 • 53min
How is AI Changing Our Perception of Reality, Creativity & Human Connection? w/ HENRY AJDER - AI Advisor
AI advisor Henry Ajder discusses the impact of AI on reality, creativity, and human connection. Topics include deepfakes, AI in art and education, and human creativity versus AI-generated content. The conversation also explores the challenges of responsible AI development and the need for governance in an AI-driven world.

Jun 18, 2024 • 12min
How to Fight for Truth & Protect Democracy in a Post-Truth World? - Highlights - LEE McINTYRE
“When AI takes over with our information sources and pollutes it to a certain point, we'll stop believing that there is any such thing as truth anymore. ‘We now live in an era in which the truth is behind a paywall and the lies are free.’ One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from Holocaust historian Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants. Not necessarily to get you to believe the lie. But to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.”

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and a Senior Advisor for Public Trust in Science at the Aspen Institute. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan. He has taught philosophy at Colgate University, Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston. His books include On Disinformation and How to Talk to a Science Denier and the novels The Art of Good and Evil and The Sin Eater.

https://leemcintyrebooks.com
www.penguinrandomhouse.com/books/730833/on-disinformation-by-lee-mcintyre
https://mitpress.mit.edu/9780262545051/
https://leemcintyrebooks.com/books/the-art-of-good-and-evil/
https://leemcintyrebooks.com/books/the-sin-eater/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 18, 2024 • 55min
On Disinformation: How to Fight for Truth & Protect Democracy in the Age of AI - LEE McINTYRE
How do we fight for truth and protect democracy in a post-truth world? How does bias affect our understanding of facts?

Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and a Senior Advisor for Public Trust in Science at the Aspen Institute. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan. He has taught philosophy at Colgate University, Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston. His books include On Disinformation and How to Talk to a Science Denier and the novels The Art of Good and Evil and The Sin Eater.

“When AI takes over with our information sources and pollutes it to a certain point, we'll stop believing that there is any such thing as truth anymore. ‘We now live in an era in which the truth is behind a paywall and the lies are free.’ One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from Holocaust historian Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants. Not necessarily to get you to believe the lie. But to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.”

https://leemcintyrebooks.com
www.penguinrandomhouse.com/books/730833/on-disinformation-by-lee-mcintyre
https://mitpress.mit.edu/9780262545051/
https://leemcintyrebooks.com/books/the-art-of-good-and-evil/
https://leemcintyrebooks.com/books/the-sin-eater/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 14, 2024 • 13min
How will AI Affect Education, the Arts & Society? - Highlights - STEPHEN WOLFRAM
"Nobody, including people who worked on ChatGPT, really sort of expected this to work. It's something that we just didn't know scientifically what it would take to make something that was a fluent producer of human language. I think the big discovery is that this thing that has been sort of a proud achievement of our species, human language, is perhaps not as complicated as we thought it was. It's something that is more accessible to sort of simpler automation than we expected. And so, people have been asking me, when ChatGPT had come out, we were doing a bunch of things technologically around ChatGPT because kind of what, when ChatGPT is kind of stringing words together to make sentences, what does it do when it has to actually solve a computational problem? That's not what it does itself. It's a thing for stringing words together to make text. And so, how does it solve a computational problem? Well, like humans, the best way for it to do it is to use tools, and the best tool for many kinds of computational problems is tools that we've built. And so very early in kind of the story of ChatGPT and so on, we were figuring out how to have it be able to use the tools that we built, just like humans can use the tools that we built, to solve computational problems, to actually get sort of accurate knowledge about the world and so on. There's all these different possibilities out there. But our kind of challenge is to decide in which direction we want to go and then to let our automated systems pursue those particular directions.”Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist. He is the founder and CEO of Wolfram Research, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language. He received his PhD in theoretical physics at Caltech by the age of 20 and in 1981, became the youngest recipient of a MacArthur Fellowship. Wolfram authored A New Kind of Science and launched the Wolfram Physics Project. He has pioneered computational thinking and has been responsible for many discoveries, inventions and innovations in science, technology and business.www.stephenwolfram.comwww.wolfram.comwww.wolframalpha.comwww.wolframscience.com/nks/www.amazon.com/dp/1579550088/ref=nosim?tag=turingmachi08-20www.wolframphysics.orgwww.wolfram-media.com/products/what-is-chatgpt-doing-and-why-does-it-work/www.creativeprocess.infowww.oneplanetpodcast.orgIG www.instagram.com/creativeprocesspodcast

Jun 14, 2024 • 57min
What Role Do AI & Computational Language Play in Solving Real-World Problems? - STEPHEN WOLFRAM
How can computational language help decode the mysteries of nature and the universe? What is ChatGPT doing and why does it work? How will AI affect education, the arts, and society?

Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist. He is the founder and CEO of Wolfram Research, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language. He received his PhD in theoretical physics at Caltech by the age of 20 and in 1981 became the youngest recipient of a MacArthur Fellowship. Wolfram authored A New Kind of Science and launched the Wolfram Physics Project. He has pioneered computational thinking and has been responsible for many discoveries, inventions, and innovations in science, technology, and business.

“Nobody, including people who worked on ChatGPT, really sort of expected this to work. It's something that we just didn't know scientifically what it would take to make something that was a fluent producer of human language. I think the big discovery is that this thing that has been sort of a proud achievement of our species, human language, is perhaps not as complicated as we thought it was. It's something that is more accessible to sort of simpler automation than we expected. And so, people have been asking me, when ChatGPT had come out, we were doing a bunch of things technologically around ChatGPT because kind of what, when ChatGPT is kind of stringing words together to make sentences, what does it do when it has to actually solve a computational problem? That's not what it does itself. It's a thing for stringing words together to make text. And so, how does it solve a computational problem? Well, like humans, the best way for it to do it is to use tools, and the best tool for many kinds of computational problems is tools that we've built. And so very early in kind of the story of ChatGPT and so on, we were figuring out how to have it be able to use the tools that we built, just like humans can use the tools that we built, to solve computational problems, to actually get sort of accurate knowledge about the world and so on. There's all these different possibilities out there. But our kind of challenge is to decide in which direction we want to go and then to let our automated systems pursue those particular directions.”

www.stephenwolfram.com
www.wolfram.com
www.wolframalpha.com
www.wolframscience.com/nks/
www.amazon.com/dp/1579550088/ref=nosim?tag=turingmachi08-20
www.wolframphysics.org
www.wolfram-media.com/products/what-is-chatgpt-doing-and-why-does-it-work/
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
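The tool-use pattern Wolfram describes, where a model that is good at producing text hands exact computation to an external engine, can be made concrete with a minimal Python sketch. Everything here (the fake_llm stub, the compute tool, the routing logic) is a hypothetical stand-in for illustration, not Wolfram Research's or OpenAI's actual implementation.

import ast
import operator

# A deliberately tiny "computational tool": safely evaluates arithmetic,
# standing in for a real engine such as Wolfram|Alpha.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow}

def compute(expression: str) -> float:
    """Evaluate a pure arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return float(walk(ast.parse(expression, mode="eval").body))

def fake_llm(prompt: str) -> dict:
    """Stub model: requests a tool call for math, otherwise answers in text."""
    if any(ch.isdigit() for ch in prompt):
        expr = "".join(ch for ch in prompt if ch in "0123456789+-*/(). ")
        return {"type": "tool_call", "tool": "compute", "arg": expr.strip()}
    return {"type": "text", "content": "I can chat, but math goes to a tool."}

def answer(prompt: str) -> str:
    """One round of the harness loop: run the model, execute any tool call."""
    move = fake_llm(prompt)
    if move["type"] == "tool_call":
        result = compute(move["arg"])       # the external tool does the math
        return f"The result is {result}."   # a real model would rephrase this
    return move["content"]

print(answer("What is 12 * (3 + 4)?"))  # -> The result is 84.0.

The point of the pattern is the division of labor: the model decides when a move requires a tool, and the tool supplies the exactness the model lacks.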

Jun 10, 2024 • 11min
Can we have real conversations with AI? How do illusions help us make sense of the world? - Highlights - KEITH FRANKISH
“Generative AI, particularly Large Language Models, they seem to be engaging in conversation with us. We ask questions, and they reply. It seems like they're talking to us. I don't think they are. I think they're playing a game very much like a game of chess. You make a move and your chess computer makes an appropriate response to that move. It doesn't have any other interest in the game whatsoever. That's what I think Large Language Models are doing. They're just making communicative moves in this game of language that they've learned through training on vast quantities of human-produced text.”

Keith Frankish is an Honorary Professor of Philosophy at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete. Frankish works mainly in the philosophy of mind and has published widely on topics such as human consciousness and cognition. Profoundly inspired by Daniel Dennett, Frankish is best known for defending an “illusionist” view of consciousness. He is the editor of Illusionism as a Theory of Consciousness and a co-editor of, among other volumes, The Cambridge Handbook of Cognitive Science.

www.keithfrankish.com
www.cambridge.org/core/books/cambridge-handbook-of-cognitive-science/F9996E61AF5E8C0B096EBFED57596B42
www.imprint.co.uk/product/illusionism
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

Jun 10, 2024 • 57min
Is Consciousness an Illusion? with Philosopher KEITH FRANKISH
Is consciousness an illusion? Is it just a complex set of cognitive processes without a central, subjective experience? How can we better integrate philosophy with everyday life and the arts?

Keith Frankish is an Honorary Professor of Philosophy at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete. Frankish works mainly in the philosophy of mind and has published widely on topics such as human consciousness and cognition. Profoundly inspired by Daniel Dennett, Frankish is best known for defending an “illusionist” view of consciousness. He is the editor of Illusionism as a Theory of Consciousness and a co-editor of, among other volumes, The Cambridge Handbook of Cognitive Science.

“Generative AI, particularly Large Language Models, they seem to be engaging in conversation with us. We ask questions, and they reply. It seems like they're talking to us. I don't think they are. I think they're playing a game very much like a game of chess. You make a move and your chess computer makes an appropriate response to that move. It doesn't have any other interest in the game whatsoever. That's what I think Large Language Models are doing. They're just making communicative moves in this game of language that they've learned through training on vast quantities of human-produced text.”

www.keithfrankish.com
www.cambridge.org/core/books/cambridge-handbook-of-cognitive-science/F9996E61AF5E8C0B096EBFED57596B42
www.imprint.co.uk/product/illusionism
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast
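Frankish's chess analogy tracks how such models actually generate text: at each step the system scores possible next moves (tokens) learned from human-produced text and plays one, with no further stake in the exchange. Here is a toy Python illustration, assuming a tiny bigram table as a stand-in for a real neural language model; the corpus and function names are invented for this sketch.

import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the cat saw the dog . "
          "the dog sat on the rug .").split()

# "Training": record which word follows which in the human-produced text.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def play_language_game(opening: str, moves: int = 8) -> str:
    """Generate text purely by making the next statistically licensed move."""
    word, produced = opening, [opening]
    for _ in range(moves):
        options = follows.get(word)
        if not options:               # no learned move from here: stop playing
            break
        word = random.choice(options) # play a move seen in the training data
        produced.append(word)
    return " ".join(produced)

random.seed(0)
print(play_language_game("the"))  # e.g. "the cat sat on the mat . the dog"

Nothing in the loop represents an interest in the conversation; each step is just a move licensed by the training text, which is the force of the analogy.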

May 17, 2024 • 10min
What can AI teach us about human cognition & creativity? - Highlights - RAPHAËL MILLIÈRE
“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

May 17, 2024 • 1h 1min
How can we ensure that AI is aligned with human values? - RAPHAËL MILLIÈRE
How can we ensure that AI is aligned with human values? What can AI teach us about human cognition and creativity?

Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

“I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

https://raphaelmilliere.com
https://researchers.mq.edu.au/en/persons/raphael-milliere
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast

May 14, 2024 • 16min
Is understanding AI a bigger question than understanding the origin of the universe? - Highlights - NEIL JOHNSON
“It gets back to this core question. I just wish I was a young scientist going into this because that's the question to answer: Why AI comes out with what it does. That's the burning question. It's like it's bigger than the origin of the universe to me as a scientist, and here's the reason why. The origin of the universe, it happened. That's why we're here. It's almost like a historical question asking why it happened. The AI future is not a historical question. It's a now and future question. I'm a huge optimist for AI, actually. I see it as part of that process of climbing its own mountain. It could do wonders for so many areas of science, medicine. When the car came out, the car initially is a disaster. But you fast forward, and it was the key to so many advances in society. I think it's exactly the same as AI. The big challenge is to understand why it works. AI existed for years, but it was useless. Nothing useful, nothing useful, nothing useful. And then maybe last year or something, now it's really useful. There seemed to be some kind of jump in its ability, almost like a shock wave. We're trying to develop an understanding of how AI operates in terms of these shockwave jumps. Revealing how AI works will help society understand what it can and can't do and therefore remove some of this dark fear of being taken over. If you don't understand how AI works, how can you govern it? To get effective governance, you need to understand how AI works because otherwise you don't know what you're going to regulate.”

How can physics help solve messy, real-world problems? How can we embrace the possibilities of AI while limiting existential risk and abuse by bad actors?

Neil Johnson is a physics professor at George Washington University. His new initiative in Complexity and Data Science at the Dynamic Online Networks Lab combines cross-disciplinary fundamental research with data science to attack complex real-world problems. His research interests lie in the broad area of Complex Systems and ‘many-body’ out-of-equilibrium systems of collections of objects, ranging from crowds of particles to crowds of people, and from environments as distinct as quantum information processing in nanostructures to the online world of collective behavior on social media.

https://physics.columbian.gwu.edu/neil-johnson
https://donlab.columbian.gwu.edu
www.creativeprocess.info
www.oneplanetpodcast.org
IG www.instagram.com/creativeprocesspodcast