Artificiality: Being with AI

Helen and Dave Edwards
Feb 26, 2023 • 35min

ChatGPT: Why does it matter, how special is it, and what might be ahead?

Why does ChatGPT matter?
* People always get excited about AI advances, and this one is accessible in a way that others weren’t in the past.
* People can use natural language to prompt a natural language response.
* It’s seductive because it feels like synthesis.
* And it can feel serendipitous.

But…
* We need to remember that ChatGPT and all other generative AI are tools, and they can fail us.
* While it may feel serendipitous, that serendipity is more constrained than it may feel.

Some other ideas we cover:
* The research at Google, OpenAI, Microsoft, and Apple gives us some context for evaluating how special ChatGPT actually is and what might be ahead.
* The current craze about prompt engineering.

What we’re reading:
* Raghuveer Parthasarathy’s So Simple a Beginning
* Don Norman’s Design for a Better World
* Jamer Hunt’s Not to Scale
* Ann Pendleton-Jullian & John Seely Brown’s Design Unbound

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world. #ai #artificialintelligence #generativeai #airesearch #complexity #futureofai
Feb 19, 2023 • 1h 35min

David Krakauer: Complexity

We’re always looking for new ideas from science that we can use in our work. Over the past few years, we have been researching new ways to handle increasing complexity in the world and how to solve complex problems. Why do we seem to see emergent, adaptive, open, and networked problems more often? And why don’t they yield to traditional problem-solving techniques? Our research has centered on complexity science and understanding how to apply its lessons to problem solving. Complexity science teaches us about the nature of complex systems including the nervous system, ecosystems, economies, social communities, and the internet. It teaches us ways to identify opportunities for change through metaphor, models, and math, and ways to synchronize change through incentives.

The Santa Fe Institute has been at the center of our complexity research journey. Founded in 1984, SFI is the leading research institute on complexity science. Its researchers endeavor to understand and unify the underlying, shared patterns in complex physical, biological, social, cultural, technological, and even possible astrobiological worlds. We encourage anyone interested in this topic to wander through the ample and diverse resources on the SFI website, SFI publications, and SFI courses.

We had the pleasure of digging into complexity science and its applications with one of the leading minds in complexity, David Krakauer, who is President and William H. Miller Professor of Complex Systems at SFI. David’s research explores the evolution of intelligence and stupidity on Earth. This includes studying the evolution of genetic, neural, linguistic, social, and cultural mechanisms supporting memory and information processing, and exploring their shared properties. He served as the founding director of the Wisconsin Institutes for Discovery, the co-director of the Center for Complexity and Collective Computation, and professor of mathematical genetics, all at the University of Wisconsin–Madison. He has been a visiting fellow at the Genomics Frontiers Institute at the University of Pennsylvania, a Sage Fellow at the Sage Center for the Study of the Mind at the University of California, Santa Barbara, a long-term fellow of the Institute for Advanced Study, and a visiting professor of evolution at Princeton University. A graduate of the University of London, where he earned degrees in biology and computer science, Dr. Krakauer received his D.Phil. in evolutionary theory from Oxford University.

Learn more about SFI. Learn more about David Krakauer.

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world. #ai #artificialintelligence #generativeai #airesearch #complexity #futureofai
Feb 12, 2023 • 30min

Generative AI: ChatGPT, DALL-E, Stable Diffusion, and the rest

Everyone’s talking about it, so we will too. Generative AI is taking the world by storm. But is it a good storm or a scary storm? How should individuals think about what’s possible? What about companies?

Our take: generative AI is hugely powerful but will always have flaws and potholes. As probabilistic systems, these models will always produce errors—how will you plan for that? As systems trained on everything on the internet, they are essentially stealing IP from everyone, everywhere—how do you feel about participating in that theft?

We’ve spent years witnessing companies take an “if we build it (data, analytics, AI), people will use it” approach—and fail. Digital transformation doesn’t happen successfully by itself. Digital transformation is actually all about people. Companies that succeed in integrating data, analytics, and AI are those that undertake thoughtful change management programs to help people understand how to integrate these technologies into their complex human systems. Generative AI is really exciting. But our prediction is that companies will need to undertake thoughtful change management to ensure they get the best out of these new AI technologies, not the worst.

Nudges of the week
Helen: Synthesize Later. Integrate argument and counter-argument into a decision. Good decisions involve reconciling subjective judgments and resolving clashing causal forces. The best way to do this is to be deliberate and conscious of the need to synthesize. Schedule a meeting titled “synthesis” and set expectations that now is the moment to step slowly through each point of view, iterate, and nudge each side. Have each side make a list of the things that would bring them toward each other. Failing to do this contributes to a sense that the decision is stuck.
Dave: Explain, Teach, Pitch. Explanations and the stories that link cause and effect play a key role in allowing us to adapt flexibly to a changing world. Explaining our decisions is a generative act. We learn more about our own motivations and knowledge. Explanation is active and can help us when we need to rethink, reevaluate, and deal with regret. Teaching is set apart from explanation because good teaching also relies on empathy. A good teacher understands where the student is in their learning process and adjusts their teaching to fit the mental model of the learner.

What We’re Learning
Helen: Joined-Up Thinking by Hannah Critchlow. A great summary of the state of the science about how we can build our collective intelligence. A delightful read that Helen highly recommends.
Dave: Don Norman’s next book. We will be interviewing Don in the next few weeks! Stay tuned for that interview with one of Dave’s heroes.

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world. #ai #artificialintelligence #generativeai #airesearch #complexity #futureofai
Jan 29, 2023 • 1h 3min

Kees Dorst: Frame Innovation

What can we learn from the practice of design? What might we learn if we had insight into top designers’ minds? How might we apply the best practices of designers beyond the field of design itself? Most of our listeners are likely familiar with design thinking—what other practices should we learn about and understand?

To answer these questions, we talked with Kees Dorst about his books, Frame Innovation and Notes on Design, to discover his views on the creative processes of top designers and understand his practice of frame innovation. We enjoyed both books and found insights that extend well beyond design into all areas of problem solving. We are particularly interested in applying frame innovation in our complex problem-solving sprints and consulting practice.

Kees Dorst is Professor of Transdisciplinary Innovation at the University of Technology Sydney’s TD School. He is considered one of the lead thinkers developing the field of design, valued for his ability to connect a philosophical understanding of the logic of design with hands-on practice. As a bridge-builder between these two worlds, his writings on design as a way of thinking are read by both practitioners and academics. He has written several bestselling books in the field: ‘Understanding Design’ (2003, 2006), ‘Design Expertise’ (with Bryan Lawson, 2013), ‘Frame Innovation’ (2015), ‘Designing for the Common Good’ (2016), and ‘Notes on Design – How Creative Practice Works’ (2017).

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world. #ai #artificialintelligence #generativeai #airesearch #complexity #futureofai
Dec 2, 2022 • 27min

No-duhs and some surprises

The latest Big Ideas report from MIT Sloan and BCG makes for an interesting read but contains flaws and obvious conclusions, and raises more questions than it answers. We discuss this report and make some suggestions about how to think about AI based on the survey’s conclusions:
* Trust matters (no-duh). The data suggests that if people trust AI they will use it twice as much.
* The ability to override the AI matters (no-duh). The data suggests that if people can override the AI they will use it twice as much.
* People describe an AI as a co-worker, but the majority of people don’t even know they are using it. Huh?

Another surprise is that people like AI that means they don’t have to talk to their boss. Who would have anticipated that?

Nudges of the week
Helen: Synthesize Later. Integrate argument and counter-argument into a decision. Good decisions involve reconciling subjective judgments and resolving clashing causal forces. The best way to do this is to be deliberate and conscious of the need to synthesize. Schedule a meeting titled “synthesis” and set expectations that now is the moment to step slowly through each point of view, iterate, and nudge each side. Have each side make a list of the things that would bring them toward each other. Failing to do this contributes to a sense that the decision is stuck.
Dave: Be Less Wrong. Let go of perfectionism and feel the relief of knowing that by striving to be less wrong, you’ll probably end up being more right.

What We’re Learning
Helen: The Neuroscience of You by Chantel Prat. Delivers on the promise of showing you how your brain is different. A really fun and engaging book to read and do all the tests.
Dave: Learning from Helen! He’s been reading the first draft of our next book, Solve Better Problems: How to Solve Complex Problems in the Digital Age. Complexity really is a different animal and it’s mind-opening to understand why.

If you enjoy our podcasts, please subscribe on Substack or your favorite podcast platform. And please leave a positive rating or comment—sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Seriously, a review on Apple Podcasts is a big deal!

And if you like how we think, then contact us about our speaking and workshops, and human-centered product design. You can learn more about us at getsonder.com and you can contact us at hello@getsonder.com.

You can learn more about making better decisions in our book, Make Better Decisions: How to Improve Your Decision-Making in the Digital Age. The book is an essential guide to practicing the cognitive skills needed for making better decisions in the age of data, algorithms, and AI. Please check it out at MBD.zone and purchase it from Amazon, Bookshop.org, or your favorite local bookstore. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world
Nov 18, 2022 • 27min

Elon's error calculation at Twitter

Twitter as we knew it is gone. Elon has fired half the full-time employees and 80 percent of the contractors. It’s a brutal way to trim excess fat, reset the culture, and establish a loyal band. But is it a good decision? How could it go wrong?

Elon is incredibly successful at running engineering companies. But if you look at his failures—the fixes he hasn’t been able to effect—they are all in the zone of people; specifically, networks of people and machines. He’s consistently failed to accurately forecast driverless technology and he’s overestimated the capabilities of robots when it comes to human-like, fine-grained automation.

Our key question regarding Twitter is this: is Elon grossly underestimating the people factor in Twitter? As Dave says in the podcast, “he's taking a group of people that work in a system, that have personal connections, have human connections, the kinds of connections that are required to be productive, and he's treating them like bits and parts in a car factory, or like working capital. Something that you can just discard half of and continue going with half of it. He completely misses the fact that people in an organization like Twitter require the other people that are around them.”

One of the good things about Twitter was its intellectualism and commitment to getting better. By all accounts, the culture was one of making a good decision based on considering many (complex) factors. It evolved based on many selection pressures—advertisers, users, activists. Now it’s going to be a place where one person makes what they consider to be the easy decision, with an ideology that a system shock is the best way to force a new equilibrium. Elon would rather fix than futz. This is potentially a perfectly rational strategy. Indeed, it may be the only one given the company faces significant financial pressure (in part brought on by Elon’s previous decisions). But the problem with how it’s been done at Twitter isn’t the speed, the scale, or even the cruelty. The problem is that it’s all about creating complete, unyielding loyalty. A Twitter where you’re either with Elon or against him. We aren’t the only ones to point out the irony of the situation. Almost overnight, the so-called champion of free speech has created a total FIFO (Fit In or F**k Off) employer.

Will Twitter still operate as a global public square? What will happen next?
Helen: I’m 80 percent confident the answer is yes. We’ll adapt: that’s what humans do. And Elon has an engineering challenge here that he can act on: it’s a financial engineering challenge.
Dave: Yes, but people will be more cautious. “I think people will be more cautious. I think having a singular billionaire, slightly autocratic figure, a figurehead who makes decisions willy-nilly, throws people in or out, has thrown away all of the guardrails or any form of ethics, is going to have a long-standing impact on the platform. So it's going to make people a little bit more wary about it being the trusted place.”

Dave’s Nudge OTW: Break Up Problems Early. A company was struggling to make its decision framework function. The real problem? It’s hard to make a decision framework (who could decide what) work before you even know the problem you have. The nudge helped Dave step back and see that the real problem was that people need to understand that they are dealing with different types of problems—hard decisions, complicated problems, complex scenarios.

Helen’s Nudge OTW: Plug the Leaks. A great nudge for improving willpower. No decision is too small. Which means no action is too small either. Stop focusing on the big mass of motivation and focus instead on the trajectory or velocity. What you should do for an hour, do for 15 minutes instead (for example, writing or working out).

Final Thing
Helen: Klara and the Sun. The latest book from Nobel Prize winner Kazuo Ishiguro. A wonderful story about the relationship between an artificial friend and a teen. Says Helen: “I think it is clever because it doesn't beat you over the head about what an artificial consciousness might be. It requires quite a lot of discovery to figure out exactly how Klara is operating in the world, what the nature of her conscious perception actually is. The more you know about AI, the better it is because you see so many different angles into the way that an artificial mind might process the world. And ways that could lead to enormous flaws in relying on artificial friends.”
Dave: Build: An Unorthodox Guide to Making Things Worth Making by Tony Fadell. He’s an old friend of Dave’s from Apple who has a lot of wisdom about people and products. “Sometimes the people you don't expect to be amazing, the ones you thought were B's and B pluses, turn out to completely rock your world. They hold your team together by being dependable and flexible and great mentors and teammates. They're modest and kind of just quietly do good work. They're a different type of rockstar.” Says Dave: “I totally agree with him. This is so underappreciated. Those people won't show up very high on Elon’s list. I would imagine that he probably threw out a whole bunch of those people because they don't show up at the top of whatever performance metric he's using.”

If you enjoy our podcasts, please subscribe on Substack or your favorite podcast platform. And please leave a positive rating or comment—sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Seriously, a review on Apple Podcasts is a big deal!

And if you like how we think, then contact us about our speaking and workshops, and human-centered product design. You can learn more about us at getsonder.com and you can contact us at hello@getsonder.com.

You can learn more about making better decisions in our book, Make Better Decisions: How to Improve Your Decision-Making in the Digital Age. The book is an essential guide to practicing the cognitive skills needed for making better decisions in the age of data, algorithms, and AI. Please check it out at MBD.zone and purchase it from Amazon, Bookshop.org, or your favorite local bookstore. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world
Oct 30, 2022 • 57min

Marina Nitze and Nick Sinai: Hack Your Bureaucracy

We all likely want to improve the organizations we work in. We might want to improve the employee experience, improve the customer experience, or be more efficient and effective. But we all likely have had the experience of feeling like our organizations are too difficult, too entrenched, and too complex to change. Any organization—large or small, public or private—can feel like a faceless bureaucracy that is resistant to change. So what can people do who want to effect change? How do you accomplish things that can seem impossible?

To answer these questions, we talked with Marina Nitze and Nick Sinai about their recently published book, Hack Your Bureaucracy: Get Things Done No Matter What Your Role on Any Team. Marina and Nick have deep experience in one of the largest, most complex bureaucracies in the world: the U.S. government. As technology leaders in the Obama White House, Marina and Nick undertook large change programs. Their book contains their stories and their advice for anyone who wants to effect change.

We find the hacks in their book quite valuable, and we wish this book had been available early in our careers when we were both in much larger organizations. We love the fact that their hacks focus on the people and on working within a system for change—not the move-fast-and-break-things mentality of Silicon Valley. Above all, we appreciate that it’s clear that Marina and Nick thought deeply about what they would have wanted to know when they embarked on the significant technology change programs they undertook in the White House and the Veterans Administration.

Marina Nitze is currently a partner at Layer Aleph, a crisis response firm that specializes in restoring complex software systems to service. Marina was most recently Chief Technology Officer of the U.S. Department of Veterans Affairs under President Obama, after serving as Senior Advisor on technology in the Obama White House and as the first Entrepreneur-in-Residence at the U.S. Department of Education. Nick Sinai is a Senior Advisor at Insight Partners, a VC and private equity firm, and is also Adjunct Faculty at Harvard Kennedy School and a Senior Fellow at the Belfer Center for Science and International Affairs. Nick served as U.S. Deputy Chief Technology Officer in the Obama White House and, prior to that, played a key role in crafting the National Broadband Plan at the FCC.

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. Subscribe for free at https://www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world.
#ai #artificialintelligence #generativeai #airesearch #complexity #futureofai 
Oct 16, 2022 • 53min

Tom Davenport and Steve Miller: Working with AI

How will AI change our jobs? Will it replace humans and eliminate jobs? Will it help humans get things done? Will it create new opportunities for new jobs? People often speculate on these topics, doing their best to predict the somewhat unpredictable.

To help us get a better understanding of the current state of humans and AI working together, we talked with Tom Davenport and Steve Miller about their recently released book, Working with AI. The book is centered around 29 detailed and deeply researched case studies about human-AI collaboration in real-world work settings. What they show is that AI isn’t a job destroyer but a technology that changes the way we work.

Tom is Distinguished Professor of Information Technology and Management at Babson College, Visiting Professor at Oxford's Saïd Business School, Fellow of the MIT Initiative on the Digital Economy, and Senior Advisor to Deloitte's AI practice. He is the author of The AI Advantage and coauthor of Only Humans Need Apply and other books. Steve is Professor Emeritus of Information Systems at Singapore Management University, where he previously served as Founding Dean of the School of Computing and Information Systems and Vice Provost for Research. He is coauthor of Robotics Applications and Social Implications.

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world. #ai #artificialintelligence #generativeai #airesearch #complexity #futureofai
Oct 2, 2022 • 37min

Helen Edwards and Dave Edwards: Make Better Decisions

We humans make a lot of decisions. Apparently, 35,000 of them every day! So how do we improve our decisions? Is there a process to follow? Who are the experts to learn from? Do big data and AI make decisions easier or harder? Is there any way to get better at making decisions in this complex, modern world we live in?

To dig into these questions we talked with…ourselves! We recently published our first book, Make Better Decisions: How to Improve Your Decision-Making in the Digital Age. In this book, we’ve provided a guide to practicing the cognitive skills needed for making better decisions in the age of data, algorithms, and AI. Make Better Decisions is structured around 50 nudges that have their lineage in scholarship from behavioral economics, cognitive science, computer science, decision science, design, neuroscience, philosophy, and psychology. Each nudge prompts the reader to use their beautiful, big human brain to notice when our automatic decision-making systems will lead us astray in our complex, modern world, and when they'll lead us in the right direction.

In this conversation, we talk about our book, our favorite nudges at the moment, and some of the Great Minds who we have interviewed on Artificiality, including Barbara Tversky, Jevin West, Michael Bungay Stanier, Stephen Fleming, Steven Sloman, and Tania Lombrozo.

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world. #ai #artificialintelligence #generativeai #airesearch #complexity #futureofai
Sep 18, 2022 • 56min

Kat Cizek and William Uricchio: Co-Creation

We all do things with other people. We design things, we write things, we create things. Despite the fact that co-creation is all around us, it can be easy to miss because creation gets assigned to individuals all too often. We’re quick to assume that one person should get credit, thereby erasing the contributions of others.

The two of us have a distinct interest in co-creation because we co-create everything we do. We co-created Sonder Studio, our speaking engagements, our workshops, our design projects, and our soon-to-be-published book, Make Better Decisions. We’re also interested in how humans can co-create with technology, specifically artificial intelligence, and when that is a good thing and when that might be something to avoid.

To dig into these interests and questions we talked with Kat Cizek and William Uricchio, whose upcoming book Collective Wisdom offers the first guide to co-creation as a concept and as a practice. Kat, William, and a lengthy list of co-authors have presented a wonderful tracing of the history of co-creation across many disciplines and societies. The book is based on interviews with 166 people and includes nearly 200 photographs that should not be missed. We hope that you all have a chance to experience their collective work.

Kat is an Emmy- and Peabody-winning documentarian who is the Artistic Director and Cofounder of the Co-Creation Studio at MIT Open Documentary Lab. William is Professor of Comparative Media Studies at MIT, where he is also Founder and Principal Investigator of the MIT Open Documentary Lab and Principal Investigator of the Co-Creation Studio. Their book is scheduled to be published by MIT Press on November 1st.

About Artificiality from Helen & Dave Edwards: Artificiality is a research and services business founded in 2019 to help people make sense of artificial intelligence and complex change. Our weekly publication provides thought-provoking ideas, science reviews, and market research, and our monthly research releases provide leaders with actionable intelligence and insights for applying AI in their organizations. We provide research-based and expert-led AI strategy and complex change management services to organizations around the world. We are artificial philosophers and meta-researchers who aim to make the philosophical more practical and the practical more philosophical. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We are dedicated to unraveling the profound impact of AI on our society, communities, workplaces, and personal lives. Subscribe for free at https://www.artificiality.world. If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds. Learn more about Sonder Studio. Subscribe to get Artificiality delivered to your email. Learn about our book Make Better Decisions and buy it on Amazon. Thanks to Jonathan Coulton for our music. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.artificiality.world. #ai #artificialintelligence #generativeai #airesearch #complexity #futureofai
