

Eye On A.I.
Craig S. Smith
Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.
Episodes

Jul 26, 2023 • 34min
#131 Andrew Ng: Exploring Artificial Intelligence’s Potential & Threats
Welcome to episode #131 of the Eye on AI podcast. Get ready to challenge your perspectives as we sit down with Andrew Ng. We navigate the widely disputed topic of AI as a potential existential threat, with Andrew arguing that, with time and global cooperation, safety measures can be built to prevent disaster. He offers insight into the debates surrounding the harm AI might cause, including the notion of AI as a bio-weapon and the notorious ‘paper clip argument’. Listen as Andrew debunks these theories, delivering a compelling argument for why he believes the associated risks are minimal. From there, we venture into the intriguing question of AI’s capability to understand the world, setting the stage for a conversation on how we can objectively assess that comprehension. We explore AI safety measures, drawing parallels with the rigour of the aviation industry, and consider the degree of consensus within the research community regarding the danger posed by AI. (00:00) Preview (01:08) Introduction (02:15) Existential risk of artificial intelligence (05:50) Aviation analogy with artificial intelligence (10:00) The threat of AI & deep learning (13:15) Lack of consensus on AI dangers (18:00) How AI can help solve climate change (24:00) Landing AI and Andrew Ng (27:30) Visual prompting for images Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI Our sponsor for this episode is Masterworks, an art investing platform. They buy the art outright, from contemporary masters like Picasso and Banksy, qualify it with the SEC, and offer it as an investment; net proceeds from each sale are distributed to its investors. Since their inception, they have sold over $45 million worth of artwork, and so far each of Masterworks’ exits has returned a positive net return to its investors. 
Masterworks has over 750,000 users, and their art offerings usually sell out in hours, which is why they’ve had to create a waitlist. But Eye on AI viewers can skip the line and get priority access right now by clicking this link: https://www.masterworks.art/eyeonai Purchase shares in great masterpieces from artists like Pablo Picasso, Banksy, Andy Warhol, and more. See important Masterworks disclosures: https://www.masterworks.com/cd “Net Return” refers to the annualized internal rate of return, net of all fees and costs, calculated from the offering closing date to the date the sale is consummated. IRR may not be indicative of Masterworks paintings not yet sold, and past performance is not indicative of future results. Returns shown are four examples of midrange returns selected to demonstrate Masterworks’ performance history. Returns may be higher or lower. Investing involves risk, including loss of principal.

Jul 19, 2023 • 50min
#130 Mathew Lodge: The Future of Large Language Models in AI
Welcome to episode #130 of Eye on AI with Mathew Lodge. In this episode, we explore the world of reinforcement learning and code generation. Mathew Lodge, the CEO of Diffblue, shares insights into how reinforcement learning fuels generative AI. As we explore the intricacies of reinforcement learning, we uncover its potential in game playing and in guiding systems toward solutions, and we shed light on the products it powers, such as AlphaGo and AlphaDev. However, we also address the challenges of large language models and explain why they may not be the ultimate solution for code generation. In the last part of our conversation, we delve into the future of language models and intelligence. Mathew shares valuable insights on merging no-code and low-code solutions. We confront the skepticism of software developers towards AI code products and the task of articulating program outcomes. Wrapping up, we reflect on the evolution of programming languages and the impact of abstraction on machine learning. (00:00) Preview & sponsorship (01:51) Reinforcement Learning and Code Generation (04:39) Reinforcement Learning and Improving Algorithms (15:32) The Challenges of Large Language Models (23:58) Future of Language Models and Intelligence (35:50) Challenges and Potential of AI-generated Code (48:32) Programming Language Evolution and Higher-Level Languages Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

Jul 12, 2023 • 58min
#129 Alexandra Geese: Demystifying AI Regulations in Europe & Beyond
Welcome to episode #129 of Eye on AI with Alexandra Geese. Navigating the complex waters of the European Union’s AI Act is no simple task. Yet that’s exactly what Alexandra Geese, a member of the European Parliament, and I venture to do in this conversation. Alexandra’s insights into the AI Act, its four defined categories of AI applications, and its current negotiation phase with the European Council and the European Commission are illuminating. We delve into the Act’s essential mission: ensuring AI serves humanity, while also exploring the influence of powerful players in the AI industry on the EU’s legislation. As our journey deepens, we tackle a range of crucial issues underpinning the Act. Alexandra and I navigate the potential economic implications for those who rely on copyright legislation, and the risk of Europe falling behind in AI implementation if the Act is too restrictive. We also touch on the involvement of major American AI firms in the Act’s finalization process and the implications for copyrighted material. We dive into the ongoing debates shaping the legislation and the enforcement of the law once passed. Alexandra shares her thoughts on potential fines for violations, different AI zones, and the possibility of the US following Europe’s lead in AI legislation. We wrap up with a deep reflection on the environmental impact of AI, the power held by a few companies, and our collective responsibility as AI reshapes the world. (00:00) Preview (00:52) Introduction (03:10) Alexandra Geese's background in digital legislation (04:00) The AI Act: explanation and details (08:00) The foundations of corporations for AI regulation (13:00) Copyright regulation and impacts on creativity (17:00) We need AI that serves humanity (21:30) Are foundation models high risk to society? (25:00) Should people be worried about investing in AI? (30:45) What is dynamic AI regulation? (36:10) What is the timeline for AI regulation? 
(38:50) What penalties will be applied under AI regulation? (44:30) Will the US & EU converge on AI regulation? (50:30) How to solve AI hallucinations Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI Found is a show about founders and company-building that features the change-makers and innovators who are actually doing the work. Each week, TechCrunch Plus reporters Becca Szkutak and Dom-Madori Davis talk with a founder about what it’s really like to build and run a company, from ideation to launch. They talk to founders across many industries, and their conversations often lead back to AI as many startups begin implementing AI in what they do. New episodes of Found are published every Tuesday and you can find them wherever you listen to podcasts. Found podcast: https://podlink.com/found

Jul 6, 2023 • 49min
#128 Yoshua Bengio: Dissecting The Extinction Threat of AI
Yoshua Bengio, the legendary AI expert, joins us for episode #128 of the Eye on AI podcast. In this episode, we delve into the unnerving question: could the rise of a superhuman AI signal the downfall of humanity as we know it? Join us as we explore the existential threat posed by superhuman AI, leaving no stone unturned. We dissect the Future of Life Institute’s role in overseeing large language model development, as well as the sobering warnings issued by the Center for AI Safety regarding artificial general intelligence. The stakes have never been higher, and we uncover the pressing need for action. Prepare to confront the disconcerting notion of society’s gradual disempowerment and ever-increasing dependency on AI. We shed light on the challenges of extricating ourselves from this intricate web, where pulling the plug on AI seems almost impossible. Brace yourself for a thought-provoking discussion on the potential psychological effects of realizing that our relentless pursuit of AI advancement may inadvertently jeopardize humanity itself. We imagine a future where deep learning amplifies system-2 capabilities, forcing us to develop countermeasures and regulations to mitigate the associated risks. We grapple with the possibility of leveraging AI to combat climate change while treading carefully to prevent catastrophic outcomes. We also confront the notion of AI systems acting autonomously, highlighting the critical importance of stringent regulation of their access and usage. (00:00) Preview (00:42) Introduction (03:30) Yoshua Bengio's essay on AI extinction (09:45) Dangerous use cases of AI (12:00) Why are AI risks only emerging now? (17:50) Extinction threat and fear with AI & climate change (21:10) Superintelligence and the concerns for humanity (15:02) Yoshua Bengio's research in AI safety (29:50) Are corporations a form of artificial intelligence? 
(31:15) Extinction scenarios by Yoshua Bengio (37:00) AI agency and AI regulation (40:15) Who controls AI for the general public? (45:11) The AI debate in the world Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

Jun 29, 2023 • 1h 3min
#127 Clemens Mewald: Redefining the Boundaries of Artificial Intelligence & GPT-4
Welcome to episode #127 of the Eye on AI podcast with host Craig Smith and guest Clemens Mewald. In this episode, we dive into the world of AI and its transformative impact on industries. Join us as we explore Instabase, the cutting-edge company led by our guest, a seasoned engineer with an impressive background at Google Brain, Databricks, and now Instabase. Discover how Instabase is revolutionizing automation and content capture across organizations using AI-driven methods. Uncover the mission behind Instabase and delve into the details of AI Hub, a groundbreaking marketplace for AI models and products. We explore the limitations of model repositories and marketplaces, particularly in large-scale applications. As we compare AI Hub with the AWS Marketplace, we touch on the abundance of low-code app development solutions in the market, highlighting Accio’s rich SaaS offerings and generative AI apps as an industry benchmark. No discussion about AI would be complete without delving into the potential of GPT-4, a powerful language model capable of accurately predicting task outcomes. Join us on this ride as we uncover the heart of AI, its revolutionary applications, and its transformative power across industries. (00:00) Preview (00:38) Introduction (01:22) Clemens's background and Google Brain (02:44) Instabase and solving unstructured data problems (07:40) How Instabase works and different use cases (13:20) The long-term vision of the AI Hub (17:12) Blockchain-based marketplace for AI models (21:50) AWS Marketplace compared to Instabase (24:05) Generative AI and no-code web apps (31:05) Biggest security concerns of using OpenAI (35:40) Considerations for use cases of GPT-4 (40:00) LLMs acting as knowledge and reasoning engines (46:40) Using different AI models for different tasks (51:00) Leveraging other AI models for compatibility (54:14) How to get people to start using Instabase Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

Jun 22, 2023 • 58min
#126 Noam Chomsky: Decoding the Human Mind & Neural Nets
Welcome to episode #126 of Eye on AI with Craig Smith and Noam Chomsky. Are neural nets the key to understanding the human brain and language acquisition? In this conversation with renowned linguist and cognitive scientist Noam Chomsky, we delve into the limitations of large language models and the ongoing quest to uncover the mysteries of the human mind. Together, we explore the historical development of research in this field, from Minsky’s thesis to Geoffrey Hinton’s goals for understanding the brain. We also discuss the potential harms and benefits of large language models, comparing them to the internal combustion engine and its differences from a running gazelle. We tackle the difficult task of studying the neurophysiology of human cognition and the ethical implications of invasive experiments. Considering language as a natural object, we discuss the work of notable figures such as Albert Einstein, Galileo, Leibniz, and Turing, and the similarities between language and biology. We even entertain the possibility of extraterrestrial language and communication. Join us on this thought-provoking journey as we explore the intricacies of language, the brain, and our place in the cosmos. (00:00) Preview (00:43) Introduction (01:54) Noam Chomsky’s neural net ideology & criticisms (06:58) Geoffrey Hinton & Noam Chomsky: how the brain works (10:05) Correlation between neural nets and the brain (11:11) Noam Chomsky’s reaction to ChatGPT & LLMs (15:21) Exploring the mechanisms of the brain (19:00) What do we learn from chatbots? (22:30) What are impossible languages? (26:45) Does generative AI show true intelligence? (28:40) Is there a danger of AI becoming too intelligent? (31:30) Can AI language models become sentient? (36:40) Turing machine and neural net experimentation (42:40) Non-invasive procedures for understanding the brain (45:54) Does Noam Chomsky still work on understanding the brain? (49:33) Is Noam Chomsky excited about the future of neural nets? 
(55:30) Albert Einstein and Galileo’s principles (55:40) Is there an extraterrestrial language model? Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

Jun 15, 2023 • 57min
#125 Pascal Weinberger: Harnessing the Power of Generative AI for Creativity & Productivity
Welcome to episode #125 of Eye on AI, where we embark on a journey into the realm of generative AI. In this episode, we have the pleasure of chatting with Pascal Weinberger, co-founder and CEO of Bardeen AI, who takes us through the evolution of AI and its incredible potential for creative and professional endeavors. Join us as we venture behind the scenes of Telefonica’s Moonshot Lab, where AI projects in healthcare, energy, and city planning are explored. Discover the fascinating ideas and initiatives that have emerged, including the birth of a mental health company, as we uncover the immense impact of generative AI. During our conversation, we delve into the nuances of generative AI technology, exploring how industry giants like Microsoft and Google are harnessing its power to enhance their products. We also discuss the strategies and challenges faced by companies in the competitive generative AI market, with a strong focus on meeting the needs of end users. Finally, we tackle the ongoing debates surrounding the risks and benefits of AI technology, ensuring you stay ahead of the curve in this ever-evolving field. Tune in and join us as we unravel the secrets of generative AI, paving the way for a future where creativity and productivity reach new heights. (00:00) Preview (00:24) Pascal Weinberger's background at Telefonica (08:28) Machine learning & AI with Pascal Weinberger (10:28) How Pascal Weinberger founded Bardeen AI (13:25) Generative AI MVP for Bardeen AI (17:21) Generative AI applications and OpenAI competition (22:24) Competition in the AI space (25:24) Big tech companies vs. startups in AI (31:46) The future of AI and the transformer algorithm (32:41) Bardeen AI features and functionality (46:24) AutoGPT problems and considerations (50:54) Risk of AI & misuse of commands Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

Jun 8, 2023 • 57min
#124 Sina Kian: Reshaping Privacy in the AI & Machine Learning Revolution
Welcome to episode #124 of the Eye on AI podcast, where we bring you the latest insights into the fascinating world of artificial intelligence. In this episode, Craig Smith is joined by Sina Kian, General Counsel and COO at Aleo, as they dive deep into the revolutionary realm of zero-knowledge proofs. Join us as we explore the incredible potential of zero-knowledge proofs to safeguard sensitive data while still leveraging it for machine learning and AI applications. Sina Kian shares how this innovative technology can reshape privacy, digital identity, and even social media authentication. During this conversation, we delve into the power of privacy-preserving blockchain technology and its far-reaching impact across industries. Discover how Aleo is at the forefront of making digital identity more secure and how it can be seamlessly integrated across platforms without compromising sensitive information. We examine the future of machine learning and AI, unraveling the role that digital identity plays in accessing products and content based on location. As we venture into the social media landscape, we also explore the risks and rewards associated with user data and privacy. Gain insights into how privacy-preserving technology can shield user information and authenticate data and content without compromising privacy. This conversation discusses the potential of zero-knowledge proofs and privacy-preserving technology, offering a glimpse into how they will shape the future of machine learning and AI. 
(00:00) Preview (00:41) Introduction (02:28) Sina Kian's background at Aleo & blockchain (05:49) Blockchain's integration with AI & machine learning (11:48) How data is protected in blockchain technology (12:25) Use cases of encryption with Aleo (18:53) How Aleo works with an open-source protocol (24:13) Aleo's progress in developing its open-source project (31:13) Why social media platforms capture your data (34:16) How can you find widespread adoption? (35:43) How governments are getting involved in digital identity (41:53) How data privacy integrates with Web 3.0 (45:15) Blockchain's implementation in the real world (48:43) Next steps for Aleo (53:53) Social media interaction with privacy Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

May 24, 2023 • 1h 2min
#123 Aidan Gomez: How AI Language Models Will Shape The Future
Welcome to Eye on AI, the podcast that keeps you informed about the latest trends, obstacles, and possibilities in the realm of artificial intelligence. In this episode, we have the privilege of engaging in a thought-provoking discussion with Aidan Gomez, an exceptional AI developer and co-founder of Cohere. Aidan’s passion lies in enhancing the efficiency of massive neural networks and effectively deploying them in the real world. Drawing from his vast experience, which includes leading a team of researchers at For.ai and conducting groundbreaking research at Google Brain, Aidan provides unique insights and anecdotes that shed light on the AI landscape. During our conversation, Aidan explains his collaboration with the legendary Geoffrey Hinton and their remarkable project at Google Brain. We delve into the architecture of AI systems, demystifying the construction of the transformative transformer algorithm. Aidan generously shares his knowledge of how attention is created within these models and the complexities of scaling such systems. As we explore the fascinating domain of language models, Aidan discusses their learning process, bridging the gap between code and data, and we consider the potential of these models to suggest other large-scale counterparts. We gain invaluable insights into Aidan’s journey as a co-founder of Cohere, an innovative platform revolutionizing the use of language technology. Tune in to Eye on AI now to immerse yourself in a captivating conversation that will expand your understanding of this ever-evolving field. (00:00) Preview (00:33) Introduction & sponsorship (02:00) Aidan's background in machine learning & AI (05:10) Geoffrey Hinton & Aidan Gomez working together (07:55) Aidan Gomez & Google Brain's project (12:53) Aidan's role in building AI architecture (15:25) How the transformer algorithm is built (18:25) How do you create attention? (20:40) How do you scale the model? 
(25:10) How language models learn from code and data (29:55) Did you know the potential of the project? (34:15) Can LLMs suggest other large models? (36:45) How Aidan Gomez started Cohere (41:10) How do people use Cohere? (46:50) Examples of language technology models (48:40) How Cohere handles hallucinations (52:53) The dangers of AI Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

May 10, 2023 • 56min
#122 Connor Leahy: Unveiling the Darker Side of AI
Welcome to Eye on AI, the podcast that explores the latest developments, challenges, and opportunities in the world of artificial intelligence. In this episode, we sit down with Connor Leahy, an AI researcher and co-founder of EleutherAI, to discuss the darker side of AI. Connor shares his insights on the current negative trajectory of AI, the challenges of keeping superintelligence in a sandbox, and the potential negative implications of large language models such as GPT-4. He also discusses the problem of releasing AI to the public and the need for regulatory intervention to ensure alignment with human values. Throughout the podcast, Connor highlights the work of Conjecture, a project focused on advancing alignment in AI, and shares his perspective on the stages of research and development of this critical issue. If you’re interested in understanding the ethical and social implications of AI and the efforts to ensure alignment with human values, this podcast is for you. So join us as we delve into the darker side of AI with Connor Leahy on Eye on AI. (00:00) Preview (00:48) Connor Leahy’s background with EleutherAI & Conjecture (03:05) Large language model applications with EleutherAI (06:51) The current negative trajectory of AI (08:46) How difficult is keeping superintelligence in a sandbox? (12:35) How AutoGPT uses ChatGPT to run autonomously (15:15) How GPT-4 can be used out of context & negatively (19:30) How OpenAI gives access to nefarious activities (26:39) The problem with the race for AGI (28:51) The goal of Conjecture and advancing alignment (31:04) The problem with releasing AI to the public (33:35) FTC complaint & government intervention in AI (38:13) Technical implementation to fix the alignment issue (44:34) How CoEm addresses the alignment issue (53:30) Stages of research and development at Conjecture Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. Twitter: https://twitter.com/EyeOn_AI