
Machine Learning Street Talk (MLST)

Latest episodes

May 21, 2023 • 2h 2min

ROBERT MILES - "There is a good chance this kills everyone"

Please check out Numerai - our sponsor @ https://numerai.com/mlst Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being a part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk

Welcome to an exciting episode featuring an outstanding guest, Robert Miles! Renowned for his extraordinary contributions to understanding AI and its potential impacts on our lives, Robert is an artificial intelligence advocate, researcher, and YouTube sensation. He combines engaging discussions with entertaining content, captivating millions of viewers from around the world. With a strong computer science background, Robert has been actively involved in AI safety projects, focusing on raising awareness about potential risks and benefits of advanced AI systems. His YouTube channel is celebrated for making AI safety discussions accessible to a diverse audience by breaking down complex topics into easy-to-understand nuggets of knowledge, and you might also recognise him from his appearances on Computerphile. In this episode, join us as we dive deep into Robert's journey in the world of AI, exploring his insights on AI alignment, superintelligence, and the role of AI in shaping our society and future. We'll discuss topics such as the limits of AI capabilities and physics, AI progress and timelines, human-machine hybrid intelligence, AI in conflict and cooperation with humans, and the convergence of AI communities.

Robert Miles: @RobertMilesAI https://twitter.com/robertskmiles https://aisafety.info/

YT version: https://www.youtube.com/watch?v=kMLKbhY0ji0

Panel: Dr. Tim Scarfe, Dr. Keith Duggar (Joint CTOs - https://xrai.glass/)

Refs: Are Emergent Abilities of Large Language Models a Mirage? (Rylan Schaeffer) https://arxiv.org/abs/2304.15004

TOC: Intro [00:00:00] Numerai Sponsor Message [00:02:17] AI Alignment [00:04:27] Limits of AI Capabilities and Physics [00:18:00] AI Progress and Timelines [00:23:52] AI Arms Race and Innovation [00:31:11] Human-Machine Hybrid Intelligence [00:38:30] Understanding and Defining Intelligence [00:42:48] AI in Conflict and Cooperation with Humans [00:50:13] Interpretability and Mind Reading in AI [01:03:46] Mechanistic Interpretability and Deconfusion Research [01:05:53] Understanding the core concepts of AI [01:07:40] Moon landing analogy and AI alignment [01:09:42] Cognitive horizon and limits of human intelligence [01:11:42] Funding and focus on AI alignment [01:16:18] Regulating AI technology and potential risks [01:19:17] Aligning AI with human values and its dynamic nature [01:27:04] Cooperation and Allyship [01:29:33] Orthogonality Thesis and Goal Preservation [01:33:15] Anthropomorphic Language and Intelligent Agents [01:35:31] Maintaining Variety and Open-ended Existence [01:36:27] Emergent Abilities of Large Language Models [01:39:22] Convergence vs Emergence [01:44:04] Criticism of X-risk and Alignment Communities [01:49:40] Fusion of AI communities and addressing biases [01:52:51] AI systems integration into society and understanding them [01:53:29] Changing opinions on AI topics and learning from past videos [01:54:23] Utility functions and von Neumann-Morgenstern theorems [01:54:47] AI Safety FAQ project [01:58:06] Building a conversation agent using AI safety dataset [02:00:36]
May 16, 2023 • 50min

AI Senate Hearing - Executive Summary (Sam Altman, Gary Marcus)

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk

In a historic and candid Senate hearing, OpenAI CEO Sam Altman, Professor Gary Marcus, and IBM's Christina Montgomery discussed the regulatory landscape of AI in the US. The discussion was particularly interesting due to its timing, as it followed the recent release of the EU's proposed AI Act, which could potentially ban American companies like OpenAI and Google from providing API access to generative AI models and impose massive fines for non-compliance. The speakers openly addressed potential risks of AI technology and emphasized the need for precision regulation. This was a unique approach, as historically, US companies have tried their hardest to avoid regulation. The hearing not only showcased the willingness of industry leaders to engage in discussions on regulation but also demonstrated the need for a balanced approach to avoid stifling innovation.

The EU AI Act, scheduled to come into force in 2026, is still just a proposal, but it has already raised concerns about its impact on the American tech ecosystem and potential conflicts between US and EU laws. With extraterritorial jurisdiction and provisions targeting open-source developers and software distributors like GitHub, the Act could create more problems than it solves by encouraging unsafe AI practices and limiting access to advanced AI technologies. One core issue with the Act is the designation of foundation models in the highest risk category, primarily due to their open-ended nature. A significant risk theme revolves around users creating harmful content and determining who should be held accountable – the users or the platforms. The Senate hearing served as an essential platform to discuss these pressing concerns and work towards a regulatory framework that promotes both safety and innovation in AI.

00:00 Show 01:35 Legals 03:44 Intro 10:33 Altman intro 14:16 Christina Montgomery 18:20 Gary Marcus 23:15 Jobs 26:01 Scorecards 28:08 Harmful content 29:47 Startups 31:35 What meets the definition of harmful? 32:08 Moratorium 36:11 Social Media 46:17 Gary's take on BingGPT and pivot into policy 48:05 Democratisation
May 11, 2023 • 2h 32min

Future of Generative AI [David Foster]

Generative Deep Learning, 2nd Edition [David Foster] https://www.oreilly.com/library/view/generative-deep-learning/9781098134174/

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk

In this conversation, Tim Scarfe and David Foster, the author of 'Generative Deep Learning,' dive deep into the world of generative AI, discussing topics ranging from model families and autoregressive models to the democratization of AI technology and its potential impact on various industries. They explore the connection between language and true intelligence, as well as the limitations of GPT and other large language models. The discussion also covers the importance of task-independent world models, the concept of active inference, and the potential of combining these ideas with transformer and GPT-style models.

Ethics and regulation in AI development are also discussed, including the need for transparency in data used to train AI models and the responsibility of developers to ensure their creations are not destructive. The conversation touches on the challenges posed by AI-generated content on copyright laws and the diminishing role of effort and skill in copyright due to generative models.

The impact of AI on education and creativity is another key area of discussion, with Tim and David exploring the potential benefits and drawbacks of using AI in the classroom, the need for a balance between traditional learning methods and AI-assisted learning, and the importance of teaching students to use AI tools critically and responsibly. Generative AI in music is also explored, with David and Tim discussing the potential for AI-generated music to change the way we create and consume art, as well as the challenges in training AI models to generate music that captures human emotions and experiences.

Throughout the conversation, Tim and David touch on the potential risks and consequences of AI becoming too powerful, the importance of maintaining control over the technology, and the possibility of government intervention and regulation. The discussion concludes with a thought experiment about AI predicting human actions and creating transient capabilities that could lead to doom.

TOC: Introducing Generative Deep Learning [00:00:00] Model Families in Generative Modeling [00:02:25] Auto Regressive Models and Recurrence [00:06:26] Language and True Intelligence [00:15:07] Language, Reality, and World Models [00:19:10] AI, Human Experience, and Understanding [00:23:09] GPTs Limitations and World Modeling [00:27:52] Task-Independent Modeling and Cybernetic Loop [00:33:55] Collective Intelligence and Emergence [00:36:01] Active Inference vs. Reinforcement Learning [00:38:02] Combining Active Inference with Transformers [00:41:55] Decentralized AI and Collective Intelligence [00:47:46] Regulation and Ethics in AI Development [00:53:59] AI-Generated Content and Copyright Laws [00:57:06] Effort, Skill, and AI Models in Copyright [00:57:59] AI Alignment and Scale of AI Models [00:59:51] Democratization of AI: GPT-3 and GPT-4 [01:03:20] Context Window Size and Vector Databases [01:10:31] Attention Mechanisms and Hierarchies [01:15:04] Benefits and Limitations of Language Models [01:16:04] AI in Education: Risks and Benefits [01:19:41] AI Tools and Critical Thinking in the Classroom [01:29:26] Impact of Language Models on Assessment and Creativity [01:35:09] Generative AI in Music and Creative Arts [01:47:55] Challenges and Opportunities in Generative Music [01:52:11] AI-Generated Music and Human Emotions [01:54:31] Language Modeling vs. Music Modeling [02:01:58] Democratization of AI and Industry Impact [02:07:38] Recursive Self-Improving Superintelligence [02:12:48] AI Technologies: Positive and Negative Impacts [02:14:44] Runaway AGI and Control Over AI [02:20:35] AI Dangers, Cybercrime, and Ethics [02:23:42]
May 8, 2023 • 60min

PERPLEXITY AI - The future of search.

https://www.perplexity.ai/ https://www.perplexity.ai/iphone https://www.perplexity.ai/android

Interview with Aravind Srinivas, CEO and Co-Founder of Perplexity AI – Revolutionizing Learning with Conversational Search Engines

Dr. Tim Scarfe talks with Dr. Aravind Srinivas, CEO and Co-Founder of Perplexity AI, about his journey from studying AI and reinforcement learning at UC Berkeley to launching Perplexity – a startup that aims to revolutionize learning through the power of conversational search engines. By combining the strengths of large language models like GPT-* with search engines, Perplexity provides users with direct answers to their questions in a decluttered user interface, making the learning process not only more efficient but also enjoyable.

Aravind shares his insights on how advertising can be made more relevant and less intrusive with the help of large language models, emphasizing the importance of transparency in relevance ranking to improve user experience. He also discusses the challenge of balancing the interests of users and advertisers for long-term success.

The interview delves into the challenges of maintaining truthfulness and balancing opinions and facts in a world where algorithmic truth is difficult to achieve. Aravind believes that opinionated models can be useful as long as they don't spread misinformation and are transparent about being opinions. He also emphasizes the importance of allowing users to correct or update information, making the platform more adaptable and dynamic.

Lastly, Aravind shares his thoughts on embracing a digital society with large language models, stressing the need for frequent and iterative deployments of these models to reduce fear of AI and misinformation. He envisions a future where using AI tools effectively requires clear thinking and first-principles reasoning, ultimately benefiting society as a whole. Education and transparency are crucial to counter potential misuse of AI for political or malicious purposes.

YT version: https://youtu.be/_vMOWw3uYvk

Aravind Srinivas: https://www.linkedin.com/in/aravind-srinivas-16051987/ https://scholar.google.com/citations?user=GhrKC1gAAAAJ&hl=en https://twitter.com/aravsrinivas?lang=en

Interviewer: Dr. Tim Scarfe (CTO XRAI Glass) Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB

TOC: Introduction and Background of Perplexity AI [00:00:00] The Importance of a Decluttered UI and User Experience [00:04:19] Advertising in Search Engines and Potential Improvements [00:09:02] Challenges and Opportunities in this new Search Modality [00:18:17] Benefits of Perplexity and Personalized Learning [00:21:27] Objective Truth and Personalized Wikipedia [00:26:34] Opinions and Truth in Answer Engines [00:30:53] Embracing the Digital Society with Language Models [00:37:30] Impact on Jobs and Future of Learning [00:40:13] Educating users on when Perplexity works and doesn't work [00:43:13] Improving user experience and the possibilities of voice-to-voice interaction [00:45:04] The future of language models and auto-regressive models [00:49:51] Performance of GPT-4 and potential improvements [00:52:31] Building the ultimate research and knowledge assistant [00:55:33] Revolutionizing note-taking and personal knowledge stores [00:58:16]

References: Evaluating Verifiability in Generative Search Engines (Nelson F. Liu et al, Stanford University) https://arxiv.org/pdf/2304.09848.pdf

Note: this was a sponsored interview.
Apr 16, 2023 • 2h 47min

#114 - Secrets of Deep Reinforcement Learning (Minqi Jiang)

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Twitter: https://twitter.com/MLStreetTalk

In this exclusive interview, Dr. Tim Scarfe sits down with Minqi Jiang, a leading PhD student at University College London and Meta AI, as they delve into the fascinating world of deep reinforcement learning (RL) and its impact on technology, startups, and research. Discover how Minqi made the crucial decision to pursue a PhD in this exciting field, and learn from his valuable startup experiences and lessons. Minqi shares his insights into balancing serendipity and planning in life and research, and explains the role of objectives and Goodhart's Law in decision-making.

Get ready to explore the depths of robustness in RL, two-player zero-sum games, and the differences between RL and supervised learning. As they discuss the role of environment in intelligence, emergence, and abstraction, prepare to be blown away by the possibilities of open-endedness and the intelligence explosion. Learn how language models generate their own training data, the limitations of RL, and the future of software 2.0 with interpretability concerns. From robotics and open-ended learning applications to learning potential metrics and MDPs, this interview is a goldmine of information for anyone interested in AI, RL, and the cutting edge of technology. Don't miss out on this incredible opportunity to learn from a rising star in the AI world!

TOC: Tech & Startup Background [00:00:00] Pursuing PhD in Deep RL [00:03:59] Startup Lessons [00:11:33] Serendipity vs Planning [00:12:30] Objectives & Decision Making [00:19:19] Minimax Regret & Uncertainty [00:22:57] Robustness in RL & Zero-Sum Games [00:26:14] RL vs Supervised Learning [00:34:04] Exploration & Intelligence [00:41:27] Environment, Emergence, Abstraction [00:46:31] Open-endedness & Intelligence Explosion [00:54:28] Language Models & Training Data [01:04:59] RLHF & Language Models [01:16:37] Creativity in Language Models [01:27:25] Limitations of RL [01:40:58] Software 2.0 & Interpretability [01:45:11] Language Models & Code Reliability [01:48:23] Robust Prioritized Level Replay [01:51:42] Open-ended Learning [01:55:57] Auto-curriculum & Deep RL [02:08:48] Robotics & Open-ended Learning [02:31:05] Learning Potential & MDPs [02:36:20] Universal Function Space [02:42:02] Goal-Directed Learning & Auto-Curricula [02:42:48] Advice & Closing Thoughts [02:44:47]

References:
- Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth O. Stanley and Joel Lehman https://www.springer.com/gp/book/9783319155234
- Rethinking Exploration: General Intelligence Requires Rethinking Exploration https://arxiv.org/abs/2106.06860
- The Case for Strong Emergence (Sabine Hossenfelder) https://arxiv.org/abs/2102.07740
- The Game of Life (Conway) https://www.conwaylife.com/
- Toolformer: Teaching Language Models to Generate APIs (Meta AI) https://arxiv.org/abs/2302.04761
- OpenAI's POET: Paired Open-Ended Trailblazer https://arxiv.org/abs/1901.01753
- Schmidhuber's Artificial Curiosity https://people.idsia.ch/~juergen/interest.html
- Gödel Machines https://people.idsia.ch/~juergen/goedelmachine.html
- PowerPlay https://arxiv.org/abs/1112.5309
- Robust Prioritized Level Replay: https://openreview.net/forum?id=NfZ6g2OmXEk
- Unsupervised Environment Design: https://arxiv.org/abs/2012.02096
- Excel: Evolving Curriculum Learning for Deep Reinforcement Learning https://arxiv.org/abs/1901.05431
- Go-Explore: A New Approach for Hard-Exploration Problems https://arxiv.org/abs/1901.10995
- Learning with AMIGo: Adversarially Motivated Intrinsic Goals https://www.researchgate.net/publication/342377312_Learning_with_AMIGo_Adversarially_Motivated_Intrinsic_Goals
- PRML (Bishop) https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf
- Sutton and Barto https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf
Apr 10, 2023 • 1h 50min

Unlocking the Brain's Mysteries: Chris Eliasmith on Spiking Neural Networks and the Future of Human-Machine Interaction

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Twitter: https://twitter.com/MLStreetTalk

Chris Eliasmith is a renowned interdisciplinary researcher, author, and professor at the University of Waterloo, where he holds the prestigious Canada Research Chair in Theoretical Neuroscience. As the Founding Director of the Centre for Theoretical Neuroscience, Eliasmith leads the Computational Neuroscience Research Group in exploring the mysteries of the brain and its complex functions. His groundbreaking work, including the Neural Engineering Framework, the Neural Engineering Objects software environment, and the Semantic Pointer Architecture, has led to the development of Spaun, the most advanced functional brain simulation to date. Among his numerous achievements, Eliasmith has received the 2015 NSERC Polanyi Award and authored two influential books, "How to Build a Brain" and "Neural Engineering."

Chris' homepage: http://arts.uwaterloo.ca/~celiasmi/

Interviewers: Dr. Tim Scarfe and Dr. Keith Duggar

TOC: Intro to Chris [00:00:00] Continuous Representation in Biologically Plausible Neural Networks [00:06:49] Legendre Memory Unit and Spatial Semantic Pointer [00:14:36] Large Contexts and Data in Language Models [00:20:30] Spatial Semantic Pointers and Continuous Representations [00:24:38] Auto Convolution [00:30:12] Abstractions and the Continuity [00:36:33] Compression, Sparsity, and Brain Representations [00:42:52] Continual Learning and Real-World Interactions [00:48:05] Robust Generalization in LLMs and Priors [00:56:11] Chip design [01:00:41] Chomsky + Computational Power of NNs and Recursion [01:04:02] Spiking Neural Networks and Applications [01:13:07] Limits of Empirical Learning [01:22:43] Philosophy of Mind, Consciousness etc [01:25:35] Future of human machine interaction [01:41:28] Future research and advice to young researchers [01:45:06]

Refs:
http://compneuro.uwaterloo.ca/publications/dumont2023.html
http://compneuro.uwaterloo.ca/publications/voelker2019lmu.html
http://compneuro.uwaterloo.ca/publications/voelker2018.html
http://compneuro.uwaterloo.ca/publications/lu2019.html
https://www.youtube.com/watch?v=I5h-xjddzlY
Apr 2, 2023 • 2h 40min

#112 AVOIDING AGI APOCALYPSE - CONNOR LEAHY

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5

In this podcast with the legendary Connor Leahy (CEO of Conjecture), recorded in Dec 2022, we discuss various topics related to artificial intelligence (AI), including AI alignment, the success of ChatGPT, the potential threats of artificial general intelligence (AGI), and the challenges of balancing research and product development at his company, Conjecture. He emphasizes the importance of empathy, dehumanizing our thinking to avoid anthropomorphic biases, and the value of real-world experiences in learning and personal growth. The conversation also covers the Orthogonality Thesis, AI preferences, the mystery of mode collapse, and the paradox of AI alignment.

Connor Leahy expresses concern about the rapid development of AI and the potential dangers it poses, especially as AI systems become more powerful and integrated into society. He argues that we need a better understanding of AI systems to ensure their safe and beneficial development. The discussion also touches on the concept of "futuristic whack-a-mole," where futurists predict potential AGI threats, and others try to come up with solutions for those specific scenarios. However, the problem lies in the fact that there could be many more scenarios that neither party can think of, especially when dealing with a system that's smarter than humans.

https://www.linkedin.com/in/connor-j-leahy/ https://twitter.com/NPCollapse

Interviewer: Dr. Tim Scarfe (Innovation CTO @ XRAI Glass https://xrai.glass/)

TOC: The success of ChatGPT and its impact on the AI field [00:00:00] Subjective experience [00:15:12] AI Architectural discussion including RLHF [00:18:04] The paradox of AI alignment and the future of AI in society [00:31:44] The impact of AI on society and politics [00:36:11] Future shock levels and the challenges of predicting the future [00:45:58] Long termism and existential risk [00:48:23] Consequentialism vs. deontology in rationalism [00:53:39] The Rationalist Community and its Challenges [01:07:37] AI Alignment and Conjecture [01:14:15] Orthogonality Thesis and AI Preferences [01:17:01] Challenges in AI Alignment [01:20:28] Mechanistic Interpretability in Neural Networks [01:24:54] Building Cleaner Neural Networks [01:31:36] Cognitive horizons / The problem with rapid AI development [01:34:52] Founding Conjecture and raising funds [01:39:36] Inefficiencies in the market and seizing opportunities [01:45:38] Charisma, authenticity, and leadership in startups [01:52:13] Autistic culture and empathy [01:55:26] Learning from real-world experiences [02:01:57] Technical empathy and transhumanism [02:07:18] Moral status and the limits of empathy [02:15:33] Anthropomorphic Thinking and Consequentialism [02:17:42] Conjecture: Balancing Research and Product Development [02:20:37] Epistemology Team at Conjecture [02:31:07] Interpretability and Deception in AGI [02:36:23] Futuristic whack-a-mole and predicting AGI threats [02:38:27]

Refs:
1. OpenAI's ChatGPT: https://chat.openai.com/
2. The Mystery of Mode Collapse (Article): https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse
3. The Rationalist Guide to the Galaxy https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795
5. Alfred Korzybski: https://en.wikipedia.org/wiki/Alfred_Korzybski
6. Instrumental Convergence: https://en.wikipedia.org/wiki/Instrumental_convergence
7. Orthogonality Thesis: https://en.wikipedia.org/wiki/Orthogonality_thesis
8. Brian Tomasik's Essays on Reducing Suffering: https://reducing-suffering.org/
9. Epistemological Framing for AI Alignment Research: https://www.lesswrong.com/posts/Y4YHTBziAscS5WPN7/epistemological-framing-for-ai-alignment-research
10. How to Defeat Mind readers: https://www.alignmentforum.org/posts/EhAbh2pQoAXkm9yor/circumventing-interpretability-how-to-defeat-mind-readers
11. Society of mind: https://www.amazon.co.uk/Society-Mind-Marvin-Minsky/dp/0671607405
Apr 1, 2023 • 27min

#111 - AI moratorium, Eliezer Yudkowsky, AGI risk etc

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Send us a voice message which you want us to publish: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/message

In a recent open letter, over 1500 individuals called for a six-month pause on the development of advanced AI systems, expressing concerns over the potential risks AI poses to society and humanity. However, there are issues with this approach, including global competition, unstoppable progress, potential benefits, and the need to manage risks instead of avoiding them. Decision theorist Eliezer Yudkowsky took it a step further in a Time magazine article, calling for an indefinite and worldwide moratorium on Artificial General Intelligence (AGI) development, warning of potential catastrophe if AGI exceeds human intelligence. Yudkowsky urged an immediate halt to all large AI training runs and the shutdown of major GPU clusters, calling for international cooperation to enforce these measures. However, several counterarguments question the validity of Yudkowsky's concerns: 1. Hard limits on AGI 2. Dismissing AI extinction risk 3. Collective action problem 4. Misplaced focus on AI threats

While the potential risks of AGI cannot be ignored, it is essential to consider various arguments and potential solutions before making drastic decisions. As AI continues to advance, it is crucial for researchers, policymakers, and society as a whole to engage in open and honest discussions about the potential consequences and the best path forward. With a balanced approach to AGI development, we may be able to harness its power for the betterment of humanity while mitigating its risks.

Eliezer Yudkowsky: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky Connor Leahy: https://twitter.com/NPCollapse (we will release that interview soon) Gary Marcus: http://garymarcus.com/index.html

Tim Scarfe is the innovation CTO of XRAI Glass: https://xrai.glass/ Gary clip filmed at AIUK https://ai-uk.turing.ac.uk/programme/ and our appreciation to them for giving us a press pass. Check out their conference next year! WIRED clip from Gary came from here: https://www.youtube.com/watch?v=Puo3VkPkNZ4

Refs:
Statement from the listed authors of Stochastic Parrots on the “AI pause” letter (Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, Margaret Mitchell) https://www.dair-institute.org/blog/letter-statement-March2023
Eliezer Yudkowsky on Lex: https://www.youtube.com/watch?v=AaTRHFaaPG8
Pause Giant AI Experiments: An Open Letter https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Pausing AI Developments Isn't Enough. We Need to Shut it All Down (Eliezer Yudkowsky) https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Mar 23, 2023 • 57min

#110 Dr. STEPHEN WOLFRAM - HUGE ChatGPT+Wolfram announcement!

HUGE ANNOUNCEMENT, CHATGPT+WOLFRAM! You saw it HERE first!

YT version: https://youtu.be/z5WZhCBRDpU

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5

Stephen's announcement post: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/ OpenAI's announcement post: https://openai.com/blog/chatgpt-plugins

In an era of technology and innovation, few individuals have left as indelible a mark on the fabric of modern science as our esteemed guest, Dr. Stephen Wolfram. Dr. Wolfram is a renowned polymath who has made significant contributions to the fields of physics, computer science, and mathematics. A prodigy, Wolfram earned a Ph.D. in theoretical physics from the California Institute of Technology by the age of 20 and became the youngest recipient of the prestigious MacArthur Fellowship at the age of 21. Wolfram's groundbreaking computational tool, Mathematica, was launched in 1988 and has become a cornerstone for researchers and innovators worldwide. In 2002, he published "A New Kind of Science," a paradigm-shifting work that explores the foundations of science through the lens of computational systems. In 2009, Wolfram created Wolfram Alpha, a computational knowledge engine utilized by millions of users worldwide. His current focus is on the Wolfram Language, a powerful programming language designed to democratize access to cutting-edge technology. Wolfram's numerous accolades include honorary doctorates and fellowships from prestigious institutions. As an influential thinker, Dr. Wolfram has dedicated his life to unraveling the mysteries of the universe and making computation accessible to all.

First of all... we have an announcement to make, you heard it FIRST here on MLST!

TOC: Intro [00:00:00] Big announcement! Wolfram + ChatGPT! [00:02:57] What does it mean to understand? [00:05:33] Feeding information back into the model [00:13:48] Semantics and cognitive categories [00:20:09] Navigating the ruliad [00:23:50] Computational irreducibility [00:31:39] Conceivability and interestingness [00:38:43] Human intelligible sciences [00:43:43]
Mar 20, 2023 • 2h 51min

#109 - Dr. DAN MCQUILLAN - Resisting AI

YT version: https://youtu.be/P1j3VoKBxbc (references in pinned comment)

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5

Dan McQuillan, a visionary in digital culture and social innovation, emphasizes the importance of understanding technology's complex relationship with society. As an academic at Goldsmiths, University of London, he fosters interdisciplinary collaboration and champions data-driven equity and ethical technology. Dan's career includes roles at Amnesty International and Social Innovation Camp, showcasing technology's potential to empower and bring about positive change. In this conversation, we discuss the challenges and opportunities at the intersection of technology and society, exploring the profound impact of our digital world.

Interviewer: Dr. Tim Scarfe

TOC: [00:00:00] Dan's background and journey to academia [00:03:30] Dan's background and journey to academia [00:04:10] Writing the book "Resisting AI" [00:08:30] Necropolitics and its relation to AI [00:10:06] AI as a new form of colonization [00:12:57] LLMs as a new form of neo-techno-imperialism [00:15:47] Technology for good and AGI's skewed worldview [00:17:49] Transhumanism, eugenics, and intelligence [00:20:45] Valuing differences (disability) and challenging societal norms [00:26:08] Re-ontologizing and the philosophy of information [00:28:19] New materialism and the impact of technology on society [00:30:32] Intelligence, meaning, and materiality [00:31:43] The constraints of physical laws and the importance of science [00:32:44] Exploring possibilities to reduce suffering and increase well-being [00:33:29] The division between meaning and material in our experiences [00:35:36] Machine learning, data science, and neoplatonic approach to understanding reality [00:37:56] Different understandings of cognition, thought, and consciousness [00:39:15] Enactivism and its variants in cognitive science [00:40:58] Jordan Peterson [00:44:47] Relationism, relativism, and finding the correct relational framework [00:47:42] Recognizing privilege and its impact on social interactions [00:49:10] Intersectionality / Feminist thinking and the concept of care in social structures [00:51:46] Intersectionality and its role in understanding social inequalities [00:54:26] The entanglement of history, technology, and politics [00:57:39] ChatGPT article - we come to bury ChatGPT [00:59:41] Statistical pattern learning and convincing patterns in AI [01:01:27] Anthropomorphization and understanding in AI [01:03:26] AI in education and critical thinking [01:06:09] European Union policies and trustable AI [01:07:52] AI reliability and the halo effect [01:09:26] AI as a tool enmeshed in society [01:13:49] Luddites [01:15:16] AI is a scam [01:15:31] AI and Social Relations [01:16:49] Invisible Labor in AI and Machine Learning [01:21:09] Exploitative AI / alignment [01:23:50] Science fiction AI / moral frameworks [01:27:22] Discussing Stochastic Parrots and Nihilism [01:30:36] Human Intelligence vs. Language Models [01:32:22] Image Recognition and Emulation vs. Experience [01:34:32] Thought Experiments and Philosophy in AI Ethics (mimicry) [01:41:23] Abstraction, reduction, and grounding in reality [01:43:13] Process philosophy and the possibility of change [01:49:55] Mental health, AI, and epistemic injustice [01:50:30] Hermeneutic injustice and gendered techniques [01:53:57] AI and politics [01:59:24] Epistemic injustice and testimonial injustice [02:11:46] Fascism and AI discussion [02:13:24] Violence in various systems [02:16:52] Recognizing systemic violence [02:22:35] Fascism in Today's Society [02:33:33] Pace and Scale of Technological Change [02:37:38] Alternative approaches to AI and society [02:44:09] Self-Organization at Successive Scales / cybernetics
