
Machine Learning Street Talk (MLST)

Latest episodes

Feb 3, 2023 • 1h 7min

#98 - Prof. LUCIANO FLORIDI - ChatGPT, Superintelligence, Ethics, Philosophy of Information

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
YT version: https://youtu.be/YLNGvvgq3eg (If the music is annoying, skip to the main interview at 14:14)

We are living in an age of rapid technological advancement, and with this growth comes a digital divide. Professor Luciano Floridi of the Oxford Internet Institute / Oxford University believes that this divide affects not only our understanding of the implications of this new age, but also the organization of a fair society. The Information Revolution has been transforming the global economy, with the majority of global GDP now relying on intangible goods such as information-related services. This in turn has led to the generation of immense amounts of data, more than humanity has ever seen in its history. With 95% of this data generated by the current generation, Professor Floridi believes that we are becoming overwhelmed by it, and that our agency as humans is being eroded as a result.

According to Professor Floridi, the digital divide has created an imbalance between technological growth and our understanding of that growth. He believes that the infosphere is becoming polluted and that its manifold is increasingly determined by technology and AI. Identifying, anticipating and resolving these problems has become essential, and Professor Floridi has dedicated his research to the Philosophy of Information, the Philosophy of Technology and Digital Ethics. We must equip ourselves with a viable philosophy of information to better understand and address the risks of this new information age. Professor Floridi is leading the charge, and his research on Digital Ethics, the Philosophy of Information and the Philosophy of Technology is helping us to better anticipate, identify and resolve problems caused by the digital divide.

TOC:
[00:00:00] Introduction to Luciano and his ideas
[00:14:40] ChatGPT / language models
[00:29:24] AI risk / "Singularitarians"
[00:30:34] Re-ontologising the world
[00:56:35] It from bit, computationalism and philosophy without purpose
[01:03:43] Getting into Digital Ethics

References:
GPT-3: Its Nature, Scope, Limits, and Consequences [Floridi] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827044
Ultraintelligent Machines, Singularity, and Other Sci-fi Distractions about AI [Floridi] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4222347
The Philosophy of Information [Floridi] https://www.amazon.co.uk/Philosophy-Information-Luciano-Floridi/dp/0199232393
Information: A Very Short Introduction [Floridi] https://www.amazon.co.uk/Information-Very-Short-Introduction-Introductions/dp/0199551375
https://en.wikipedia.org/wiki/Luciano_Floridi
https://www.philosophyofinformation.net/
Jan 28, 2023 • 25min

#97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language

Research has shown that humans possess strong inductive biases which enable them to quickly learn and generalize. To instill these same useful human inductive biases into machines, Sreejan Kumar presented a paper at the NeurIPS conference which won an Outstanding Paper Award: "Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines". The paper uses a controlled stimulus space of two-dimensional binary grids to define the space of abstract concepts that humans have, and a feedback loop of collaboration between humans and machines to understand the differences in human and machine inductive biases.

It is important to make machines more human-like so that we can collaborate with them and understand their behavior. Synthesised discrete programs running on a Turing-machine computational model, rather than a neural network substrate, offer promise for the future of artificial intelligence. Neural networks and program induction should both be explored to get a well-rounded view of intelligence which works across multiple domains and computational substrates and which can acquire a diverse set of capabilities. Natural language understanding in models can also be improved by instilling human language biases and programs into AI models.

Sreejan used an experimental framework consisting of two dual task distributions, one generated from human priors and one from machine priors, to understand the differences in human and machine inductive biases. He also demonstrated that compressive abstractions can be used to capture the essential structure of the environment for more human-like behavior. This means that emergent language-based inductive priors can be distilled into artificial neural networks, and AI models can be aligned to us, to the world and, indeed, to our values.

Humans possess strong inductive biases which enable them to quickly learn to perform various tasks. This is in contrast to neural networks, which lack the same inductive biases and struggle to learn them empirically from observational data; as a result, they have difficulty generalizing to novel environments due to their lack of prior knowledge. Sreejan's results showed that when guided with representations from language and programs, the meta-learning agent not only improved performance on task distributions humans are adept at, but also decreased performance on control task distributions where humans perform poorly. This indicates that the abstraction supported by these representations, whether in the substrate of language or of a program, is key to developing aligned artificial agents with human-like generalization capabilities, aligned values and behaviour. A minimal illustrative sketch of this kind of grid-based setup follows the references below.

References:
Using natural language and program abstractions to instill human inductive biases in machines [Kumar et al / NeurIPS] https://openreview.net/pdf?id=buXZ7nIqiwE
Core Knowledge [Elizabeth S. Spelke / Harvard] https://www.harvardlds.org/wp-content/uploads/2017/01/SpelkeKinzler07-1.pdf
The Debate Over Understanding in AI's Large Language Models [Melanie Mitchell] https://arxiv.org/abs/2210.13966
On the Measure of Intelligence [Francois Chollet] https://arxiv.org/abs/1911.01547
ARC challenge [Chollet] https://github.com/fchollet/ARC
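Below is a minimal, illustrative Python sketch (not the authors' code) of the kind of two-dimensional binary-grid stimuli and the two task distributions described above. The grid size, the choice of rectangle-shaped concepts for the "human prior" distribution, and the use of i.i.d. random boards for the "machine prior" distribution are assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the authors' code. It mimics the setup of
# 2D binary-grid concepts drawn from two task distributions: a "human prior"
# distribution of compressible, structured boards and a "machine prior"
# control distribution without those regularities (approximated here by
# unstructured random boards).
import numpy as np

GRID = 7  # assumed grid size, for illustration only


def human_prior_board(rng):
    """Structured, compressible concept: a filled axis-aligned rectangle."""
    board = np.zeros((GRID, GRID), dtype=int)
    r0, c0 = rng.integers(0, GRID - 2, size=2)
    r1 = r0 + rng.integers(2, GRID - r0)
    c1 = c0 + rng.integers(2, GRID - c0)
    board[r0:r1, c0:c1] = 1
    return board


def machine_prior_board(rng):
    """Unstructured control concept: i.i.d. random cells, no human-like regularity."""
    return rng.integers(0, 2, size=(GRID, GRID))


rng = np.random.default_rng(0)
print(human_prior_board(rng), machine_prior_board(rng), sep="\n\n")

# A meta-learning agent would be trained to uncover boards like these from
# partial observations; per the show notes, the paper additionally guides the
# agent with representations derived from natural-language descriptions and
# program abstractions of the boards, which is what instils the human-like
# inductive bias.
```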
Dec 30, 2022 • 2h 49min

#96 Prof. PEDRO DOMINGOS - There are no infinities, utility functions, neurosymbolic

Pedro Domingos, Professor Emeritus of Computer Science and Engineering at the University of Washington, is renowned for his research in machine learning, particularly for his work on Markov logic networks that allow for uncertain inference. He is also the author of the acclaimed book "The Master Algorithm".

Panel: Dr. Tim Scarfe

TOC:
[00:00:00] Introduction
[00:01:34] Galactica / misinformation / gatekeeping
[00:12:31] Is there a master algorithm?
[00:16:29] Limits of our understanding
[00:21:57] Intentionality, agency, creativity
[00:27:56] Compositionality
[00:29:30] Digital physics / It from bit / Wolfram
[00:35:17] Alignment / utility functions
[00:43:36] Meritocracy
[00:45:53] Game theory
[01:00:00] EA / consequentialism / utility
[01:11:09] Emergence / relationalism
[01:19:26] Markov logic
[01:25:38] Moving away from anthropocentrism
[01:28:57] Neurosymbolic / infinity / tensor algebra
[01:53:45] Abstraction
[01:57:26] Symmetries / geometric DL
[02:02:46] Bias-variance trade-off
[02:05:49] What we saw at NeurIPS
[02:12:58] Chalmers' talk on LLMs
[02:28:32] Definition of intelligence
[02:32:40] LLMs
[02:35:14] On experts in different fields
[02:40:15] Back to intelligence
[02:41:37] Spline theory / extrapolation

YT version: https://www.youtube.com/watch?v=C9BH3F2c0vQ

References:
The Master Algorithm [Domingos] https://www.amazon.co.uk/s?k=master+algorithm&i=stripbooks&crid=3CJ67DCY96DE8&sprefix=master+algorith%2Cstripbooks%2C82&ref=nb_sb_noss_2
Information, Physics, Quantum: The Search for Links [John Wheeler / It from bit] https://philpapers.org/archive/WHEIPQ.pdf
A New Kind of Science [Wolfram] https://www.amazon.co.uk/New-Kind-Science-Stephen-Wolfram/dp/1579550088
The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future [Tom Chivers] https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795
The Status Game: On Social Position and How We Use It [Will Storr] https://www.goodreads.com/book/show/60598238-the-status-game
Newcomb's paradox https://en.wikipedia.org/wiki/Newcomb%27s_paradox
The Case for Strong Emergence [Sabine Hossenfelder] https://philpapers.org/rec/HOSTCF-3
Markov Logic: An Interface Layer for Artificial Intelligence [Domingos] https://www.morganclaypool.com/doi/abs/10.2200/S00206ED1V01Y200907AIM007
Note: Pedro discussed "Tensor Logic" - I was not able to find a reference.
Neural Networks and the Chomsky Hierarchy [Grégoire Delétang / DeepMind] https://arxiv.org/abs/2207.02098
Connectionism and Cognitive Architecture: A Critical Analysis [Jerry A. Fodor and Zenon W. Pylyshyn] https://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/proseminars/Proseminar13/ConnectionistArchitecture.pdf
Every Model Learned by Gradient Descent Is Approximately a Kernel Machine [Pedro Domingos] https://arxiv.org/abs/2012.00152
A Path Towards Autonomous Machine Intelligence, Version 0.9.2, 2022-06-27 [LeCun] https://openreview.net/pdf?id=BZ5a1r-kVsf
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges [Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković] https://arxiv.org/abs/2104.13478
The Algebraic Mind: Integrating Connectionism and Cognitive Science [Gary Marcus] https://www.amazon.co.uk/Algebraic-Mind-Integrating-Connectionism-D
Dec 26, 2022 • 39min

#95 - Prof. IRINA RISH - AGI, Complex Systems, Transhumanism

Irina Rish holds the Canada Excellence Research Chair in Autonomous AI. She earned an MSc and PhD in AI from the University of California, Irvine, as well as an MSc in Applied Mathematics from the Moscow Gubkin Institute. Her research focuses on machine learning, neural data analysis, and neuroscience-inspired AI. In particular, she is exploring continual lifelong learning, optimization algorithms for deep neural networks, sparse modelling and probabilistic inference, dialog generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis. Prof. Rish holds 64 patents and has published over 80 research papers, several book chapters, three edited books, and a monograph on Sparse Modelling. She has served as a Senior Area Chair for NeurIPS and ICML. Irina's research is focused on taking us closer to the holy grail of Artificial General Intelligence. She continues to push the boundaries of machine learning, continually striving to make advancements in neuroscience-inspired AI.

In a conversation about artificial intelligence (AI), Irina and Tim discussed the idea of transhumanism and the potential for AI to improve human flourishing. Irina suggested that instead of looking at AI as something to be controlled and regulated, people should view it as a tool to augment human capabilities. She argued that attempting to create an AI that is smarter than humans is not the best approach, and that a hybrid of human and AI intelligence is much more beneficial. As an example, she mentioned how technology can be used as an extension of the human mind, to track mental states and improve self-understanding. Ultimately, Irina concluded that transhumanism is about having a symbiotic relationship with technology, which can have a positive effect on both parties.

Tim then discussed the contrasting types of intelligence and how this could lead to something interesting emerging from the combination. He brought up the Trolley Problem and how difficult moral quandaries could be programmed into an AI. Irina then referenced The Garden of Forking Paths, a story which explores how different paths in life can be taken and how decisions from the past can affect the present. To better understand AI and intelligence, Irina suggested looking at it from multiple perspectives and recognising the importance of complex systems science in programming and understanding dynamical systems. She discussed the work of Michael Levin, who is looking into reprogramming biological computers with chemical interventions, and Tim mentioned Alexander Mordvintsev, who is looking into the self-healing and repair of these systems. Ultimately, Irina argued that the key to understanding AI and intelligence is to recognize the complexity of the systems and to create hybrid models of human and AI intelligence.
Find Irina:
https://mila.quebec/en/person/irina-rish/
https://twitter.com/irinarish

YT version: https://youtu.be/8-ilcF0R7mI
MLST Discord: https://discord.gg/aNPkGUQtc5

References:
The Garden of Forking Paths [Jorge Luis Borges] https://www.amazon.co.uk/Garden-Forking-Paths-Penguin-Modern/dp/0241339057
The Brain from Inside Out [György Buzsáki] https://www.amazon.co.uk/Brain-Inside-Out-Gy%C3%B6rgy-Buzs%C3%A1ki/dp/0190905387
Growing Isotropic Neural Cellular Automata [Alexander Mordvintsev] https://arxiv.org/abs/2205.01681
The Extended Mind [Andy Clark and David Chalmers] https://www.jstor.org/stable/3328150
The Gentle Seduction [Marc Stiegler] https://www.amazon.co.uk/Gentle-Seduction-Marc-Stiegler/dp/0671698877
Dec 26, 2022 • 14min

#94 - ALAN CHAN - AI Alignment and Governance #NEURIPS

Support us! https://www.patreon.com/mlst

Alan Chan is a PhD student at Mila, the Montreal Institute for Learning Algorithms, supervised by Nicolas Le Roux. Before joining Mila, Alan was a Master's student at the Alberta Machine Intelligence Institute and the University of Alberta, where he worked with Martha White. Alan's expertise and research interests encompass value alignment and AI governance. He is currently exploring the measurement of harms from language models and the incentives that agents have to impact the world. Alan's research focuses on understanding and controlling the values expressed by machine learning models. His projects have examined the regulation of explainability in algorithmic systems, scoring rules for performative binary prediction, the effects of global exclusion in AI development, and the role of a graduate student in approaching ethical impacts in AI research. In addition, Alan has conducted research into inverse policy evaluation for value-based sequential decision-making, and the concept of "normal accidents" in AI systems. Alan's research is motivated by the need to align AI systems with human values, and by his passion for scientific and governance work in this field. Alan's energy and enthusiasm for his field is infectious.

This was a discussion at NeurIPS. It was recorded in quite a loud environment, so the audio quality could have been better.

References:
The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future [Tom Chivers] https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795
The impossibility of intelligence explosion [Chollet] https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
Superintelligence: Paths, Dangers, Strategies [Bostrom] https://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
A Theory of Universal Artificial Intelligence based on Algorithmic Complexity [Hutter] https://arxiv.org/abs/cs/0004001

YT version: https://youtu.be/XBMnOsv9_pk
MLST Discord: https://discord.gg/aNPkGUQtc5
Dec 24, 2022 • 1h 20min

#93 Prof. MURRAY SHANAHAN - Consciousness, Embodiment, Language Models

Support us! https://www.patreon.com/mlst

Professor Murray Shanahan is a renowned researcher on sophisticated cognition and its implications for artificial intelligence. His 2016 article 'Conscious Exotica' explores the Space of Possible Minds, a concept first proposed by the philosopher Aaron Sloman in 1984, which includes all the different forms of minds, from those of other animals to those of artificial intelligence. Shanahan rejects the idea of an impenetrable realm of subjective experience and argues that the majority of the space of possible minds may be occupied by non-natural variants, such as the 'conscious exotica' of which he speaks.

In his paper 'Talking About Large Language Models', Shanahan discusses the capabilities and limitations of large language models (LLMs). He argues that prompt engineering is a key element of advanced AI systems, as it involves exploiting prompt prefixes to adapt LLMs to various tasks. However, Shanahan cautions against ascribing human-like characteristics to these systems, as they are fundamentally different and lack a shared comprehension with humans. Even though LLMs can be integrated into embodied systems, that does not mean they possess human-like language abilities. Ultimately, Shanahan concludes that although LLMs are formidable and versatile, we must be wary of over-simplifying their capacities and limitations. A toy illustration of the prompt-prefix idea follows the notes below.

YT version: https://youtu.be/BqkWpP3uMMU
Full references in the YT description.

TOC:
[00:00:00] Introduction
[00:08:51] Consciousness and Conscious Exotica
[00:34:59] Slightly Conscious LLMs
[00:38:05] Embodiment
[00:51:32] Symbol Grounding
[00:54:13] Emergence
[00:57:09] Reasoning
[01:03:16] Intentional Stance
[01:07:06] Digression on the Chomsky show and Andrew Lampinen
[01:10:31] Prompt Engineering

Find Murray online:
https://www.doc.ic.ac.uk/~mpsha/
https://twitter.com/mpshanahan?lang=en
https://scholar.google.co.uk/citations?user=00bnGpAAAAAJ&hl=en

MLST Discord: https://discord.gg/aNPkGUQtc5
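As a toy illustration of the prompt-prefix idea mentioned above (our own sketch, not Shanahan's), the snippet below shows how the same frozen language model can be pointed at different tasks simply by changing the text placed before the user's input. The prefixes and the `build_prompt` helper are hypothetical.

```python
# Minimal illustration of "prompt prefix" engineering: the task is selected
# purely by the in-context examples that precede the new query.
translation_prefix = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese => fromage\n"
)
sentiment_prefix = (
    "Label the sentiment as positive or negative.\n"
    '"I loved it" => positive\n'
    '"Terrible service" => negative\n'
)


def build_prompt(prefix: str, user_input: str) -> str:
    """Concatenate a task-defining prefix with the new query (hypothetical helper)."""
    return prefix + user_input + " =>"


print(build_prompt(translation_prefix, "plush giraffe"))
print(build_prompt(sentiment_prefix, '"The plot dragged on"'))
# Each prompt would then be sent to the same underlying language model;
# only the prefix changes which task it performs.
```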
Dec 23, 2022 • 52min

#92 - SARA HOOKER - Fairness, Interpretability, Language Models

Support us! https://www.patreon.com/mlst

Sara Hooker is an exceptionally talented and accomplished leader and research scientist in the field of machine learning. She is the founder of Cohere For AI, a non-profit research lab that seeks to solve complex machine learning problems. She is passionate about creating more points of entry into machine learning research and has dedicated her efforts to understanding how progress in this field can be translated into reliable and accessible machine learning in the real world. Sara is also the co-founder of the Trustworthy ML Initiative, a forum and seminar series related to Trustworthy ML. She is on the advisory board of Patterns and is an active member of the MLC research group, which focuses on making participation in machine learning research more accessible. Before starting Cohere For AI, Sara worked as a research scientist at Google Brain. She has written several influential research papers, including "The Hardware Lottery", "The Low-Resource Double Bind: An Empirical Study of Pruning for Low-Resource Machine Translation", "Moving Beyond 'Algorithmic Bias is a Data Problem'" and "Characterizing and Mitigating Bias in Compact Models".

In addition to her research work, Sara is also the founder of the local Bay Area non-profit Delta Analytics, which works with non-profits and communities all over the world to build technical capacity and empower others to use data. She regularly gives tutorials on machine learning fundamentals, interpretability, model compression and deep neural networks, and is dedicated to collaborating with independent researchers around the world.

Sara Hooker is best known for the paper that introduced the concept of the 'hardware lottery', in which the success of a research idea is determined not by its inherent superiority, but by its compatibility with available software and hardware. She argued that choices about software and hardware have had a substantial impact in deciding the outcomes of early computer science history, and that with the increasing heterogeneity of the hardware landscape, gains from advances in computing may become increasingly disparate. Sara proposed that an interim goal should be to create better feedback mechanisms for researchers to understand how their algorithms interact with the hardware they use. She suggested that domain-specific languages, auto-tuning of algorithmic parameters, and better profiling tools may help to alleviate this issue, as well as give researchers more informed opinions about how hardware and software should progress. Ultimately, Sara encouraged researchers to be mindful of the implications of the hardware lottery, as it could mean that progress on some research directions is further obstructed. If you want to learn more about that paper, watch our previous interview with Sara.

YT version: https://youtu.be/7oJui4eSCoY
MLST Discord: https://discord.gg/aNPkGUQtc5

TOC:
[00:00:00] Intro
[00:02:53] Interpretability / Fairness
[00:35:29] LLMs

Find Sara:
https://www.sarahooker.me/
https://twitter.com/sarahookr
Dec 20, 2022 • 21min

#91 - HATTIE ZHOU - Teaching Algorithmic Reasoning via In-context Learning #NeurIPS

Support us! https://www.patreon.com/mlst

Hattie Zhou, a PhD student at Université de Montréal and Mila, has set out to understand and explain the performance of modern neural networks, believing this to be a key factor in building better, more trusted models. Having previously worked as a data scientist at Uber, a private equity analyst at Radar Capital, and an economic consultant at Cornerstone Research, she has recently released a paper in collaboration with the Google Brain team, titled 'Teaching Algorithmic Reasoning via In-context Learning'. In this work, Hattie identifies and examines four key stages for successfully teaching algorithmic reasoning to large language models (LLMs): formulating algorithms as skills, teaching multiple skills simultaneously, teaching how to combine skills, and teaching how to use skills as tools. Through the application of algorithmic prompting, Hattie has achieved remarkable results, with an order-of-magnitude error reduction on some tasks compared to the best available baselines. This breakthrough demonstrates the viability of algorithmic prompting as an approach for teaching algorithmic reasoning to LLMs, and may have implications for other tasks requiring similar reasoning capabilities. An illustrative sketch of what an algorithmic prompt looks like follows the references below.

TOC:
[00:00:00] Hattie Zhou
[00:19:49] Markus Rabe [Google Brain]

Hattie's Twitter - https://twitter.com/oh_that_hat
Website - http://hattiezhou.com/

Teaching Algorithmic Reasoning via In-context Learning [Hattie Zhou, Azade Nova, Hugo Larochelle, Aaron Courville, Behnam Neyshabur, and Hanie Sedghi] https://arxiv.org/pdf/2211.09066.pdf

Markus Rabe [Google Brain]:
https://twitter.com/markusnrabe
https://research.google/people/106335/
https://www.linkedin.com/in/markusnrabe

Autoformalization with Large Language Models [Albert Jiang, Charles Edgar Staats, Christian Szegedy, Markus Rabe, Mateja Jamnik, Wenda Li, Yuhuai Tony Wu] https://research.google/pubs/pub51691/

Discord: https://discord.gg/aNPkGUQtc5
YT: https://youtu.be/80i6D2TJdQ4
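To make the idea concrete, here is an illustrative sketch of an algorithmic prompt in the spirit of the paper (not the authors' exact prompt). The long-addition example, its wording, and the choice of numbers are assumptions for illustration only.

```python
# Illustrative sketch of an "algorithmic prompt" -- not the authors' exact prompt.
# Instead of showing only input/output pairs, the in-context example spells out
# every intermediate step of the algorithm (long addition with explicit carries),
# which the model is then asked to imitate on a new problem.
algorithmic_prompt = """Problem: 128 + 367
Explanation:
  Add the ones digits: 8 + 7 = 15. Write 5, carry 1.
  Add the tens digits plus the carry: 2 + 6 + 1 = 9. Write 9, carry 0.
  Add the hundreds digits plus the carry: 1 + 3 + 0 = 4. Write 4.
Answer: 495

Problem: 254 + 189
Explanation:
"""

# In practice this string would be sent to an LLM completion endpoint; the model
# is expected to continue with the worked steps and finish with "Answer: 443".
print(algorithmic_prompt)
```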
Dec 19, 2022 • 54min

(Music Removed) #90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]

Support us! https://www.patreon.com/mlst

(On the main version we released, the music was a tiny bit too loud in places, and some pieces had percussion which was a bit distracting -- here is a version with all the music removed, so you have the option!)

David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well as the PhilPapers Foundation. His research focuses on the philosophy of mind, especially consciousness, and its connection to fields such as cognitive science, physics, and technology. He also investigates areas such as the philosophy of language, metaphysics, and epistemology. With his impressive breadth of knowledge and experience, David Chalmers is a leader in the philosophical community.

The central challenge for consciousness studies is to explain how something immaterial, subjective, and personal can arise out of something material, objective, and impersonal. This is illustrated by the example of a bat, whose sensory experience is much different from ours, making it difficult to imagine what it is like to be one. Thomas Nagel's "inconceivability argument" has its advantages and disadvantages, but ultimately it is impossible to solve the mind-body problem due to the subjective nature of experience. This is further explored by examining the concept of philosophical zombies, which are physically and behaviorally indistinguishable from conscious humans yet lack conscious experience. This has implications for the Hard Problem of Consciousness, which is the attempt to explain how mental states are linked to neurophysiological activity. The Chinese Room Argument is used as a thought experiment to explain why physicality may be insufficient to be the source of the subjective, coherent experience we call consciousness. Despite much debate, the Hard Problem of Consciousness remains unsolved. Chalmers has been working on a functional approach to deciding whether large language models are, or could be, conscious.

Filmed at #neurips22

Discord: https://discord.gg/aNPkGUQtc5
Pod: https://anchor.fm/machinelearningstreettalk/episodes/90---Prof--DAVID-CHALMERS---Slightly-Conscious-LLMs-e1sej50

TOC:
[00:00:00] Introduction
[00:00:40] LLMs consciousness pitch
[00:06:33] Philosophical Zombies
[00:09:26] The hard problem of consciousness
[00:11:40] Nagel's bat and intelligibility
[00:21:04] LLM intro clip from NeurIPS
[00:22:55] Connor Leahy on self-awareness in LLMs
[00:23:30] Sneak peek from an unreleased show - could consciousness be a submodule?
[00:33:44] SeppH
[00:36:15] Tim interviews David at NeurIPS (functionalism / panpsychism / Searle)
[00:45:20] Peter Hase interviews Chalmers (focus on interpretability/safety)

Panel: Dr. Tim Scarfe, Dr. Keith Duggar

Contact David:
https://mobile.twitter.com/davidchalmers42
https://consc.net/

References:
Could a Large Language Model Be Conscious? [Chalmers NeurIPS22 talk] https://nips.cc/media/neurips-2022/Slides/55867.pdf
What Is It Like to Be a Bat? [Nagel] https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf
Zombies https://plato.stanford.edu/entries/zombies/
Zombies on the web [Chalmers] https://consc.net/zombies-on-the-web/
The hard problem of consciousness [Chalmers] https://psycnet.apa.org/record/2007-00485-017
David Chalmers, "Are Large Language Models Sentient?" [NYU talk, same as at NeurIPS] https://www.youtube.com/watch?v=-BcuCmf00_Y
Dec 19, 2022 • 54min

#90 - Prof. DAVID CHALMERS - Consciousness in LLMs [Special Edition]

Support us! https://www.patreon.com/mlst

David Chalmers is a professor of philosophy and neural science at New York University, and an honorary professor of philosophy at the Australian National University. He is the co-director of the Center for Mind, Brain, and Consciousness, as well as the PhilPapers Foundation. His research focuses on the philosophy of mind, especially consciousness, and its connection to fields such as cognitive science, physics, and technology. He also investigates areas such as the philosophy of language, metaphysics, and epistemology. With his impressive breadth of knowledge and experience, David Chalmers is a leader in the philosophical community.

The central challenge for consciousness studies is to explain how something immaterial, subjective, and personal can arise out of something material, objective, and impersonal. This is illustrated by the example of a bat, whose sensory experience is much different from ours, making it difficult to imagine what it is like to be one. Thomas Nagel's "inconceivability argument" has its advantages and disadvantages, but ultimately it is impossible to solve the mind-body problem due to the subjective nature of experience. This is further explored by examining the concept of philosophical zombies, which are physically and behaviorally indistinguishable from conscious humans yet lack conscious experience. This has implications for the Hard Problem of Consciousness, which is the attempt to explain how mental states are linked to neurophysiological activity. The Chinese Room Argument is used as a thought experiment to explain why physicality may be insufficient to be the source of the subjective, coherent experience we call consciousness. Despite much debate, the Hard Problem of Consciousness remains unsolved. Chalmers has been working on a functional approach to deciding whether large language models are, or could be, conscious.

Filmed at #neurips22

Discord: https://discord.gg/aNPkGUQtc5
YT: https://youtu.be/T7aIxncLuWk

TOC:
[00:00:00] Introduction
[00:00:40] LLMs consciousness pitch
[00:06:33] Philosophical Zombies
[00:09:26] The hard problem of consciousness
[00:11:40] Nagel's bat and intelligibility
[00:21:04] LLM intro clip from NeurIPS
[00:22:55] Connor Leahy on self-awareness in LLMs
[00:23:30] Sneak peek from an unreleased show - could consciousness be a submodule?
[00:33:44] SeppH
[00:36:15] Tim interviews David at NeurIPS (functionalism / panpsychism / Searle)
[00:45:20] Peter Hase interviews Chalmers (focus on interpretability/safety)

Panel: Dr. Tim Scarfe, Dr. Keith Duggar

Contact David:
https://mobile.twitter.com/davidchalmers42
https://consc.net/

References:
Could a Large Language Model Be Conscious? [Chalmers NeurIPS22 talk] https://nips.cc/media/neurips-2022/Slides/55867.pdf
What Is It Like to Be a Bat? [Nagel] https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf
Zombies https://plato.stanford.edu/entries/zombies/
Zombies on the web [Chalmers] https://consc.net/zombies-on-the-web/
The hard problem of consciousness [Chalmers] https://psycnet.apa.org/record/2007-00485-017
David Chalmers, "Are Large Language Models Sentient?" [NYU talk, same as at NeurIPS] https://www.youtube.com/watch?v=-BcuCmf00_Y
