
Machine Learning Street Talk (MLST)

Latest episodes

Mar 16, 2023 • 2h 10min

#108 - Dr. JOEL LEHMAN - Machine Love [Staff Favourite]

Support us! https://www.patreon.com/mlst   MLST Discord: https://discord.gg/aNPkGUQtc5 We are honoured to welcome Dr. Joel Lehman, an eminent machine learning research scientist, whose work in AI safety, reinforcement learning, creative open-ended search algorithms, and indeed the philosophy of open-endedness and abandoning objectives has paved the way for innovative ideas that challenge our preconceptions and inspire new visions for the future. Dr. Lehman's thought-provoking book, "Why Greatness Cannot Be Planned", penned with our MLST favourite Professor Kenneth Stanley, has left an indelible mark on the field and profoundly impacted the way we view innovation and the serendipitous nature of discovery. Those of you who haven't watched our special edition show on that should do so at your earliest convenience! Building upon this foundation, Dr. Lehman has ventured into the domain of AI systems that embody principles of love, care, responsibility, respect, and knowledge, drawing from the works of Maslow, Erich Fromm, and positive psychology. YT version: https://youtu.be/23-TXgJEv-Q http://joellehman.com/ https://twitter.com/joelbot3000 Interviewer: Dr. Tim Scarfe TOC: Intro [00:00:00] Model [00:04:26] Intro and Paper Intro [00:08:52] Subjectivity [00:16:07] Reflections on Greatness Book [00:19:30] Representing Subjectivity [00:29:24] Nagel's Bat [00:31:49] Abstraction [00:38:58] Love as Action Rather Than Feeling [00:42:58] Reontologisation [00:57:38] Self Help [01:04:15] Meditation [01:09:02] The Human Reward Function / Effective... [01:16:52] Machine Hate [01:28:32] Societal Harms [01:31:41] Lenses We Use Obscuring Reality [01:56:36] Meta Optimisation and Evolution [02:03:14] Conclusion [02:07:06] References: What Is It Like to Be a Bat? (Thomas Nagel) https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf Why Greatness Cannot Be Planned: The Myth of the Objective (Kenneth O. Stanley and Joel Lehman) https://link.springer.com/book/10.1007/978-3-319-15524-1 Machine Love (Joel Lehman) https://arxiv.org/abs/2302.09248 How effective altruists ignored risk (Carla Cremer) https://www.vox.com/future-perfect/23569519/effective-altrusim-sam-bankman-fried-will-macaskill-ea-risk-decentralization-philanthropy Philosophy Tube - The Rich Have Their Own Ethics: Effective Altruism https://www.youtube.com/watch?v=Lm0vHQYKI-Y Abandoning Objectives: Evolution through the Search for Novelty Alone (Joel Lehman and Kenneth O. Stanley) https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/lehman_ecj11.pdf
Mar 13, 2023 • 1h 44min

#107 - Dr. RAPHAËL MILLIÈRE - Linguistics, Theory of Mind, Grounding

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Dr. Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. His research draws from his expertise in philosophy and cognitive science to explore the implications of recent progress in deep learning for models of human cognition, as well as various issues in ethics and aesthetics. He is also investigating what underlies the capacity to represent oneself as oneself at a fundamental level, in humans and non-human animals; as well as the role that self-representation plays in perception, action, and memory. In a world where technology is rapidly advancing, Dr. Millière is striving to gain a better understanding of how artificial neural networks work, and to establish fair and meaningful comparisons between humans and machines in various domains in order to shed light on the implications of artificial intelligence for our lives. 
https://www.raphaelmilliere.com/ https://twitter.com/raphaelmilliere Here is a version with hesitation sounds like "um" removed if you prefer (I didn't notice them personally): https://share.descript.com/view/aGelyTl2xpN YT: https://www.youtube.com/watch?v=fhn6ZtD6XeE TOC: Intro to Raphael [00:00:00] Intro: Moving Beyond Mimicry in Artificial Intelligence (Raphael Millière) [00:01:18] Show Kick off [00:07:10] LLMs [00:08:37] Semantic Competence/Understanding [00:18:28] Forming Analogies/JPG Compression Article [00:30:17] Compositional Generalisation [00:37:28] Systematicity [00:47:08] Language of Thought [00:51:28] Bigbench (Conceptual Combinations) [00:57:37] Symbol Grounding [01:11:13] World Models [01:26:43] Theory of Mind [01:30:57] Refs (this is truncated, full list on YT video description): Moving Beyond Mimicry in Artificial Intelligence (Raphael Millière) https://nautil.us/moving-beyond-mimicry-in-artificial-intelligence-238504/ On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 
🦜 (Bender et al) https://dl.acm.org/doi/10.1145/3442188.3445922 ChatGPT Is a Blurry JPEG of the Web (Ted Chiang) https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web The Debate Over Understanding in AI's Large Language Models (Melanie Mitchell) https://arxiv.org/abs/2210.13966 Talking About Large Language Models (Murray Shanahan) https://arxiv.org/abs/2212.03551 Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender) https://aclanthology.org/2020.acl-main.463/ The symbol grounding problem (Stevan Harnad) https://arxiv.org/html/cs/9906002 Why the Abstraction and Reasoning Corpus is interesting and important for AI (Mitchell) https://aiguide.substack.com/p/why-the-abstraction-and-reasoning Linguistic relativity (Sapir–Whorf hypothesis) https://en.wikipedia.org/wiki/Linguistic_relativity Cooperative principle (Grice's four maxims of conversation - quantity, quality, relation, and manner) https://en.wikipedia.org/wiki/Cooperative_principle
Mar 11, 2023 • 2h 59min

#106 - Prof. KARL FRISTON 3.0 - Collective Intelligence [Special Edition]

This show is sponsored by Numerai, please visit them here with our sponsor link (we would really appreciate it) http://numer.ai/mlst  Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.  To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it. YT version: https://www.youtube.com/watch?v=V_VXOdf1NMw Support us! https://www.patreon.com/mlst  MLST Discord: https://discord.gg/aNPkGUQtc5 TOC:  Intro [00:00:00] Numerai (Sponsor segment) [00:07:10] Designing Ecosystems of Intelligence from First Principles (Friston et al) [00:09:48] Information / Infosphere and human agency [00:18:30] Intelligence [00:31:38] Reductionism [00:39:36] Universalism [00:44:46] Emergence [00:54:23] Markov blankets [01:02:11] Whole part relationships / structure learning [01:22:33] Enactivism [01:29:23] Knowledge and Language [01:43:53] ChatGPT [01:50:56] Ethics (is-ought) [02:07:55] Can people be evil? 
[02:35:06] Ethics in AI, subjectiveness [02:39:05] Final thoughts [02:57:00] References: Designing Ecosystems of Intelligence from First Principles (Friston et al) https://arxiv.org/abs/2212.01354 GLOM - How to represent part-whole hierarchies in a neural network (Hinton) https://arxiv.org/pdf/2102.12627.pdf Seven Brief Lessons on Physics (Carlo Rovelli) https://www.amazon.co.uk/Seven-Brief-Lessons-Physics-Rovelli/dp/0141981725 How Emotions Are Made: The Secret Life of the Brain (Lisa Feldman Barrett) https://www.amazon.co.uk/How-Emotions-Are-Made-Secret/dp/B01N3D4OON Am I Self-Conscious? (Or Does Self-Organization Entail Self-Consciousness?) (Karl Friston) https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00579/full Integrated information theory (Giulio Tononi) https://en.wikipedia.org/wiki/Integrated_information_theory
Mar 4, 2023 • 1h 21min

#105 - Dr. MICHAEL OLIVER [CSO - Numerai]

Access Numerai here: http://numer.ai/mlst Michael Oliver is the Chief Scientist at Numerai, a hedge fund that crowdsources machine learning models from data scientists. He has a PhD in Computational Neuroscience from UC Berkeley and was a postdoctoral researcher at the Allen Institute for Brain Science before joining Numerai in 2020. He is also the host of Numerai Quant Club, a YouTube series where he discusses Numerai’s research, data and challenges. YT version: https://youtu.be/61s8lLU7sFg TOC: [00:00:00] Introduction to Michael and Numerai [00:02:03] Understanding / new Bing [00:22:47] Quant vs Neuroscience [00:36:43] Role of language in cognition and planning, and subjective...  [00:45:47] Boundaries in finance modelling [00:48:00] Numerai [00:57:37] Aggregation systems [01:00:52] Getting started on Numerai [01:03:21] What models are people using [01:04:23] Numerai Problem Setup [01:05:49] Regimes in financial data and quant talk [01:11:18] Esoteric approaches used on Numerai? [01:13:59]  Curse of dimensionality [01:16:32] Metrics [01:19:10] Outro References: Growing Neural Cellular Automata (Alexander Mordvintsev) https://distill.pub/2020/growing-ca/ A Thousand Brains: A New Theory of Intelligence (Jeff Hawkins) https://www.amazon.fr/Thousand-Brains-New-Theory-Intelligence/dp/1541675819 Perceptual Neuroscience: The Cerebral Cortex (Vernon B. Mountcastle) https://www.amazon.ca/Perceptual-Neuroscience-Cerebral-Vernon-Mountcastle/dp/0674661885 Numerai Quant Club with Michael Oliver https://www.youtube.com/watch?v=eLIxarbDXuQ&list=PLz3D6SeXhT3tTu8rhZmjwDZpkKi-UPO1F Numerai YT channel https://www.youtube.com/@Numerai/featured Support us! https://www.patreon.com/mlst  MLST Discord: https://discord.gg/aNPkGUQtc5
Feb 22, 2023 • 1h 29min

#104 - Prof. CHRIS SUMMERFIELD - Natural General Intelligence [SPECIAL EDITION]

Support us! https://www.patreon.com/mlst   MLST Discord: https://discord.gg/aNPkGUQtc5 Christopher Summerfield is a Professor of Cognitive Neuroscience in the Department of Experimental Psychology at the University of Oxford and a Research Scientist at DeepMind UK. His work focuses on the neural and computational mechanisms by which humans make decisions. Chris has just released an incredible new book on AI called "Natural General Intelligence". It's my favourite book on AI I have read so far.  The book explores the algorithms and architectures that are driving progress in AI research, and discusses intelligence in the language of psychology and biology, using examples and analogies to be comprehensible to a wide audience. It also tackles longstanding theoretical questions about the nature of thought and knowledge. With Chris' permission, I read out a summarised version of Chapter 2 of his book, which is on Intelligence, during the 30-minute MLST introduction.   Buy his book here: https://global.oup.com/academic/product/natural-general-intelligence-9780192843883?cc=gb&lang=en& YT version: https://youtu.be/31VRbxAl3t0 Interviewer: Dr. Tim Scarfe TOC: [00:00:00] Walk and talk with Chris on Knowledge and Abstractions [00:04:08] Intro to Chris and his book [00:05:55] (Intro) Tim reads Chapter 2: Intelligence  [00:09:28] Intro continued: Goodhart's law [00:15:37] Intro continued: The "swiss cheese" situation   [00:20:23] Intro continued: On Human Knowledge [00:23:37] Intro continued: Neats and Scruffies [00:30:22] Interview kick off  [00:31:59] What does it mean to understand? [00:36:18] Aligning our language models [00:40:17] Creativity  [00:41:40] "Meta" AI and basins of attraction  [00:51:23] What can Neuroscience impart to AI [00:54:43] Sutton, neats and scruffies and human alignment [01:02:05] Reward is enough [01:19:46] Jon Von Neumann and Intelligence [01:23:56] Compositionality References: The Language Game (Morten H. Christiansen, Nick Chater) https://www.penguin.co.uk/books/441689/the-language-game-by-morten-h-christiansen-and--nick-chater/9781787633483 Theory of general factor (Spearman) https://www.proquest.com/openview/7c2c7dd23910c89e1fc401e8bb37c3d0/1?pq-origsite=gscholar&cbl=1818401 Intelligence Reframed (Howard Gardner) https://books.google.co.uk/books?hl=en&lr=&id=Qkw4DgAAQBAJ&oi=fnd&pg=PT6&dq=howard+gardner+multiple+intelligences&ots=ERUU0u5Usq&sig=XqiDgNUIkb3K9XBq0vNbFmXWKFs#v=onepage&q=howard%20gardner%20multiple%20intelligences&f=false The master algorithm (Pedro Domingos) https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543 A Thousand Brains: A New Theory of Intelligence (Jeff Hawkins) https://www.amazon.co.uk/Thousand-Brains-New-Theory-Intelligence/dp/1541675819 The bitter lesson (Rich Sutton) http://www.incompleteideas.net/IncIdeas/BitterLesson.html
Feb 11, 2023 • 1h 2min

#103 - Prof. Edward Grefenstette - Language, Semantics, Philosophy

Support us! https://www.patreon.com/mlst  MLST Discord: https://discord.gg/aNPkGUQtc5 YT: https://youtu.be/i9VPPmQn9HQ Edward Grefenstette is a Franco-American computer scientist who currently serves as Head of Machine Learning at Cohere and Honorary Professor at UCL. He has previously been a research scientist at Facebook AI Research and a staff research scientist at DeepMind, and was also the CTO of Dark Blue Labs. Prior to his move to industry, Edward was a Fulford Junior Research Fellow at Somerville College, University of Oxford, and was lecturing at Hertford College. He obtained his BSc in Physics and Philosophy from the University of Sheffield and did graduate work in the philosophy department at the University of St Andrews. His research draws on topics and methods from Machine Learning, Computational Linguistics and Quantum Information Theory, and he has done work implementing and evaluating compositional vector-based models of natural language semantics and empirical semantic knowledge discovery.
https://www.egrefen.com/ https://cohere.ai/ TOC: [00:00:00] Introduction [00:02:52] Differential Semantics [00:06:56] Concepts [00:10:20] Ontology [00:14:02] Pragmatics [00:16:55] Code helps with language [00:19:02] Montague [00:22:13] RLHF [00:31:54] Swiss cheese problem / retrieval augmented [00:37:06] Intelligence / Agency [00:43:33] Creativity [00:46:41] Common sense [00:53:46] Thinking vs knowing References: Large language models are not zero-shot communicators (Laura Ruis) https://arxiv.org/abs/2210.14986 Some remarks on Large Language Models (Yoav Goldberg) https://gist.github.com/yoavg/59d174608e92e845c8994ac2e234c8a9 Quantum Natural Language Processing (Bob Coecke) https://www.cs.ox.ac.uk/people/bob.coecke/QNLP-ACT.pdf Constitutional AI: Harmlessness from AI Feedback https://www.anthropic.com/constitutional.pdf Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Patrick Lewis) https://www.patricklewis.io/publication/rag/ Natural General Intelligence (Prof. Christopher Summerfield) https://global.oup.com/academic/product/natural-general-intelligence-9780192843883 ChatGPT with Rob Miles - Computerphile https://www.youtube.com/watch?v=viJt_DXTfwA
Feb 11, 2023 • 55min

#102 - Prof. MICHAEL LEVIN, Prof. IRINA RISH - Emergence, Intelligence, Transhumanism

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 YT: https://youtu.be/Vbi288CKgis Michael Levin is a Distinguished Professor in the Biology department at Tufts University, and the holder of the Vannevar Bush endowed Chair. He is the Director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. His research focuses on understanding the biophysical mechanisms of pattern regulation and harnessing endogenous bioelectric dynamics for rational control of growth and form. The capacity to generate a complex, behaving organism from the single cell of a fertilized egg is one of the most amazing aspects of biology. Levin's lab integrates approaches from developmental biology, computer science, and cognitive science to investigate the emergence of form and function. Using biophysical and computational modeling approaches, they seek to understand the collective intelligence of cells, as they navigate physiological, transcriptional, morphogenetic, and behavioral spaces. They develop conceptual frameworks for basal cognition and diverse intelligence, including synthetic organisms and AI. Also joining us this evening is Irina Rish. Irina is a Full Professor at the Université de Montréal's Computer Science and Operations Research department, a core member of Mila - Quebec AI Institute, as well as the holder of the Canada CIFAR AI Chair and the Canadian Excellence Research Chair in Autonomous AI. She has a PhD in AI from UC Irvine. Her research focuses on machine learning, neural data analysis, neuroscience-inspired AI, continual lifelong learning, optimization algorithms, sparse modelling, probabilistic inference, dialog generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis.  Interviewer: Dr. Tim Scarfe TOC: [00:00:00] Introduction [00:02:09] Emergence [00:13:16] Scaling Laws [00:23:12] Intelligence [00:44:36] Transhumanism Prof. Michael Levin https://en.wikipedia.org/wiki/Michael_Levin_(biologist) https://www.drmichaellevin.org/ https://twitter.com/drmichaellevin Prof. Irina Rish https://twitter.com/irinarish https://irina-rish.com/
Feb 10, 2023 • 26min

#100 - Dr. PATRICK LEWIS (co:here) - Retrieval Augmented Generation

Dr. Patrick Lewis is a London-based AI and Natural Language Processing Research Scientist, working at co:here. Prior to this, Patrick worked as a research scientist at the Fundamental AI Research Lab (FAIR) at Meta AI. During his PhD, Patrick split his time between FAIR and University College London, working with Sebastian Riedel and Pontus Stenetorp.  Patrick’s research focuses on the intersection of information retrieval techniques (IR) and large language models (LLMs). He has done extensive work on Retrieval-Augmented Language Models. His current focus is on building more powerful, efficient, robust, and update-able models that can perform well on a wide range of NLP tasks, but also excel on knowledge-intensive NLP tasks such as Question Answering and Fact Checking. YT version: https://youtu.be/Dm5sfALoL1Y MLST Discord: https://discord.gg/aNPkGUQtc5 Support us! https://www.patreon.com/mlst References: Patrick Lewis (Natural Language Processing Research Scientist @ co:here) https://www.patricklewis.io/ Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Patrick Lewis et al) https://arxiv.org/abs/2005.11401 Atlas: Few-shot Learning with Retrieval Augmented Language Models (Gautier Izacard, Patrick Lewis, et al) https://arxiv.org/abs/2208.03299 Improving language models by retrieving from trillions of tokens (RETRO) (Sebastian Borgeaud et al) https://arxiv.org/abs/2112.04426
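The retrieval-augmented approach discussed in this episode follows a simple retrieve-then-generate pattern: fetch passages relevant to the query, then condition the language model on them. A toy sketch of that pattern is below; the word-overlap retriever and template "generator" are illustrative stand-ins of my own, not the models from the papers above, which use learned dense retrievers and trained seq2seq generators.

```python
# Toy sketch of retrieval-augmented generation (RAG):
# 1) score corpus passages against the query, 2) keep the top-k,
# 3) condition the "generator" on the retrieved evidence.

def retrieve(query, corpus, k=2):
    """Rank passages by naive word-overlap with the query (stand-in for a dense retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query, passages):
    """Template 'generator': a real system feeds query + passages to an LLM."""
    context = " ".join(passages)
    return f"Q: {query}\nEvidence: {context}"

corpus = [
    "RETRO retrieves from trillions of tokens.",
    "Atlas does few-shot learning with retrieval augmentation.",
    "The sky is blue.",
]
top = retrieve("few-shot retrieval learning", corpus)
print(generate("few-shot retrieval learning", top))
```

In the systems Patrick works on, the overlap score is replaced by dense embedding similarity and the template by a trained generator, but the knowledge-intensive-task framing (answer conditioned on retrieved evidence) is the same.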
Feb 5, 2023 • 1h 40min

#99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism

YT version (with references): https://www.youtube.com/watch?v=lxaTinmKxs0 Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Carla Cremer and Igor Krawczuk argue that AI risk should be understood as an old problem of politics, power and control with known solutions, and that threat models should be driven by empirical work. The interaction between FTX and the Effective Altruism community has sparked a lot of discussion about the dangers of optimization, and Carla's Vox article highlights the need for an institutional turn when taking on a responsibility like risk management for humanity. Carla's "Democratizing Risk" paper found that certain types of risks fall through the cracks if they are just categorized into climate change or biological risks. Deliberative democracy has been found to be a better way to make decisions: aggregating people's diverse ways of thinking about a problem and building a risk-averse procedure makes it more likely that the process converges on a good policy. AI tools can help scale this kind of deliberative democracy and be used for good, but the transparency of these algorithms to the citizens using the platform must be taken into consideration. There needs to be a good reason to trust any one organization with the risk management of humanity, and the ambition of the EA community and Altruism Inc. to take on that role requires an institutional turn if it is to be done effectively and ethically. Carla Zoe Cremer https://carlacremer.github.io/ Igor Krawczuk https://krawczuk.eu/ Interviewer: Dr. Tim Scarfe TOC: [00:00:00] Introduction: Vox article and effective altruism / FTX [00:11:12] Luciano Floridi on Governance and Risk [00:15:50] Connor Leahy on alignment [00:21:08] Ethan Caballero on scaling [00:23:23] Alignment, Values and politics [00:30:50] Singularitarians vs AI-theists [00:41:56] Consequentialism [00:46:44] Does scale make a difference? [00:51:53] Carla's Democratising risk paper [01:04:03] Vox article - How effective altruists ignored risk [01:20:18] Does diversity breed complexity? [01:29:50] Collective rationality [01:35:16] Closing statements
Feb 3, 2023 • 1h 6min

[NO MUSIC] #98 - Prof. LUCIANO FLORIDI - ChatGPT, Singularitarians, Ethics, Philosophy of Information

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 YT version: https://youtu.be/YLNGvvgq3eg We are living in an age of rapid technological advancement, and with this growth comes a digital divide. Professor Luciano Floridi of the Oxford Internet Institute / Oxford University believes that this divide not only affects our understanding of the implications of this new age, but also the organization of a fair society.  The Information Revolution has been transforming the global economy, with the majority of global GDP now relying on intangible goods, such as information-related services. This in turn has led to the generation of immense amounts of data, more than humanity has ever seen in its history. With 95% of this data being generated by the current generation, Professor Floridi believes that we are becoming overwhelmed by this data, and that our agency as humans is being eroded as a result.  According to Professor Floridi, the digital divide has caused a lack of balance between technological growth and our understanding of this growth. He believes that the infosphere is becoming polluted and the manifold of the infosphere is increasingly determined by technology and AI. Identifying, anticipating and resolving these problems has become essential, and Professor Floridi has dedicated his research to the Philosophy of Information, Philosophy of Technology and Digital Ethics.  We must equip ourselves with a viable philosophy of information to help us better understand and address the risks of this new information age. Professor Floridi is leading the charge, and his research on Digital Ethics, the Philosophy of Information and the Philosophy of Technology is helping us to better anticipate, identify and resolve problems caused by the digital divide. 
TOC: [00:00:00] Introduction to Luciano and his ideas [00:14:00] ChatGPT / language models [00:28:45] AI risk / "Singularitarians"  [00:37:15] Forms of governance [00:43:56] Re-ontologising the world [00:55:56] It from bit and Computationalism and philosophy without purpose [01:03:05] Getting into Digital Ethics Interviewer: Dr. Tim Scarfe References: GPT-3: Its Nature, Scope, Limits, and Consequences [Floridi] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827044 Ultraintelligent Machines, Singularity, and Other Sci-fi Distractions about AI [Floridi] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4222347 The Philosophy of Information [Floridi] https://www.amazon.co.uk/Philosophy-Information-Luciano-Floridi/dp/0199232393 Information: A Very Short Introduction [Floridi] https://www.amazon.co.uk/Information-Very-Short-Introduction-Introductions/dp/0199551375 https://en.wikipedia.org/wiki/Luciano_Floridi https://www.philosophyofinformation.net/
