I, scientist with Balazs Kegl

Latest episodes

Jan 24, 2024 • 1h 59min

Joel Gladd

The second hour is really just two guys trying to make sense of what's going on in the world as a reaction to AI and the meaning crisis, explored through the works of John Vervaeke. I'm sure the insights from this conversation will resonate with many educators and technologists alike. If you like it, please help the channel by signing up!
00:00:00 Intro.
00:06:53 Path to teaching writing.
00:15:36 Why do we write? Making an impact vs having a voice vs thinking something through.
00:20:37 GPT vs human writing. The voice of GPT. Similes and metaphors.
00:34:05 Teaching and AI. Assisting a lesson plan.
00:37:48 Assessment in the age of GPT: policing or integration?
00:46:51 OER: teaching LLMs within Open Educational Resources.
00:53:11 AI assistance: feedback generated by LLMs. Should we learn to drive a stick or read maps?
01:03:32 GPT as a dialogical partner.
01:10:14 Vervaeke's AI video essay. https://balazskegl.substack.com/p/notes-on-john-vervaekes-ai-the-coming
01:15:02 Opponent processing: dialog, jujitsu.
01:22:37 Jeremy England, life, entropy, dissipation, and e/acc.
01:26:20 Doom vs zoom: I don't agree with the framing.
01:34:49 Relevance realization and Vervaeke's trinity: nomological, normative, and narrative order.
01:41:35 Open theology vs closed worlds.
01:48:05 The irony of Enlightenment.
01:52:11 What should educators be aware of around AI? Joel's question to me.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
Nov 21, 2023 • 1h 27min

Tatjana Samopjan

00:00:00 Intro.
00:03:18 Stories. Why do they fascinate us? The mechanism of emotional manipulation.
00:06:45 True to life vs realistic. Star Trek. Dramatic real stories are rarely good.
00:10:08 GPT and stories. How to build stories word by word. What is creativity? Information and meaning. Saturation in the story space. Alignment of living and writing.
00:19:25 Yellowstone. What makes a story good? Lioness. Average is OK, but don't expect to get paid for it.
00:23:24 GPT and storywriting. Write us a joke about migrants. A new episode of Sherlock.
00:28:54 Art = fire + algebra. A good story makes you want to stop watching it and reengage with your life.
00:31:53 Fire in children and mystics. Chickens: individuals vs a category. Surprise and predictability. Intelligence is overrated.
00:39:46 Christianity. The tension and harmony between the transcendent and the particular. The role of lived experience in storywriting.
00:45:47 The zombie myth. Metaverse. Transhumanism. What is AI after? Zuckerberg and jujitsu.
00:57:44 AGI singularity vs narrow social media AI. Cautious humbleness and exploration. Personalized storytelling and an autistic world. Disembodiment.
01:15:50 GPT: how to use it for writing better stories? Support the research process, then go and face real life. Paradoxically, it will slow down the writing process.
01:22:16 Love: learning by loving vs just downloading knowledge.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
Nov 21, 2023 • 1h 55min

Keith Frankish

Delving into the complexities of consciousness, this episode explores qualia, physicalism, dualism, and illusionism. It challenges the idea of a private inner world, treating it as an illusion, and discusses self-awareness, emotions, and representational systems. The conversation then turns to the subjective nature of experience, the concept of high-functioning zombies in AI, and the intersection of consciousness and personal experience.
Nov 21, 2023 • 1h 27min

Anna Ciaunica

00:00:00 Intro: empirical vs armchair philosopher. Visual vs tactile understanding.
00:06:08 How subjective experience arises from physical matter. Taking wine into your body vs seeing a tomato. Perception of color, pain, interoception.
00:14:09 Consciousness is not a thing; conscious is an adjective.
00:18:24 The brain is (part of) the body. The developmental biology view.
00:21:23 Immune system: what is you, what is not you?
00:24:30 Cells, tissues, organs, body (https://www.youtube.com/shorts/Rvmvt7gscIM): how does hierarchical agency function? The relational ontology.
00:27:08 Love, hate, and self, me and not me, at every level.
00:28:24 Pregnancy. How selves are created and negotiated.
00:30:45 Homeostasis and autopoiesis. Allostasis and homeorhesis. We are systems that create themselves. How do we learn to deal with gravity? Self-disorders.
00:38:51 How to do science about first-person experience? Reported lived experiences, physiological measurements, brain imaging.
00:44:36 Depersonalization. No self-organizing system without movement. The transparent background and its crack. Can't afford processing the self in the background. The sense of touch or odor can bring you back. Fetuses touch themselves and each other.
00:55:14 John Vervaeke's 4 Ps: is depersonalization a disorder of relevance realization? Automatic vs automaton.
01:01:09 Meditation vs depersonalization: phenomena of the same system? Psychosis and depersonalization.
01:07:29 Movement and depersonalization.
01:09:55 Autism and depersonalization. Why is Anna interested in depersonalization?
01:13:23 Feeling like a zombie or a ghost.
01:16:22 Dissociation and depersonalization.
01:19:01 The detachment of scientists from their subjects. Anna's question to me: what drove me to create the podcast? Soul hunting through interacting with people. We can't do it alone. Movement medicine and contact dance.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
Nov 21, 2023 • 1h 45min

Mark Vernon

00:00:00 Intro: Mark's journey from undergrad physics to psychology, philosophy, and theology.
00:05:17 Philosophy: "Who you are is directly related to what you know".
00:06:25 The psychology of being a scientist. Lived dualism.
00:07:49 Becoming a scientist is to become a certain kind of person, and this very much shapes the scientific worldview.
00:08:46 AI has leaked from the lab and is becoming a focal point around which the world turns.
00:12:05 Psychology and theology. How are they related? Developmental psychology. Thresholds, crisis moments, self-transcendence.
00:17:23 Intelligence is a deeply felt notion. Crisis, suffering, struggle, not knowing who we will become are part of our intelligence. It is not isolated but part of the cosmos, which is why science can be done.
00:20:11 Self-transformation is optional as an adult. Dante and Barfield: are individual and collective transformations similar?
00:32:28 Dante and AI. In hell there is no novelty: closed data set, no imagination, frozen world. Paradise: the joy of knowing more. Music.
00:36:00 Fear.
00:37:15 Addiction. AI recommendation engines. The infinite scroll.
00:41:25 Francis Bacon: "technology was given to humanity by God to bring us back to the Garden of Eden, to relieve suffering". The infinite as more and more vs the one thing opening onto all things.
00:43:53 The therapeutic use of addictive AI. https://balazskegl.substack.com/p/mental-jujitsu-between-me-and-the My story with social media addiction and martial arts, and its theological interpretation.
00:49:48 The Turing test. The first of Mark's ten points. https://www.youtube.com/watch?v=LHIvKFY2kbk The dialogical Turing test. https://balazskegl.substack.com/p/gpt-4-in-conversation-with-itself GPT-4 is a much more sophisticated hell.
00:55:53 AI leaders are incentivized to oversell AI. Governments do have levers. "We need more science."
00:58:57 AI and feelings. What is it to be human?
01:01:20 A model of cognition is not cognizant itself.
01:04:21 Metaphors of mind. Engineers realize metaphors.
01:07:12 Engineering organisms vs machines. Michael Levin.
01:09:40 We dwell in presence, not just compute. We participate in reality, not just observe it. The field metaphor. Memory is not stored in the brain.
01:16:30 Technology is unadaptive. It is not built into reality; rather, our reality is built in a way into which technology fits. The thinking machine is an ancient dream.
01:19:13 Attention is a moral act. How to manage fear. The GPT panic.
01:23:35 Love of life is part of intelligence. The intellect is driven towards what is loved. The silence around the words.
01:26:40 Embrace the boredom. Dwell in uncertainty. Convert suffering to hope. Think about our own psychology. The purgatorial state.
01:29:27 Mark's question to me: do I feel that this crisis moment can be a turn for the better, rather than this panic-driven fear of what's going to happen? Agency in AI. Cybernetics.
01:34:40 Alignment. John Vervaeke's program of bringing up AI.
01:38:59 The exponential take-off is a theology. Agency gets more dependent as it gets more sophisticated.
01:41:31 The high-functioning zombie metaphor. The fear of zombie AI is what will create it.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
Nov 21, 2023 • 1h 20min

Gaël Varoquaux

00:00:00 Intro. Gaël's journey from physics to AI through coding, then health and social science applications.
00:06:17 Looking for impact. Are we using our energy to solve the best problems? How to estimate future impact?
00:12:18 How did interacting with a wide variety of sciences change you as an AI researcher? Out-of-the-box thinking. Empirical research.
00:13:19 Benchmarks. How they incorporate value and drive AI research. AI went from a mathematical to an empirical science. Fei-Fei Li and ImageNet.
00:19:07 The Autism Challenge: predict the condition from brain imaging. How to avoid fooling ourselves?
00:25:24 How did the medical community react? The clash between what is true and what is valuable.
00:27:15 How do you measure your scientific impact?
00:31:09 Scientific/technological and societal progress.
00:33:01 Recommender AI and the 2007 Netflix challenge.
00:35:14 How to deal with social media addiction.
00:42:06 Scikit-learn. The Toyota of AI. Origin story. A well-designed tool for scientists is also useful for business. (A minimal sketch of the estimator API follows after these notes.)
00:47:03 Open-source organizational structure. Ecosystem building.
00:55:57 Deep learning and scikit-learn.
00:59:37 Sociology and psychology of scikit-learn.
01:09:57 How to bring science home. AI has become an icebreaker.
01:13:13 Gaël's question: what excites me these days. Being seen; clarifying my thoughts through dialogs and writing; agency, RL, and putting AI in hardware we connect with; moving my body.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
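The scikit-learn discussion above centers on the library's uniform estimator interface: a tool simple enough for scientists turns out to be useful everywhere. Here is a minimal sketch of that design; the dataset and model choice are arbitrary illustrations, not something taken from the episode.

```python
# Minimal sketch of scikit-learn's uniform fit/predict/score interface.
# The dataset and model are arbitrary illustrations, not from the episode.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Every estimator exposes the same fit/predict interface, so swapping models
# or chaining preprocessing steps requires no change to the surrounding code.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```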
Nov 21, 2023 • 1h 54min

Vava Gligorov

00:00:00 Intro.
00:04:20 The LHCb experiment. Fundamental particle physics. Why isn't there as much antimatter as matter? Timescales.
00:17:18 Shortcomings of the Standard Model. Dark matter. The LHC.
00:22:24 The LHCb collaboration. Organization of a scientific experiment beyond the Dunbar number. Career development in academia and physics.
00:27:03 The skills and day job of an LHC physicist. The messy organization of a big experiment. Technical vs physics work.
00:32:33 The (lack of) management levers and incentives. 70+ institutes. Where does meaning come from?
00:38:26 Vava's journey from war-torn Yugoslavia through Vienna and Oxford to CERN in Geneva and a permanent position in Paris.
00:41:23 Why physics? Curiosity and introversion. The helpers on a hero's journey.
00:45:03 The real-time aspects: how we take the data. 30 million+ proton-proton collisions per second, a few to 30+ terabits per second. The real-time trigger reduces the rate by 3-4 orders of magnitude.
00:48:39 Working in a small group. Career without planning in the early days vs students today.
00:51:48 Early adoption of AI and GPUs in the real-time trigger. Separating signal (interesting events) from background (known particles) in a million-dimensional space. Reconstruction cuts it down to 10-20 features, where we apply boosted decision trees. Training data and simulation. Neural nets? Sometimes, in complex feature spaces, for example in the calorimeter. (A generic sketch follows after these notes.)
01:01:16 Simulation to real data: systematic uncertainties. How to prioritize what to care about? The soft process and social structure of scrutinizing results. The effect of the aggregated knowledge of the collaboration.
01:09:29 The delicacies of the scientific method: the look-elsewhere effect and unknown unknowns. The soft side of the Popperian ideal.
01:16:46 Who decides what to go after in physics? LHCb: 20 years x 100 PhD theses is a lot of investment. The role of critical mass.
01:20:27 The International Linear Collider and the sociology of the next big experiment.
01:24:10 ATLAS = cathedral. The deep metaphor: multi-generational experiments. The sacrifice of early-career scientists.
01:29:56 Vava's dream for the rest of his career. Survivor's guilt.
01:36:01 Science at home.
01:39:43 Why the podcast? Vava's question to me. Spirituality and science. Anger and separation anxiety. This little corner of the internet. Truth and importance: the daily dilemma at the Paris-Saclay Center for Data Science.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
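The trigger discussion above describes reducing each candidate event to 10-20 reconstructed features and applying boosted decision trees to separate signal from background, with simulation providing the training labels. The sketch below is a generic illustration of that idea using synthetic data and scikit-learn's gradient-boosted trees; it is not LHCb's actual trigger software, and the feature count, data, and score cut are assumptions made for the example.

```python
# Generic signal-vs-background sketch with boosted decision trees, in the
# spirit of the trigger discussion; synthetic data, not LHCb software.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_features = 10_000, 15  # roughly the "10-20 features per candidate" regime

# Background: features centered at zero; signal: slightly shifted means
# (standing in for simulated signal and known-background samples).
background = rng.normal(0.0, 1.0, size=(n, n_features))
signal = rng.normal(0.5, 1.0, size=(n, n_features))
X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = background, 1 = signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_train, y_train)

# A trigger-style decision: keep only candidates whose signal score passes a cut.
scores = bdt.predict_proba(X_test)[:, 1]
keep = scores > 0.9
print(f"kept fraction: {keep.mean():.3f}, "
      f"signal purity among kept: {y_test[keep].mean():.3f}")
```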
Aug 16, 2023 • 1h 40min

Bogdan Cirstea

00:00:00 Intro. Bogdan's journey through the French system towards a PhD in AI. Inspiration from early DeepMind papers, research on LSTMs and other recurrent architectures.
00:05:29 Oxford postdoc between ML and neuroscience, theory of mind. Turn towards safety. Influence of Nick Bostrom's 2014 book Superintelligence https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies.
00:07:29 AI acceleration. AlphaGo, StarCraft, BART, GPT-2. Language. Learning people's preferences.
00:10:24 The curious path of an independent AI safety researcher. 80,000 Hours https://80000hours.org/, the Alignment Forum https://www.alignmentforum.org/, LessWrong https://www.lesswrong.com/, effective altruism.
00:18:15 GPT: in-context learning, math.
00:23:47 GPT: planning and thinking. Planning in the real world (reinforcement learning) vs planning in a math proof, planning as problem solving.
00:27:29 GPT: chain of thought. "Let's think about this step by step." (A minimal prompting sketch follows after these notes.)
00:31:47 GPT: lying? HAL from 2001: A Space Odyssey. Does GPT have the will to do something? Simulators, Bayesian inference, simulacra, autoregressivity. The surprising coherence of GPT-4. Playing personas.
00:43:38 GPT: reinforcement learning with human feedback. Is RLHF like an anti-psychotic drug? Or cognitive behavioral therapy?
00:45:38 GPT: Vervaeke's dialogical Turing test https://balazskegl.substack.com/p/gpt-4-in-conversation-with-itself.
00:52:36 AI safety. The issue of timescale. The OpenAI initiative https://openai.com/blog/our-approach-to-ai-safety. Aligning by debating.
00:57:46 Direct alignment research; Bogdan's pessimism. The two-step approach: automate alignment research. Who will align the aligner AI?
01:04:11 Alignment by giving agency to AI. Embodiment. Let them confabulate but confront reality.
01:12:09 Max Tegmark's waterfall metaphor. Munk Debate on AI https://www.youtube.com/watch?v=144uOfr4SYA, Yoshua Bengio's interview https://www.youtube.com/watch?v=0RknkWgd6Ck.
01:22:21 Open-source AI. George Hotz interview https://www.youtube.com/watch?v=dNrTrx42DGQ. Bogdan's counterargument: engineering a pandemic. Some tools make a few people very powerful.
01:28:15 Adversarial examples.
01:31:32 Bogdan's dreams and fears: where are we heading?
Hosted on Acast. See acast.com/privacy for more information.
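The chain-of-thought segment above quotes the cue "Let's think about this step by step." For readers unfamiliar with the technique, here is a minimal sketch of how such a cue is appended to a prompt; the ask_llm function is a hypothetical stand-in for whatever model API you use, not anything discussed in the episode.

```python
# Minimal sketch of zero-shot chain-of-thought prompting.
# ask_llm is a hypothetical stand-in for a real model API call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def chain_of_thought(question: str) -> str:
    # The only change versus a plain prompt is the appended reasoning cue,
    # which nudges the model to write out intermediate steps before answering.
    prompt = f"{question}\nLet's think about this step by step."
    return ask_llm(prompt)

# Example usage (the question is an arbitrary illustration):
# print(chain_of_thought("A train leaves at 9:15 and arrives at 11:40. How long is the trip?"))
```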
Jul 20, 2023 • 1h 9min

Jonas Gonzalez

00:00:00 Intro.
00:01:34 Jonas' story. Math and physics towards computer science, AI, and robotics.
00:05:09 Embodiment in intelligence.
00:07:25 LLMs in robots. Should we put LLMs in robots? Can we teach them as kids if they can speak? Will they develop their personalities depending on their unique experience? Should we add episodic memory to them?
00:19:02 Relevance realization. How to filter important information from the immense incoming flow of signals? The top-down aspect of perception. Cultural learning and binding.
00:23:30 Limits of general intelligence. Is there an inherent limit to how intelligent a being can be? What if too much intelligence makes the map take over the agent, leading to something like schizophrenia?
00:29:20 Continual learning. The brain is a little scientist. The scientific method: where do the hypotheses come from? Where does the value of a proposition come from? How do we decide what proposition to prove or what experiment to run? Why did I work on the Higgs boson?
00:37:37 Dog intelligence. Do dogs want to "go beyond" what is "visible", or is it a purely human drive?
00:40:00 Collective alignment. Higher-level collective consciousness and its relationship to human and AI alignment.
00:47:21 AGI. How far are we from AGI?
00:50:38 Robots. Bodies are the bottleneck of robotics research.
00:57:09 Jonas' dream: connecting the dots, merging the cognitive modules, and experimenting in the real world, towards a dog-level intelligence in 5-10 years.
01:01:13 High-functioning zombies. Should we be afraid of them: an agent smart enough to plan, but not smart enough to see the harm that some of the planned actions may cause?
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
Jul 18, 2023 • 57min

Giuseppe Paolo

1:35 Screwdriver hands: how Giuseppe became a scientist.
6:07 Reinforcement learning (RL) explained to Giuseppe's grandma.
9:01 Model-based reinforcement learning: how to fry an egg.
10:45 The three components of model-based reinforcement learning: the actor, the model, and the planner. (A schematic sketch follows after these notes.)
16:29 Planning = thinking: ham & eggs and Google Maps.
19:01 RL is responsible for collecting its own data.
22:05 Vervaeke's first two Ps: propositional (GPT) and procedural (RL).
24:05 Fear of AI: the paperclip scenario, evil, indifference, and foolishness.
28:19 Should we add agency to AI? Can we socialize AI as we do with kids?
32:22 Do we need to embody AI to align it? Can GPT bike? Is textual knowledge everything?
36:28 Why aspire to the dream of creating AI? 1) Why not :)? 2) Curiosity. 3) To understand ourselves better.
41:52 How AI changed our lives. Recommendation engines, addiction, jujitsu, conspiracy theories, the attention economy.
49:12 Open-source large language models. AI as a mirror. Individuation of LLMs.
51:16 Putting AI into things: they will individuate.
52:56 Vervaeke's 2nd-person Turing test. Let GPTs talk to each other.
54:37 Could AI manipulate us?
56:11 Closing.
I, scientist blog: https://balazskegl.substack.com
Twitter: https://twitter.com/balazskegl
Artwork: DALL-E
Music: Bea Palya https://www.youtube.com/channel/UCBDp3qcFZdU1yoWIRpMSaZw
Hosted on Acast. See acast.com/privacy for more information.
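The episode above describes model-based RL as three interacting components: an actor that proposes actions, a model that predicts what would happen, and a planner that searches over imagined outcomes. The sketch below is a schematic, hedged illustration of how those pieces might fit together; the toy dynamics, class names, and method names are made up for illustration and do not correspond to any specific library or to code discussed in the episode.

```python
# Schematic sketch of the three components of model-based RL named in the
# episode: actor (proposes actions), model (predicts outcomes), planner
# (scores imagined rollouts). Everything here is a toy illustration.
import random

class Model:
    """Learned dynamics: predicts (next_state, reward) for a state-action pair."""
    def predict(self, state, action):
        return state + action, -abs(state + action)  # toy dynamics and reward

class Actor:
    """Proposes candidate actions in a given state."""
    def propose(self, state, n=5):
        return [random.uniform(-1, 1) for _ in range(n)]

class Planner:
    """Scores candidate actions by imagining short rollouts with the model."""
    def __init__(self, model, horizon=3):
        self.model, self.horizon = model, horizon

    def plan(self, state, candidates):
        def imagined_return(action):
            s, a, total = state, action, 0.0
            for _ in range(self.horizon):
                s, r = self.model.predict(s, a)
                total += r
                a = 0.0  # follow a trivial default policy after the first step
            return total
        return max(candidates, key=imagined_return)

model, actor, planner = Model(), Actor(), Planner(model)
state = 2.0
action = planner.plan(state, actor.propose(state))
print(f"chosen action: {action:.2f}")
```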
