
Data Skeptic

Latest episodes

Jul 18, 2023 • 37min

A Long Way Till AGI

Our guest today is Maciej Świechowski, who is affiliated with QED Software and QED Games and holds a Ph.D. in Systems Research from the Polish Academy of Sciences. Maciej joins us to discuss findings from his study, Deep Learning and Artificial General Intelligence: Still a Long Way to Go.
Jul 11, 2023 • 36min

Brain Inspired AI

Today on the show, we are joined by Lin Zhao and Lu Zhang. Lin is a Senior Research Scientist at United Imaging Intelligence, and Lu is a Ph.D. candidate in the Department of Computer Science and Engineering at the University of Texas. They shared findings from their work When Brain-inspired AI Meets AGI. Lin and Lu began by discussing the connections between the brain and neural networks, covering both the similarities and the differences, and whether neural networks could advance far enough to reach AGI. They explained how a deeper understanding of the brain can help drive more robust artificial intelligence systems, how the brain inspired popular machine learning architectures such as transformers, and how AI models might learn alignment from the human brain. Finally, they contrasted the brain's low energy consumption with that of high-end computers and considered whether computers can become more energy efficient.
Jul 3, 2023 • 36min

Computable AGI

On today’s show, we are joined by Michael Timothy Bennett, a Ph.D. student at the Australian National University. Michael’s research is centered around Artificial General Intelligence (AGI), specifically the mathematical formalism of AGIs. He joins us to discuss findings from his study, Computable Artificial General Intelligence.
Jun 26, 2023 • 46min

AGI Can Be Safe

We are joined by Koen Holtman, an independent AI researcher focusing on AI safety and the founder of Holtman Systems Research, a research company based in the Netherlands. Koen started the conversation with his take on an AI apocalypse in the coming years. He discussed the obedience problem with AI models and what a safe form of obedience looks like. Koen explained the concept of a Markov Decision Process (MDP) and how it is used to build machine learning models. He described the problem of AGIs that do not allow their utility function to be changed after deployment and offered an alternative approach to solving it. He shared how to safely engineer AGI systems now and in the future, including how to implement safety layers on AI models. Koen also discussed the ultimate goal of a safe AI system, how to verify that an AI system is indeed safe, the intersection between large language models (LLMs) and MDPs, and the key ingredients needed to scale current AI implementations.
Jun 19, 2023 • 52min

AI Fails on Theory of Mind Tasks

Tomer Ullman, an assistant professor of Psychology at Harvard University, joins us. Tomer discussed theory of mind and whether machines can truly pass theory of mind tasks. Using variations of the Sally-Anne test and the Smarties tube test, he explained how LLMs can fail these tests.
Jun 12, 2023 • 36min

AI for Mathematics Education

The applications of LLMs cut across various industries. Today, we are joined by Steven Van Vaerenbergh, who discussed the use of AI in mathematics education. He explained how AI tools have changed the landscape of solving mathematical problems and shared the current strengths and weaknesses of LLMs in solving math problems.
Jun 6, 2023 • 43min

Evaluating Jokes with LLMs

Fabricio Goes, a Lecturer in Creative Computing at the University of Leicester, joins us today. Fabricio discussed what creativity entails and how to evaluate jokes with LLMs. He specifically shared the process of evaluating jokes with GPT-3 and GPT-4. He concluded with his thoughts on the future of LLMs for creative tasks.
May 29, 2023 • 55min

Why Machines Will Never Rule the World

Barry Smith and Jobst Landgrebe, authors of the book “Why Machines Will Never Rule the World,” join us today. They discussed the limitations of AI systems in today’s world and laid out detailed reasons why AI will struggle to attain the level of human intelligence.
May 23, 2023 • 49min

A Psychopathological Approach to Safety in AGI

While the possibilities of AGI emergence seem great, it also raises safety concerns. On the show, Vahid Behzadan, an Assistant Professor of Computer Science and Data Science, joins us to discuss the complexities of modeling AGIs so that they accurately pursue their objective functions. He also touched on related issues such as abstraction during training, the problem of unpredictability, and communication among agents.
May 15, 2023 • 50min

The NLP Community Metasurvey

Julian Michael, a postdoc at the Center for Data Science, New York University, joins us today. Julian’s conversation with Kyle centered on the NLP community metasurvey: a survey aimed at understanding expert opinions on controversial NLP issues. He shared the process of preparing the survey as well as some surprising results.
