RANDALL BALESTRIERO

AI researcher discussing counterintuitive findings in AI, including training large language models from scratch and fairness in AI models for Earth data. He is the author of multiple research papers on self-supervised learning and geographic bias in machine learning.

Top 3 podcasts with RANDALL BALESTRIERO

Ranked by the Snipd community
123 snips
Apr 23, 2025 • 35min

Prof. Randall Balestriero - LLMs without pretraining and SSL

Randall Balestriero, an AI researcher renowned for his work on self-supervised learning and geographic bias, explores fascinating findings in AI training. He reveals that large language models can perform well even without extensive pre-training. Randall also highlights the similarities between self-supervised and supervised learning, emphasizing their potential for improvement. Additionally, he discusses biases in climate models, demonstrating the risks of relying on their predictions, particularly for vulnerable regions, which has significant policy implications.
116 snips
Feb 8, 2025 • 1h 18min

Want to Understand Neural Networks? Think Elastic Origami! - Prof. Randall Balestriero

Professor Randall Balestriero, an expert in machine learning, dives deep into neural network geometry and spline theory. He introduces the captivating concept of 'grokking', explaining how prolonged training can enhance adversarial robustness. The discussion also highlights the significance of representing data through splines to improve model design and performance. Additionally, Balestriero explores the geometric implications for large language models in toxicity detection, and delves into the challenges of reconstruction learning and the intricacies of representation in neural networks.
55 snips
Jan 4, 2022 • 3h 20min

061: Interpolation, Extrapolation and Linearisation (Prof. Yann LeCun, Dr. Randall Balestriero)

Yann LeCun, Meta's Chief AI Scientist and Turing Award winner, joins Randall Balestriero, a researcher at Meta AI, to dive into the complexities of interpolation and extrapolation in neural networks. They discuss how high-dimensional data challenges traditional views, presenting their paper on high-dimensional extrapolation. Yann critiques the notion of interpolation in deep learning, while Randall emphasizes the geometric principles that can redefine our understanding of neural network behavior. Expect eye-opening insights into AI's evolving landscape!
