
Towards Data Science

Latest episodes

Apr 20, 2022 • 41min

120. Liam Fedus and Barrett Zoph - AI scaling with mixture of expert models

AI scaling has really taken off. Ever since GPT-3 came out, it’s become clear that moving beyond narrow AI and towards more generally intelligent systems will require massively scaling up the size of our models, the amount of processing power they consume and the amount of data they’re trained on, all at the same time. That’s led to a huge wave of highly scaled models that are incredibly expensive to train, largely because of their enormous compute budgets. But what if there was a more flexible way to scale AI — one that allowed us to decouple model size from compute budgets, so that we can chart a more compute-efficient course to scale? That’s the promise of so-called mixture of experts models, or MoEs. Unlike more traditional transformers, MoEs don’t update all of their parameters on every training pass. Instead, they route inputs intelligently to sub-models called experts, which can each specialize in different tasks. On a given training pass, only those experts have their parameters updated. The result is a sparse model, a more compute-efficient training process, and a new potential path to scale. Google has been pushing the frontier of research on MoEs, and my two guests today have been involved in pioneering work on that strategy (among many others!). Liam Fedus and Barrett Zoph are research scientists at Google Brain, and they joined me to talk about AI scaling, sparsity and the present and future of MoE models on this episode of the TDS podcast. *** Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc *** Chapters: 2:15 Guests’ backgrounds 8:00 Understanding specialization 13:45 Speculations for the future 21:45 Switch transformer versus dense net 27:30 More interpretable models 33:30 Assumptions and biology 39:15 Wrap-up
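The routing idea described above can be sketched in a few lines. Below is a minimal, illustrative top-1 ("switch"-style) router in numpy; the dimensions, weights and names (`W_gate`, `experts`) are invented for the example, not taken from the Switch Transformer codebase.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 8, 4, 5

# Router: a linear layer that scores each expert for each token.
W_gate = rng.normal(size=(d_model, n_experts))
# Each "expert" is its own small feed-forward weight matrix.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

tokens = rng.normal(size=(n_tokens, d_model))

# Top-1 routing: softmax over expert scores, then each token goes to
# exactly one expert.
logits = tokens @ W_gate
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
chosen = probs.argmax(axis=1)  # expert index per token

outputs = np.empty_like(tokens)
for e in range(n_experts):
    mask = chosen == e
    if mask.any():
        # Only the chosen expert's parameters touch (and, in training,
        # are updated for) these tokens; the rest sit idle. That
        # per-token sparsity is what decouples parameter count from
        # compute cost.
        outputs[mask] = (tokens[mask] @ experts[e]) * probs[mask, e:e+1]

print(outputs.shape)  # -> (5, 8)
```

Scaling this up means adding experts (parameters) without increasing the per-token compute, since each token still activates only one expert.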
Apr 13, 2022 • 49min

119. Jaime Sevilla - Projecting AI progress from compute trends

There’s an idea in machine learning that most of the progress we see in AI doesn’t come from new algorithms or model architectures. Instead, some argue, progress almost entirely comes from scaling up compute power, datasets and model sizes — and besides those three ingredients, nothing else really matters. Through that lens, the history of AI becomes the history of processing power and compute budgets. And if that turns out to be true, then we might be able to do a decent job of predicting AI progress by studying trends in compute power and their impact on AI development. And that’s why I wanted to talk to Jaime Sevilla, an independent researcher and AI forecaster, and affiliate researcher at Cambridge University’s Centre for the Study of Existential Risk, where he works on technological forecasting and understanding trends in AI in particular. His work’s been cited in a lot of cool places, including Our World In Data, who used his team’s data to put together an exposé on trends in compute. Jaime joined me to talk about compute trends and AI forecasting on this episode of the TDS podcast. *** Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc ***  Chapters: 2:00 Trends in compute 4:30 Transformative AI 13:00 Industrial applications 19:00 GPT-3 and scaling 25:00 The two papers 33:00 Biological anchors 39:00 Timing of projects 43:00 The trade-off 47:45 Wrap-up
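The forecasting approach sketched above — fit a trend to historical compute and extrapolate — can be illustrated with a toy log-linear fit. The FLOP figures below are invented for the example (a clean 100x every two years), not drawn from Jaime’s team’s actual dataset.

```python
import math

# Invented training-compute figures, purely illustrative.
years = [2012, 2014, 2016, 2018]
flops = [1e15, 1e17, 1e19, 1e21]

# Compute grows roughly exponentially, so fit a line in log space:
# log10(flops) ~= a * year + b  (ordinary least squares by hand).
logs = [math.log10(f) for f in flops]
n = len(years)
mx, my = sum(years) / n, sum(logs) / n
a = sum((x - mx) * (y - my) for x, y in zip(years, logs)) \
    / sum((x - mx) ** 2 for x in years)
b = my - a * mx

# Extrapolate the fitted trend four years past the last data point.
pred_2022 = 10 ** (a * 2022 + b)
print(f"{pred_2022:.0e}")  # -> 1e+25
```

Real forecasts are of course messier — trends can break, and the choice of which era’s data to fit matters a lot — but the mechanics are this simple.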
Apr 6, 2022 • 52min

118. Angela Fan - Generating Wikipedia articles with AI

Generating well-referenced and accurate Wikipedia articles has always been an important problem: Wikipedia has essentially become the Internet's encyclopedia of record, and hundreds of millions of people use it to understand the world. But over the last decade Wikipedia has also become a critical source of training data for data-hungry text generation models. As a result, any shortcomings in Wikipedia’s content are at risk of being amplified by the text generation tools of the future. If one type of topic or person is chronically under-represented in Wikipedia’s corpus, we can expect generative text models to mirror — or even amplify — that under-representation in their outputs. Through that lens, the project of Wikipedia article generation is about much more than it seems — it’s quite literally about setting the scene for the language generation systems of the future, and empowering humans to guide those systems in more robust ways. That’s why I wanted to talk to Meta AI researcher Angela Fan, whose latest project is focused on generating reliable, accurate, and structured Wikipedia articles. She joined me to talk about her work, the implications of high-quality long-form text generation, and the future of human/AI collaboration on this episode of the TDS podcast. ---  Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc --- Chapters: 1:45 Journey into Meta AI 5:45 Transition to Wikipedia 11:30 How articles are generated 18:00 Quality of text 21:30 Accuracy metrics 25:30 Risk of hallucinated facts 30:45 Keeping up with changes 36:15 UI/UX problems 45:00 Technical cause of gender imbalance 51:00 Wrap-up
Mar 30, 2022 • 47min

117. Beena Ammanath - Defining trustworthy AI

Trustworthy AI is one of today’s most popular buzzwords. But although everyone seems to agree that we want AI to be trustworthy, definitions of trustworthiness are often fuzzy or inadequate. Maybe that shouldn’t be surprising: it’s hard to come up with a single set of standards that add up to “trustworthiness”, and that apply just as well to a Netflix movie recommendation as a self-driving car. So maybe trustworthy AI needs to be thought of in a more nuanced way — one that reflects the intricacies of individual AI use cases. If that’s true, then new questions come up: who gets to define trustworthiness, and who bears responsibility when a lack of trustworthiness leads to harms like AI accidents, or undesired biases? Through that lens, trustworthiness becomes a problem not just for algorithms, but for organizations. And that’s exactly the case that Beena Ammanath makes in her upcoming book, Trustworthy AI, which explores AI trustworthiness from a practical perspective, looking at what concrete steps companies can take to make their in-house AI work safer, better and more reliable. Beena joined me to talk about defining trustworthiness, explainability and robustness in AI, as well as the future of AI regulation and self-regulation on this episode of the TDS podcast. Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc Chapters: 1:55 Background and trustworthy AI 7:30 Incentives to work on capabilities 13:40 Regulation at the level of application domain 16:45 Bridging the gap 23:30 Level of cognition offloaded to the AI 25:45 What is trustworthy AI? 34:00 Examples of robustness failures 36:45 Team diversity 40:15 Smaller companies 43:00 Application of best practices 46:30 Wrap-up
Mar 23, 2022 • 54min

116. Katya Sedova - AI-powered disinformation, present and future

Until recently, very few people were paying attention to the potential malicious applications of AI. And that made some sense: in an era where AIs were narrow and had to be purpose-built for every application, you’d need an entire research team to develop AI tools for malicious applications. Since it’s more profitable (and safer) for that kind of talent to work in the legal economy, AI didn’t offer much low-hanging fruit for malicious actors. But today, that’s all changing. As AI becomes more flexible and general, the link between the purpose for which an AI was built and its potential downstream applications has all but disappeared. Large language models can be trained to perform valuable tasks, like supporting writers, translating between languages, or writing better code. But a system that can write an essay can also write a fake news article, or power an army of humanlike text-generating bots. More than any other moment in the history of AI, the move to scaled, general-purpose foundation models has shown how AI can be a double-edged sword. And now that these models exist, we have to come to terms with them, and figure out how to build societies that remain stable in the face of compelling AI-generated content, and increasingly accessible AI-powered tools with malicious use potential. That’s why I wanted to speak with Katya Sedova, a former Congressional Fellow and Microsoft alumna who now works at Georgetown University’s Center for Security and Emerging Technology, where she recently co-authored some fascinating work exploring current and likely future malicious uses of AI. If you like this conversation I’d really recommend checking out her team’s latest report — it’s called “AI and the future of disinformation campaigns”. Katya joined me to talk about malicious AI-powered chatbots, fake news generation and the future of AI-augmented influence campaigns on this episode of the TDS podcast.
*** Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc ***  Chapters: 2:40 Malicious uses of AI 4:30 Last 10 years in the field 7:50 Low-hanging fruit of automation 14:30 Other analytics functions 25:30 Authentic bots 30:00 Influences of service businesses 36:00 Race to the bottom 42:30 Automation of systems 50:00 Manufacturing norms 52:30 Interdisciplinary conversations 54:00 Wrap-up
Mar 9, 2022 • 50min

115. Irina Rish - Out-of-distribution generalization

Imagine, for example, an AI that’s trained to identify cows in images. Ideally, we’d want it to learn to detect cows based on their shape and colour. But what if the cow pictures we put in the training dataset always show cows standing on grass? In that case, we have a spurious correlation between grass and cows, and if we’re not careful, our AI might learn to become a grass detector rather than a cow detector. Even worse, we could only realize that’s happened once we’ve deployed it in the real world and it runs into a cow that isn’t standing on grass for the first time. So how do you build AI systems that can learn robust, general concepts that remain valid outside the context of their training data? That’s the problem of out-of-distribution generalization, and it’s a central part of the research agenda of Irina Rish, a core member of the Mila— Quebec AI Research institute, and the Canadian Excellence Research Chair in Autonomous AI. Irina’s research explores many different strategies that aim to overcome the out-of-distribution problem, from empirical AI scaling efforts to more theoretical work, and she joined me to talk about just that on this episode of the podcast. *** Intro music: - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc *** Chapters: 2:00 Research, safety, and generalization 8:20 Invariant risk minimization 15:00 Importance of scaling 21:35 Role of language 27:40 AGI and scaling 32:30 GPT versus ResNet 50 37:00 Potential revolutions in architecture 42:30 Inductive bias aspect 46:00 New risks 49:30 Wrap-up
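The cow-and-grass failure mode above can be made concrete with a tiny toy model. Everything here is invented for illustration: inputs are pairs of binary features, and the "detectors" are deliberately simplistic stand-ins for what a trained model might latch onto.

```python
# Toy version of the cow/grass example: each input is a pair of binary
# features (cow_shape, grass). In training, grass ALWAYS co-occurs with
# cows -- a spurious correlation.
train_X = [(1, 1), (1, 1), (1, 1), (0, 0), (0, 0), (0, 0)]
train_y = [1, 1, 1, 0, 0, 0]

def accuracy(predict, X, y):
    return sum(predict(x) == label for x, label in zip(X, y)) / len(y)

shape_detector = lambda x: x[0]  # the concept we want learned
grass_detector = lambda x: x[1]  # the shortcut

# On the training distribution the two are indistinguishable:
assert accuracy(shape_detector, train_X, train_y) == 1.0
assert accuracy(grass_detector, train_X, train_y) == 1.0

# Out of distribution -- a cow on a beach, grass with no cow -- only
# the shape-based model still works:
ood_X, ood_y = [(1, 0), (0, 1)], [1, 0]
print(accuracy(shape_detector, ood_X, ood_y))  # -> 1.0
print(accuracy(grass_detector, ood_X, ood_y))  # -> 0.0
```

Nothing in the training loss distinguishes the two detectors, which is exactly why methods like invariant risk minimization (discussed in the episode) look across multiple training environments rather than at a single dataset.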
Mar 2, 2022 • 48min

114. Sam Bowman - Are we *under-hyping* AI?

Google the phrase “AI over-hyped”, and you’ll find literally dozens of articles from the likes of Forbes, Wired, and Scientific American, all arguing that “AI isn’t really as impressive as it seems from the outside,” and “we still have a long way to go before we come up with *true* AI, don’t you know.” Amusingly, despite the universality of the “AI is over-hyped” narrative, the statement that “We haven’t made as much progress in AI as you might think™️” is often framed as somehow being an edgy, contrarian thing to believe. All that pressure not to over-hype AI research really gets to people — researchers included. And they adjust their behaviour accordingly: they over-hedge their claims, cite outdated and since-resolved failure modes of AI systems, and generally avoid drawing straight lines between points that clearly show AI progress exploding across the board. All, presumably, to avoid being perceived as AI over-hypers. Why does this matter? Well for one, under-hyping AI allows us to stay asleep — to delay answering many of the fundamental societal questions that come up when widespread automation of labour is on the table. But perhaps more importantly, it reduces the perceived urgency of addressing critical problems in AI safety and AI alignment. Yes, we need to be careful that we’re not over-hyping AI. “AI startups” that don’t use AI are a problem. Predictions that artificial general intelligence is almost certainly a year away are a problem. Confidently prophesying major breakthroughs over short timescales absolutely does harm the credibility of the field. But at the same time, we can’t let ourselves be so cautious that we’re not accurately communicating the true extent of AI’s progress and potential. So what’s the right balance? That’s where Sam Bowman comes in. Sam is a professor at NYU, where he does research on AI and language modeling.
But most important for today’s purposes, he’s the author of a paper titled, “When combating AI hype, proceed with caution,” in which he explores a trend he calls under-claiming — a common practice among researchers that consists of under-stating the extent of current AI capabilities, and over-emphasizing failure modes in ways that can be (unintentionally) deceptive. Sam joined me to talk about under-claiming and what it means for AI progress on this episode of the Towards Data Science podcast. *** Intro music:  - Artist: Ron Gelinas - Track Title: Daybreak Chill Blend (original mix) - Link to Track: https://youtu.be/d8Y2sKIgFWc  *** Chapters:  2:15 Overview of the paper 8:50 Disappointing systems 13:05 Potential double standard 19:00 Moving away from multi-modality 23:50 Overall implications 28:15 Pressure to publish or perish 32:00 Announcement discrepancies 36:15 Policy angle 41:00 Recommendations 47:20 Wrap-up
Feb 9, 2022 • 35min

113. Yaron Singer - Catching edge cases in AI

It’s no secret that AI systems are being used in more and more high-stakes applications. As AI eats the world, it’s becoming critical to ensure that AI systems behave robustly — that they don’t get thrown off by unusual inputs, and start spitting out harmful predictions or recommending dangerous courses of action. If we’re going to have AI drive us to work, or decide who gets bank loans and who doesn’t, we’d better be confident that our AI systems aren’t going to fail because of a freak blizzard, or because some intern missed a minus sign. We’re now past the point where companies can afford to treat AI development like a glorified Kaggle competition, in which the only thing that matters is how well models perform on a testing set. AI-powered screw-ups aren’t always life-or-death issues, but they can harm real users, and cause brand damage to companies that don’t anticipate them. Fortunately, AI risk is starting to get more attention these days, and new companies — like Robust Intelligence — are stepping up to develop strategies that anticipate AI failures, and mitigate their effects. Joining me for this episode of the podcast was Yaron Singer, a former Googler, professor of computer science and applied math at Harvard, and now CEO and co-founder of Robust Intelligence. Yaron has the rare combination of theoretical and engineering expertise required to understand what AI risk is, and the product intuition to know how to integrate that understanding into solutions that can help developers and companies deal with AI risk. ---  Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc ---  Chapters: 0:00 Intro 2:30 Journey into AI risk 5:20 Guarantees of AI systems 11:00 Testing as a solution 15:20 Generality and software versus custom work 18:55 Consistency across model types 24:40 Different model failures 30:25 Levels of responsibility 35:00 Wrap-up
Feb 2, 2022 • 42min

112. Tali Raveh - AI, single cell genomics, and the new era of computational biology

Until very recently, the study of human disease involved looking at big things — like organs or macroscopic systems — and figuring out when and how they can stop working properly. But that’s all started to change: in recent decades, new techniques have allowed us to look at disease in a much more detailed way, by examining the behaviour and characteristics of single cells. One class of those techniques is now known as single-cell genomics — the study of gene expression and function at the level of single cells. Single-cell genomics is creating new, high-dimensional datasets consisting of tens of millions of cells whose gene expression profiles and other characteristics have been painstakingly measured. And these datasets are opening up exciting new opportunities for AI-powered drug discovery — opportunities that startups are now starting to tackle head-on. Joining me for today’s episode is Tali Raveh, Senior Director of Computational Biology at Immunai, a startup that’s using single-cell level data to perform high resolution profiling of the immune system at industrial scale. Tali joined me to talk about what makes the immune system such an exciting frontier for modern medicine, and how single-cell data and AI might be poised to generate unprecedented breakthroughs in disease treatment on this episode of the TDS podcast. --- Intro music: ➞ Artist: Ron Gelinas ➞ Track Title: Daybreak Chill Blend (original mix) ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc ---  Chapters: 0:00 Intro 2:00 Tali’s background 4:00 Immune systems and modern medicine 14:40 Data collection technology 19:00 Exposing cells to different drugs 24:00 Labeled and unlabelled data 27:30 Dataset status 31:30 Recent algorithmic advances 36:00 Cancer and immunology 40:00 The next few years 41:30 Wrap-up
Jan 26, 2022 • 1h

111. Mo Gawdat - Scary Smart: A former Google exec’s perspective on AI risk

If you were scrolling through your newsfeed in late September 2021, you may have caught this splashy headline from The Times of London that read, “Can this man save the world from artificial intelligence?”. The man in question was Mo Gawdat, an entrepreneur and senior tech executive who spent several years as the Chief Business Officer at GoogleX (now called X Development), Google’s semi-secret research facility that experiments with moonshot projects like self-driving cars, flying vehicles, and geothermal energy. At X, Mo was exposed to the absolute cutting edge of many fields — one of which was AI. His experience seeing AI systems learn and interact with the world raised red flags for him — hints of the potentially disastrous failure modes of the AI systems we might just end up with if we don’t get our act together now. Mo writes about his experience as an insider at one of the world’s most secretive research labs and how it led him to worry about AI risk, but also about AI’s promise and potential, in his new book, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. He joined me to talk about just that on this episode of the TDS podcast.
