
London Futurists

Latest episodes

Sep 22, 2022 • 29min

AI Transformers in context, with Aleksa Gordić

Welcome to episode 5 of the London Futurists podcast, with your co-hosts David Wood and Calum Chace. We're attempting something rather ambitious in episodes 5 and 6: we try to explain how today's cutting-edge artificial intelligence systems work, using language familiar to lay people rather than people with maths or computer science degrees. Understanding how Transformers and Generative Adversarial Networks (GANs) work means getting to grips with concepts like matrix transformations, vectors, and landscapes with 500 dimensions.

This is challenging stuff, but do persevere. These AI systems are already having a profound impact, and that impact will only grow. Even at the level of pure self-interest, it is often said that in the short term AIs won't take all the jobs, but people who understand AI will take the best jobs.

We are extremely fortunate to have as our guide for these episodes a brilliant AI researcher at DeepMind, Aleksa Gordić. Note that Aleksa is speaking in a personal capacity and is not representing DeepMind. Aleksa's YouTube channel is https://www.youtube.com/c/TheAIEpiphany

00.03 An ambitious couple of episodes
01.22 Introducing Aleksa, a double rising star
02.15 Keeping it simple
02.50 Aleksa's current research, and previous work on Microsoft's HoloLens
03.40 Self-taught in AI. Not representing DeepMind
04.20 The narrative of the Big Bang in 2012, when Machine Learning started to work in AI
05.15 What machine learning is
05.45 AlexNet. Bigger data sets and more powerful computers
06.40 Deep learning is a subset of machine learning, and a re-branding of artificial neural networks
07.27 2017 and the arrival of Transformers
07.40 Attention is All You Need
08.16 Before this there were LSTMs, Long Short-Term Memories
08.40 Why Transformers beat LSTMs
09.58 Tokenisation. Splitting text into smaller units and mapping them into higher-dimensional spaces
10.30 3D space is defined by three numbers
10.55 Humans cannot envisage multi-dimensional spaces with hundreds of dimensions, but it's OK to imagine them as 3D spaces
11.55 Some dimensions of the word "princess"
12.30 Black boxes
13.05 People are trying to understand how machines handle the dimensions
13.50 "Man is to king as woman is to queen." Using mathematical operators on this kind of relationship
14.35 Not everything is explainable
14.45 Machines discover the relationships themselves
15.15 Supervised and self-supervised learning. Rewarding or penalising the machine for predicting labels
16.25 Vectors are best viewed as arrows in 3D space, although that is over-simplifying
17.20 For instance, the relationship between "queen" and "woman" is a vector
17.50 Self-supervised systems do their own labelling
18.30 The labels and relationships have probability distributions
19.20 For instance, a princess is far more likely to wear a slipper than a dog
19.35 Large numbers of parameters
19.40 BERT, one of the first Transformer models, had a hundred million or so parameters
20.04 Now it's in the hundreds of billions, or even trillions
20.24 A parameter is analogous to a synapse in the human brain
21.19 Synapses can have different weights
22.10 The more parameters, the lower the loss
22.35 Not just text, but images too, because images can also be represented as tokens
23.00 In late 2020 Google released the first vision Transformer
23.29 Dall-E and Midjourney are diffusion models, which have replaced GANs
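As a footnote to the word-vector discussion above (13.50 and 16.25), here is a minimal sketch of the "man is to king as woman is to queen" arithmetic. The three-dimensional vectors are invented purely for illustration - real embeddings are learned from data and have hundreds of dimensions - so treat this as a rough picture of the idea rather than how any particular model works.

```python
import numpy as np

# Toy 3-dimensional "embeddings" (invented values, for illustration only;
# real models learn vectors with hundreds of dimensions).
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.8, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.1, 0.9]),
    "dog":   np.array([0.1, 0.5, 0.2]),
}

def cosine_similarity(a, b):
    # Similarity of direction between two vectors, ignoring their length.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" minus "man" plus "woman" should land closest to "queen".
target = vectors["king"] - vectors["man"] + vectors["woman"]
for word, vec in sorted(vectors.items(),
                        key=lambda kv: -cosine_similarity(target, kv[1])):
    print(f"{word:>6}: {cosine_similarity(target, vec):.3f}")
```

With these toy numbers, "queen" comes out on top - the same kind of vector arithmetic the episode describes, just in three dimensions instead of hundreds.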
Sep 19, 2022 • 33min

AI overview: 3. Recent developments

In this episode, co-hosts Calum Chace and David Wood explore a number of recent developments in AI - developments that are rapidly changing what counts as "state of the art" in AI.

00.05: Short recap of previous episodes
00.20: A couple of Geoff Hinton stories
02.27: Today's subject: the state of AI today
02.53: Search
03.35: Games
03.58: Translation
04.33: Maps
05.33: Making the world understandable. Increasingly
07.00: Transformers. Attention is all you need
08.00: Masked language models
08.18: GPT-2 and GPT-3
08.54: Parameters and synapses
10.15: Foundation models produce much of the content on the internet
10.40: Data is even more important than size
11.45: Brittleness and transfer learning
13.15: Do machines understand?
14.05: Human understanding and stochastic parrots
15.27: Chatbots
16.22: Tay embarrasses Microsoft
16.53: Blenderbot
17.19: Far from AGI. LaMDA and Blake Lemoine
18.26: The value of anthropomorphising
19.53: Automation
20.25: Robotic Process Automation (RPA)
20.55: Drug discovery
21.45: New antibiotics. Discovering Halicin
23.50: AI drug discovery as practised by Insilico, Exscientia and others
25.33: Eroom's Law
26.34: AlphaFold. How 200m proteins fold
28.30: Towards a complete model of the cell
29.19: Analysis
30.04: Air traffic controllers use only 10% of the data available to them
30.36: Transfer learning can mitigate the escalating demand for compute power
31.18: Next up: the short-term future of AI

Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
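As a footnote to the masked language model discussion above (08.00), here is a minimal sketch of one in action. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; the episode itself doesn't name a specific library or model.

```python
# Minimal masked-language-model sketch, assuming the Hugging Face
# "transformers" library and the bert-base-uncased checkpoint (neither is
# named in the episode; this is just an illustrative choice).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model returns a probability distribution over candidate tokens
# for the masked position.
for prediction in fill_mask("The capital of France is [MASK]."):
    print(f"{prediction['token_str']:>10}  {prediction['score']:.3f}")
```

This "predict the missing token" objective is what lets models like BERT learn from unlabelled text, which connects to the points about data and foundation models at 10.15-10.40.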
Sep 7, 2022 • 33min

AI overview: 2. The Big Bang and the years that followed

In this episode, co-hosts Calum Chace and David Wood continue their review of progress in AI, taking up the story at the 2012 "Big Bang".

00.05: Introduction: exponential impact, big bangs, jolts, and jerks
00.45: What enabled the Big Bang
01.25: Moore's Law
02.05: Moore's Law has always evolved since its inception in 1965
03.08: Intel's tick tock becomes tic tac toe
03.49: GPUs - Graphics Processing Units
04.29: TPUs - Tensor Processing Units
04.46: Moore's Law is not dead or dying
05.10: 3D chips
05.32: Memristors
05.54: Neuromorphic chips
06.48: Quantum computing
08.18: The astonishing effect of exponential growth
09.08: We have seen this effect in computing already. The cost of an iPhone in the 1950s
09.42: Exponential growth can't continue forever, but Moore's Law hasn't reached any theoretical limits
10.33: Reasons why Moore's Law might end: too small, too expensive, not worthwhile
11.20: Counter-arguments
12.01: "Plenty more room at the bottom"
12.56: Software and algorithms can help keep Moore's Law going
14.15: Using AI to improve chip design
14.40: Data is critical
15.00: ImageNet, Fei-Fei Li, Amazon Mechanical Turk
16.10: AIs labelling data
16.35: The Big Bang
17.00: Jürgen Schmidhuber challenges the narrative
17.41: The Big Bang enabled AI to make money
18.24: 2015 and the Great Robot Freak-Out
18.43: Progress in many domains, especially natural language processing
19.44: Machine Learning and Deep Learning
20.25: Boiling the ocean vs the scientific method's hypothesis-driven approach
21.15: Deep Learning: levels
21.57: How Deep Learning systems recognise faces
22.48: Supervised, Unsupervised, and Reinforcement Learning
24.00: Variants, including Deep Reinforcement Learning and Self-Supervised Learning
24.30: Yann LeCun's camera metaphor for Deep Learning
26.05: Lack of transparency is a concern
27.45: Explainable AI. Is it achievable?
29.00: Other AI problems
29.17: Has another Big Bang taken place? Large Language Models like GPT-3
30.08: Few-shot learning and transfer learning
30.40: Escaping Uncanny Valley
31.50: Gato and partially general AI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
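As a footnote to the exponential growth discussion above (08.18-09.42), here is a back-of-the-envelope calculation of what "doubling roughly every two years" means. The starting point (Intel's 4004, about 2,300 transistors in 1971) is a commonly quoted figure; the projection is purely illustrative, not a claim about any specific chip.

```python
# Back-of-the-envelope Moore's Law: transistor counts doubling roughly
# every two years, starting from the Intel 4004 (~2,300 transistors, 1971).
# Purely illustrative; real chips have not tracked this line exactly.
START_YEAR, START_TRANSISTORS = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2

for year in range(1971, 2032, 10):
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    transistors = START_TRANSISTORS * 2 ** doublings
    print(f"{year}: ~{transistors:,.0f} transistors")
```

Roughly a thirty-fold increase per decade compounds quickly: by 2021 this toy projection is already in the tens of billions of transistors, which is why the episode describes the effect of exponential growth as astonishing even though each individual step looks modest.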
Aug 8, 2022 • 32min

AI overview: 1. From the Greeks to the Big Bang

AI is a subject that we will all benefit from understanding better. In this episode, co-hosts Calum Chace and David Wood review progress in AI from the Greeks to the 2012 "Big Bang".

00.05: A prediction
01.09: AI is likely to cause two singularities in this pivotal century - a jobless economy, and superintelligence
02.22: Counterpoint: it may require AGI to displace most people from the workforce. So only one singularity?
03.27: Jobs are nowhere near all that matters in humans
04.11: Are the "Three Cs jobs" safe? Those involving Creativity, Compassion, and Commonsense? Probably not
05.15: 2012, the Big Bang in AI
05.48: AI now makes money. Google and Facebook ate Rupert Murdoch's lunch
06.30: AI might make the difference between military success and military failure. So there's a geopolitical race as well as a commercial race
07.18: Defining AI
09.03: Intelligence vs Consciousness
10.15: Does the Turing Test test for intelligence or consciousness?
12.30: Can customer service agents pass the Turing Test?
13.07: Attributing consciousness by brain architecture or by behaviour
15.13: Creativity. Move 37 in game two of AlphaGo vs Lee Sedol, and Hassabis' three buckets of creativity
17.13: Music and art produced by AI as examples
19.05: History: start with the Greeks. Hephaestus (Vulcan to the Romans) built automata, and Aristotle speculated about technological unemployment
19.58: AI has featured in science fiction from the beginning, e.g. Mary Shelley's Frankenstein, Samuel Butler's Erewhon, and E.M. Forster's "The Machine Stops"
20.55: Post-WW2 developments. Conference in Paris in 1951 on "Computing machines and human thought". Norbert Wiener and cybernetics
22.48: The Dartmouth Conference
23.55: Perceptrons - very simple models of the human brain
25.13: Perceptrons debunked by Minsky and Papert, so Symbolic AI takes over
25.49: This debunking was a mistake. More data and better hardware overcome the hurdles
27.20: Two AI winters, when research funding dries up
28.07: David was taught maths at Cambridge by James Lighthill, author of the report which helped cause the first AI winter
28.58: The Japanese 5th generation computing project under-delivered in the 1980s. But it prompted an AI revival, and its ambitions have been realised by more recent advances
30.45: No more AI winters?

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
Aug 2, 2022 • 31min

Why this podcast?

Co-hosts David Wood and Calum Chace share their vision and plans for the London Futurists podcast.

00.20: Why we are launching this podcast. Anticipating and managing exponential impact
02.45: It's not the Fourth Industrial Revolution – it's the Information Revolution
04.58: AI's impact. Smartphones as an example of technology's power
09.04: The obviousness of change in hindsight. Why technology implementation is often slow
11.30: Technology implementation is often delayed by poor planning
15.20: We were promised jetpacks. Instead, we got omniscience
17.14: Technological development is not deterministic, and it contains dangers
19.08: Technologies are always double-edged swords. They might be somewhat deterministic
22.03: Better hindsight enables better foresight
23.06: Introducing ourselves
23.13: David bio
24.53: Calum bio
26.44: Fiction and non-fiction. We need more positive stories
27.37: Topics for future episodes
28.03: There are connections between all these topics
28.42: Excited by technology, but realistic
29.24: Securing a great future

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
