London Futurists

Oct 12, 2022 • 34min

AI for organisations, with Daniel Hulme

This episode features Daniel Hulme, founder of Satalia and chief AI officer at WPP. What is AI good at today? And how can organisations increase the likelihood of deploying AI successfully?

02.55 What is AI good at today?
03.25 Deep learning isn't yet being widely used in companies. Executives are wary of self-adapting systems
04.15 Six categories of AI deployment today
04.20 1. Automation. Using "if … then …" statements
04.50 2. Generative AI, like Dall-E
05.15 3. Humanisation, like deepfake technology and natural language models
05.40 4. Machine learning to extract insights from data – finding correlations that humans could not
06.05 5. Complex decision making, aka operations research, or optimisation. "Companies don't have ML problems, they have decision problems" (a toy allocation example appears after these notes)
06.25 6. Augmenting humans physically or cognitively
06.50 Aren't the tech giants using true AI systems in their operations?
07.15 A/B testing is a simple form of adaptation. Google A/B tested the colours of their logo
08.00 Complex adaptive systems with many moving parts are much riskier. If they go wrong, huge damage can occur
08.30 CTOs demand consistency from operational systems, and can't tolerate the mistakes that are essential to learning
09.25 Can't the mistakes be made in simulated environments?
10.20 Elon Musk says simulating the world is not how to develop self-driving cars
10.45 Companies undergoing digital transformations are building ERPs, which are "glorified databases"
11.20 The idea is to develop digital twins, which enable them to ask "what if…" questions
11.30 The coming confluence of three digital twins: workflow, workforce, and administrative processes
12.18 Why don't supermarkets offer digital twins to their customers? They're coming
14.55 People often think that creating a data lake and adding a system like Tableau on top is deploying AI
15.15 Even if you give humans better insights, they often don't make better decisions
15.20 Data scientists are not equipped to address opportunities in all six of the categories listed earlier
15.40 Companies should start by identifying and then prioritising the frictions in their organisations
16.10 Some companies are taking on "tech debt" which they will have to unwind in five years
16.25 Why aren't large process industry companies boasting about massive revenue improvements or cost savings?
17.00 To make those decisions you need the right data, and top optimisation skills. That's unusual
17.55 Companies ask for "quick wins", but that is an oxymoron
18.10 We do see project ROIs of 200%, but most projects fail due to under-investment or misunderstandings
19.00 Don't start by just collecting data. The example of a low-cost airline which collected data about everything except rivals' pricing
20.15 Humans usually do know where the signals are
22.25 Some of Daniel's favourite AI projects
23.00 Tesco's last-mile delivery system, which saves 20m delivery miles a year
24.00 Solving PwC's consultant allocation problem radically improved many lives
25.10 In the next decade there will be a move away from pure ML towards ML plus optimisation
26.35 How these systems have been applied to Satalia
28.10 Daniel has thought a lot about how AI can enable companies to be very adaptable, and allocate decisions well
29.00 Satalia staff used to make recommendations for their own salaries, and their colleagues would make AI-weighted votes
29.30 The goal is to …
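The "decision problems" point at 06.05 is classic operations research. As a minimal sketch of what an allocation engine of the kind described for PwC does at its core, here is a toy assignment problem solved with SciPy's linear_sum_assignment; the names and cost matrix are invented for illustration and have no connection to any real Satalia or PwC system.

```python
# Toy consultant-to-project allocation: a minimal sketch of the kind of
# optimisation problem discussed in the episode. All data is hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment

consultants = ["Ana", "Ben", "Chloe"]
projects = ["Audit", "Tax", "Advisory"]

# cost[i][j]: "unhappiness" of consultant i if assigned to project j
# (lower is better; these numbers are made up for the example)
cost = np.array([
    [2, 9, 4],
    [6, 1, 8],
    [5, 7, 3],
])

rows, cols = linear_sum_assignment(cost)  # minimise total cost
for i, j in zip(rows, cols):
    print(f"{consultants[i]} -> {projects[j]} (cost {cost[i, j]})")
print("Total cost:", cost[rows, cols].sum())  # -> 6 for this matrix
```

Real allocation problems add constraints (availability, skills, preferences), but the core step is the same: search for the assignment that minimises a cost function, rather than predicting from data.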
Oct 5, 2022 • 34min

A tale of two cities: Riyadh and Dublin

Calum and David reflect on their involvement in two recent conferences, one in Riyadh and one in Dublin. Each conference highlighted a potential disruption in a major industry: a country with large ambitions in the AI space, and a new foundation in the longevity space.

00.00 A tale of two cities, two conferences, two industries
00.44 First, the 2nd Saudi Global AI Conference
01.03 Vision 2030
01.11 Saudi has always been a coalition between the fundamentalist Wahhabis and the Royal Family
01.38 The King chooses reform in the wake of 9/11
02.07 Mohammed bin Salman is appointed Crown Prince and embarks on reform
02.28 The partial liberation of women, and the fundamentalists side-lined
03.10 The "Sheikhdown" in 2017
03.49 The Khashoggi affair and the Yemen war lead to Saudi being shunned
04.26 The West is missing what's going on in Saudi
05.00 Reducing the Saudi economy's reliance on petrochemicals
05.27 AI is central to Vision 2030
06.00 Can Saudi become one of the world's top 10 or 15 AI countries?
06.20 The AI duopoly between the US and China is so strong, this isn't as hard as you might think
06.55 Saudi's advantages
07.22 Saudi's disadvantages
07.54 The goal is not implausible
08.10 The short-term goals of the conference. A forum for discussions, deals, and trying to open the world's eyes
09.45 Saudi is arguably on the way to becoming another Dubai. Continuation and success are not inevitable, but it is encouraging
11.00 Fastest-growing country in the G20, with an oil bonanza
11.25 The proposed brand-new city of Neom with The Line, a futuristic environment
13.07 The second conference: the Longevity Summit in Dublin
13.48 A new foundation announced
14.05 Reports updating on progress in longevity research around the world
14.20 A dozen were new and surprising. Four examples…
14.50 1. Bats. A speaker from Dublin discussed why they live so long – 40 years – and what we can learn from that
15.55 2. Parabiosis on steroids. Linking the blood flow of two animals suggests there are aging elements in our blood which can be removed
17.50 3. Using AI to develop drugs. Companies like Exscientia and Insilico. Cortex Discovery is a smaller, perhaps more nimble player
19.40 4. Hevolution, a new longevity fund backed with up to $1bn of Saudi money per year for 20 years
22.05 As Aubrey de Grey has long said, we need engineering as much as research
22.40 Aubrey thinks aging should be tackled by undoing cell damage rather than changing the human metabolism
24.00 Three phases of his career: Methuselah, SENS, and the new foundation
25.00 Let's avoid cancer, heart disease and dementias by continually reversing aging damage
26.00 He is always itchy to explore new areas. This led to a power struggle within SENS, which he lost
27.00 What should previous SENS donors do now?
27.15 The rich crypto investors who have provided large amounts to SENS are backing the new foundation
28.30 One of the new foundation's investment areas will be parabiosis
28.55 Cryonics will be another investment area
29.15 Lobbying legislators will be another
29.50 Robust Mouse Rejuvenation will be the initial priority
30.50 Pets may be the animal models whose rejuvenation breaks humanity's "trance of death"
31.05 David has been appointed a director of the new foundation
31.50 The other directors
33.05 An exciting future

Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Sep 28, 2022 • 33min

Stability and combinations, with Aleksa Gordić

This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today's most advanced AI systems.

00.07 This episode builds on Episode 5
01.05 We start with GANs – Generative Adversarial Networks
01.33 Solving the problem of stability, with higher resolution
03.24 GANs are notoriously hard to train. They suffer from mode collapse
03.45 Worse, the model might not learn anything, and the result is pure noise
03.55 DCGANs introduced convolutional layers to stabilise them and enable higher resolution
04.37 The technique of outpainting
05.55 Generating text as well as images, and producing stories
06.14 AI Dungeon
06.28 From GANs to diffusion models
06.48 DDPMs (denoising diffusion probabilistic models) do for diffusion models what DCGANs did for GANs (a sketch of the forward process appears after these notes)
07.20 They are more stable, and don't suffer from mode collapse
07.30 They do have downsides. They are much more computation-intensive
08.24 What does the word diffusion mean in this context?
08.40 It's adopted from physics. It peels noise away from the image
09.17 Isn't that rewinding entropy?
09.45 One application is making a photo taken in 1830 look like one taken yesterday
09.58 Semantic segmentation masks convert bands of flat colour into realistic images of sky, earth, sea, etc
10.35 Bounding boxes generate objects of a specified class from tiny inputs
11.00 The images are not taken from previously seen images on the internet, but invented from scratch
11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them
12.40 Failures are eliminated by amendments, as always with models like this
12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months
13.30 The failure modes get harder to find as the obvious ones are eliminated
13.45 Even with 175 billion parameters, GPT-3 struggled to handle three-digit arithmetic
15.18 Are you often surprised by what the models do next?
15.50 The research community is like a hive mind, and you never know where the next idea will come from
16.40 Often the next thing comes from a couple of students at a university
16.58 How Ian Goodfellow created the first GAN
17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?
18.15 We should cultivate different approaches because you never know where they might lead
19.15 Symbolic AI (aka Good Old-Fashioned AI, or GOFAI) is still alive and kicking
19.40 AlphaGo combined deep learning and GOFAI
21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach
21.30 GOFAI models had no learning element. They can't go beyond the humans whose expertise they encapsulate
22.25 The now-famous move 37 in AlphaGo's game two against Lee Sedol in 2016
23.40 Moravec's paradox. Easy things are hard, and hard things are easy
24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening
24.40 Will models always demand more and more compute?
25.10 The human brain has far more compute power than even our biggest systems today
25.45 Sparse, or MoE (Mixture of Experts), systems are quite efficient
26.00 We need more compute, better algorithms, and more efficiency
26.55 Dedicated AI chips will help a lot with efficiency
26.25 Cerebras claims that GPT-3 could be t…
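For listeners who want a concrete handle on what "diffusion" means here, below is a minimal numpy sketch of the DDPM forward (noising) process mentioned at 06.48. A trained model learns to run this in reverse, "peeling noise away"; the schedule values are illustrative rather than taken from any particular paper's configuration.

```python
# Minimal sketch of the DDPM forward process: gradually mix an image
# with Gaussian noise. A trained denoiser learns to reverse these steps.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # illustrative noise schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

def noised(x0: np.ndarray, t: int) -> np.ndarray:
    """Sample x_t directly from x_0: sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

image = rng.random((8, 8))               # stand-in for a real image
print("signal left at t=10: ", alphas_bar[10])    # close to 1
print("signal left at t=999:", alphas_bar[999])   # almost pure noise
x_noisy = noised(image, 500)
```

The "rewinding entropy" question at 09.17 makes sense in this light: the reverse process doesn't violate physics, because the model has learned, from training data, which images plausibly sit underneath the noise.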
Sep 22, 2022 • 29min

AI Transformers in context, with Aleksa Gordić

Welcome to episode 5 of the London Futurists podcast, with your co-hosts David Wood and Calum Chace.

We're attempting something rather ambitious in episodes 5 and 6: we try to explain how today's cutting-edge artificial intelligence systems work, using language familiar to lay people rather than people with maths or computer science degrees. Understanding how Transformers and Generative Adversarial Networks (GANs) work means getting to grips with concepts like matrix transformations, vectors, and landscapes with 500 dimensions.

This is challenging stuff, but do persevere. These AI systems are already having a profound impact, and that impact will only grow. Even at the level of pure self-interest, it is often said that in the short term, AIs won't take all the jobs, but people who understand AI will take the best jobs.

We are extremely fortunate to have as our guide for these episodes a brilliant AI researcher at DeepMind, Aleksa Gordić. Note that Aleksa is speaking in a personal capacity and is not representing DeepMind. Aleksa's YouTube channel is https://www.youtube.com/c/TheAIEpiphany

00.03 An ambitious couple of episodes
01.22 Introducing Aleksa, a double rising star
02.15 Keeping it simple
02.50 Aleksa's current research, and previous work on Microsoft's HoloLens
03.40 Self-taught in AI. Not representing DeepMind
04.20 The narrative of the Big Bang in 2012, when machine learning started to work in AI
05.15 What machine learning is
05.45 AlexNet. Bigger data sets and more powerful computers
06.40 Deep learning is a subset of machine learning, and a re-branding of artificial neural networks
07.27 2017 and the arrival of Transformers
07.40 Attention is All You Need
08.16 Before this there were LSTMs, Long Short-Term Memory networks
08.40 Why Transformers beat LSTMs
09.58 Tokenisation. Splitting text into smaller units and mapping them onto higher-dimensional spaces
10.30 3D space is defined by three numbers
10.55 Humans cannot envisage multi-dimensional spaces with hundreds of dimensions, but it's OK to imagine them as 3D spaces
11.55 Some dimensions of the word "princess"
12.30 Black boxes
13.05 People are trying to understand how machines handle the dimensions
13.50 "Man is to king as woman is to queen." Using mathematical operators on this kind of relationship (a toy version appears after these notes)
14.35 Not everything is explainable
14.45 Machines discover the relationships themselves
15.15 Supervised and self-supervised learning. Rewarding or penalising the machine for predicting labels
16.25 Vectors are best viewed as arrows in 3D space, although that is over-simplifying
17.20 For instance, the relationship between "queen" and "woman" is a vector
17.50 Self-supervised systems do their own labelling
18.30 The labels and relationships have probability distributions
19.20 For instance, a princess is far more likely to wear a slipper than a dog
19.35 Large numbers of parameters
19.40 BERT, an early landmark Transformer, had a hundred million or so parameters
20.04 Now it's in the hundreds of billions, or even trillions
20.24 A parameter is analogous to a synapse in the human brain
21.19 Synapses can have different weights
22.10 The more parameters, the lower the loss
22.35 Not just text, but images too, because images can also be represented as tokens
23.00 In late 2020 Google released the first vision Transformer
23.29 Dall-E and Midjourney are diffusion models, w…
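To make the 13.50 point concrete, here is a tiny, hand-built sketch of word-vector arithmetic. The three-dimensional vectors are invented for the example; real embeddings have hundreds of learned dimensions, but the same algebra applies.

```python
# Toy word-vector arithmetic: "man is to king as woman is to queen".
# The 3-dimensional vectors are invented for illustration; real models
# learn hundreds of dimensions from data rather than having them set by hand.
import numpy as np

vec = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([0.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),   # "man" plus a royalty direction
    "queen": np.array([0.0, 1.0, 1.0]),   # "woman" plus the same direction
}

def closest(target: np.ndarray) -> str:
    """Return the vocabulary word with highest cosine similarity to target."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(vec, key=lambda w: cos(vec[w], target))

# king - man + woman should land on "queen"
print(closest(vec["king"] - vec["man"] + vec["woman"]))  # -> queen
```

The striking thing, as discussed at 14.45, is that trained models discover directions like "royalty" by themselves; nobody writes them in.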
Sep 19, 2022 • 33min

AI overview: 3. Recent developments

In this episode, co-hosts Calum Chace and David Wood explore a number of recent developments in AI - developments that are rapidly changing what counts as "state of the art" in AI.

00.05: Short recap of previous episodes
00.20: A couple of Geoff Hinton stories
02.27: Today's subject: the state of AI today
02.53: Search
03.35: Games
03.58: Translation
04.33: Maps
05.33: Making the world understandable. Increasingly
07.00: Transformers. Attention is all you need
08.00: Masked language models (a sketch appears after these notes)
08.18: GPT-2 and GPT-3
08.54: Parameters and synapses
10.15: Foundation models produce much of the content on the internet
10.40: Data is even more important than size
11.45: Brittleness and transfer learning
13.15: Do machines understand?
14.05: Human understanding and stochastic parrots
15.27: Chatbots
16.22: Tay embarrasses Microsoft
16.53: BlenderBot
17.19: Far from AGI. LaMDA and Blake Lemoine
18.26: The value of anthropomorphising
19.53: Automation
20.25: Robotic Process Automation (RPA)
20.55: Drug discovery
21.45: New antibiotics. Discovering halicin
23.50: AI drug discovery as practiced by Insilico, Exscientia, and others
25.33: Eroom's Law
26.34: AlphaFold. How 200m proteins fold
28.30: Towards a complete model of the cell
29.19: Analysis
30.04: Air traffic controllers use only 10% of the data available to them
30.36: Transfer learning can mitigate the escalating demand for compute power
31.18: Next up: the short-term future of AI

Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
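The masked language models mentioned at 08.00 are easy to see in action: the model fills in a blanked-out token. Below is a minimal sketch using the Hugging Face transformers library, assuming it is installed and can download the bert-base-uncased checkpoint.

```python
# Minimal sketch of a masked language model filling in a blanked token.
# Assumes the Hugging Face `transformers` library is installed and the
# bert-base-uncased checkpoint can be downloaded.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("London is the [MASK] of the United Kingdom."):
    print(f'{guess["token_str"]:>10}  score={guess["score"]:.3f}')
# Expect "capital" (and similar words) near the top of the list.
```

Training on this fill-in-the-blank task over vast text corpora is what gives these models their broad, if shallow, grasp of language.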
Sep 7, 2022 • 32min

AI overview: 2. The Big Bang and the years that followed

In this episode, co-hosts Calum Chace and David Wood continue their review of progress in AI, taking up the story at the 2012 "Big Bang".

00.05: Introduction: exponential impact, big bangs, jolts, and jerks
00.45: What enabled the Big Bang
01.25: Moore's Law
02.05: Moore's Law has always evolved since its inception in 1965
03.08: Intel's tick-tock becomes tic-tac-toe
03.49: GPUs - Graphics Processing Units
04.29: TPUs - Tensor Processing Units
04.46: Moore's Law is not dead or dying
05.10: 3D chips
05.32: Memristors
05.54: Neuromorphic chips
06.48: Quantum computing
08.18: The astonishing effect of exponential growth (a sketch appears after these notes)
09.08: We have seen this effect in computing already. The cost of an iPhone in the 1950s
09.42: Exponential growth can't continue forever, but Moore's Law hasn't reached any theoretical limits
10.33: Reasons why Moore's Law might end: too small, too expensive, not worthwhile
11.20: Counter-arguments
12.01: "Plenty more room at the bottom"
12.56: Software and algorithms can help keep Moore's Law going
14.15: Using AI to improve chip design
14.40: Data is critical
15.00: ImageNet, Fei-Fei Li, Amazon Mechanical Turk
16.10: AIs labelling data
16.35: The Big Bang
17.00: Jürgen Schmidhuber challenges the narrative
17.41: The Big Bang enabled AI to make money
18.24: 2015 and the Great Robot Freak-Out
18.43: Progress in many domains, especially natural language processing
19.44: Machine Learning and Deep Learning
20.25: Boiling the ocean vs the scientific method's hypothesis-driven approach
21.15: Deep Learning: levels
21.57: How Deep Learning systems recognise faces
22.48: Supervised, Unsupervised, and Reinforcement Learning
24.00: Variants, including Deep Reinforcement Learning and Self-Supervised Learning
24.30: Yann LeCun's cake metaphor for Deep Learning
26.05: Lack of transparency is a concern
27.45: Explainable AI. Is it achievable?
29.00: Other AI problems
29.17: Has another Big Bang taken place? Large Language Models like GPT-3
30.08: Few-shot learning and transfer learning
30.40: Escaping Uncanny Valley
31.50: Gato and partially general AI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
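To make the 08.18 point about exponential growth concrete, here is a minimal sketch of how Moore's Law-style doubling compounds. The starting density is an arbitrary unit, not a real chip specification.

```python
# How Moore's Law compounds: doubling roughly every 2 years.
# The starting density of 1.0 is an arbitrary illustrative unit.
density = 1.0
for year in range(0, 41, 2):
    print(f"year {year:2d}: {density:>12,.0f}x")
    density *= 2
# After 40 years of doubling every 2 years: 2**20 = 1,048,576x,
# i.e. a million-fold improvement from steady, unspectacular doubling.
```

This is why hindsight makes the changes look obvious while foresight keeps underestimating them: each individual doubling feels modest.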
Aug 8, 2022 • 32min

AI overview: 1. From the Greeks to the Big Bang

AI is a subject that we will all benefit from understanding better. In this episode, co-hosts Calum Chace and David Wood review progress in AI from the Greeks to the 2012 "Big Bang".

00.05: A prediction
01.09: AI is likely to cause two singularities in this pivotal century - a jobless economy, and superintelligence
02.22: Counterpoint: it may require AGI to displace most people from the workforce. So only one singularity?
03.27: Jobs are nowhere near all that matters in humans
04.11: Are the "Three Cs jobs" safe? Those involving Creativity, Compassion, and Commonsense? Probably not
05.15: 2012, the Big Bang in AI
05.48: AI now makes money. Google and Facebook ate Rupert Murdoch's lunch
06.30: AI might make the difference between military success and military failure. So there's a geopolitical race as well as a commercial race
07.18: Defining AI
09.03: Intelligence vs consciousness
10.15: Does the Turing Test test for intelligence or consciousness?
12.30: Can customer service agents pass the Turing Test?
13.07: Attributing consciousness by brain architecture or by behaviour
15.13: Creativity. Move 37 in game two of AlphaGo vs Lee Sedol, and Hassabis' three buckets of creativity
17.13: Music and art produced by AI as examples
19.05: History: starting with the Greeks. Hephaestus (Vulcan to the Romans) built automata, and Aristotle speculated about technological unemployment
19.58: AI has featured in science fiction from the beginning, e.g. Mary Shelley's Frankenstein, Samuel Butler's Erewhon, and E.M. Forster's "The Machine Stops"
20.55: Post-WW2 developments. Conference in Paris in 1951 on "Computing machines and human thought". Norbert Wiener and cybernetics
22.48: The Dartmouth Conference
23.55: Perceptrons - very simple models of the human brain (a sketch appears after these notes)
25.13: Perceptrons debunked by Minsky and Papert, so symbolic AI takes over
25.49: This debunking was a mistake. More data and better hardware overcame the hurdles
27.20: Two AI winters, when research funding dries up
28.07: David was taught maths at Cambridge by James Lighthill, author of the report which helped cause the first AI winter
28.58: The Japanese 5th generation computing project under-delivered in the 1980s. But it prompted an AI revival, and its ambitions have been realised by more recent advances
30.45: No more AI winters?

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
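The perceptron mentioned at 23.55 is simple enough to fit in a few lines: a weighted sum plus a threshold. Below is a minimal sketch of Rosenblatt-style training on the OR function; the learning rate and epoch count are illustrative. Minsky and Papert's critique (25.13) was that no single such unit can learn XOR, a limit later overcome by stacking layers.

```python
# Minimal perceptron: a weighted sum plus threshold, trained with
# Rosenblatt's update rule. It learns OR; a single unit cannot learn XOR.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])           # OR truth table

w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(20):                  # a few passes over the data suffice
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (yi - pred) * xi   # nudge weights toward the target
        b += lr * (yi - pred)

print([int(w @ xi + b > 0) for xi in X])  # -> [0, 1, 1, 1]
```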
Aug 2, 2022 • 31min

Why this podcast?

Co-hosts David Wood and Calum Chace share their vision and plans for the London Futurists podcast.

00.20: Why we are launching this podcast. Anticipating and managing exponential impact
02.45: It's not the Fourth Industrial Revolution - it's the Information Revolution
04.58: AI's impact. Smartphones as an example of technology's power
09.04: The obviousness of change in hindsight. Why technology implementation is often slow
11.30: Technology implementation is often delayed by poor planning
15.20: We were promised jetpacks. Instead, we got omniscience
17.14: Technological development is not deterministic, and it contains dangers
19.08: Technologies are always double-edged swords. They might be somewhat deterministic
22.03: Better hindsight enables better foresight
23.06: Introducing ourselves
23.13: David's bio
24.53: Calum's bio
26.44: Fiction and non-fiction. We need more positive stories
27.37: Topics for future episodes
28.03: There are connections between all these topics
28.42: Excited by technology, but realistic
29.24: Securing a great future

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
