
London Futurists

Latest episodes

Nov 2, 2022 • 31min

The Singularity Principles

Co-hosts Calum and David dig deep into aspects of David's new book "The Singularity Principles". Calum (CC) says he is, in part, unconvinced. David (DW) agrees that the projects he recommends are hard, but suggests some practical ways forward.

0.25 The technological singularity may be nearer than we think
1.10 Confusions about the singularity
1.35 "Taking back control of the singularity"
2.40 The "Singularity Shadow": over-confident predictions which repulse people
3.30 The over-confidence includes predictions of timescale…
4.00 … and outcomes
4.45 The Singularity as the Rapture of the Nerds?
5.20 The Singularity is not a religion…
5.40 … although if positive, it will confer almost godlike powers
6.35 Much discussion of the Singularity is dystopian, but there could be enormous benefits, including…
7.15 Digital twins for cells and whole bodies, and super longevity
7.30 A new enlightenment
7.50 Nuclear fusion
8.10 Humanity's superpower is intelligence
8.30 Amplifying our intelligence should increase our power
9.50 DW's timeline: 50% chance of AGI by 2050, 10% by 2030
10.10 The timeline is contingent on human actions
10.40 Even if AGI isn't coming until 2070, we should be working on AI alignment today
11.10 AI Impacts' survey of all contributors to NeurIPS
11.35 Median view: 50% chance of AGI in 2059, and many were pessimistic
12.15 This discussion can't be left to AI researchers
12.40 A bad beta version might be our last invention
13.00 A few hundred people are now working on AI alignment, and tens of thousands on advancing AI
13.35 The growth of the AI research population is still faster
13.40 CC: Three routes to a positive outcome
13.55 1. Luck. The world turns out to be configured in our favour
14.30 2. Mathematical approaches to AI alignment succeed
14.45 We either align AIs forever, or manage to control them. This is very hard
14.55 3. We merge with the superintelligent machines
15.40 Uploading is a huge engineering challenge
15.55 Philosophical issues raised by uploading: is the self retained?
16.10 DW: routes 2 and 3 are too binary. A fourth route is solving morality
18.15 Individual humans will be augmented, indeed we already are
18.55 But augmented humans won't necessarily be benign
19.30 DW: We have to solve beneficence
20.00 CC: We can't hope to solve our moral debates before AGI arrives
20.20 In which case we are relying on route 1 – luck
20.30 DW: Progress in philosophy *is* possible, and must be accelerated
21.15 The Universal Declaration of Human Rights shows that generalised moral principles can be agreed
22.25 CC: That sounds impossible. The UDHR is very broad and often ignored
23.05 Solving morality is even harder than the MIRI project, and reinforces the idea that route 3 is our best hope
23.50 It's not unreasonable to hope that wisdom correlates with intelligence
24.00 DW: We can proceed step by step, starting with progress on facial recognition, autonomous weapons, and such intermediate questions
25.10 CC: We are so far from solving moral questions. Americans can't even agree if a coup against their democracy was a bad thing
25.40 DW: We have to make progress, and quickly. AI might help us
26.50 The essence of transhumanism is that we can use technology to improve ourselves
27.20 CC: If you had a magic wand, your first wish should probably be to make all humans see each other as members of the same tribe
27.50 Is…
Oct 26, 2022 • 36min

Collapsing AGI timelines, with Ross Nordby

How likely is it that, by 2030, someone will build artificial general intelligence (AGI)?

Ross Nordby is an AI researcher who has shortened his AGI timelines: he has changed his mind about when AGI might be expected to exist. He recently published an article on the LessWrong community discussion site, giving his argument in favour of shortening these timelines. He now identifies 2030 as the date by which it is 50% likely that AGI will exist. In this episode, we ask Ross questions about his argument, and consider some of the implications that arise.

Article by Ross: https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon
Effective Altruism Long-Term Future Fund: https://funds.effectivealtruism.org/funds/far-future
MIRI (Machine Intelligence Research Institute): https://intelligence.org/

00.57 Ross' background: real-time graphics, mostly in video games
02.10 Increased familiarity with AI made him reconsider his AGI timeline
02.37 He submitted a grant request to the Effective Altruism Long-Term Future Fund to move into AI safety work
03.50 What Ross was researching: can we make an AI intrinsically interpretable?
04.25 The AGI Ross is interested in is defined by capability, regardless of consciousness or sentience
04.55 An AI that is itself "goalless" might be put to uses with destructive side-effects
06.10 The leading AI research groups are still DeepMind and OpenAI
06.43 Other groups, like Anthropic, are more interested in alignment
07.22 If you can align an AI to any goal at all, that is progress: it indicates you have some control
08.00 Is this not all abstract and theoretical – a distraction from more pressing problems?
08.30 There are other serious problems, like pandemics and global warming, but we have to solve them all
08.45 Globally, only around 300 people are focused on AI alignment: not enough
10.05 AGI might well be less than three decades away
10.50 AlphaGo surprised the community, which was expecting Go to be winnable 10-15 years later
11.10 Then AlphaGo was surpassed by systems like AlphaZero and MuZero, which were actually simpler, and more flexible
11.20 AlphaTensor frames matrix multiplication as a game, and becomes superhuman at it (see the sketch below)
11.40 In 2017, the Transformer paper was published, but no-one forecast GPT-3's capabilities
12.00 This year, Minerva (similar to GPT-3) got 50% correct on the MATH dataset: high school competition math problems
13.16 Illustrators now feel threatened by systems like Dall-E, Stable Diffusion, etc
13.30 The conclusion is that intelligence is easier to simulate than we thought
13.40 But these systems also do stupid things. They are brittle
18.00 But we could use transformers more intelligently
19.20 They turn out to be able to write code, and to explain jokes, and do maths reasoning
21.10 Google's Gopher AI
22.05 Machines don't yet have internal models of the world, which we call common sense
24.00 But an early version of GPT-3 demonstrated the ability to model a human thought process alongside a machine's
27.15 Ross' current timeline is 50% probability of AGI by 2030, and 90+% by 2050
27.35 Counterarguments?
29.35 So what is to be done?
30.55 If convinced that AGI is coming soon, most lay people would probably demand that all AI research stops immediately. Which isn't possible
31.40 Maybe publicity would be good in order to generate resources for AI alignment. And to avoid a backlash against…
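
A note on the AlphaTensor item at 11.20: "matrix multiplication as a game" means searching for multiplication shortcuts. The Python sketch below shows Strassen's classic 1969 construction, which multiplies two 2×2 matrices with 7 multiplications instead of 8; it is our own illustration of the kind of shortcut being searched for, not code from the episode or from DeepMind.

```python
# Illustrative sketch (not from the episode): Strassen's 1969 shortcut
# multiplies two 2x2 matrices using 7 multiplications instead of 8.
# AlphaTensor discovered shortcuts of this kind automatically, by
# treating the search for them as a single-player game.
import numpy as np

def strassen_2x2(A, B):
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[5., 6.], [7., 8.]])
assert np.allclose(strassen_2x2(A, B), A @ B)  # matches ordinary multiplication
```

Applied recursively to large matrices, saving one multiplication per 2×2 block compounds into a real speed-up, which is why finding new constructions of this type is valuable.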
Oct 19, 2022 • 33min

The terabrain is near, with Simon Thorpe

Why do human brains consume much less power than artificial neural networks? Simon Thorpe, Research Director at CNRS, explains his view that the key to artificial general intelligence is a "terabrain" that copies from human brains the sparse-firing networks with spiking neurons.

00.11 Recapping "the AI paradox"
00.28 The nervousness of CTOs regarding AI
00.43 Introducing Simon
01.43 45 years since Oxford, working out how the brain does amazing things
02.45 Brain visual perception as feed-forward vs. feedback
03.40 The ideas behind the system that performed so well in the 2012 ImageNet challenge
04.20 The role of prompts to alter perception
05.30 Drawbacks of human perceptual expectations
06.05 The video of a gorilla on the basketball court
06.50 Conjuring tricks and distractions
07.10 Energy consumption: human neurons vs. artificial neurons
07.26 The standard model would need 500 petaflops
08.40 Exaflop computing has just arrived
08.50 30 MW vs. 20 W (less than a lightbulb)
09.34 Companies working on low-power computing systems
09.48 Power requirements for edge computing
10.10 The need for 86,000 neuromorphic chips?
10.25 Dense activation of neurons vs. sparse activation
10.58 Real brains are event-driven
11.16 Real neurons send spikes, not floating point numbers (see the sketch below)
11.55 SpikeNET by Arnaud Delorme
12.50 Why are sparse networks studied so little?
14.40 A recent debate with Yann LeCun of Facebook and Bill Dally of Nvidia
15.40 One spike can contain many bits of information
16.24 Revisiting an experiment with eels from 1927 (Lord Edgar Adrian)
17.06 Biology just needs one spike
17.50 Chips moved from floating point to fixed point
19.25 Other mentions of sparse systems – MoE (Mixture of Experts)
19.50 Sparse systems are easier to interpret
20.30 Advocacy for "grandmother cells"
21.23 Chicks that imprinted on yellow boots
22.35 A semantic web in the 1960s
22.50 The Mozart cell
23.02 An expert system implemented in a neural network with spiking neurons
23.14 Power consumption reduced by a factor of one million
23.40 Experimental progress
23.53 Dedicated silicon: Spikenet Technology, acquired by BrainChip
24.18 The Terabrain Project, using standard off-the-shelf hardware
24.40 Impressive recent simulations on GPUs and on a MacBook Pro
26.26 A homegrown learning rule
26.44 Experiments with "frozen noise"
27.28 Anticipating emulating an entire human brain on a Mac Studio M1 Ultra
28.25 The likely impact of these ideas
29.00 This software will be given away
29.17 Anticipating "local learning" without the results being sent to Big Tech
30.40 GPT-3 could run on your phone next year
31.12 Our interview next year might be, not with Simon, but with his Terabrain
31.22 Our phones know us better than our spouses do

Simon's academic page: https://cerco.cnrs.fr/page-perso-simon-thorpe/
Simon's personal blog: https://simonthorpesideas.blogspot.com/

Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
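
To make the contrast at 10.25–11.16 concrete (sparse, event-driven spikes rather than dense floating point activations), here is a minimal leaky integrate-and-fire neuron in Python. It is our own illustrative sketch of the textbook model, not Simon's Terabrain code, and all the parameter values are invented for the demo.

```python
# Illustrative sketch (not Simon Thorpe's code): a leaky integrate-and-fire
# neuron. It stays silent most of the time and emits a discrete spike only
# when its membrane potential crosses a threshold - the event-driven,
# sparse-firing behaviour discussed in the episode.
import numpy as np

rng = np.random.default_rng(0)
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0  # timestep, leak constant, threshold, reset
v, spikes = 0.0, []

input_current = rng.random(200) * 0.12            # weak random input drive
for t, i_in in enumerate(input_current):
    v += dt / tau * (-v) + i_in                   # leak towards 0, plus input
    if v >= v_thresh:                             # threshold crossed:
        spikes.append(t)                          # ...emit a discrete event
        v = v_reset                               # ...and reset
print(f"{len(spikes)} spikes in {len(input_current)} steps: {spikes}")
```

The neuron communicates only through a handful of discrete events rather than a continuous stream of numbers, which is where the million-fold power savings mentioned at 23.14 are hoped to come from.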
Oct 12, 2022 • 34min

AI for organisations, with Daniel Hulme

This episode features Daniel Hulme, founder of Satalia and Chief AI Officer at WPP. What is AI good at today? And how can organisations increase the likelihood of deploying AI successfully?

02.55 What is AI good at today?
03.25 Deep learning isn't yet being widely used in companies. Executives are wary of self-adapting systems
04.15 Six categories of AI deployment today
04.20 1. Automation. Using "if … then …" statements
04.50 2. Generative AI, like Dall-E
05.15 3. Humanisation, like DeepFake technology and natural language models
05.40 4. Machine learning to extract insights from data – finding correlations that humans could not
06.05 5. Complex decision making, aka operations research, or optimisation. "Companies don't have ML problems, they have decision problems" (a toy sketch follows these notes)
06.25 6. Augmenting humans physically or cognitively
06.50 Aren't the tech giants using true AI systems in their operations?
07.15 A/B testing is a simple form of adaptation. Google A/B tested the colours of their logo
08.00 Complex adaptive systems with many moving parts are much riskier. If they go wrong, huge damage can occur
08.30 CTOs demand consistency from operational systems, and can't tolerate the mistakes that are essential to learning
09.25 Can't the mistakes be made in simulated environments?
10.20 Elon Musk says simulating the world is not how to develop self-driving cars
10.45 Companies undergoing digital transformations are building ERPs, which are "glorified databases"
11.20 The idea is to develop digital twins, which enable them to ask "what if…" questions
11.30 The coming confluence of three digital twins: workflow, workforce, and administrative processes
12.18 Why don't supermarkets offer digital twins to their customers? They're coming
14.55 People often think that creating a data lake and adding a system like Tableau on top is deploying AI
15.15 Even if you give humans better insights they often don't make better decisions
15.20 Data scientists are not equipped to address opportunities in all 6 of the categories listed earlier
15.40 Companies should start by identifying and then prioritising the frictions in their organisations
16.10 Some companies are taking on "tech debt" which they will have to unwind in five years
16.25 Why aren't large process industry companies boasting about massive revenue improvements or cost savings?
17.00 To make those decisions you need the right data, and top optimisation skills. That's unusual
17.55 Companies ask for "quick wins" but that is an oxymoron
18.10 We do see project ROIs of 200%, but most projects fail due to under-investment, or misunderstandings
19.00 Don't start by just collecting data. The example of a low-cost airline which collected data about everything except rivals' pricing
20.15 Humans usually do know where the signals are
22.25 Some of Daniel's favourite AI projects
23.00 Tesco's last-mile delivery system, which saves 20m delivery miles a year
24.00 Solving PwC's consultant allocation problem radically improved many lives
25.10 In the next decade there will be a move away from pure ML towards ML + optimisation
26.35 How these systems have been applied to Satalia
28.10 Daniel has thought a lot about how AI can enable companies to be very adaptable, and allocate decisions well
29.00 Satalia staff used to make recommendations for their own salaries, and their colleagues would make AI-weighted votes
29.30 The goal is to scale this approach not just…
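
For a flavour of category 5 above, here is a toy operations-research decision problem in Python. The products, resources, and all numbers are invented for illustration; this is a sketch of the general technique, not a Satalia system.

```python
# Illustrative sketch (not a Satalia system): a toy operations-research
# decision problem. Choose how many units of two products to make so as
# to maximise profit, subject to limited machine-hours and labour-hours.
from scipy.optimize import linprog

# Maximise 40*x1 + 30*x2  ->  linprog minimises, so negate the profits.
profit = [-40, -30]
constraints = [[2, 1],   # machine-hours needed per unit of x1, x2
               [1, 2]]   # labour-hours needed per unit of x1, x2
limits = [100, 80]       # hours available of each resource

result = linprog(profit, A_ub=constraints, b_ub=limits, bounds=[(0, None)] * 2)
print(result.x, -result.fun)  # optimal plan [40, 20] and its profit, 2200
```

The point of the "decision problems" quote is that many business questions look like this: the data may already exist, and the hard part is the optimisation over it.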
Oct 5, 2022 • 34min

A tale of two cities: Riyadh and Dublin

Calum and David reflect on their involvement in two recent conferences, one in Riyadh, and one in Dublin. Each conference highlighted a potential disruption in a major industry: a country with large ambitions in the AI space, and a new foundation in the longevity space.

00.00 A tale of two cities, two conferences, two industries
00.44 First, the 2nd Saudi Global AI Conference
01.03 Vision 2030
01.11 Saudi has always been a coalition between the fundamentalist Wahhabis and the Royal Family
01.38 The King chooses reform in the wake of 9/11
02.07 Mohamed bin Salman is appointed Crown Prince, and embarks on reform
02.28 The partial liberation of women, and the fundamentalists side-lined
03.10 The "Sheikhdown" in 2017
03.49 The Khashoggi affair and the Yemen war lead to Saudi being shunned
04.26 The West is missing what's going on in Saudi
05.00 Lifting the Saudi economy's reliance on petrochemicals
05.27 AI is central to Vision 2030
06.00 Can Saudi become one of the world's top 10 or 15 AI countries?
06.20 The AI duopoly between the US and China is so strong, this isn't as hard as you might think
06.55 Saudi's advantages
07.22 Saudi's disadvantages
07.54 The goal is not implausible
08.10 The short-term goals of the conference. A forum for discussions, deals, and trying to open the world's eyes
09.45 Saudi is arguably on the way to becoming another Dubai. Continuation and success are not inevitable, but it is encouraging
11.00 The fastest-growing economy in the G20, with an oil bonanza
11.25 The proposed brand-new city of Neom with The Line, a futuristic environment
13.07 The second conference: the Longevity Summit in Dublin
13.48 A new foundation announced
14.05 Reports updating on progress in longevity research around the world
14.20 A dozen were new and surprising. Four examples…
14.50 1. Bats. A speaker from Dublin discussed why they live so long – 40 years – and what we can learn from that
15.55 2. Parabiosis on steroids. Linking the blood flow of two animals suggests there are aging elements in our blood which can be removed
17.50 3. Using AI to develop drugs. Companies like Exscientia and Insilico. Cortex Discovery is a smaller, perhaps more nimble player
19.40 4. Hevolution, a new longevity fund backed with up to $1bn of Saudi money per year for 20 years
22.05 As Aubrey de Grey has long said, we need engineering as much as research
22.40 Aubrey thinks aging should be tackled by undoing cell damage rather than changing the human metabolism
24.00 Three phases of his career. Methuselah. SENS. New foundation
25.00 Let's avoid cancer, heart disease and dementias by continually reversing aging damage
26.00 He is always itchy to explore new areas. This led to a power struggle within SENS, which he lost
27.00 What should previous SENS donors do now?
27.15 The rich crypto investors who have provided large amounts to SENS are backing the new foundation
28.30 One of the new foundation's investment areas will be parabiosis
28.55 Cryonics will be another investment area
29.15 Lobbying legislators will be another
29.50 Robust Mouse Rejuvenation will be the initial priority
30.50 Pets may be the animal models whose rejuvenation breaks humanity's "trance of death"
31.05 David has been appointed a director of the new foundation
31.50 The other directors
33.05 An exciting future

Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Sep 28, 2022 • 32min

Stability and combinations, with Aleksa Gordić

This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today's most advanced AI systems.

00.07 This episode builds on Episode 5
01.05 We start with GANs – Generative Adversarial Networks
01.33 Solving the problem of stability, with higher resolution
03.24 GANs are notoriously hard to train. They suffer from mode collapse
03.45 Worse, the model might not learn anything, and the result is pure noise
03.55 DCGANs introduced convolutional layers to stabilise them and enable higher resolution
04.37 The technique of outpainting
05.55 Generating text as well as images, and producing stories
06.14 AI Dungeon
06.28 From GANs to diffusion models
06.48 DDPM (de-noising diffusion probabilistic models) does for diffusion models what DCGANs did for GANs
07.20 They are more stable, and don't suffer from mode collapse
07.30 They do have downsides. They are much more computationally intensive
08.24 What does the word diffusion mean in this context?
08.40 It's adopted from physics. It peels noise away from the image (see the sketch below)
09.17 Isn't that rewinding entropy?
09.45 One application is making a photo taken in 1830 look like one taken yesterday
09.58 Semantic segmentation masks convert bands of flat colour into realistic images of sky, earth, sea, etc
10.35 Bounding boxes generate objects of a specified class from tiny inputs
11.00 The images are not taken from previously seen images on the internet, but invented from scratch
11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them
12.40 Failures are eliminated by amendments, as always with models like this
12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months
13.30 The failure modes get harder to find as the obvious ones are eliminated
13.45 Even with 175 billion parameters, GPT-3 struggled to handle three digits in computation
15.18 Are you often surprised by what the models do next?
15.50 The research community is like a hive mind, and you never know where the next idea will come from
16.40 Often the next thing comes from a couple of students at a university
16.58 How Ian Goodfellow created the first GAN
17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?
18.15 We should cultivate different approaches because you never know where they might lead
19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking
19.40 AlphaGo combined deep learning and GOFAI
21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach
21.30 GOFAI models had no learning element. They can't go beyond the humans whose expertise they encapsulate
22.25 The now-famous move 37 in AlphaGo's game two against Lee Sedol in 2016
23.40 Moravec's paradox. Easy things are hard, and hard things are easy
24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening
24.40 Will models always demand more and more compute?
25.10 The human brain has far more compute power than even our biggest systems today
25.45 Sparse, or MoE (Mixture of Experts) systems are quite efficient
26.00 We need more compute, better algorithms, and more efficiency
26.55 Dedicated AI chips will help a lot with efficiency
26.25 Cerebras claims that GPT-3 could be trained on a single chip
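
On the question at 08.24–08.40: here is a minimal sketch of the forward (noising) half of a DDPM-style diffusion process, using the standard closed-form formula. The model is trained to reverse this corruption step by step, which is the "peeling noise away" Aleksa describes. The code and its numbers are our own illustration, not from the episode.

```python
# Illustrative sketch of the DDPM forward (noising) process. A diffusion
# model is trained to undo this corruption step by step; sampling then
# "peels noise away" from pure noise until an image emerges.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)          # noise schedule
alpha_bar = np.cumprod(1.0 - betas)         # cumulative signal retention

def noisy_version(x0, t, rng):
    """Jump straight to step t: x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(64)                # stand-in for an image
for t in [0, 250, 999]:
    xt = noisy_version(x0, t, rng)
    print(t, round(float(np.corrcoef(x0, xt)[0, 1]), 3))  # signal fades towards 0
```

Running this shows the correlation with the original dropping towards zero as t grows: by the final step the "image" is essentially pure Gaussian noise, which is exactly the starting point for generation.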
Sep 22, 2022 • 29min

AI Transformers in context, with Aleksa Gordić

Welcome to episode 5 of the London Futurist podcast, with your co-hosts David Wood and Calum Chace.

We're attempting something rather ambitious in episodes 5 and 6. We try to explain how today's cutting-edge artificial intelligence systems work, using language familiar to lay people, rather than people with maths or computer science degrees.

Understanding how Transformers and Generative Adversarial Networks (GANs) work means getting to grips with concepts like matrix transformations, vectors, and landscapes with 500 dimensions.

This is challenging stuff, but do persevere. These AI systems are already having a profound impact, and that impact will only grow. Even at the level of pure self-interest, it is often said that in the short term, AIs won't take all the jobs, but people who understand AI will take the best jobs.

We are extremely fortunate to have as our guide for these episodes a brilliant AI researcher at DeepMind, Aleksa Gordić. Note that Aleksa is speaking in a personal capacity and is not representing DeepMind.

Aleksa's YouTube channel is https://www.youtube.com/c/TheAIEpiphany

00.03 An ambitious couple of episodes
01.22 Introducing Aleksa, a double rising star
02.15 Keeping it simple
02.50 Aleksa's current research, and previous work on Microsoft's HoloLens
03.40 Self-taught in AI. Not representing DeepMind
04.20 The narrative of the Big Bang in 2012, when machine learning started to work in AI
05.15 What machine learning is
05.45 AlexNet. Bigger data sets and more powerful computers
06.40 Deep learning is a subset of machine learning, and a re-branding of artificial neural networks
07.27 2017 and the arrival of Transformers
07.40 Attention is All You Need
08.16 Before this there were LSTMs, Long Short-Term Memories
08.40 Why Transformers beat LSTMs
09.58 Tokenisation. Splitting text into smaller units and mapping them onto higher-dimension networks
10.30 3D space is defined by three numbers
10.55 Humans cannot envisage multi-dimensional spaces with hundreds of dimensions, but it's OK to imagine them as 3D spaces
11.55 Some dimensions of the word "princess"
12.30 Black boxes
13.05 People are trying to understand how machines handle the dimensions
13.50 "Man is to king as woman is to queen." Using mathematical operators on this kind of relationship (see the sketch below)
14.35 Not everything is explainable
14.45 Machines discover the relationships themselves
15.15 Supervised and self-supervised learning. Rewarding or penalising the machine for predicting labels
16.25 Vectors are best viewed as arrows in 3D space, although that is over-simplifying
17.20 For instance the relationship between "queen" and "woman" is a vector
17.50 Self-supervised systems do their own labelling
18.30 The labels and relationships have probability distributions
19.20 For instance, a princess is far more likely to wear a slipper than a dog
19.35 Large numbers of parameters
19.40 BERT, the original Transformer, had a hundred million or so parameters
20.04 Now it's in the hundreds of billions, or even trillions
20.24 A parameter is analogous to a synapse in the human brain
21.19 Synapses can have different weights
22.10 The more parameters, the lower the loss
22.35 Not just text, but images too, because images can also be represented as tokens
23.00 In late 2020 Google released the first vision Transformer
23.29 Dall-E and Midjourney are diffusion models, which have replaced GANs
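
The item at 13.50 can be made concrete in a few lines of Python. The three "dimensions" and all the numbers below are invented for illustration (real embeddings are learned, with hundreds of dimensions), but the arithmetic is exactly the famous king − man + woman ≈ queen relationship.

```python
# Illustrative sketch with invented numbers: word embeddings as vectors,
# where relationships become arithmetic. Real embeddings have hundreds of
# dimensions and are learned, not hand-written like these.
import numpy as np

# dimensions (roughly): [royalty, gender, person-ness]
emb = {
    "king":  np.array([0.9,  0.9, 0.8]),
    "queen": np.array([0.9, -0.9, 0.8]),
    "man":   np.array([0.1,  0.9, 0.9]),
    "woman": np.array([0.1, -0.9, 0.9]),
}

def closest(v):
    """Return the word whose embedding is nearest to vector v."""
    return min(emb, key=lambda w: np.linalg.norm(emb[w] - v))

# king - man + woman lands nearest to queen
print(closest(emb["king"] - emb["man"] + emb["woman"]))  # -> queen
```

Subtracting "man" removes the male direction, adding "woman" adds the female one, and the royalty dimension is untouched: the machine discovers relationships like this for itself during training.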
Sep 19, 2022 • 33min

AI overview: 3. Recent developments

In this episode, co-hosts Calum Chace and David Wood explore a number of recent developments in AI - developments that are rapidly changing what counts as "state of the art" in AI.

00.05: Short recap of previous episodes
00.20: A couple of Geoff Hinton stories
02.27: Today's subject: the state of AI today
02.53: Search
03.35: Games
03.58: Translation
04.33: Maps
05.33: Making the world understandable. Increasingly
07.00: Transformers. Attention is all you need (see the sketch below)
08.00: Masked language models
08.18: GPT-2 and GPT-3
08.54: Parameters and synapses
10.15: Foundation models produce much of the content on the internet
10.40: Data is even more important than size
11.45: Brittleness and transfer learning
13.15: Do machines understand?
14.05: Human understanding and stochastic parrots
15.27: Chatbots
16.22: Tay embarrasses Microsoft
16.53: BlenderBot
17.19: Far from AGI. LaMDA and Blake Lemoine
18.26: The value of anthropomorphising
19.53: Automation
20.25: Robotic Process Automation (RPA)
20.55: Drug discovery
21.45: New antibiotics. Discovering Halicin
23.50: AI drug discovery as practiced by Insilico, Exscientia and others
25.33: Eroom's Law
26.34: AlphaFold. How 200m proteins fold
28.30: Towards a complete model of the cell
29.19: Analysis
30.04: Air traffic controllers use only 10% of the data available to them
30.36: Transfer learning can mitigate the escalating demand for compute power
31.18: Next up: the short-term future of AI

Audio engineering by Alexander Chace.
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
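
For the Transformers item at 07.00, the core "attention" computation is only a few lines of matrix algebra. Here is a minimal numpy sketch of scaled dot-product attention as defined in "Attention is All You Need", with random toy data standing in for real token representations (our illustration, not from the episode).

```python
# Minimal sketch of scaled dot-product attention, the core operation of
# the Transformer ("Attention is All You Need", 2017): each token queries
# every other token and takes a weighted average of their values.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(0)
tokens, d = 4, 8                                    # 4 toy tokens, 8 dimensions each
Q, K, V = (rng.standard_normal((tokens, d)) for _ in range(3))
print(attention(Q, K, V).shape)                     # (4, 8): one output per token
```

Everything else in a Transformer (the layers, the masking used by masked language models at 08.00, the billions of parameters at 08.54) is built around repeated applications of this one operation.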
Sep 7, 2022 • 32min

AI overview: 2. The Big Bang and the years that followed

In this episode, co-hosts Calum Chace and David Wood continue their review of progress in AI, taking up the story at the 2012 "Big Bang".

00.05: Introduction: exponential impact, big bangs, jolts, and jerks
00.45: What enabled the Big Bang
01.25: Moore's Law
02.05: Moore's Law has always evolved since its inception in 1965
03.08: Intel's tick-tock becomes tic-tac-toe
03.49: GPUs - Graphics Processing Units
04.29: TPUs - Tensor Processing Units
04.46: Moore's Law is not dead or dying
05.10: 3D chips
05.32: Memristors
05.54: Neuromorphic chips
06.48: Quantum computing
08.18: The astonishing effect of exponential growth (see the sketch below)
09.08: We have seen this effect in computing already. The cost of an iPhone in the 1950s
09.42: Exponential growth can't continue forever, but Moore's Law hasn't reached any theoretical limits
10.33: Reasons why Moore's Law might end: too small, too expensive, not worthwhile
11.20: Counter-arguments
12.01: "Plenty more room at the bottom"
12.56: Software and algorithms can help keep Moore's Law going
14.15: Using AI to improve chip design
14.40: Data is critical
15.00: ImageNet, Fei-Fei Li, Amazon Mechanical Turk
16.10: AIs labelling data
16.35: The Big Bang
17.00: Jürgen Schmidhuber challenges the narrative
17.41: The Big Bang enabled AI to make money
18.24: 2015 and the Great Robot Freak-Out
18.43: Progress in many domains, especially natural language processing
19.44: Machine Learning and Deep Learning
20.25: Boiling the ocean vs the scientific method's hypothesis-driven approach
21.15: Deep Learning: levels
21.57: How Deep Learning systems recognise faces
22.48: Supervised, Unsupervised, and Reinforcement Learning
24.00: Variants, including Deep Reinforcement Learning and Self-Supervised Learning
24.30: Yann LeCun's camera metaphor for Deep Learning
26.05: Lack of transparency is a concern
27.45: Explainable AI. Is it achievable?
29.00: Other AI problems
29.17: Has another Big Bang taken place? Large Language Models like GPT-3
30.08: Few-shot learning and transfer learning
30.40: Escaping Uncanny Valley
31.50: Gato and partially general AI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
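
The arithmetic behind 08.18 (and the iPhone comparison at 09.08) is easy to verify directly. A quick sketch, taking the usual rough statement of Moore's Law as a doubling every two years (illustrative numbers only):

```python
# Illustrative arithmetic only: what Moore's-Law-style doubling every two
# years does to a quantity over a few decades. This is the effect behind
# the "cost of an iPhone in the 1950s" comparison in the episode.
for years in (10, 20, 50):
    doublings = years / 2          # one doubling roughly every two years
    factor = 2 ** doublings
    print(f"after {years} years: x{factor:,.0f}")
# after 10 years: x32
# after 20 years: x1,024
# after 50 years: x33,554,432
```

A factor of 33 million over 50 years is why a device with an iPhone's capabilities would have been unaffordable (and building-sized) in the 1950s, and why exponential trends routinely defeat our linear intuitions.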
Aug 8, 2022 • 32min

AI overview: 1. From the Greeks to the Big Bang

AI is a subject that we will all benefit from understanding better. In this episode, co-hosts Calum Chace and David Wood review progress in AI from the Greeks to the 2012 "Big Bang".

00.05: A prediction
01.09: AI is likely to cause two singularities in this pivotal century - a jobless economy, and superintelligence
02.22: Counterpoint: it may require AGI to displace most people from the workforce. So only one singularity?
03.27: Jobs are nowhere near all that matters in humans
04.11: Are the "Three Cs jobs" safe? Those involving Creativity, Compassion, and Commonsense? Probably not
05.15: 2012, the Big Bang in AI
05.48: AI now makes money. Google and Facebook ate Rupert Murdoch's lunch
06.30: AI might make the difference between military success and military failure. So there's a geopolitical race as well as a commercial race
07.18: Defining AI
09.03: Intelligence vs Consciousness
10.15: Does the Turing Test test for Intelligence or Consciousness?
12.30: Can customer service agents pass the Turing Test?
13.07: Attributing consciousness by brain architecture or by behaviour
15.13: Creativity. Move 37 in game two of AlphaGo vs Lee Sedol, and Hassabis' three buckets of creativity
17.13: Music and art produced by AI as examples
19.05: History: Start with the Greeks. Hephaestus (Vulcan to the Romans) built automata, and Aristotle speculated about technological unemployment
19.58: AI has featured in science fiction from the beginning, e.g. Mary Shelley's Frankenstein, Samuel Butler's Erewhon, E.M. Forster's "The Machine Stops"
20.55: Post-WW2 developments. Conference in Paris in 1951 on "Computing machines and human thought". Norbert Wiener and cybernetics
22.48: The Dartmouth Conference
23.55: Perceptrons - very simple models of the human brain (see the sketch below)
25.13: Perceptrons debunked by Minsky and Papert, so Symbolic AI takes over
25.49: This debunking was a mistake. More data and better hardware overcome the hurdles
27.20: Two AI winters, when research funding dries up
28.07: David was taught maths at Cambridge by James Lighthill, author of the report which helped cause the first AI winter
28.58: The Japanese 5th generation computing project under-delivered in the 1980s. But it prompted an AI revival, and its ambitions have been realised by more recent advances
30.45: No more AI winters?

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
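
The perceptron discussed at 23.55–25.49 fits in a dozen lines of Python. This sketch of the classic learning rule is our own illustration: it learns AND, which is linearly separable, but can never learn XOR, the limitation Minsky and Papert made famous.

```python
# Illustrative sketch of a single perceptron and its 1950s learning rule.
# It learns AND (linearly separable) but cannot learn XOR - the limitation
# Minsky and Papert highlighted, which helped usher in Symbolic AI.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = int(xi @ w + b > 0)
            w += lr * (target - pred) * xi    # nudge weights towards the target
            b += lr * (target - pred)
    return lambda x: int(x @ w + b > 0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
f_and = train_perceptron(X, np.array([0, 0, 0, 1]))
f_xor = train_perceptron(X, np.array([0, 1, 1, 0]))
print([f_and(x) for x in X])  # [0, 0, 0, 1] - AND learned perfectly
print([f_xor(x) for x in X])  # wrong somewhere - XOR is not linearly separable
```

As the episode notes, the "debunking" turned out to be a mistake in the long run: stacking many such units into layers, with more data and better hardware, eventually overcame the single-layer limitation.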
