London Futurists

Nov 30, 2022 • 31min

Anticipating Longevity Escape Velocity, with Aubrey de Grey

One area of technology that is frequently in the news these days is rejuvenation biotechnology, namely the possibility of undoing key aspects of biological aging via a suite of medical interventions. What these interventions target isn't individual diseases, such as cancer, stroke, or heart disease, but rather the common aggravating factors that lie behind the increasing prevalence of these diseases as we become older.

Our guest in this episode is someone who has been at the forefront of a series of breakthrough initiatives in rejuvenation biotechnology for over 20 years. He is Dr Aubrey de Grey, co-founder of the Methuselah Foundation, the SENS Research Foundation, and, most recently, the LEV Foundation - where 'LEV' stands for Longevity Escape Velocity.

Topics discussed include:
*) Different concepts of aging and damage repair;
*) Why the outlook for damage repair is significantly more tangible today than it was ten years ago;
*) The role of foundations in supporting projects which cannot receive funding from commercial ventures;
*) Questions of pace of development: cautious versus bold;
*) Changing timescales for the likely attainment of robust mouse rejuvenation ('RMR') and longevity escape velocity ('LEV') - a concept sketched numerically at the end of this entry;
*) The "Less Death" initiative;
*) "Anticipating anticipation" - preparing for likely sweeping changes in public attitude once understanding spreads about the forthcoming availability of powerful rejuvenation treatments;
*) Various advocacy initiatives that Aubrey is supporting;
*) Ways in which listeners can help to accelerate the attainment of LEV.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://levf.org
https://lessdeath.org
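
For readers new to the term, the arithmetic behind "longevity escape velocity" is simple: if therapies add more than one year of remaining life expectancy per elapsed calendar year, remaining expectancy stops declining. A minimal sketch in Python; the starting expectancy and gain rates are hypothetical numbers for illustration, not projections from the episode:

```python
# Toy illustration of Longevity Escape Velocity (LEV): if rejuvenation
# therapies add more than one year of remaining life expectancy per calendar
# year, remaining expectancy never runs down. All numbers are hypothetical.

def remaining_expectancy(initial_years, gain_per_year, horizon=60):
    remaining = initial_years
    for _ in range(horizon):
        remaining -= 1.0            # one calendar year passes
        remaining += gain_per_year  # therapies claw back expectancy
    return remaining

print(f"Sub-LEV (0.5 yr gained/yr): {remaining_expectancy(40, 0.5):.0f} years left")
print(f"Post-LEV (1.2 yr gained/yr): {remaining_expectancy(40, 1.2):.0f} years left")
```

The crossover at exactly one year gained per year is the "escape velocity" in the name.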
Nov 23, 2022 • 33min

Expanding humanity's moral circle, with Jacy Reese Anthis

A Venn diagram of people interested in how AI will shape our future, and members of the effective altruism community (often abbreviated to EA), would show a lot of overlap. One of the rising stars in this overlap is our guest in this episode, the polymath Jacy Reese Anthis.

Our discussion picks up themes from Jacy's 2018 book "The End of Animal Farming", including an optimistic roadmap toward an animal-free food system, as well as factors that could alter that roadmap.

We also hear about the work of an organisation co-founded by Jacy: the Sentience Institute, which researches - among other topics - the expansion of moral considerations to non-human entities. We discuss whether AIs can be sentient, how we might know if an AI is sentient, and whether the design choices made by developers of AI will influence the degree and type of sentience of AIs.

The conversation concludes with some ideas about how various techniques can be used to boost personal effectiveness, and considers different ways in which people can relate to the EA community.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading:
https://www.sentienceinstitute.org/
https://jacyanthis.com/
Nov 16, 2022 • 30min

Hacking the simulation, with Roman Yampolskiy

In the 4th century BC, the Greek philosopher Plato theorised that humans do not perceive the world as it really is. All we can see is shadows on a wall.

In 2003, the Swedish philosopher Nick Bostrom published a paper which formalised an argument to prove Plato was right. The paper argued that one of the following three statements is true:
1. We will go extinct fairly soon
2. Advanced civilisations don't produce simulations containing entities which think they are naturally-occurring sentient intelligences. (This could be because it is impossible.)
3. We are in a simulation.

The reason for this is that if it is possible, and civilisations can become advanced without exploding, then there will be vast numbers of simulations, and it is vanishingly unlikely that any randomly selected civilisation (like us) is a naturally-occurring one. (A toy version of this counting step appears at the end of this entry.)

Some people find this argument pretty convincing. As we will hear later, some of us have added twists to the argument. But some people go even further, and speculate about how we might bust out of the simulation. One such person is our friend and our guest in this episode, Roman Yampolskiy, Professor of Computer Science at the University of Louisville.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Further reading:
"How to Hack the Simulation" by Roman Yampolskiy: https://www.researchgate.net/publication/364811408_How_to_Hack_the_Simulation
"The Simulation Argument" by Nick Bostrom: https://www.simulation-argument.com/
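
The counting step in Bostrom's argument can be made concrete in a few lines. The one million simulations per civilisation below is an arbitrary assumption for illustration; the argument only requires that the number be large:

```python
# The counting step of the simulation argument: if each advanced civilisation
# runs many ancestor simulations, almost all observers are simulated.
# The simulation count is an arbitrary assumption for illustration.

simulations_per_civilisation = 1_000_000  # assumed, only needs to be large
base_realities = 1

simulated = base_realities * simulations_per_civilisation
probability_simulated = simulated / (simulated + base_realities)
print(f"P(randomly selected observer is simulated) = {probability_simulated:.6f}")
```

However large the assumed number, the probability approaches 1, which is why the argument turns on whether statements 1 or 2 block the scenario.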
Nov 9, 2022 • 40min

Pioneering AI drug development, with Alex Zhavoronkov

This episode discusses progress at Insilico Medicine, the AI drug development company founded by our guest, longevity pioneer Alex Zhavoronkov.

1.20 In Feb 2022, Insilico got an IPF drug into phase 1 clinical trials: a first for a wholly AI-developed drug
1.50 Insilico is now well-funded; its software is widely used in the pharma industry
2.30 How drug development works. First you create a hypothesis about what causes a disease
4.00 PandaOmics is Insilico's software to generate hypotheses. It combines 20+ AI models, and huge public data repositories
6.00 This first phase is usually done in academia. It usually costs billions of dollars to develop a hypothesis. 95% of them fail
6.50 The second phase is developing a molecule which might treat the disease
7.15 This is the job of Insilico's Chemistry42 platform
7.30 The classical approach is to test thousands of molecules to see if they bind to the target protein
7.50 AI, by contrast, is able to "imagine" a novel molecule which might bind to it
8.00 You then test 10-15 molecules which have the desired characteristics
8.20 This is done with a variety of genetic algorithms, Generative Adversarial Networks (GANs), and some Transformer networks (see the sketch at the end of this entry)
8.35 Insilico has a "zoo" of 40 validated models
10.40 Given the ten-fold improvement, why hasn't the whole drug industry adopted this process?
10.50 They do all have AI groups and they are trying to change, but they are huge companies, and it takes time
11.50 Is it better to invent new molecules, or re-purpose old drugs, which are already known to be safe in humans?
13.00 You can't gain IP with re-purposed drugs: either somebody else "owns" them, or they are already generic
15.00 The IPF drug was identified during aging research, using aging clocks, and a deep neural net trained on longitudinal data
17.10 The third phase is where Insilico's other platform, InClinico, comes into play
17.35 InClinico predicts the results of phase 2 (clinical efficacy) trials
18.15 InClinico is trained on massive data sets about previous trials
19.40 InClinico is actually Insilico's oldest system. Its value has only been ascertained now that some drugs have made it all the way through the pipeline
22.05 A major pharma company asked Insilico to predict the outcome of ten of its trials
22.30 Nine of these ten trials were predicted correctly
23.00 But the company decided that adopting this methodology would be too much of an upheaval; it was unwilling to rely on outsiders so heavily
24.15 Hedge funds and banks have no such qualms
24.25 Insilico is doing pilots for their investments in biotech startups
26.30 Alex is from Latvia originally, studied in Canada, started his career in the US, but Insilico was established in Hong Kong. Why?
27.00 Chinese CROs, Contract Research Organisations, enable you to do research without having your own wetlab
28.00 Like Apple, Insilico designs in the US and does operations in China. You can also do clinical studies there
28.45 They needed their own people inside those CROs, so had to be co-located
29.10 Hong Kong still has great IP protection, financial expertise, scientific resources, and is a beautiful place to live
29.40 Post-Covid, Insilico also had to set up a site in Shanghai
30.35 It is very frustrating how much opposition has built up against intern…
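
For readers curious about the "genetic algorithms" mentioned at 8.20, here is a minimal, generic GA loop. The bit-string "molecules" and the scoring function are invented stand-ins for a real molecular representation and a learned binding-affinity predictor; nothing here reflects the internals of Insilico's Chemistry42:

```python
import random

# A generic genetic-algorithm loop: score a population of candidates,
# keep the fittest, and mutate them to form the next generation.
# "Molecules" are toy bit-strings; fitness is a stand-in for a real
# binding-affinity model. Purely illustrative.

random.seed(0)
GENOME_LEN, POP_SIZE, GENERATIONS = 32, 50, 40
TARGET = [random.randint(0, 1) for _ in range(GENOME_LEN)]  # mock "ideal binder"

def fitness(genome):
    # Stand-in for a model predicting how well a candidate binds the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 5]            # keep the top 20%
    population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"Best candidate matches target on {fitness(best)}/{GENOME_LEN} positions")
```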
Nov 2, 2022 • 32min

The Singularity Principles

Co-hosts Calum and David dig deep into aspects of David's recent new book "The Singularity Principles". Calum (CC) says he is, in part, unconvinced. David (DW) agrees that the projects he recommends are hard, but suggests some practical ways forward.

0.25 The technological singularity may be nearer than we think
1.10 Confusions about the singularity
1.35 "Taking back control of the singularity"
2.40 The "Singularity Shadow": over-confident predictions which repulse people
3.30 The over-confidence includes predictions of timescale…
4.00 … and outcomes
4.45 The Singularity as the Rapture of the Nerds?
5.20 The Singularity is not a religion…
5.40 … although if positive, it will confer almost godlike powers
6.35 Much discussion of the Singularity is dystopian, but there could be enormous benefits, including…
7.15 Digital twins for cells and whole bodies, and super longevity
7.30 A new enlightenment
7.50 Nuclear fusion
8.10 Humanity's superpower is intelligence
8.30 Amplifying our intelligence should increase our power
9.50 DW's timeline: 50% chance of AGI by 2050, 10% by 2030
10.10 The timeline is contingent on human actions
10.40 Even if AGI isn't coming until 2070, we should be working on AI alignment today
11.10 AI Impacts' survey of all contributors to NeurIPS
11.35 Median view: 50% chance of AGI in 2059, and many were pessimistic
12.15 This discussion can't be left to AI researchers
12.40 A bad beta version might be our last invention
13.00 A few hundred people are now working on AI alignment, and tens of thousands on advancing AI
13.35 The growth of the AI research population is still faster
13.40 CC: Three routes to a positive outcome
13.55 1. Luck. The world turns out to be configured in our favour
14.30 2. Mathematical approaches to AI alignment succeed
14.45 We either align AIs forever, or manage to control them. This is very hard
14.55 3. We merge with the superintelligent machines
15.40 Uploading is a huge engineering challenge
15.55 Philosophical issues raised by uploading: is the self retained?
16.10 DW: routes 2 and 3 are too binary. A fourth route is solving morality
18.15 Individual humans will be augmented, indeed we already are
18.55 But augmented humans won't necessarily be benign
19.30 DW: We have to solve beneficence
20.00 CC: We can't hope to solve our moral debates before AGI arrives
20.20 In which case we are relying on route 1 - luck
20.30 DW: Progress in philosophy *is* possible, and must be accelerated
21.15 The Universal Declaration of Human Rights shows that generalised moral principles can be agreed
22.25 CC: That sounds impossible. The UDHR is very broad and often ignored
23.05 Solving morality is even harder than the MIRI project, and reinforces the idea that route 3 is our best hope
23.50 It's not unreasonable to hope that wisdom correlates with intelligence
24.00 DW: We can proceed step by step, starting with progress on facial recognition, autonomous weapons, and such intermediate questions
25.10 CC: We are so far from solving moral questions. Americans can't even agree if a coup against their democracy was a bad thing
25.40 DW: We have to make progress, and quickly. AI might help us.
26.50 The essence of transhumanism is that we can use technology to improve ourselves
27.20 CC: If you had a magic wand, your first wish should probably be to make all humans see each other as members of the same tribe
27.50 Is…
Oct 26, 2022 • 36min

Collapsing AGI timelines, with Ross Nordby

How likely is it that, by 2030, someone will build artificial general intelligence (AGI)?

Ross Nordby is an AI researcher who has shortened his AGI timelines: he has changed his mind about when AGI might be expected to exist. He recently published an article on the LessWrong community discussion site, giving his argument in favour of shortening these timelines. He now identifies 2030 as the date by which it is 50% likely that AGI will exist. In this episode, we ask Ross questions about his argument, and consider some of the implications that arise.

Article by Ross: https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon
Effective Altruism Long-Term Future Fund: https://funds.effectivealtruism.org/funds/far-future
MIRI (Machine Intelligence Research Institute): https://intelligence.org/

00.57 Ross' background: real-time graphics, mostly in video games
02.10 Increased familiarity with AI made him reconsider his AGI timeline
02.37 He submitted a grant request to the Effective Altruism Long-Term Future Fund to move into AI safety work
03.50 What Ross was researching: can we make an AI intrinsically interpretable?
04.25 The AGI Ross is interested in is defined by capability, regardless of consciousness or sentience
04.55 An AI that is itself "goalless" might be put to uses with destructive side-effects
06.10 The leading AI research groups are still DeepMind and OpenAI
06.43 Other groups, like Anthropic, are more interested in alignment
07.22 If you can align an AI to any goal at all, that is progress: it indicates you have some control
08.00 Is this not all abstract and theoretical - a distraction from more pressing problems?
08.30 There are other serious problems, like pandemics and global warming, but we have to solve them all
08.45 Globally, only around 300 people are focused on AI alignment: not enough
10.05 AGI might well be less than three decades away
10.50 AlphaGo surprised the community, which was expecting Go to be winnable 10-15 years later
11.10 Then AlphaGo was surpassed by systems like AlphaZero and MuZero, which were actually simpler, and more flexible
11.20 AlphaTensor frames matrix multiplication as a game, and becomes superhuman at it (see the sketch at the end of this entry)
11.40 In 2018, the Transformer paper was published, but no-one forecast GPT-3's capabilities
12.00 This year, Minerva (similar to GPT-3) got 50% correct on the MATH dataset: high school competition math problems
13.16 Illustrators now feel threatened by systems like Dall-E, Stable Diffusion, etc
13.30 The conclusion is that intelligence is easier to simulate than we thought
13.40 But these systems also do stupid things. They are brittle
18.00 But we could use transformers more intelligently
19.20 They turn out to be able to write code, and to explain jokes, and do maths reasoning
21.10 Google's Gopher AI
22.05 Machines don't yet have internal models of the world, which we call common sense
24.00 But an early version of GPT-3 demonstrated the ability to model a human thought process alongside a machine's
27.15 Ross' current timeline is 50% probability of AGI by 2030, and 90+% by 2050
27.35 Counterarguments?
29.35 So what is to be done?
30.55 If convinced that AGI is coming soon, most lay people would probably demand that all AI research stops immediately. Which isn't possible
31.40 Maybe publicity would be good in order to generate resources for AI alignment. And to avoid a backlash against s…
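
To make the AlphaTensor item at 11.20 concrete: matrix multiplication can be framed as a search for decompositions that use fewer scalar multiplications, and the classic human-found example is Strassen's 1969 scheme, which multiplies 2x2 matrices with 7 scalar products instead of the naive 8. A sketch:

```python
# Strassen's 1969 decomposition: 2x2 matrix multiplication with 7 scalar
# multiplications instead of the naive 8. AlphaTensor searches this space
# of decompositions as a single-player game.

def strassen_2x2(a, b):
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

a, b = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
naive = [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
assert strassen_2x2(a, b) == naive  # 7 multiplications, same result
print(strassen_2x2(a, b))           # [[19, 22], [43, 50]]
```

AlphaTensor explored this space of decompositions and, for some matrix sizes and arithmetic settings, found schemes beating long-standing records.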
Oct 19, 2022 • 33min

The terabrain is near, with Simon Thorpe

Why do human brains consume much less power than artificial neural networks? Simon Thorpe, Research Director at CNRS, explains his view that the key to artificial general intelligence is a "terabrain" that copies from human brains the sparse-firing networks with spiking neurons.

00.11 Recapping "the AI paradox"
00.28 The nervousness of CTOs regarding AI
00.43 Introducing Simon
01.43 45 years since Oxford, working out how the brain does amazing things
02.45 Brain visual perception as feed-forward vs. feedback
03.40 The ideas behind the system that performed so well in the 2012 ImageNet challenge
04.20 The role of prompts to alter perception
05.30 Drawbacks of human perceptual expectations
06.05 The video of a gorilla on the basketball court
06.50 Conjuring tricks and distractions
07.10 Energy consumption: human neurons vs. artificial neurons
07.26 The standard model would need 500 petaflops
08.40 Exaflop computing has just arrived
08.50 30 MW vs. 20 W (less than a lightbulb)
09.34 Companies working on low-power computing systems
09.48 Power requirements for edge computing
10.10 The need for 86,000 neuromorphic chips?
10.25 Dense activation of neurons vs. sparse activation
10.58 Real brains are event driven
11.16 Real neurons send spikes, not floating point numbers
11.55 SpikeNET by Arnaud Delorme
12.50 Why are sparse networks studied so little?
14.40 A recent debate with Yann LeCun of Facebook and Bill Dally of Nvidia
15.40 One spike can contain many bits of information (see the sketch at the end of this entry)
16.24 Revisiting an experiment with eels from 1927 (Lord Edgar Adrian)
17.06 Biology just needs one spike
17.50 Chips moved from floating point to fixed point
19.25 Other mentions of sparse systems - MoE (Mixture of Experts)
19.50 Sparse systems are easier to interpret
20.30 Advocacy for "grandmother cells"
21.23 Chicks that imprinted on yellow boots
22.35 A semantic web in the 1960s
22.50 The Mozart cell
23.02 An expert system implemented in a neural network with spiking neurons
23.14 Power consumption reduced by a factor of one million
23.40 Experimental progress
23.53 Dedicated silicon: Spikenet Technology, acquired by BrainChip
24.18 The Terabrain Project, using standard off-the-shelf hardware
24.40 Impressive recent simulations on GPUs and on a MacBook Pro
26.26 A homegrown learning rule
26.44 Experiments with "frozen noise"
27.28 Anticipating emulating an entire human brain on a Mac Studio M1 Ultra
28.25 The likely impact of these ideas
29.00 This software will be given away
29.17 Anticipating "local learning" without the results being sent to Big Tech
30.40 GPT-3 could run on your phone next year
31.12 Our interview next year might be, not with Simon, but with his Terabrain
31.22 Our phones know us better than our spouses do

Simon's academic page: https://cerco.cnrs.fr/page-perso-simon-thorpe/
Simon's personal blog: https://simonthorpesideas.blogspot.com/

Audio engineering by Alexander Chace.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
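
Here is a minimal sketch of the rank-order coding idea associated with Simon's spiking networks: each input neuron fires at most one spike, information is carried by the order in which spikes arrive, and earlier spikes are weighted more heavily at the receiver. The inputs, weights, and attenuation factor below are invented for illustration:

```python
import math

# Rank-order coding sketch: transmit only the ORDER in which neurons spike,
# not floating-point activations. Earlier spikes count for more, via a
# shunting (attenuation) factor. All numbers are toy values.

def encode_as_spike_order(intensities):
    # Stronger inputs fire sooner; only the ranking is transmitted.
    return sorted(range(len(intensities)), key=lambda i: -intensities[i])

def decode(spike_order, weights, shunt=0.8):
    # Each successive spike is attenuated, so early spikes dominate.
    activation, gain = 0.0, 1.0
    for neuron in spike_order:
        activation += gain * weights[neuron]
        gain *= shunt
    return activation

image_patch = [0.9, 0.1, 0.5, 0.7]          # toy input intensities
order = encode_as_spike_order(image_patch)   # -> [0, 3, 2, 1]
print("spike order:", order)
print("bits in the ordering of 4 spikes:", math.log2(math.factorial(4)))
print("activation:", decode(order, weights=[1.0, -0.5, 0.2, 0.6]))
```

Since one spike per neuron suffices and most neurons can stay silent, this is the kind of scheme behind the factor-of-a-million power reduction discussed at 23.14.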
Oct 12, 2022 • 34min

AI for organisations, with Daniel Hulme

This episode features Daniel Hulme, founder of Satalia and chief AI officer at WPP. What is AI good at today? And how can organisations increase the likelihood of deploying AI successfully?

02.55 What is AI good at today?
03.25 Deep learning isn't yet being widely used in companies. Executives are wary of self-adapting systems
04.15 Six categories of AI deployment today
04.20 1. Automation. Using "if … then …" statements
04.50 2. Generative AI, like Dall-E
05.15 3. Humanisation, like DeepFake technology and natural language models
05.40 4. Machine learning to extract insights from data - finding correlations that humans could not
06.05 5. Complex decision making, aka operations research, or optimisation. "Companies don't have ML problems, they have decision problems"
06.25 6. Augmenting humans physically or cognitively
06.50 Aren't the tech giants using true AI systems in their operations?
07.15 A/B testing is a simple form of adaptation. Google A/B tested the colours of their logo (see the sketch at the end of this entry)
08.00 Complex adaptive systems with many moving parts are much riskier. If they go wrong, huge damage can occur
08.30 CTOs demand consistency from operational systems, and can't tolerate the mistakes that are essential to learning
09.25 Can't the mistakes be made in simulated environments?
10.20 Elon Musk says simulating the world is not how to develop self-driving cars
10.45 Companies undergoing digital transformations are building ERPs, which are "glorified databases"
11.20 The idea is to develop digital twins, which enable them to ask "what if…" questions
11.30 The coming confluence of three digital twins: workflow, workforce, and administrative processes
12.18 Why don't supermarkets offer digital twins to their customers? They're coming
14.55 People often think that creating a data lake and adding a system like Tableau on top is deploying AI
15.15 Even if you give humans better insights they often don't make better decisions
15.20 Data scientists are not equipped to address opportunities in all 6 of the categories listed earlier
15.40 Companies should start by identifying and then prioritising the frictions in their organisations
16.10 Some companies are taking on "tech debt" which they will have to unwind in five years
16.25 Why aren't large process industry companies boasting about massive revenue improvements or cost savings?
17.00 To make those decisions you need the right data, and top optimisation skills. That's unusual
17.55 Companies ask for "quick wins" but that is an oxymoron
18.10 We do see project ROIs of 200%, but most projects fail due to under-investment, or mis-understandings
19.00 Don't start by just collecting data. The example of a low-cost airline which collected data about everything except rivals' pricing
20.15 Humans usually do know where the signals are
22.25 Some of Daniel's favourite AI projects
23.00 Tesco's last-mile delivery system, which saves 20m delivery miles a year
24.00 Solving PwC's consultant allocation problem radically improved many lives
25.10 In the next decade there will be a move away from pure ML towards ML + optimisation
26.35 How these systems have been applied to Satalia
28.10 Daniel has thought a lot about how AI can enable companies to be very adaptable, and allocate decisions well
29.00 Satalia staff used to make recommendations for their own salaries, and their colleagues would make AI-weighted votes
29.30 The goal is to scale this approach not just a…
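
As an illustration of the "simple form of adaptation" mentioned at 07.15, here is a toy A/B test run as an epsilon-greedy bandit. The variants and click-through rates are invented for illustration; this is not Google's or Satalia's methodology:

```python
import random

# Epsilon-greedy A/B test: mostly show the variant that looks best so far,
# but keep exploring occasionally so the estimates stay honest.
# Variants and click-through rates are hypothetical.

random.seed(1)
TRUE_CTR = {"blue_logo": 0.11, "green_logo": 0.10}  # hidden ground truth
shows = {v: 0 for v in TRUE_CTR}
clicks = {v: 0 for v in TRUE_CTR}

def observed_ctr(v):
    return clicks[v] / shows[v] if shows[v] else 0.0

for _ in range(10_000):
    if random.random() < 0.1:                  # explore 10% of the time
        variant = random.choice(list(TRUE_CTR))
    else:                                      # otherwise exploit best estimate
        variant = max(TRUE_CTR, key=observed_ctr)
    shows[variant] += 1
    clicks[variant] += random.random() < TRUE_CTR[variant]

for v in TRUE_CTR:
    print(f"{v}: shown {shows[v]:5d} times, observed CTR {observed_ctr(v):.3f}")
```

The deliberate exploration step is exactly the kind of controlled mistake-making that, per the discussion at 08.30, CTOs find hard to tolerate in operational systems.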
Oct 5, 2022 • 35min

A tale of two cities: Riyadh and Dublin

Calum and David reflect on their involvement in two recent conferences, one in Riyadh, and one in Dublin. Each conference highlighted a potential disruption in a major industry: a country with large ambitions in the AI space, and a new foundation in the longevity space.

00.00 A tale of two cities, two conferences, two industries
00.44 First, the 2nd Saudi Global AI Conference
01.03 Vision 2030
01.11 Saudi has always been a coalition between the fundamentalist Wahhabis and the Royal Family
01.38 The King chooses reform in the wake of 9/11
02.07 Mohamed bin Salman is appointed Crown Prince, and embarks on reform
02.28 The partial liberation of women, and the fundamentalists side-lined
03.10 The "Sheikhdown" in 2017
03.49 The Khashoggi affair and the Yemen war lead to Saudi being shunned
04.26 The West is missing what's going on in Saudi
05.00 Lifting the Saudi economy's reliance on petrochemicals
05.27 AI is central to Vision 2030
06.00 Can Saudi become one of the world's top 10 or 15 AI countries?
06.20 The AI duopoly between the US and China is so strong, this isn't as hard as you might think
06.55 Saudi's advantages
07.22 Saudi's disadvantages
07.54 The goal is not implausible
08.10 The short-term goals of the conference. A forum for discussions, deals, and trying to open the world's eyes
09.45 Saudi is arguably on the way to becoming another Dubai. Continuation and success are not inevitable, but it is encouraging
11.00 Fastest-growing country in the G20, with an oil bonanza
11.25 The proposed brand-new city of Neom with The Line, a futuristic environment
13.07 The second conference: the Longevity Summit in Dublin
13.48 A new foundation announced
14.05 Reports updating on progress in longevity research around the world
14.20 A dozen were new and surprising. Four examples…
14.50 1. Bats. A speaker from Dublin discussed why they live so long - 40 years - and what we can learn from that
15.55 2. Parabiosis on steroids. Linking the blood flow of two animals suggests there are aging elements in our blood which can be removed
17.50 3. Using AI to develop drugs. Companies like Exscientia and Insilico. Cortex Discovery is a smaller, perhaps more nimble player
19.40 4. Hevolution, a new longevity fund backed with up to $1bn of Saudi money per year for 20 years
22.05 As Aubrey de Grey has long said, we need engineering as much as research
22.40 Aubrey thinks aging should be tackled by undoing cell damage rather than changing the human metabolism
24.00 Three phases of his career. Methuselah. SENS. New foundation
25.00 Let's avoid cancer, heart disease and dementias by continually reversing aging damage
26.00 He is always itchy to explore new areas. This led to a power struggle within SENS, which he lost
27.00 What should previous SENS donors do now?
27.15 The rich crypto investors who have provided large amounts to SENS are backing the new foundation
28.30 One of the new foundation's investment areas will be parabiosis
28.55 Cryonics will be another investment area
29.15 Lobbying legislators will be another
29.50 Robust Mouse Rejuvenation will be the initial priority
30.50 Pets may be the animal models whose rejuvenation breaks humanity's "trance of death"
31.05 David has been appointed a director of the new foundation
31.50 The other directors
33.05 An exciting future

Audio engineering by Alexander Chace.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Sep 28, 2022 • 33min

Stability and combinations, with Aleksa Gordić

This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today's most advanced AI systems.

00.07 This episode builds on Episode 5
01.05 We start with GANs - Generative Adversarial Networks
01.33 Solving the problem of stability, with higher resolution
03.24 GANs are notoriously hard to train. They suffer from mode collapse
03.45 Worse, the model might not learn anything, and the result is pure noise
03.55 DC GANs introduced convolutional layers to stabilise them and enable higher resolution
04.37 The technique of outpainting
05.55 Generating text as well as images, and producing stories
06.14 AI Dungeon
06.28 From GANs to Diffusion models
06.48 DDPM (De-noising diffusion probabilistic models) does for diffusion models what DC GANs did for GANs
07.20 They are more stable, and don't suffer from mode collapse
07.30 They do have downsides. They are much more computationally intensive
08.24 What does the word diffusion mean in this context?
08.40 It's adopted from physics. It peels noise away from the image (see the sketch at the end of this entry)
09.17 Isn't that rewinding entropy?
09.45 One application is making a photo taken in 1830 look like one taken yesterday
09.58 Semantic Segmentation Masks convert bands of flat colour into realistic images of sky, earth, sea, etc
10.35 Bounding boxes generate objects of a specified class from tiny inputs
11.00 The images are not taken from previously seen images on the internet, but invented from scratch
11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them
12.40 Failures are eliminated by amendments, as always with models like this
12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months
13.30 The failure modes get harder to find as the obvious ones are eliminated
13.45 Even with 175 billion parameters, GPT-3 struggled to handle three digits in computation
15.18 Are you often surprised by what the models do next?
15.50 The research community is like a hive mind, and you never know where the next idea will come from
16.40 Often the next thing comes from a couple of students at a university
16.58 How Ian Goodfellow created the first GAN
17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?
18.15 We should cultivate different approaches because you never know where they might lead
19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking
19.40 AlphaGo combined deep learning and GOFAI
21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach
21.30 GOFAI models had no learning element. They can't go beyond the humans whose expertise they encapsulate
22.25 The now-famous move 37 in AlphaGo's game two against Lee Sedol in 2016
23.40 Moravec's paradox. Easy things are hard, and hard things are easy
24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening
24.40 Will models always demand more and more compute?
25.10 The human brain has far more compute power than even our biggest systems today
25.45 Sparse, or MoE (Mixture of Experts) systems are quite efficient
26.00 We need more compute, better algorithms, and more efficiency
26.55 Dedicated AI chips will help a lot with efficiency
26.25 Cerebras claims that GPT-3 could be trained on a single chip
27.…
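
To make the "peeling noise away" description at 08.40 concrete, here is a sketch of the forward (noising) half of DDPM (Ho et al., 2020); the trained model learns to run this process in reverse, step by step. The linear beta schedule and T=1000 follow the paper's convention; the toy 8x8 "image" is random data standing in for a real one:

```python
import numpy as np

# DDPM forward process: gradually mix an image with Gaussian noise according
# to a fixed schedule. Generation works by learning to reverse these steps.

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # per-step noise schedule
alpha_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

def noised_sample(x0, t, rng):
    # Closed form for q(x_t | x_0): scaled image plus Gaussian noise.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
image = rng.uniform(-1, 1, size=(8, 8))  # toy stand-in for a real image
for t in (0, 250, 999):
    x_t = noised_sample(image, t, rng)
    print(f"t={t:4d}: signal weight {np.sqrt(alpha_bar[t]):.3f}, "
          f"noise weight {np.sqrt(1.0 - alpha_bar[t]):.3f}")
```

By the final step the signal weight is near zero, which is why sampling can start from pure noise.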
