
London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace.

Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.

His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions. He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.

In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.

He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.

Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.

David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.

He is also principal of the independent futurist consultancy and publisher Delta Wisdom (see https://deltawisdom.com/), executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption.

As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones. From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture's Mobility Health business initiative.

He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
Latest episodes

Aug 2, 2023 • 40min
Investing in AI, with John Cassidy
Our topic in this episode is investing in AI, so we're delighted to have as our guest John Cassidy, a Partner at Kindred Capital, a UK-based venture capital firm. Before he became an investment professional, John co-founded CCG.ai, a precision oncology company which exited to Dante Labs in 2019.

We discuss how the investment landscape is being transformed by the possibilities enabled by generative AI.

Selected follow-ups:
https://kindredcapital.vc/
https://cradle.bio/
https://scarletcomply.com/
https://www.five.ai/

Topics addressed in this episode include:
*) The argument for investing not just in "platforms" but also in "picks and shovels" - items within the orchestration or infrastructure layers of new solutions
*) Examples of recent investments by Kindred Capital
*) Comparisons between the surge of excitement around generative AI and previous surges of excitement around crypto and dot-com
*) Companies such as Amazon, Google, and Microsoft kept delivering value despite the crash of the dot-com bubble; will something similar apply with generative AI?
*) The example of how Nvidia captures significant value in the chip manufacturing industry
*) However, looking further back in history, many people who invested in the infrastructure of railways and canals lost lots of money
*) Reasons why generative AI might produce large amounts of real value more quickly than previous technologies
*) The example of Cradle Bio as enablers of protein engineering - and what might happen if Google upgrade their protein folding prediction software from AlphaFold 2 to AlphaFold 3
*) Despite the changes in technological possibilities, what most interests VCs is the calibre of a company's founding team
*) The search for individuals who have "creative destruction in their being" - people with a particular kind of irrational self-belief
*) The contrast between crystallized intelligence and fluid intelligence - and why both are needed
*) Advantages and disadvantages for investors of being located in the UK vs. being located in the US
*) Why doesn't Europe have tech giants?
*) Complications with government regulation of tech industries
*) The example of Scarlet as a company helping to streamline the regulation of medical software that is frequently updated
*) Why government regulators need to engage with people in industry who are already immersed in considering the safety and efficacy of products
*) Wherever they are located, companies need to plan ahead for their products reaching new jurisdictions
*) Ways in which AI is likely to impact industries in new ways in the near future
*) The particular need to improve the efficiency of the later stages of clinical trials of new medical treatments

Audio engineering by Alexander Chace.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Jul 26, 2023 • 36min
Transformational transformers, with Jeremy Kahn
Our guest in this episode is Jeremy Kahn, a senior writer at Fortune Magazine, based in the UK. He writes about artificial intelligence and other disruptive technologies, from quantum computing to augmented reality. Previously he was at Bloomberg for eight years, again writing mostly about technology, and in moving to Fortune he was returning to his journalistic roots, as he started his career there in 1997, when he was based in New York.

David and Calum invited Jeremy onto the show because they think his weekly newsletter "Eye on AI" is one of the very best non-technical sources of news and views about the technology. Jeremy has some distinctive views on the significance of transformers and the LLMs (Large Language Models) they enable.

Selected follow-ups:
https://www.fortune.com/newsletters/eye-on-ai
https://fortune.com/author/jeremy-kahn/

Topics addressed in this episode include:
*) Jeremy's route into professional journalism, focussing on technology
*) Assessing the way technology changes: exponential, linear with a steep incline, linear with leaps, or something else?
*) Some characteristics of LLMs that appear to "emerge" out of nowhere at larger scale can actually be seen developing linearly, when attention is paid to the second or third prediction of the model
*) Some leaps in capability depend, not on underlying technological power, but on improvements in interfaces - as with ChatGPT
*) Some leaps in capability require, not just step-ups in technological power, but changes in how people organise their work around the new technology
*) The decades-long conversion of factories from steam-powered to electricity-powered
*) Reasons to anticipate significant boosts in productivity in many areas of the economy within just two years, with assistance from AI co-pilots and from "universal digital assistants"
*) Related forthcoming economic impacts: slow-downs in hiring, and depression of some wages (akin to how Uber drivers reduced how much yellow cab drivers could charge for fares)
*) The potential, not just for companies to learn to make good use of existing transformer technologies, but for forthcoming next-generation transformers to cause larger disruptions
*) Models that predict, not "the next most likely word", but "the next most likely action to take to achieve a given goal"
*) Recent AI startups with a focus on using transformers for task automation include Adept and Inflection
*) Risks when LLMs lack sufficient common sense, and might take actions which a human assistant would know to check beforehand with their supervisor
*) Ways in which LLMs could acquire sufficient common sense
*) Ways in which observers can be misled about how much common sense is possessed by an LLM
*) Reasons why some companies have instructed their employees not to use consumer-facing versions of LLMs
*) The case, nevertheless, for companies to encourage bottom-up massive experimentation with LLMs by employees
*) The possibility for companies to have departments without any people in them
*) Implications of LLMs for geo-security and international relations
*) A possible agency, akin to the International Atomic Energy Agency, to monitor the training and use of next-generation LLMs
*) Interest by the Pentagon (and also in China) in LLMs that can act as "battlefield advisors"
*) A call to action: people need to get their heads around transformers, and understand both the upsides and the risks

Audio engineering assisted by
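One topic above notes that LLM capabilities which appear to "emerge" abruptly can look linear if you track more than just the model's top prediction. A toy sketch of that idea follows; all logits here are invented for illustration (no real model is involved). It shows how the probability a model assigns to a correct token can grow smoothly with scale, while the all-or-nothing top-1 metric flips only once that token overtakes the alternatives.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented logits: the "correct" token's score improves smoothly
# with model scale, while two distractor tokens stay fixed.
distractors = [2.0, 1.8]
for scale, correct_logit in [("small", 0.5), ("medium", 1.5), ("large", 3.0)]:
    probs = softmax(distractors + [correct_logit])
    p_correct = probs[-1]
    is_top1 = p_correct == max(probs)
    print(f"{scale}: p(correct)={p_correct:.3f}, top-1={is_top1}")
```

In this made-up example, p(correct) rises smoothly (roughly 0.11, then 0.25, then 0.60) across the three "scales", yet a top-1 accuracy metric would record 0, 0, 1 and report a sudden "emergent" jump.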

Jul 19, 2023 • 33min
The Death of Death, with José Cordeiro
An intriguing possibility created by the exponential growth in the power of our technology is that, within the lifetimes of people already born, death might become optional. Show co-hosts Calum and David are both excited about this idea, but our excitement is as nothing compared to the exuberant enthusiasm of our guest in this episode, José Cordeiro.

José was born in Venezuela, to parents who fled Franco's dictatorship in Spain. He has closed the circle, by returning to Spain (via the USA) while another dictatorship grips Venezuela. His education and early career were thoroughly blue chip: MIT, Georgetown University, INSEAD, and then Schlumberger and Booz Allen.

Today, José is the most prominent transhumanist in Spain and Latin America, and indeed a leading light in transhumanist circles worldwide. He is a loyal follower of the ideas of Ray Kurzweil, and in 2018 he co-wrote "La Muerte de la Muerte", which has since been updated and is being published in English as "The Death of Death". By way of full disclosure, his co-author was David.

Selected follow-ups:
https://thedeathofdeath.org/
https://cordeiro.org/

Forthcoming anti-aging conferences:
New York, 10-11 Aug: https://www.lifespan.io/ending-age-related-diseases-2023
Dublin, 17-20 Aug: https://longevitysummitdublin.com
Johannesburg, 23-24 Aug: https://conference.taffds.org
Copenhagen, 28 Aug - 1 Sept: https://agingpharma.org
Anaheim (CA), 7-10 Sept: https://raadfest.com/2023

Topics addressed in this episode include:
*) An engineering approach to improving health and longevity
*) Some cells and some organisms are already biologically immortal
*) How José met Marvin Minsky and Ray Kurzweil at MIT
*) Does death give purpose to life?
*) Why people have often resolved "to live with death"
*) Potential timescales for the attainment of longevity escape velocity for humans
*) Examples of changing lifespans for various animal species
*) The significance of the Nobel prize-winning research of Shinya Yamanaka
*) Limits of the capabilities of evolution
*) Different theories as to why aging happens: wear-and-tear vs. built-in obsolescence
*) Learning from animals that have extended lifespans - including anti-cancer mechanisms
*) Exponential progress: more funding, more people, more resources, more discoveries
*) Why longevity may soon become the largest industry in the history of humanity
*) The Longevity Dividend: "making money out of people not aging"
*) The role of politicians in accelerating the benefits of the Longevity Dividend
*) Which bold political leader will change history by being the first to declare aging a curable disease?
*) The case for a European anti-aging agency
*) Things to say to people who insist that 80 to 85 years is a sufficiently long lifespan
*) The case for optimism, from Viktor Frankl
*) The prevalence of irrational attitudes toward curing aging vs. curing cancer
*) How the MIT Technology Review changed its tune about longevity pioneer Aubrey de Grey
*) The three phases in the reception of powerful new ideas
*) Aspects of our present lifestyles that will be viewed, in 2045, as being barbaric
*) The world's most altruistic cause

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Jul 12, 2023 • 37min
AI transforming professional services, with Shamus Rae
Our guest in this episode is Shamus Rae. Shamus is the co-founder of Engine B, a startup which aims to expedite the digitisation of the professional services industry (in particular the accounting and legal professions) and level the playing field, so that small companies can compete with larger ones. It is supported by the Institute of Chartered Accountants in England and Wales (the ICAEW) and the main audit firms.

Shamus was ideally placed to launch Engine B, having spent 13 years as a partner at the audit firm KPMG, where he was Head of Innovation and Digital Disruption. But his background is in technology, not accounting, which will become clear as we talk: he is commendably sleeves-rolled-up and hands-on with AI models. Back in the 1990s he founded and sold a technology-oriented outsourcing business, and then built a 17,000-strong outsourcing business for IBM in India from scratch.

Selected follow-ups:
https://engineb.com/
https://www.icaew.com/

Topics addressed in this episode include:
*) AI in many professional services contexts depends on the quality of the formats used for the data it orchestrates (e.g. financial records and legal contracts)
*) "Plumbing for accountants and lawyers"
*) Why companies within an industry generally shouldn't seek competitive advantage on the basis of the data formats they are using
*) Data lakes contrasted with data swamps
*) Automated data extraction can coexist with data security and data privacy
*) The significance of knowledge graphs
*) Will advanced AI make it harder for tomorrow's partners to acquire the skills they need?
*) Examples of how AI-powered "co-pilots" augment the skills of junior members of a company
*) Should junior staff still be expected to work up to 18 hours a day, "ticking and bashing" or similar, if AI allows them to tackle tedious work much more quickly than before?
*) Will advanced AI destroy the billable-hours business model used by many professional services companies?
*) Alternative business models that can be adopted
*) Anticipating an economy of abundance, but with an unclear transitional path from today's economy
*) Reasons why consulting reports often downplay the likely impact of AI on jobs
*) Some ways in which Google might compete against the GPT models of OpenAI
*) Prospects for improved training of AI models using videos, using new forms of reinforcement learning from human feedback, and making fuller use of knowledge graphs
*) Geoff Hinton's "Forward-Forward" algorithm as a potential replacement for back propagation
*) Might a "third AI big bang" already have started, without most observers being aware of it?
*) The book by Mark Humphries, "The Spike: An Epic Journey Through the Brain in 2.1 Seconds"
*) Comparisons between the internal models used by GPT-3.5 and GPT-4
*) A comparison with the globalisation of the 1990s, with people denying that their own jobs will be part of the change they foresee

Audio engineering assisted by Alexander Chace.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Jul 6, 2023 • 34min
Innovating in education: the Codam experience, with David Giron
In this episode our guest is David Giron, the Director at what is arguably one of the world's most innovative educational initiatives, Codam College in Amsterdam. David was previously the head of studies at Codam's famous parent school, 42 in Paris, and he has now spent 10 years putting into practice the somewhat revolutionary ideas of the 42 network. We ask David what he has learned during these ten years, but we're especially interested in his views on how the world of education stands to be changed even further in the months and years ahead by generative AI.

Selected follow-ups:
https://www.codam.nl/en/team
https://42.fr/en/network-42/

Topics addressed in this episode include:
*) David's background at Epitech and 42 before joining Codam
*) The peer-to-peer framework at the heart of 42
*) Learning without teachers
*) Student assessment without teachers
*) Connection with the "competency-based learning" or "mastery learning" ideas of Sir Ken Robinson
*) Extending the 42 learning method beyond software engineering to other fields
*) Two ways of measuring whether the learning method is successful
*) Is it necessary for a school to fail some students from time to time?
*) The impact of Covid on the offline collaborative approach of Codam
*) ChatGPT is more than a tool; it is a "topic", on which people are inclined to take sides
*) Positive usage models for ChatGPT within education
*) Will ChatGPT make the occupation of software engineering a "job from the past"?
*) Software engineers will shift their skills from code-writing to prompt-writing
*) Why generative AI is likely to have a faster impact on work than the introduction of mechanisation
*) The adoption rate of generative AI by Codam students - and how it might change later this year
*) Code first or comment first?
*) The level of interest in Codam shown by other educational institutions
*) The resistance to change within traditional educational institutions
*) "The revolution is happening outside"
*) From "providing knowledge" to "creating a learning experience"
*) From large language models to full video systems that are individually tailored to help each person learn whatever they need in order to solve problems
*) Learning to code as a proxy for the more fundamental skill of learning to learn

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Jun 29, 2023 • 44min
Generative AI drug discovery breakthrough, with Alex Zhavoronkov
Alex Zhavoronkov is our first guest to make a repeat appearance, having first joined us in episode 12, last November. We are delighted to welcome him back, because he is doing some of the most important work on the planet, and he has some important news.

In 2014, Alex founded Insilico Medicine, a drug discovery company which uses artificial intelligence to identify novel targets and novel molecules for pharmaceutical companies. Insilico now has drugs designed with AI in human clinical trials, and it is one of a number of companies demonstrating that developing drugs with AI can cut the time and money involved in the process by as much as 90%.

Selected follow-ups:
https://insilico.com/
ARDD 2023: https://agingpharma.org/

Topics addressed in this episode include:
*) For the first time, an AI-generated molecule has entered phase 2 human clinical trials; it's a candidate treatment for IPF (idiopathic pulmonary fibrosis)
*) The sequence of investigation: first biology (target identification), then chemistry (molecule selection), then medical trials; all three steps can be addressed via AI
*) Pros and cons of going after existing well-known targets (proteins) for clinical intervention, versus novel targets
*) Pros and cons of checking existing molecules for desired properties, versus imagining (generating) novel molecules with these properties
*) Alex's experience with generative AI dates back to 2015 (initially with GANs - "generative adversarial networks")
*) The use of interacting ensembles of different AI systems - different generators, and different predictors, allocating rewards
*) The importance of "diversity" within biochemistry
*) A way in which Insilico follows "the Apple model"
*) What happens in Phase 2 human trials - and what Insilico did before reaching Phase 2
*) IPF compared with fibrosis in other parts of the body, and a connection with aging
*) Why the probability of drug success is more important than raw computational speed or the cost of individual drug investigations
*) Recent changes in the AI-assisted drug development industry: an investment boom in the wake of Covid, spiced-up narratives devoid of underlying substance, failures, downsizing, consolidation, and improved understanding by investors and by big pharma
*) The AI apps created by Insilico can be accessed by companies or educational institutes
*) Insilico research into quantum computing: this might transform drug discovery in as little as two years
*) Real-world usage of quantum computers from IBM, Microsoft, and Google
*) Success at Insilico depended on executive management task reallocation
*) Can Longevity Escape Velocity be achieved purely by pharmacological interventions?
*) Insilico's Precious1GPT approach to multimodal measurements of biological aging, and its ability to suggest new candidate targets for age-associated diseases: "one clock to rule them all"
*) Reasons to mentally prepare to live to 120 or 150
*) Hazards posed to longevity research by geopolitical tensions
*) Reasons to attend ARDD in Copenhagen, 28 Aug to 1 Sept
*) From longevity bunkers to the longevity dividend

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Jun 21, 2023 • 32min
Catastrophe and consent
In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns?

The previous episode started to look at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to look at what might be the very bad outcomes, and also the very good outcomes, from the emergence of AI superintelligence.

Topics addressed in this episode include:
*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) An AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) A single technical fault caused the failure of automated passport checking throughout the UK
*) The example of automated control of the Boeing 737 Max causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov using his human judgement regarding an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of a growing public panic - and potential mishandling of it by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) Enhanced human intelligence could play a role in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: cosmists vs. terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Jun 16, 2023 • 33min
The 4 Cs of Superintelligence
The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.

Topics addressed in this episode include:
*) Reasons why superintelligence might never be created
*) Timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars, and travelling to Mars without knowing what we'll breathe when we get there
*) A startling illustration of the dramatic power of exponential growth
*) A concern for short-term risk is by no means a reason to pay less attention to longer-term risks
*) Why the "Cease" option is looking more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Going beyond the statement of "Life, liberty, and the pursuit of happiness" to a starting point for aligning AI with human values?
*) A cliff-hanger ending

The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
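One of the topics above is a startling illustration of the power of exponential growth. The episode's own illustration isn't reproduced in these notes, so here is a generic stand-in worked example: repeated doubling, the pattern behind many "exponential impact" arguments, outruns intuition within a few dozen steps.

```python
# Generic illustration (not necessarily the episode's example):
# repeated doubling reaches roughly a thousand in 10 steps,
# a million in 20, and over a billion in 30.
for doublings in (10, 20, 30):
    print(f"after {doublings} doublings: {2 ** doublings:,}")
```

The same arithmetic underlies claims about compute and capability curves: whatever the quantity being doubled, 30 doublings multiply it by about a billion.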

Jun 8, 2023 • 47min
GPT-4 transforming education, with Donald Clark
The launch of GPT-4 on 14th March has provoked concerns and searching questions, and nowhere more so than in the education sector. Earlier this month, the share price of US edutech company Chegg halved when its CEO admitted that GPT technology was a threat to its business model. Looking ahead, GPT models seem to put flesh on the bones of the idea that all students could one day have a personal tutor as effective as Aristotle, who was Alexander the Great's personal tutor. When that happens, students should leave school and university far, far better educated than we were.

Donald Clark is the ideal person to discuss this with. He founded Epic Group in 1983, and made it the UK's largest provider of bespoke online education services before selling it in 2005. He is now the CEO of an AI learning company called WildFire, and an investor in, and Board member of, several other education technology businesses. In 2020 he published a book called Artificial Intelligence for Learning.

Selected follow-ups:
https://donaldclarkplanb.blogspot.com/
https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education
https://www.gatesnotes.com/The-Age-of-AI-Has-Begun
https://www.amazon.co.uk/Case-against-Education-System-Waste/dp/0691196451/
https://www.amazon.co.uk/Head-Hand-Heart-Intelligence-Over-Rewarded/dp/1982128461/

Topics addressed in this episode include:
*) "Education is a bit of a slow learner"
*) Why GPT-4 has unprecedented potential to transform education
*) The possibility of an online universal teacher
*) Traditional education sometimes fails to follow best pedagogical practice
*) Accelerating "time to competence" via personalised tuition
*) Calum's experience learning maths
*) How Khan Academy and Duolingo are partnering with GPT-4
*) The significance of the large range of languages covered by ChatGPT
*) The recent essay on "The Age of AI" by Bill Gates
*) Students learning social skills from each other
*) An imbalanced societal focus on educating and valuing "head" rather than "heart" or "hand"
*) "The Case against Education" by Bryan Caplan
*) Evidence of wide usage of ChatGPT by students of all ages
*) Three gaps between GPT-4 and AGI, and how they are being bridged by including GPT-4 in "ensembles"
*) GPT-4 has a better theory of physics than GPT-3.5
*) Encouraging a generative AI to learn about a worldview via its own sensory input, rather than directly feeding a worldview into it
*) Pros and cons of "human exceptionalism"
*) How GPT-4 is upending our ideas on the relation between language and intelligence
*) Generative AI, the "C skills", and the set of jobs left for humans to do
*) Custer's last stand?
*) Three camps regarding progress toward AGI
*) Investors' reactions to Italy banning ChatGPT (a ban subsequently reversed)
*) Different views on GDPR and European legislation
*) Further thoughts on the implications of GPT-4 for the education industry
*) Shocking statistics on declining enrolment numbers in US universities
*) Beyond exclusivity: "a tutorial system for everybody"?
*) A boon for Senegal and other countries in the global south?

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

May 31, 2023 • 32min
GPT-4 and the EU’s AI Act, with John Higgins
The European Commission and Parliament were busily debating the Artificial Intelligence Act when GPT-4 launched on 14 March. As people realised that GPT technology was a game-changer, they called for the Act to be reconsidered.

Famously, the EU contains no tech giants, so cutting-edge AI is mostly developed in the US and China. But the EU is more than happy to act as the world's most pro-active regulator of digital technologies, including AI. The 2016 General Data Protection Regulation (or GDPR) seeks to regulate data protection and privacy, and its impacts remain controversial today.

The AI Act was proposed in 2021. It does not confer rights on individuals, but instead regulates the providers of artificial intelligence systems. It takes a risk-based approach.

John Higgins joins us in this episode to discuss the AI Act. John is the Chair of the Global Digital Foundation, a think tank, and last year he was president of BCS (British Computer Society), the professional body for the IT industry. He has had a long and distinguished career helping to shape digital policy in the UK and the EU.

Follow-up reading:
https://www.globaldigitalfoundation.org/
https://artificialintelligenceact.eu/

Topics addressed in this episode include:
*) How different is generative AI from the productivity tools that have come before?
*) Two approaches to regulation compared: a "Franco-German" approach and an "Anglo-American" approach
*) The precautionary principle, for when a regulatory framework needs to be established in order to provide market confidence
*) The EU's preference for regulating applications rather than regulating technology
*) The types of application that matter most: those with an impact on human rights and/or safety
*) Regulations in the Act compared to the principles that good developers will in any case be following
*) Problems with lack of information about the data sets used to train LLMs (Large Language Models)
*) Enabling the flow, between the different "providers" within the AI value chain, of information about compliance
*) Two potential alternatives to how the EU aims to regulate AI
*) How an Act passes through EU legislation
*) Conflicting assessments of the GDPR: a sledgehammer to crack a nut?
*) Is it conceivable that LLMs will be banned in Europe?
*) Why are there no tech giants in Europe? Does it matter?
*) Other metrics for measuring the success of AI within Europe
*) Strengths and weaknesses of the EU single market
*) Reasons why the BCS opposed the moratorium proposed by the FLI: impracticality, asymmetry, benefits held back
*) Some counterarguments in favour of the FLI position
*) Projects undertaken by the Global Digital Foundation
*) The role of AI in addressing (as well as exacerbating) hate speech
*) Growing concerns over populism, polarisation, and post-truth
*) The need for improved transparency and improved understanding

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration