
London Futurists

Latest episodes

Apr 19, 2023 • 39min

A defence of human uniqueness against AI encroachment, with Kenn Cukier

Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full "general intelligence". And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.

These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper.

For the past decade, Kenn was the host of its weekly tech podcast Babbage. He is co-author of the 2013 book "Big Data", a New York Times best-seller that has been translated into over 20 languages. He is a regular commentator in the media, and a popular keynote speaker, from TED to the World Economic Forum.

Kenn recently stepped down as a board director of Chatham House and as a fellow at Oxford's Saïd Business School. He is a member of the Council on Foreign Relations. His latest book is "Framers", on the power of mental models and the limits of AI.

Follow-up reading:
http://www.cukier.com/
https://mediadirectory.economist.com/people/kenneth-cukier/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Kurzweil's version of the Turing Test: https://longbets.org/1/

Topics addressed in this episode include:
*) Changing attitudes at The Economist about how to report on the prospects for AI
*) The dual roles of scepticism regarding claims made for technology
*) 'Calum's rule' about technology forecasts that omit timing
*) Options for magazine coverage of possible developments more than 10 years into the future
*) Some leaders within AI research, including Sam Altman of OpenAI, think AGI could happen within a decade
*) Metaculus community aggregate forecasts for the arrival of different forms of AGI
*) A theme for 2023: the increased 'emergence' of unexpected new capabilities within AI large language models - especially when these models are combined with other AI functionality
*) Different views on the usefulness of the Turing Test - a test of human idiocy rather than machine intelligence?
*) The benchmark of "human-level general intelligence" may become as anachronistic as the benchmark of "horsepower" for rockets
*) The drawbacks of viewing the world through a left-brained hyper-rational "scientistic" perspective
*) Two ways the ancient Greeks said we could find truth: logos and mythos
*) People in 2023 finding "mythical, spiritual significance" in their ChatGPT conversations
*) Appropriate and inappropriate applause for what GPTs can do
*) Another horse analogy: could steam engines that lack horse-like legs really replace horses?
*) The Ship of Theseus argument that consciousness could be transferred from biology to silicon
*) The "life force" and its apparently magical, spiritual aspects
*) The human superpower to imaginatively reframe mental models
*) People previously thought humans had a unique superpower to create soul-moving music, but a musical version of the Turing Test changed minds
Apr 12, 2023 • 35min

Against pausing AI research, with Pedro Domingos

Should the pace of research into advanced artificial intelligence be slowed down, or perhaps even paused completely?

Your answer to that question probably depends on your answers to a number of other questions. Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slowdown desirable, given that AI can also lead to lots of very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slowdown is desirable, is it practical?

Our guest in this episode is Professor Pedro Domingos of the University of Washington. He is perhaps best known for his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World".

That book takes an approach to the future of AI that is significantly different from what you can read in many other books. It describes five different "tribes" of AI researchers, each with their own paradigms, and it suggests that true progress towards human-level general intelligence will depend on a unification of these different approaches. In other words, we won't reach AGI just by scaling up deep learning approaches, or even by adding in features from logical reasoning.

Follow-up reading:
https://homes.cs.washington.edu/~pedrod/
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Topics addressed in this episode include:
*) The five tribes of AI research - why there's a lot more to AI than deep learning
*) Why unifying these five tribes may not be sufficient to reach human-level general intelligence
*) The task of understanding an entire concept (e.g. 'horse') from just seeing a single example
*) A wide spread of estimates of the timescale to reach AGI
*) Different views as to the true risks from advanced AI
*) The case that risks arise from AI incompetence rather than from increased AI competence
*) A different risk: that bad actors will gain dangerously more power from access to increasingly competent AI
*) The case for using AI to prevent misuse of AI
*) Yet another risk: that an AI trained against one objective function will nevertheless adopt goals diverging from that objective
*) How AIs that operate beyond our understanding could still remain under human control
*) How fully can evolution be trusted to produce outputs in line with a specified objective function?
*) The example of humans taming wolves into dogs that pose no threat to us
*) The counterexample of humans pursuing goals contrary to our in-built genetic drives
*) Complications with multiple levels of selection pressures, e.g. genes and memes working at cross purposes
*) The "genie problem" (or "King Midas problem") of choosing an objective function that is apparently attractive but actually dangerous
*) Assessing the motivations of people who have signed the FLI (Future of Life Institute) letter advocating a pause on the development of larger AI language models
*) Pros and cons of escalating a sense of urgency
*) The two key questions of existential risk from AI: how much risk is acceptable, and what might that level of risk become in the near future?
*) The need for a more rational discussion of the issues raised by increasingly competent AIs

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Apr 5, 2023 • 33min

Facing our Futures, with Nikolas Badminton

2023 is still young, but there's already a change in the attitudes of many business people regarding the future. Previously, businesses expressed occasional interest in possible disruptive scenarios, but their attention often quickly turned back to the apparently more pressing tasks of business-as-usual. But recent news of changes in AI capabilities, along with possible social transformations due to pandemics, geopolitics, and industrial unrest, is leading more and more business people to wonder: How can they become more effective in anticipating and managing potential significant changes in their business landscape?

In this context, the new book by our guest in this episode, Nikolas Badminton, is particularly timely. It's called "Facing our Futures: How foresight, futures design and strategy creates prosperity and growth".

Over the last few years, Nikolas has worked with over 300 organizations including Google, Microsoft, NASA, the United Nations, American Express, and Rolls Royce, and he advised Robert Downey Jr.'s team for the 'Age of A.I.' documentary series.

Selected follow-up reading:
https://nikolasbadminton.com/
https://futurist.com/
https://www.bloomsbury.com/uk/facing-our-futures-9781399400237/

Topics in this conversation include:
*) A personal journey to becoming a futurist - with some "hot water" along the way
*) The "Dark Futures" project: "what might happen if we take the wrong path forward"
*) The dangers of ignoring how bad things might become
*) Are we heading toward "the end times"?
*) Being in a constant state of collapse
*) Human resilience, and how to strengthen it
*) Futurists as "hope engineers"
*) Pros and cons of the "anti-growth" or "de-growth" initiative
*) The useful positive influence of "design fiction" (including futures that are "entirely imaginary")
*) The risks of a "pay to play" abundance future
*) The benefits of open medicine and open science
*) Examples of decisions taken by corporations after futures exercises
*) Tips for people interested in a career as a futurist
*) Pros and cons of "pop futurists"
*) The single biggest danger in our future?
*) Evidence from Rene Rohrbeck and Menes Etingue Kum that companies who apply futures thinking significantly out-perform their competitors in profitability and growth
*) The idea of an "apocalypse windfall" from climate change
*) Some key messages from the book "Facing our Futures": recommended mindset changes
*) Having the honesty and courage to face up to our mistakes
*) What if... former UK Prime Minister David Cameron had conducted a futures study before embarking on the Brexit project?
*) A multi-generational outlook on the future - learning from the Iroquois

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Mar 29, 2023 • 33min

GPT-4 and the Two Singularities

In the last few weeks, the pace of change in AI has been faster than ever before. The changes aren't just announcements of future capabilities - announcements that could have been viewed, perhaps, as hype. The changes are new versions of AI systems that are available for users around the world to experiment with, directly, here and now. These systems are being released by multiple different companies, and also by open-source collaborations. And users of these systems are frequently expressing surprise: the systems are by no means perfect, but they regularly out-perform previous expectations, sometimes in astonishing ways.

In this episode, Calum Chace and David Wood, the co-hosts of this podcast series, discuss the wider implications of these new AI systems. David asks Calum if he has changed any of his ideas about what he has called "the two singularities", namely the Economic Singularity and the Technological Singularity, as covered in a number of books he has written.

Calum has been a full-time writer and speaker on the subject of AI since 2012. Earlier in his life, he studied philosophy, politics, and economics at Oxford University, and trained as a journalist at the BBC. He wrote a column in the Financial Times and nowadays is a regular contributor to Forbes magazine. In between, he held a number of roles in business, including leading a media practice at KPMG. In the last few days, he has been taking a close look at GPT-4.

Selected follow-up reading:
https://calumchace.com/the-economic-singularity/
https://calumchace.com/surviving-ai-synopsis/

Topics in this conversation include:
*) Is the media excitement about GPT-4 and its predecessor ChatGPT overblown, or are these systems signs of truly important disruptions?
*) How do these new AI systems compare with earlier AIs?
*) The two "big bangs" in AI history
*) How transformers work
*) The difference between self-supervised learning and supervised learning
*) The significance of OpenAI enabling general public access to ChatGPT
*) Market competition between Microsoft Bing and Google Search
*) Unwholesome replies by Microsoft Sydney and Google Bard - and the intended role of RLHF (Reinforcement Learning from Human Feedback)
*) How basic reasoning seems to emerge (unexpectedly) from pattern recognition at sufficient scale
*) Examples of how the jobs of knowledge workers are being changed by GPT-4
*) What will happen to departments where each human knowledge worker has a tenfold productivity boost?
*) From the job churns of the past to the Great Churn of the near future
*) The forthcoming wave of automation is not only more general than past waves, but will also proceed at a much faster pace
*) Improvements in the writing AI produces, such as book chapters
*) Revisions of timelines for the Economic and Technological Singularity?
*) It now seems that human intelligence is less hard to replicate than was previously thought
*) The Technological Singularity might arrive before an Economic Singularity
*) The liberating vision of people no longer needing to be wage slaves, and the threat of almost everyone living in poverty
*) The insufficiency of UBI (Universal Basic Income) unless an economy of abundance is achieved (bringing the costs of goods and services down toward zero)
*) Is the creation of AI now out of control, with a rush to release new versions?
*) The infeasibility of the idea of AGI relinquishment
*) OpenAI's recent actions assessed
Mar 22, 2023 • 39min

Creating Benevolent Decentralized AGI, with Ben Goertzel

Ben Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of Humanity+.

Ben is perhaps best-known for popularising the term 'artificial general intelligence', or AGI, a machine with all the cognitive abilities of an adult human. He thinks that the way to create this machine is to start with a baby-like AI, and raise it, as we raise children. We would do this either in VR, or in robot form. Hence he works with the robot-builder David Hanson to create robots like Sophia and Grace.

Ben is a unique and engaging speaker, and gives frequent keynotes all round the world. Both his appearance and his views have been described as counter-cultural. In this episode, we hear about Ben's vision for the creation of benevolent decentralized AGI.

Selected follow-up reading:
https://singularitynet.io/
http://goertzel.org/
http://multiverseaccordingtoben.blogspot.com/

Topics in this conversation include:
*) Occasional hazards of humans and robots working together
*) "The future is already here, it's just not wired together properly"
*) Ben's definition of AGI
*) Ways in which humans lack "general intelligence"
*) Changes in society expected when AI reaches "human level"
*) Is there "one key thing" which will enable the creation of AGI?
*) Ben's OpenCog Hyperon project combines three approaches: neural pattern recognition and synthesis, rigorous symbolic reasoning, and evolutionary creativity
*) Parallel combinations versus sequential combinations of AI capabilities: why the former is harder, but more likely to create AGI
*) Three methods to improve the scalability of AI algorithms: mathematical innovations, efficient concurrent processing, and an AGI hardware board
*) "We can reach the Singularity in ten years if we really, really try"
*) ... but humanity has, so far, not "really tried" to apply sufficient resources to creating AGI
*) Sam Altman: "If you talk about the upsides of what AGI could do for us, you sound like a crazy person"
*) "The benefits of AGI will challenge our concept of 'what is a benefit'"
*) Options for human life trajectories, if AGIs are well disposed towards humans
*) We will be faced with the questions of "what do we want" and "what are our values"
*) The burning issue is "what is the transition phase" to get to AGI
*) Ben's disagreements with Nick Bostrom and Eliezer Yudkowsky
*) Assessment of the approach taken by OpenAI to create AGI
*) Different degrees of faith in big tech companies as a venue for hosting the breakthroughs in creating AGI
*) Should OpenAI be renamed as "ClosedAI"?
*) The SingularityNET initiative to create a decentralized, democratically controlled infrastructure for AGI
*) The development of AGI should be "more like Linux or the Internet than Windows or the mobile phone ecosystem"
*) Limitations of neural net systems in self-understanding
*) Faith in big tech and capitalism vs. faith in humanity as a whole vs. faith in reward maximization
Mar 15, 2023 • 37min

What the good future could look like, with Gerd Leonhard

At a time when many people find it depressingly easy to see how "bad futures" could arise, what is a credible narrative of a "good future"? That question is of central concern to our guest in this episode, Gerd Leonhard.

Gerd is one of the most successful futurists on the international speaker circuit. He estimates that he has spoken to a combined audience of 2.5 million people in more than 50 countries.

He left his home country of Germany in 1982 to go to the USA to study music. While he was in the US, he set up one of the first internet-based music businesses, and then he parlayed that into his current speaking career. His talks and videos are known for their engaging use of technology and design, and he prides himself on his rigorous use of research and data to back up his claims and insights.

Selected follow-ups:
https://www.futuristgerd.com/
https://www.futuristgerd.com/sharing/thegoodfuturefilm/

Topics in this conversation include:
*) The need for a positive antidote to all the negative visions of the future that are often in people's minds
*) People, planet, purpose, and prosperity - rather than an over-focus on profit and economic growth
*) Anticipating stock markets that work differently, and with additional requirements before dividends can be paid
*) A reason to be an optimist: not because we have fewer problems (we don't), but because we have more capacity to deal with these problems
*) From "capitalism" to "progressive capitalism" (another name could be "social capitalism")
*) Kevin Kelly's concept of "protopia" as a contrast to both utopia and dystopia
*) Too much of a good thing can be... a bad thing
*) How governments and the state interact with free markets
*) Managers who try to prioritise people, planet, or purpose (rather than profits and dividends) are "whacked by the stock market"
*) The example of the Montreal protocol regarding the hole in the ozone layer, when governments gave a strong direction to the chemical industry
*) Some questions about people, planet, purpose, and prosperity are relatively straightforward, but others are much more contested
*) Conflicting motivations within high tech firms regarding speed-to-market vs. safety
*) Controlling the spread of potentially dangerous AI may be much harder than controlling the spread of nuclear weapons technology, especially as costs reduce for AI development and deployment
*) Despite geopolitical tensions, different countries are already collaborating behind the scenes on matters of AGI safety
*) How much "financial freedom" should the definition of a good future embrace?
*) Universal Basic Income and "the Star Trek economy" as potential responses to the Economic Singularity
*) Differing assessments of the role of transhumanism in the good future
*) Risks when humans become overly dependent on technology
*) Most modern humans can't make a fire from scratch: does that matter?
*) The Carrington Event of 1859: the most intense geomagnetic storm in recorded history
*) How views changed in the 19th century about giving anaesthetics to women to counter the (biblically mandated?) intense pains of childbirth
*) Will views change in a similar way about the possibility of external wombs (ectogenesis)?
*) Jamie Bartlett's concept of "the moral singularity" when humans lose the ability to take hard decisions
*) Can AI provide useful advice about human-human relationships?
*) Is everything truly important about humans loca...
Mar 8, 2023 • 36min

ChatGPT raises old and new concerns about AI, with Francesca Rossi

Our guest in this episode is Francesca Rossi. Francesca studied computer science at the University of Pisa in Italy, where she became a professor, before spending 20 years at the University of Padova. In 2015 she joined IBM's T.J. Watson Research Lab in New York, where she is now an IBM Fellow and also IBM's AI Ethics Global Leader.

Francesca is a member of numerous international bodies concerned with the beneficial use of AI, including being a board member at the Partnership on AI, a Steering Committee member and designated expert at the Global Partnership on AI, a member of the scientific advisory board of the Future of Life Institute, and Chair of the international conference on Artificial Intelligence, Ethics, and Society which is being held in Montreal in August this year.

From 2022 to 2024 she holds the prestigious role of President of the AAAI, the Association for the Advancement of Artificial Intelligence. The AAAI has recently held its annual conference, and in this episode, Francesca shares some reflections on what happened there.

Selected follow-ups:
https://researcher.watson.ibm.com/researcher/view.php?person=ibm-Francesca.Rossi2
https://en.wikipedia.org/wiki/Francesca_Rossi
https://partnershiponai.org/
https://gpai.ai/

Topics in this conversation include:
*) How a one-year sabbatical at the Harvard Radcliffe Institute changed the trajectory of Francesca's life
*) New generative AI systems such as ChatGPT expand previous issues involving bias, privacy, copyright, and content moderation - because they are trained on very large data sets that have not been curated
*) Large language models (LLMs) have been optimised, not for "factuality", but for creating language that is syntactically correct
*) Compared to previous AIs, the new systems impact a wider range of occupations, and they also have major implications for education
*) Are the "AI ethics" and "responsible AI" approaches that address the issues of existing AI systems also the best approaches for the "AI alignment" and "AI safety" issues raised by artificial general intelligence?
*) Different ideas on how future LLMs could acquire mastery, not only over language, but also over logic, inference, and reasoning
*) Options for combining classical AI techniques focussing on knowledge and reasoning, with the data-intensive approaches of LLMs
*) How "foundation models" allow training to be split into two phases, with a shorter supervised phase customising the output from a prior longer unsupervised phase
*) Even experts face the temptation to anthropomorphise the behaviour of LLMs
*) On the other hand, unexpected capabilities have emerged within LLMs
*) The interplay of "thinking fast" and "thinking slow" - adapting, for the context of AI, insights from Daniel Kahneman about human intelligence
*) Cross-fertilisation of ideas from different communities at the recent AAAI conference
*) An extension of that "bridge" theme to involve ideas from outside of AI itself, including the use of methods of physics to observe and interpret LLMs from the outside
*) Prospects for interpretability, explainability, and transparency of AI - and implications for trust and cooperation between humans and AIs
*) The roles played by different international bodies, such as PAI and GPAI
*) Pros and cons of including China in the initial phase of GPAI
*) Designing regulations to be future-proof, with parts that can change quickly
*) An important new goal...
Mar 1, 2023 • 37min

ChatGPT has woken up the House of Commons, with Tim Clement-Jones

In this episode, Tim Clement-Jones brings us up to date on the reactions by members of the UK's House of Commons to recent advances in the capabilities of AI systems, such as ChatGPT. He also looks ahead to larger changes, in the UK and elsewhere.

Lord Clement-Jones CBE, or Tim, as he prefers to be known, has been a very successful lawyer, holding senior positions at ITV and Kingfisher among others, and later becoming London Managing Partner of law firm DLA Piper.

He is better known as a politician. He became a life peer in 1998, and has been the Liberal Democrats' spokesman on a wide range of issues. The reason we are delighted to have him as a guest on the podcast is that he was the chair of the AI Select Committee, Co-Chair of the All-Party Parliamentary Group on AI, and is now a member of a special inquiry on the use of AI in Weapons Systems.

Tim also has multiple connections with universities and charities in the UK.

Selected follow-up reading:
https://www.lordclementjones.org/
https://www.parallelparliament.co.uk/APPG/artificial-intelligence
https://arcs.qmul.ac.uk/governance/council/council-membership/timclement-jones.html

Topics in this conversation include:
*) Does "the Westminster bubble" understand the importance of AI?
*) Evidence that "the tide is turning" - MPs are demonstrating a spirit of inquiry
*) The example of Sir Peter Bottomley, the Father of the House (who has been an MP continuously since 1975)
*) New AI systems are showing characteristics that had not been expected to arrive for another 5 or 10 years, taking even AI experts by surprise
*) The AI duopoly (the US and China) and the possible influence of the UK and the EU
*) The forthcoming EU AI Act and the risk-based approach it embodies
*) The importance of regulatory systems being innovation-friendly
*) How might the EU support the development of some European AI tech giants?
*) The inevitability(?) of the UK needing to become "a rule taker"
*) Cynical and uncynical explanations for why major tech companies support EU AI regulation
*) The example of AI-powered facial recognition: benefits and risks
*) Is Brexit helping or hindering the UK's AI activities?
*) Complications with the funding of AI research in the UK's universities
*) The risks of a slow-down in the UK's AI start-up ecosystem
*) Looking further afield: AI ambitions in the UAE and Saudi Arabia
*) The particular risks of lethal autonomous weapons systems
*) Future conflicts between AI-controlled tanks and human-controlled tanks
*) Forecasts for the arrival of artificial general intelligence: 10-15 years from now?
*) Superintelligence may emerge from a combination of separate AI systems
*) The case for "technology-neutral" regulation

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Feb 22, 2023 • 31min

Assessing the AI duopoly, with Jeff Ding

Advanced AI is currently pretty much a duopoly between the USA and China. The US is the clear leader, thanks largely to its tech giants – Google, Meta, Microsoft, Amazon, and Apple. China also has a fistful of tech giants – Baidu, Alibaba, and Tencent are the ones usually listed, but the Chinese government has also taken a strong interest in AI since DeepMind's AlphaGo system beat the world's best Go player in 2016.

People in the West don't know enough about China's current and future role in AI. Some think its companies just copy their Western counterparts, while others think it is an implacable and increasingly dangerous enemy, run by a dictator who cares nothing for his people. Both those views are wrong.

One person who has been trying to provide a more accurate picture of China and AI in recent years is Jeff Ding, the author of the influential newsletter ChinAI.

Jeff grew up in Iowa City and is now an Assistant Professor of Political Science at George Washington University. He earned a PhD at Oxford University, where he was a Rhodes Scholar, and wrote his thesis on how past technological revolutions influenced the rise and fall of great powers, with implications for U.S.-China competition. After gaining his doctorate he worked at Oxford's Future of Humanity Institute and Stanford's Institute for Human-Centered Artificial Intelligence.

Selected follow-up reading:
https://jeffreyjding.github.io/
https://chinai.substack.com/
https://www.tortoisemedia.com/intelligence/global-ai/

Topics in this conversation include:
*) The Thucydides Trap: Is conflict inevitable as a rising geopolitical power approaches parity with an established power?
*) Different ways of trying to assess how China's AI industry compares with that of the U.S.
*) Measuring innovations in creating AI is different from measuring adoption of AI solutions across multiple industries
*) Comparisons of papers submitted to AI conferences such as NeurIPS, citations, patents granted, and the number of data scientists
*) The biggest misconceptions westerners have about China and AI
*) A way in which Europe could still be an important player alongside the duopoly
*) Attitudes in China toward data privacy and facial recognition
*) Government focus on AI can be counterproductive
*) Varieties of government industrial policy: the merits of encouraging decentralised innovation
*) The Titanic and the origin of Silicon Valley
*) Mariana Mazzucato's question: "Who created the iPhone?"
*) Learning from the failure of Japan's 5th Generation Computers initiative
*) The evolution of China's Social Credit systems
*) Research by Shazeda Ahmed and Jeremy Daum
*) Factors encouraging and discouraging the "splinternet" separation of US and Chinese tech ecosystems
*) Connections that typically happen outside of the public eye
*) Financial interdependencies
*) Changing Chinese government attitudes toward Chinese Internet giants
*) A broader tension faced by the Chinese government
*) Future scenarios: potential good and bad developments
*) Transnational projects to prevent accidents or unauthorised use of powerful AI systems

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Feb 15, 2023 • 33min

Peter James, best-selling crime-writer and transhumanist

Peter James is one of the world's most successful crime writers. His "Roy Grace" series, about a detective in Brighton, England, near where Peter lives, has produced a remarkable 19 consecutive Sunday Times Number One bestsellers. His legions of devoted fans await each new release eagerly. The books have been televised, with the third series of "Grace", starring John Simm, being commissioned for next year.

Peter has worked in other genres too, having written 36 novels altogether. When Calum first met Peter in the mid-1990s, Peter's science fiction novel "Host" was generating rave reviews. It was the world's first electronically published novel, and a copy of its floppy disc version is on display in London's Science Museum.

Peter is also a self-confessed petrol-head, with an enviable collection of classic cars, and a pretty successful track record of racing some of them. The discussion later in the episode addresses the likely arrival of self-driving cars. But we start with the possibility of mind uploading, which is the subject of "Host".

Selected follow-up reading:
https://www.peterjames.com/
https://www.alcor.org/

Topics in this conversation include:
*) Peter's passion for the future
*) The transformative effect of the 1990 book "Great Mambo Chicken and the Transhuman Condition"
*) A Christmas sojourn at MIT and encounters with AI pioneer Marvin Minsky
*) The origins of the ideas behind "Host"
*) Meeting Alcor, the cryonics organisation, in Riverside California
*) How cryonics has evolved over the decades
*) "The first person to live to 200 has already been born"
*) Quick summaries of previous London Futurists Podcast episodes featuring Aubrey de Grey and Andrew Steele
*) The case for doing better than nature
*) Peter's novel "Perfect People" and the theme of "designer babies"
*) Possible improvements in the human condition from genetic editing
*) The risk of a future "genetic underclass"
*) Technology divides often don't last: consider the "fridge divide" and the "smartphone divide"
*) Calum's novel "Pandora's Brain"
*) Why Peter is comfortable with the label "transhumanist"
*) Various ways of reading (many) more books
*) A thought experiment involving a healthy 99 year old
*) If people lived a lot longer, we might take better care of our planet
*) Peter's views on technology assisting writers
*) Strengths and weaknesses of present-day ChatGPT as a writer
*) Prospects for transhumans to explore space
*) The "bunker experiments" into the circadian cycle, which suggest that humans naturally revert to a daily cycle closer to 26 hours than 24 hours
*) Possible answers to Fermi's question about lack of any sign of alien civilisations
*) Reflections on "The Pale Blue Dot of Earth" (originally by Carl Sagan)
*) The likelihood of incredible surprises in the next few decades
*) Pros and cons of humans driving on public roads (especially when drivers are using mobile phones)
*) Legal and ethical issues arising from autonomous cars
*) Exponential change often involves a frustrating slow phase before fast breakthroughs
*) Anticipating the experience of driving inside immersive virtual reality
*) The tragic background to Peter's book "Possession"
*) A concluding message from the science fiction writer Kurt Vonnegut

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
