
London Futurists

Latest episodes

May 24, 2023 • 34min

Longevity, the 56 trillion dollar opportunity, with Andrew Scott

Technological changes have economic impact. It's not just that technology allows more goods and services to be produced more efficiently and at greater scale. It's also that these changes disrupt previous assumptions about the conduct of human lives, human relationships, and the methods to save money to buy goods and services. A society in which people expect to die around the age of 100, or even older, needs to make different plans than a society in which people expect to die in their 70s.

Some politicians, in unguarded moments, have even occasionally expressed a desire for retired people to "hurry up and die", on account of the ballooning costs of pensions and healthcare for the elderly. These politicians worry about the negative consequences of longer lives. In their view, longer lives would be bad for the economy.

But not everyone thinks that way. Indeed, a distinguished professor of economics at the London Business School, Andrew J Scott, has studied a variety of different future scenarios about the economic consequences of longer lives. He is our guest in this episode.

In addition to his role at the London Business School, Andrew is a Research Fellow at the Centre for Economic Policy Research and a consulting scholar at Stanford University's Center on Longevity.

His research has been widely published in leading journals in economics and health. His book, "The 100-Year Life", has been published in 15 languages, is an Amazon bestseller, and was runner-up in both the FT/McKinsey and Japanese Business Book of the Year Awards.

Andrew has been an advisor on policy to a range of governments. He currently serves on the advisory board of the UK's Office for Budget Responsibility and on the Cabinet Office Honours Committee (Science and Technology), is a co-founder of The Longevity Forum, and is a member of the National Academy of Medicine's International Commission on Healthy Longevity and of the WEF council on Healthy Ageing and Longevity.

Follow-up reading:
https://profandrewjscott.com/
https://www.nature.com/articles/s43587-021-00080-0

Topics addressed in this episode include:
*) Why Andrew wrote the book "The 100-Year Life" (co-authored with Lynda Gratton)
*) Shortcomings of the conventional narrative of "the aging society"
*) The profound significance of aging being malleable
*) Joint research with David Sinclair (Harvard) and Martin Ellison (Oxford): economic modelling of the future of healthspan and lifespan
*) Four different scenarios: Struldbruggs, Dorian Gray, Peter Pan, and Wolverine
*) The multi-trillion dollar economic value of everyone in the USA gaining one additional year of life in good health (a rough illustrative calculation follows these notes)
*) The first and second longevity revolutions
*) The virtuous circle around aging research
*) Options for lives that are significantly longer even than 100 years
*) The ill-preparedness of our social structures for extensions in longevity - and, especially, for the attainment of longevity escape velocity
*) The possibility of rapid changes in society's expectations
*) The three-dimensional longevity dividend
*) Developments in Singapore and the UAE
*) Two important political initiatives: supporting the return to the workforce of people aged over 50, and paying greater attention to national statistics on expected healthspan
*) Themes from Andrew's forthcoming new book "Evergreen"
*) Why 57 isn't the new 40: it's the new 57
*) Making a friend of your future self

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
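The headline multi-trillion-dollar figure comes from value-of-statistical-life style calculations of the kind used in the Nature article linked above. Purely as an illustrative back-of-envelope sketch - the population and value-per-healthy-year numbers below are placeholder assumptions, not the paper's model or parameters - the order of magnitude works out as follows:

```python
# Illustrative back-of-envelope estimate of the value of everyone in the USA
# gaining one additional year of life in good health.
# NOTE: both figures below are placeholder assumptions for illustration only;
# the Scott/Sinclair/Ellison research uses a much richer life-cycle economic model.

US_POPULATION = 333_000_000             # assumed US population
VALUE_PER_HEALTHY_LIFE_YEAR = 150_000   # assumed dollar value of one year in good health

total_value = US_POPULATION * VALUE_PER_HEALTHY_LIFE_YEAR
print(f"Illustrative total value: ${total_value / 1e12:.1f} trillion")
# -> roughly $50 trillion: one extra healthy year is worth tens of trillions of dollars
```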
May 17, 2023 • 35min

The key workforce skills for 2026, with Mike Howells

One of the questions audiences frequently used to ask futurists was: which careers are most likely to be future-proof? However, that question has changed in recent years. It's now more widely understood that every career is subject to disruption by technological and social trends. No occupation is immune to change. So the question has switched, away from possible future-proof careers, to the skills that are most likely to be useful in these fast-changing circumstances. For example, should everyone be learning to code, or deepening their knowledge of STEM - that is, Science, Technology, Engineering, and Maths? Or should there be more focus on so-called human skills or soft skills?

Who better to answer that question than our guest in this episode, Mike Howells? Mike is the President of the Workforce Skills Division at Pearson, the leading learning company.

The perennial debate about when and how advanced AI will cause widespread disruption in education has been given extra impetus by the launch of ChatGPT last November, and GPT-4 in March. Pearson, a venerable British company which has gone through various incarnations, is one of the companies at the sharp end of this debate about the changing role of technology in education. The share prices of several of these companies suffered a temporary setback recently, due to a perception that GPT technology would replace many of their services. However, Pearson and its peers have rebutted these claims, and their shares have largely recovered.

Indeed, with what could be viewed as considerable prescience, Pearson carried out a major piece of research before ChatGPT was launched, to identify which skills employers are prioritising for their new hires - new employees who will be in their stride in 2026, three years from now.

Follow-up reading:
https://www.pearson.com/
https://plc.pearson.com/en-GB/insights/pearson-skills-outlook-powerskills

Topics addressed in this episode include:
*) Some lessons from Mike's own career trajectory
*) How Pearson used AI in their survey of key workforce skills
*) The growing importance - and growing value - of human skills
*) The top 5 "power skills" that employers are seeking today
*) The top 5 "power skills" that are projected to be most in demand by 2026 - and which are in need of greatest improvement and investment
*) Given that there are no university courses in these skill areas, how can people gain proficiency in them?
*) Three ways of inferring evidence of someone's proficiency in these skill areas
*) How the threat of automation has moved from blue-collar jobs to white-collar jobs
*) People are used to taking data-driven decisions in many areas of their lives - e.g. which restaurants to visit or which holidays to book - but the data about the effect of various educational courses is surprisingly thin
*) The increasing need for data-driven retraining
*) Ways in which the retraining experience can be improved by AI and VR/AR/XR
*) The attraction of digital assistants that can provide personalised tuition, especially as costs drop
*) School-age children often already use their skills with existing technology to augment and personalise their learning
*) Complications with privacy, security, consent, and measuring efficacy
*) "It's not about what you've done; it's about what you can do"
*) A closer look at "personal learning and mastery" and "cultural and social intelligence"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
May 10, 2023 • 43min

How to use GPT-4 yourself, with Ted Lappas

The last few episodes of our podcast have explored what GPT (generative pre-trained transformer) technology is and how it works, and also the call for a pause in the development of advanced AI. In this latest episode, Ted Lappas, a data scientist and academic, helps us to take a pragmatic turn - to understand what GPT technology can do for each of us individually.

Ted is an Assistant Professor at Athens University of Economics and Business, and he also works at Satalia, which was London's largest independent AI consultancy before it was acquired last year by the media giant WPP.

Follow-up reading:
https://satalia.com/
https://www.linkedin.com/in/theodoros-lappas-82771451/

Topics addressed in this episode include:
*) The "GPT paradox": if GPT-4 is so good, why aren't more people using it to boost their effectiveness in their workplace?
*) Concerns in some companies that data entered into GPTs will leak out and assist their competitors
*) Uses of GPTs to create or manipulate text, and to help developers understand new code
*) GPTs as "brains" that lack the "limbs" that would make them truly useful
*) GPT capabilities are being augmented via plug-ins that access sites like Expedia, Instacart, or Zapier
*) Agent-based systems such as AutoGPT and AgentGPT that use GPTs to break down tasks into steps and then carry out those steps
*) Comparison with the boost given to Apple iPhone adoption by the launch, one year later, of the iOS App Store
*) Ted's use of GPT-4 in his role as a meta-reviewer for papers submitted to an academic conference - with Ted becoming an orchestrator more than a writer
*) The learning curve is easier for vanilla GPTs than for agent systems that use GPTs
*) GPTs are currently more suited to low-end writing than to high-end writing, but are expected to move up the value chain
*) Ways to configure a GPT so that it can reproduce the quality level or textual style of a specific writer
*) Calum's use of GPT-4 in his side-project as a travel writer
*) Ways to stop GPTs inventing false anecdotes
*) Some users of GPTs will lose all faith in them due to just a single hallucination
*) Teaching GPTs to say "I don't know" or to state their level of confidence about claims they make
*) Creating an embedding space search engine (a minimal sketch follows these notes)
*) The case for gaining a working knowledge of the programming language Python
*) The growth of technology-explainer videos on TikTok and Instagram
*) "Explain this to me like I'm ten years old"
*) The way to learn more about GPTs is to use them in a meaningful project
*) Learning about GPTs such as DALL-E or Midjourney that generate not text but images
*) Uses of GPTs for inpainting - blending new features into an image
*) The advantages of open source tools, such as those available on Hugging Face
*) Images will be largely solved in 2023; 2024 will be the year for video
*) An appeal to "dive in, the sooner the better"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
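One of the topics above is creating an embedding space search engine. Here is a minimal sketch of the general idea - embed a small corpus, embed a query, and rank documents by similarity. The library and model named below (sentence-transformers, all-MiniLM-L6-v2) are illustrative choices of ours, not tools specified in the episode:

```python
# Minimal embedding-space search engine sketch (illustrative only).
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "GPT plug-ins can book travel via sites like Expedia.",
    "Agent systems break a task into steps and then execute them.",
    "Inpainting blends new features into an existing image.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)

def search(query: str, top_k: int = 2):
    """Return the top_k documents most similar to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector      # cosine similarity, since vectors are normalised
    best = np.argsort(scores)[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in best]

print(search("How do AI agents plan multi-step tasks?"))
```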
May 3, 2023 • 34min

GPT: To ban or not to ban, that is the question

On March 14th, OpenAI launched GPT-4, which took the world by surprise and storm. Almost everybody, including people within the AI community, was stunned by its capabilities. A week later, the Future of Life Institute (FLI) published an open letter calling on the world's AI labs to pause the development of larger versions of GPT (generative pre-trained transformer) models until their safety can be ensured.

Recent episodes of this podcast have presented arguments for and against this call for a moratorium. Jaan Tallinn, one of the co-founders of FLI, made the case in favour. Pedro Domingos, an eminent AI researcher, and Kenn Cukier, a senior editor at The Economist, made variants of the case against. In this episode, co-hosts Calum Chace and David Wood highlight some key implications and give their own opinions. Expect some friendly disagreements along the way.

Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/

Topics addressed in this episode include:
*) Definitions of Artificial General Intelligence (AGI)
*) Many analysts knowledgeable about AI have recently brought forward their estimates of when AGI will become a reality
*) The case that AGI poses an existential risk to humanity
*) The continued survival of the second smartest species on the planet depends entirely on the actions of the actual smartest species
*) One species can cause another to become extinct, without that outcome being intended or planned
*) Four different ways in which advanced AI could have terrible consequences for humanity: bugs in the implementation; the implementation being hacked (or jailbroken); bugs in the design; and the design being hacked by emergent new motivations
*) Near-future AIs that still fall short of being AGI could have effects which, whilst not themselves existential, would plunge society into such a state of dysfunction and distraction that we are unable to prevent subsequent AGI-induced disaster
*) Calum's "4 C's" categorisation of possible outcomes regarding AGI existential risks: Cease, Control, Catastrophe, and Consent
*) 'Consent' means a superintelligence decides that we humans are fun, enjoyable, interesting, worthwhile, or simply unobjectionable, and consents to let us carry on as we are, or to help us, or to allow us to merge with it
*) The 'Control' option arguably splits into "control while AI capabilities continue to proceed at full speed" and "control with the help of a temporary pause in the development of AI capabilities"
*) Growing public support for stopping AI development - driven by a sense of outrage that the future of humanity is seemingly being decided by a small number of AI lab executives
*) A comparison with how the 1983 film "The Day After" triggered a dramatic change in public opinion regarding the nuclear weapons arms race
*) How much practical value could there be in a six-month pause? Or will the six months be extended into an indefinite ban?
*) Areas where there could be at least some progress: methods to validate the output of giant AI models, and choices of initial configurations that would make the 'Consent' scenario more likely
*) Designs that might avoid the emergence of agency (convergent instrumental goals) within AI models as they acquire more intelligence
*) Why 'Consent' might be the most likely outcome
*) The longer a ban remains in place, the larger the risks of bad actors building…
Apr 26, 2023 • 30min

The AI suicide race, with Jaan Tallinn

The race to create advanced AI is becoming a suicide race. That's part of the thinking behind the open letter from the Future of Life Institute which "calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4".

In this episode, our guest, Jaan Tallinn, explains why he sees this pause as a particularly important initiative.

In the 1990s and 2000s, Jaan led much of the software engineering for the file-sharing application Kazaa and the online communications tool Skype. He is also known as one of the earliest investors in DeepMind, before it was acquired by Google.

More recently, Jaan has been a prominent advocate for the study of existential risks, including the risks from artificial superintelligence. He helped set up the Centre for the Study of Existential Risk (CSER) in 2012 and the Future of Life Institute (FLI) in 2014.

Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.cser.ac.uk/
https://en.wikipedia.org/wiki/Jaan_Tallinn

Topics addressed in this episode include:
*) The differences between CSER and FLI
*) Do the probabilities for the occurrence of different existential risks vary by orders of magnitude?
*) The principle that "arguments screen authority"
*) The possibility that GPT-6 will be built, not by humans, but by GPT-5
*) Growing public concern, all over the world, that the fate of all humanity is, in effect, being decided by the actions of just a small number of people in AI labs
*) Two reasons why FLI recently changed its approach to AI risk
*) The AI safety conference in 2015 in Puerto Rico was initially viewed as a massive success, but it has had little lasting impact
*) Uncertainty about a potential cataclysmic event doesn't entitle people to conclude it won't happen any time soon
*) The argument that LLMs (Large Language Models) are an "off ramp" rather than being on the road to AGI
*) Why the duration of six months was selected for the proposed pause
*) The "What about China?" objection to the pause
*) Potential concrete steps that could take place during the pause
*) The FLI document "Policymaking in the Pause"
*) The article by Luke Muehlhauser of Open Philanthropy, "12 tentative ideas for US AI policy"
*) The "summon and tame" way of thinking about the creation of LLMs - and the risk that minds summoned in this way won't be able to be tamed
*) Scenarios in which the pause might be ignored by various entities, such as authoritarian regimes, organised crime, rogue corporations, and extraordinary individuals such as Elon Musk and John Carmack
*) A meta-principle for deciding which types of AI research should be paused
*) 100-million-dollar projects become even harder when they are illegal
*) The case for requiring the pre-registration of large-scale mind-summoning experiments
*) A possible limit of 10^25 on the number of FLOPs (Floating Point Operations) an AI model can spend (see the rough arithmetic after these notes)
*) The reactions by AI lab leaders to the wide-scale public response to GPT-4 and to the pause letter
*) Even Sundar Pichai, CEO of Google/Alphabet, has called for government intervention regarding AI
*) The hardware overhang complication with the pause
*) Not letting "the perfect" be "the enemy of the good"
*) Elon Musk's involvement with FLI and with the pause letter
*) "Humanity now has cancer"

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
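One of the topics above is a possible 10^25 FLOP threshold. To give a rough sense of scale, here is some hedged arithmetic; the accelerator throughput, utilisation, and cluster size below are illustrative assumptions, not figures from the episode:

```python
# Rough sense of scale for a 10^25 FLOP ceiling, if applied to training compute.
# All hardware figures below are illustrative assumptions only.

FLOP_LIMIT = 1e25                  # proposed ceiling on floating point operations
PEAK_FLOPS_PER_ACCELERATOR = 1e15  # ~1 petaFLOP/s, roughly a modern AI accelerator
UTILISATION = 0.4                  # assumed fraction of peak throughput actually achieved
ACCELERATORS = 10_000              # assumed size of the training cluster

effective_rate = PEAK_FLOPS_PER_ACCELERATOR * UTILISATION * ACCELERATORS
seconds_needed = FLOP_LIMIT / effective_rate
print(f"{seconds_needed / 86_400:.0f} days on this hypothetical cluster")
# -> roughly 29 days: 10^25 FLOPs is about a month of compute at this assumed scale
```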
Apr 19, 2023 • 38min

A defence of human uniqueness against AI encroachment, with Kenn Cukier

Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full "general intelligence". And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.

These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper.

For the past decade, Kenn has been the host of its weekly tech podcast, Babbage. He is co-author of the 2013 book "Big Data", a New York Times best-seller that has been translated into over 20 languages. He is a regular commentator in the media, and a popular keynote speaker, from TED to the World Economic Forum.

Kenn recently stepped down as a board director of Chatham House and as a fellow at Oxford's Saïd Business School. He is a member of the Council on Foreign Relations. His latest book is "Framers", on the power of mental models and the limits of AI.

Follow-up reading:
http://www.cukier.com/
https://mediadirectory.economist.com/people/kenneth-cukier/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Kurzweil's version of the Turing Test: https://longbets.org/1/

Topics addressed in this episode include:
*) Changing attitudes at The Economist about how to report on the prospects for AI
*) The dual roles of scepticism regarding claims made for technology
*) 'Calum's rule' about technology forecasts that omit timing
*) Options for magazine coverage of possible developments more than 10 years into the future
*) Some leaders within AI research, including Sam Altman of OpenAI, think AGI could happen within a decade
*) Metaculus community aggregate forecasts for the arrival of different forms of AGI
*) A theme for 2023: the increased 'emergence' of unexpected new capabilities within AI large language models - especially when these models are combined with other AI functionality
*) Different views on the usefulness of the Turing Test - a test of human idiocy rather than machine intelligence?
*) The benchmark of "human-level general intelligence" may become as anachronistic as the benchmark of "horsepower" for rockets
*) The drawbacks of viewing the world through a left-brained hyper-rational "scientistic" perspective
*) Two ways the ancient Greeks said we could find truth: logos and mythos
*) People in 2023 finding "mythical, spiritual significance" in their ChatGPT conversations
*) Appropriate and inappropriate applause for what GPTs can do
*) Another horse analogy: could steam engines that lack horse-like legs really replace horses?
*) The Ship of Theseus argument that consciousness could be transferred from biology to silicon
*) The "life force" and its apparently magical, spiritual aspects
*) The human superpower to imaginatively reframe mental models
*) People previously thought humans had a unique superpower to create soul-moving music, but a musical version of the Turing Test changed minds
*) Different levels of creativity: not just playing games well but inventing new games
*) How many people will have paid jobs in the future?
*) Two final arguments why key human abilities will remain unique
*) The "pragmatic turn" in AI: duplicating without understanding
*) The special value, not of information, but of the absence of information (emptiness, kenosis, the "cloud of unknowing")
*) The te…
Apr 12, 2023 • 35min

Against pausing AI research, with Pedro Domingos

Should the pace of research into advanced artificial intelligence be slowed down, or perhaps even paused completely?

Your answer to that question probably depends on your answers to a number of other questions. Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slowdown desirable, given that AI can also lead to lots of very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slowdown is desirable, is it practical?

Our guest in this episode is Professor Pedro Domingos of the University of Washington. He is perhaps best known for his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World".

That book takes an approach to the future of AI that is significantly different from what you can read in many other books. It describes five different "tribes" of AI researchers, each with their own paradigms, and it suggests that true progress towards human-level general intelligence will depend on a unification of these different approaches. In other words, we won't reach AGI just by scaling up deep learning approaches, or even by adding in features from logical reasoning.

Follow-up reading:
https://homes.cs.washington.edu/~pedrod/
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Topics addressed in this episode include:
*) The five tribes of AI research - why there's a lot more to AI than deep learning
*) Why unifying these five tribes may not be sufficient to reach human-level general intelligence
*) The task of understanding an entire concept (e.g. 'horse') from just seeing a single example
*) A wide spread of estimates of the timescale to reach AGI
*) Different views as to the true risks from advanced AI
*) The case that risks arise from AI incompetence rather than from increased AI competence
*) A different risk: that bad actors will gain dangerously more power from access to increasingly competent AI
*) The case for using AI to prevent misuse of AI
*) Yet another risk: that an AI trained against one objective function will nevertheless adopt goals diverging from that objective
*) How AIs that operate beyond our understanding could still remain under human control
*) How fully can evolution be trusted to produce outputs in line with a specified objective function?
*) The example of humans taming wolves into dogs that pose no threat to us
*) The counterexample of humans pursuing goals contrary to our in-built genetic drives
*) Complications with multiple levels of selection pressures, e.g. genes and memes working at cross purposes
*) The "genie problem" (or "King Midas problem") of choosing an objective function that is apparently attractive but actually dangerous
*) Assessing the motivations of people who have signed the FLI (Future of Life Institute) letter advocating a pause on the development of larger AI language models
*) Pros and cons of escalating a sense of urgency
*) The two key questions of existential risk from AI: how much risk is acceptable, and what might that level of risk become in the near future?
*) The need for a more rational discussion of the issues raised by increasingly competent AIs

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Apr 5, 2023 • 33min

Facing our Futures, with Nikolas Badminton

2023 is still young, but there's already a change in the attitudes of many business people regarding the future. Previously, businesses expressed occasional interest in possible disruptive scenarios, but their attention often quickly turned back to the apparently more pressing tasks of business-as-usual. But recent news of changes in AI capabilities, along with possible social transformations due to pandemics, geopolitics, and industrial unrest, is leading more and more business people to wonder: how can they become more effective in anticipating and managing potential significant changes in their business landscape?

In this context, the new book by our guest in this episode, Nikolas Badminton, is particularly timely. It's called "Facing our Futures: How foresight, futures design and strategy creates prosperity and growth".

Over the last few years, Nikolas has worked with over 300 organizations including Google, Microsoft, NASA, the United Nations, American Express, and Rolls-Royce, and he advised Robert Downey Jr.'s team for the 'Age of A.I.' documentary series.

Selected follow-up reading:
https://nikolasbadminton.com/
https://futurist.com/
https://www.bloomsbury.com/uk/facing-our-futures-9781399400237/

Topics in this conversation include:
*) A personal journey to becoming a futurist - with some "hot water" along the way
*) The "Dark Futures" project: "what might happen if we take the wrong path forward"
*) The dangers of ignoring how bad things might become
*) Are we heading toward "the end times"?
*) Being in a constant state of collapse
*) Human resilience, and how to strengthen it
*) Futurists as "hope engineers"
*) Pros and cons of the "anti-growth" or "de-growth" initiative
*) The useful positive influence of "design fiction" (including futures that are "entirely imaginary")
*) The risks of a "pay to play" abundance future
*) The benefits of open medicine and open science
*) Examples of decisions taken by corporations after futures exercises
*) Tips for people interested in a career as a futurist
*) Pros and cons of "pop futurists"
*) The single biggest danger in our future?
*) Evidence from Rene Rohrbeck and Menes Etingue Kum that companies that apply futures thinking significantly outperform their competitors in profitability and growth
*) The idea of an "apocalypse windfall" from climate change
*) Some key messages from the book "Facing our Futures": recommended mindset changes
*) Having the honesty and courage to face up to our mistakes
*) What if... former UK Prime Minister David Cameron had conducted a futures study before embarking on the Brexit project?
*) A multi-generational outlook on the future - learning from the Iroquois

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
Mar 29, 2023 • 33min

GPT-4 and the Two Singularities

In the last few weeks, the pace of change in AI has been faster than ever before. The changes aren't just announcements of future capabilities - announcements that could have been viewed, perhaps, as hype. The changes are new versions of AI systems that are available for users around the world to experiment with, directly, here and now. These systems are being released by multiple different companies, and also by open-source collaborations. And users of these systems are frequently expressing surprise: the systems are by no means perfect, but they regularly outperform previous expectations, sometimes in astonishing ways.

In this episode, Calum Chace and David Wood, the co-hosts of this podcast series, discuss the wider implications of these new AI systems. David asks Calum if he has changed any of his ideas about what he has called "the two singularities", namely the Economic Singularity and the Technological Singularity, as covered in a number of books he has written.

Calum has been a full-time writer and speaker on the subject of AI since 2012. Earlier in his life, he studied philosophy, politics, and economics at Oxford University, and trained as a journalist at the BBC. He wrote a column in the Financial Times and nowadays is a regular contributor to Forbes magazine. In between, he held a number of roles in business, including leading a media practice at KPMG. In the last few days, he has been taking a close look at GPT-4.

Selected follow-up reading:
https://calumchace.com/the-economic-singularity/
https://calumchace.com/surviving-ai-synopsis/

Topics in this conversation include:
*) Is the media excitement about GPT-4 and its predecessor ChatGPT overblown, or are these systems signs of truly important disruptions?
*) How do these new AI systems compare with earlier AIs?
*) The two "big bangs" in AI history
*) How transformers work
*) The difference between self-supervised learning and supervised learning (a minimal illustrative sketch follows these notes)
*) The significance of OpenAI enabling general public access to ChatGPT
*) Market competition between Microsoft Bing and Google Search
*) Unwholesome replies by Microsoft Sydney and Google Bard - and the intended role of RLHF (Reinforcement Learning from Human Feedback)
*) How basic reasoning seems to emerge (unexpectedly) from pattern recognition at sufficient scale
*) Examples of how the jobs of knowledge workers are being changed by GPT-4
*) What will happen to departments where each human knowledge worker has a tenfold productivity boost?
*) From the job churns of the past to the Great Churn of the near future
*) The forthcoming wave of automation is not only more general than past waves, but will also proceed at a much faster pace
*) Improvements in the writing AI produces, such as book chapters
*) Revisions of timelines for the Economic and Technological Singularities?
*) It now seems that human intelligence is less hard to replicate than was previously thought
*) The Technological Singularity might arrive before an Economic Singularity
*) The liberating vision of people no longer needing to be wage slaves, and the threat of almost everyone living in poverty
*) The insufficiency of UBI (Universal Basic Income) unless an economy of abundance is achieved (bringing the costs of goods and services down toward zero)
*) Is the creation of AI now out of control, with a rush to release new versions?
*) The infeasibility of the idea of AGI relinquishment
*) OpenAI's recent actions assessed
*) Expectat…
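One of the topics above is the difference between self-supervised and supervised learning. As a minimal illustrative sketch (not code from the episode): in supervised learning the labels are provided by separate annotation, whereas in self-supervised language modelling the training targets are simply the next tokens of the text itself:

```python
# Minimal illustration of supervised vs self-supervised learning objectives.

# Supervised learning: inputs paired with externally provided labels.
supervised_examples = [
    ("The film was wonderful", "positive"),
    ("The film was dreadful", "negative"),
]

# Self-supervised language modelling: targets are derived from the data itself -
# each position's "label" is simply the token that follows it.
text = "the pace of change in AI has been faster than ever".split()
self_supervised_examples = [
    (text[:i], text[i])    # (context tokens, next token)
    for i in range(1, len(text))
]

print(supervised_examples[0])
print(self_supervised_examples[0])   # (['the'], 'pace')
```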
Mar 22, 2023 • 38min

Creating Benevolent Decentralized AGI, with Ben Goertzel

Ben Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of Humanity+.

Ben is perhaps best known for popularising the term 'artificial general intelligence', or AGI - a machine with all the cognitive abilities of an adult human. He thinks that the way to create this machine is to start with a baby-like AI, and raise it, as we raise children. We would do this either in VR, or in robot form. Hence he works with the robot-builder David Hanson to create robots like Sophia and Grace.

Ben is a unique and engaging speaker, and gives frequent keynotes all round the world. Both his appearance and his views have been described as counter-cultural. In this episode, we hear about Ben's vision for the creation of benevolent decentralized AGI.

Selected follow-up reading:
https://singularitynet.io/
http://goertzel.org/
http://multiverseaccordingtoben.blogspot.com/

Topics in this conversation include:
*) Occasional hazards of humans and robots working together
*) "The future is already here, it's just not wired together properly"
*) Ben's definition of AGI
*) Ways in which humans lack "general intelligence"
*) Changes in society expected when AI reaches "human level"
*) Is there "one key thing" which will enable the creation of AGI?
*) Ben's OpenCog Hyperon project combines three approaches: neural pattern recognition and synthesis, rigorous symbolic reasoning, and evolutionary creativity
*) Parallel combinations versus sequential combinations of AI capabilities: why the former is harder, but more likely to create AGI
*) Three methods to improve the scalability of AI algorithms: mathematical innovations, efficient concurrent processing, and an AGI hardware board
*) "We can reach the Singularity in ten years if we really, really try"
*) ... but humanity has, so far, not "really tried" to apply sufficient resources to creating AGI
*) Sam Altman: "If you talk about the upsides of what AGI could do for us, you sound like a crazy person"
*) "The benefits of AGI will challenge our concept of 'what is a benefit'"
*) Options for human life trajectories, if AGIs are well disposed towards humans
*) We will be faced with the questions of "what do we want" and "what are our values"
*) The burning issue is "what is the transition phase" to get to AGI
*) Ben's disagreements with Nick Bostrom and Eliezer Yudkowsky
*) Assessment of the approach taken by OpenAI to create AGI
*) Different degrees of faith in big tech companies as a venue for hosting the breakthroughs in creating AGI
*) Should OpenAI be renamed as "ClosedAI"?
*) The SingularityNET initiative to create a decentralized, democratically controlled infrastructure for AGI
*) The development of AGI should be "more like Linux or the Internet than Windows or the mobile phone ecosystem"
*) Limitations of neural net systems in self-understanding
*) Faith in big tech and capitalism vs. faith in humanity as a whole vs. faith in reward maximization as a paradigm for intelligence
*) Open-ended intelligence vs. intelligence created by reward maximization
*) A concern regarding Effective Altruism
*) There's more to intelligence than pursuit of an overarching goal
*) A broader view of evolution than drives to survive and to reproduce
*) "What the fate of humanity depends on" - selecting the right approach to the creation of AGI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
