The tradeoff is this: if you have a system that's simple enough that you can predict what it will do, then in a sense there's no point in it doing it. You don't get to have both of those options. The period of time when you kind of knew what was going to happen is coming to an end, and people really haven't gotten used to that.
In this thought-provoking podcast episode, we discuss the mysteries surrounding Large Language Models (LLMs) and their implications for the future of artificial intelligence (AI) with Stephen Wolfram, creator of Mathematica, Wolfram|Alpha, and the Wolfram Language.
We dive into the surprising capabilities of LLMs. Wolfram discusses the scientific principles behind them, including computational irreducibility, and how they function as probabilistic sentence finishers. He also reflects on the potential applications of LLMs in computational contracts and the challenges of aligning AI systems with human aspirations.
As the conversation turns to the future of AI governance, Wolfram explores the complexities of regulating AI and the importance of striking a balance between human intervention and autonomous decision-making. This episode offers a captivating exploration of the evolving landscape of AI and its impact on a wide range of industries and job roles.
If you enjoy the podcast, please follow us on Spotify and rate us 5 stars on Apple Podcasts.
Socials
Follow Delphi Digital
Disclosures: This conversation is for informational purposes only and does not constitute legal or investment advice. Actual results may vary materially from any forward-looking statements made, which are subject to risks and uncertainties. This podcast is not investment advice. Do not buy or sell tokens based on this episode.