"Neural nets are good at capturing what brains are also good at doing, which is extrapolating from large amounts of fairly structured data. So in a sense, the power tool for making new things is something not very human, so to speak. And you can always say, well, the thing you didn't capture in this technological system is the thing that's essential about being human. But then you're going to find out the only thing that can be a human is a human, so to speak."
In this thought-provoking podcast episode, we discuss the mysteries surrounding Large Language Models (LLMs) and their implications for the future of artificial intelligence (AI) with Stephen Wolfram, creator of Mathematica, Wolfram|Alpha, and the Wolfram Language.
We dive into the surprising capabilities of LLMs and the scientific principles behind them, including the concept of computational irreducibility and how LLMs function as probabilistic sentence finishers. Wolfram also reflects on the potential applications of LLMs in computational contracts and the challenges of aligning AI systems with human aspirations.
As the conversation delves into the future of AI governance, Wolfram explores the complexities of regulating AI and the importance of finding a balance between human intervention and autonomous decision-making. This episode offers a captivating exploration of the evolving landscape of AI and its impact on various industries and job roles.
If you enjoy the podcast, please follow us on Spotify and rate us 5 stars on Apple Podcasts.
Socials
Follow Delphi Digital
Disclosures: This conversation is for informational purposes only and does not constitute legal or investment advice. Actual results may vary materially from any forward-looking statements made and are subject to risks and uncertainties. This podcast is not investment advice. Do not buy or sell tokens based on this episode.