AI-powered
podcast player
Listen to all your favourite podcasts with AI-powered features
Unlike purely physical systems, complex systems encode a coarse-grained history of their own adaptive behavior. What distinguishes them is the presence of a schema that encapsulates this history.
The concept of construction, a way of encoding and representing reality, offers a useful lens for understanding complex systems. Each element within such a system, whether a microbe or a human brain, acts as a theorizer that captures and encodes certain aspects of reality; in this sense, complex systems can be seen as theories of the world.
Training a deep neural network is, in effect, constructing a theory or rule system of the phenomenon being modeled. These models are a form of schema, encoding the regularities and patterns observed in the data. They serve primarily as predictive engines, mapping inputs to outputs, and their ability to generate genuinely new insights and discoveries may be limited.
While deep learning models excel at prediction and at serving as reference material, they may not be effective drivers of scientific discovery. As libraries of established knowledge, they are less conducive to innovative, out-of-the-box thinking. The interaction and composition of different models, combined with human creativity and insight, may nonetheless lead to new theoretical frameworks and insights across domains.
Large language models like GPT-4 and GPT-5 have shown impressive abilities to manipulate language and generate creative work: they can summarize complex literature, compare and contrast concepts, and even produce novel ideas. They demonstrate remarkable compositionality and could be used to generate creative output in many domains, and their capacity to analyze and synthesize information makes them useful tools for research. Their limitations matter, however: on their own they lack the ability to understand geometry or to solve complex scientific problems.
Historically, scientific breakthroughs and revolutions have been driven by constraints rather than by excess power or computational resources. Darwin's theory of evolution and Mendeleev's periodic table show how simple principles and careful observation led to major advances: neither depended on comprehensive knowledge of microscopic details, but rather on recognizing patterns, observing repetition, and proposing elegant explanations. This raises questions about the role of constraints and parsimony in scientific progress, and underscores the importance of understanding and leveraging simple processes that can generate complex phenomena.
Discussion of the risks and potential of AI often lacks empirical precedent and is frequently sensationalized. There are significant risks, such as misuse of narrow AI or unintended negative consequences, but the topic calls for an informed and measured perspective. Precedent from past technologies, such as genetic engineering and nuclear weapons, shows that responsible regulation and self-imposed moratoria can effectively manage risk. Rather than dwelling on doomsday scenarios, a more productive approach is to identify reasonable risks, adapt regulations appropriately, and explore positive applications of AI, such as info agents that filter and curate information.