Hidden Variables and Deviations from Quantum Mechanics
This chapter explores the observer and measurement problems in quantum mechanics, discussing potential solutions such as the many-worlds interpretation and hidden variables. The concept of hidden variables as non-trainable variables is introduced, suggesting that deviations from quantum mechanics can be explained by their presence. However, further experiments are needed to confirm this theory.
What do machine learning, physics and biology have in common? What maths emerges when we apply learning dynamics to physics, and can it reconcile quantum mechanics and general relativity? If we see all nature as neuroplastic and constantly learning, like a neural network, what can this tell us about the fine-tuning in the universe and the emergence of life and observers?
In this episode we consider the fascinating possibility that the world is like a neural network. On the show we’ve already looked deeply at the way in which particles, and sometimes even minds, seem to be interconnected in the universe, even beyond the apparent causal links in space and time. We also covered the brain science of neuroplasticity, for listeners who want to understand how that works. Applied to the universe, the idea that systems dynamically adapt over time to what is required of them could explain the extraordinary fine-tuning we see in the universe, which permitted the arising of life in the first place. Along the way it could potentially close some of the other gaping disagreements between our best theories of physics.
Our guest in this episode, the Russian physicist Vitaly Vanchurin, has not only developed this theory from the ground up, apparently reconciling quantum mechanics and general relativity, but is also connecting it with biological systems and even developing a new type of computer processor to model it. After many years at the University of Minnesota, he has taken a position at the National Institutes of Health and has, more or less simultaneously, launched a new multidisciplinary company, ‘Artificial Neural Computing’, that connects physics, biology, and machine learning.
What we discuss:
00:00 Intro
05:21 The world as a neural network
06:00 Deep learning in the systems of the universe, neural learning and machine learning
09:00 The universe is learning as it evolves
11:30 Cosmic storage of learning leads us to a cosmic consciousness model
12:40 The efficiency of a system’s learning defines its level of consciousness
13:30 A super-observer
16:00 It’s a useful model, but it’s likely how the universe actually works too
18:20 Fast-changing non-trainable variables vs slow-changing trainable variables (see the sketch after this list)
20:00 When the trainable variables change they could modify the laws of physics
21:20 Trainable variables in machine learning are similar to genetic adaptation in biology
22:00 Connecting machine learning, physics and biological adaptation
31:40 What experiments could confirm this model?
42:00 At large scales, entropy is actually reduced by learning
43:00 Life has a low chance of emerging by chance alone; it more likely emerges through the pursuit of learning
44:50 Learning theory explains fine tuning in the universe
49:20 Neuroplasticity at a cosmic level: increasing efficiency and collective consciousness
54:30 The observer problem solved - hidden variables are trainable variables that are learning
58:30 Getting comfortable with deviations from our best theories: models are only mental constructs
01:01:30 Vitaly’s new company 'Artificial Neural Computing’ - an interdisciplinary method marrying machine learning, physics and biology
01:11:00 What is emergent quantumness?
01:13:15 The implications of neuromorphic machine learning technology
01:17:30 The implications for AGI
01:18:30 Self-driving car efficiency
01:21:00 Biology is a technology
01:27:40 You can think of space-time as many communication channels or neural connections
01:28:30 We are like one organism, a super-consciousness
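
For listeners who want a concrete picture of the two timescales mentioned at 18:20, here is a minimal Python sketch. It is our own illustration, not Vanchurin’s formulation: a neuron’s activation stands in for a fast-changing non-trainable variable, recomputed from scratch on every step, while the weights and bias are slow-changing trainable variables that drift only a little per step.

```python
# A toy two-timescale system: fast non-trainable states, slow trainable weights.
import numpy as np

rng = np.random.default_rng(0)

# Trainable variables: slow-changing weights and bias of a single linear neuron.
w = rng.normal(size=3)
b = 0.0

w_true = np.array([1.0, -2.0, 0.5])  # hidden target the neuron should learn
lr = 0.01                            # small learning rate => slow drift

for step in range(2000):
    # Non-trainable variable: the neuron's state is set afresh on every step
    # as a new input arrives (the fast dynamics).
    x = rng.normal(size=3)
    state = x @ w + b

    # Trainable variables: nudged slightly down the squared-error gradient
    # on each step (the slow dynamics).
    error = state - x @ w_true
    w -= lr * error * x
    b -= lr * error

print("learned w:", np.round(w, 2), "vs true w:", w_true)
```

Nothing here is specific to the episode’s physics; it only makes the fast/slow distinction between the two kinds of variables tangible.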
References:
Vitaly Vanchurin - ‘The World as a Neural Network’ paper
Vitaly Vanchurin - ‘Toward a Theory of Evolution as Multilevel Learning’ paper
Vitaly's new company, Artificial Neural Computing
Stochastic (adj.) = random; predictable only using probability distributions (see the sketch below)
Learning equilibrium = when learning in a system equalises with the level of knowledge in the wider system
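
To make the ‘stochastic’ definition above concrete, a short sketch (our illustration, not from the episode): no single draw can be predicted, but the statistics of many draws can.

```python
# Stochastic = individually unpredictable, collectively predictable.
import numpy as np

rng = np.random.default_rng(42)
draws = rng.normal(loc=0.0, scale=1.0, size=100_000)

print("first three draws (unpredictable):", np.round(draws[:3], 2))
print("sample mean (predictable, near 0.0):", round(float(draws.mean()), 3))
print("sample std  (predictable, near 1.0):", round(float(draws.std()), 3))
```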