In this conversation I speak with Jill Nephew. Jill, a former AI black-box algorithm engineer with extensive experience developing software architectures, holds a highly heterodox perspective on the risks associated with LLM AIs. In this conversation we explore Jill's argument that using LLMs like ChatGPT or Bard is like eating plastic for your cognitive agency and natural intelligence, how AIs could cause the rise of new 'supercults', and how another world is possible, if only we learn to ask the right questions.
If you enjoy this podcast and want to support it, please consider becoming a Patreon supporter.
- [3:52] The critical difference between cognition and thinking
- [9:49] Why is using LLMs like eating plastic for our cognition?
- [16:04] What LLMs represent in the context of the meta-crisis
- [24:51] How LLMs signal trustworthiness and use randomness to confuse us and unground our cognition
- [36:00] What we can expect to see as LLMs introduce more plastic into our cognition
- [38:29] What ways of interacting with LLMs might be safe?
- [47:52] What are cults and how do they function in relationship to our cognition?
- [53:29] The possibility of an AI 'supercult'
- [55:27] The most dangerous thing we do to each other
- [59:48] The deep meaningfulness and richness of grounded cognition, going beyond trauma healing, beyond the monkey mind
- [1:07:17] Technology to reclaim natural intelligence, the practice of inquiry, the difference between 'good' inquiry and 'bad' inquiry
- [1:12:31] The rigorous engineering behind good inquiry forms
- [1:13:29] The power of inquiry, the feeling of insight, how to achieve a quiet mind
- [1:18:29] Jill's advice for how to respond to the acceleration of planetary destruction
Jill's Conversation with Layman Pascal on the Integral Stage
Inqwire, the software Jill has developed to help people reclaim their natural intelligence
Support Daniel on Patreon