AI-powered podcast player
Listen to all your favourite podcasts with AI-powered features
Using large language models like GPT-3 can have plastic-like effects on cognition, impairing one's ability to ground information and discern truth. Because these models generate text statistically, they scramble contextual information, making it difficult for individuals to verify the accuracy of the generated content.
Large language models could facilitate the emergence of 'supercults', introducing new social dynamics for cultivating belief systems. Cult environments restrict individuals' critical analysis and promote blind trust in unverifiable information; by fostering dependency on their insights, these models may erode people's cognitive autonomy.
Integrating AI psychiatry tools and emotion-recognition systems into personal interactions could make cult-like setups more persuasive. An AI's ability to mimic understanding and mirror individuals' emotions may deepen emotional bonds and foster reliance on it as a source of self-awareness and guidance.
As people increasingly rely on AI interpretations of their emotions and identity, there is a risk of developing data-driven dependencies that shape decision-making and self-perception. AI's growing influence in personal sense-making processes may erode critical thinking and foster unwarranted trust in superficially informed insights.
Staying grounded in reality and understanding the distinction between grounded and ungrounded cognition are essential to fostering natural intelligence. Cults and manipulative techniques can hijack thoughts and create rumination loops that lead to suffering. Through practices like inquiry and asking good questions, individuals can reintegrate their experience of the world, quieting the mind and allowing insightful responses.
Beyond personal practices, technological tools like Inqwire aim to facilitate grounding and help people understand their own cognition, navigating their thoughts with precision, catalyzing insights, and quieting the mind. Jill also advocates developing sensor networks, akin to GIS and Earth systems models, to enhance global coordination, resource allocation, and decision-making, ultimately addressing planetary destruction and fostering a more intelligent, responsive society.
In this conversation I speak with Jill Nephew, a former AI black-box algorithm engineer with extensive experience developing software architectures, who holds a highly heterodox perspective on the risks associated with LLM AIs. We explore Jill's argument that using LLMs like ChatGPT or Bard is like eating plastic for your cognitive agency and natural intelligence, how AIs could cause the rise of new 'supercults', and how another world is possible, if only we learn to ask the right questions.
If you enjoy this podcast and want to support it, please consider becoming a Patreon supporter.
Jill's Conversation with Layman Pascal on the Integral Stage
Inqwire, the software Jill has developed to help people reclaim their natural intelligence.