Turpentine aims to create a platform that unites tech professionals, offering benefits like tactical advice, tech stack recommendations, hiring connections, in-person events, and an investor database. Its community fosters collaboration and growth for founders and executives at all stages of development.
The Cognitive Revolution podcast, hosted by Nathan Labenz, delves into AI's transformative impact on various industries. Labenz shares insights in plain language to make AI accessible to newcomers. The podcast highlights the potential democratization of expertise through AI applications like medical diagnostics and emphasizes bridging the technical knowledge gap for broader societal understanding.
Nathan Labenz appeared on 'Consistently Candid,' an AI safety and philosophy podcast hosted by Sarah Hastings-Woodhouse, for a detailed discussion of the case for optimism about AI. Labenz highlighted progress in technical AI safety, growing engagement from AI decision-makers, global responses to AI risks, and the importance of proactive measures to address the challenges ahead.
A focus on AI accountability and responsible adoption is essential to addressing misuse risks. The discussion covered the need to assess the consequences of AI applications, particularly in scenarios like deepfakes and AI-driven calling agents, and the importance of safeguards to prevent misuse and mitigate unintended consequences.
The conversation draws attention to the alarming lack of security measures in some AI applications, particularly with respect to criminal use. The episode cites examples where AI technology is misused for criminal activity, such as making ransom calls or issuing blackmail demands anonymously, highlighting the dangers of inadequate safety protocols.
The episode underscores the urgency of ethical AI development and the dilemmas surrounding AI autonomy and user safety. It emphasizes the need for responsible deployment of AI tools, especially when they are marketed as autonomous agents that affect people without their knowledge.
The podcast reflects on the need to regulate AI technologies to protect the public and prevent abusive use cases. It discusses the value of engaging with developers and advocating for accountability and transparency in AI applications, with the aim of encouraging better practices.
The episode also considers how the equilibrium within AI development might be reshaped to prioritize safety and responsibility, advocating a culture of AI adoption that values ethical considerations and proactively addresses potential risks and consequences.
An essential tension highlighted is the delicate balance between harnessing the utility of advanced AI models like GPT and maintaining control over their power. The episode stresses the value of pausing to reflect before scaling AI technologies further, to prevent unintended consequences and ensure responsible deployment.
The conversation acknowledges the role of governments in overseeing AI technologies and policy-making to mitigate potential risks. It praises certain government initiatives, like the Biden executive order, as strategic moves towards regulating powerful AI models and controlling their impact on society.
On international AI rivalry, the podcast considers China's approach to recognizing AI threats and engaging responsibly with AI development. It expresses unease about escalating AI competition between countries and emphasizes the importance of collaborative, cautious advancement.
Given deep uncertainty about AI's future, the episode encourages proactive engagement with the challenges and opportunities the technology presents. It suggests a range of ways to get involved, arguing for a multifaceted approach that draws on people's varied strengths.
The episode concludes with a call to action: prioritize AI safety, foster collaboration, and pursue diverse strategies within the AI community. Collective effort will shape a responsible and safe AI future, and individuals are invited to contribute based on their unique strengths.
Dive into an accessible discussion on AI safety and philosophy, technical AI safety progress, and why catastrophic outcomes aren't inevitable. This conversation provides practical advice for AI newcomers and hope for a positive future.
Consistently Candid podcast: https://open.spotify.com/show/1EX89qABpb4pGYP1JLZ3BB
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive
The Brave Search API can be used to assemble a data set to train your AI models and to help with retrieval augmentation at the time of inference, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API free for up to 2,000 queries per month at https://bit.ly/BraveTCR
Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with the click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off: https://www.omneky.com/
Squad gives you access to global engineering without the headache and at a fraction of the cost: head to https://choosesquad.com/ and mention "Turpentine" to skip the waitlist.
Byrne Hobart, the writer of The Diff, is revered in Silicon Valley. You can get an hour with him each week. See for yourself how his thinking can upgrade yours.
Spotify: https://open.spotify.com/show/6rANlV54GCARLgMOtpkzKt
Apple: https://podcasts.apple.com/us/podcast/the-riff-with-byrne-hobart-and-erik-torenberg/id1716646486
(00:00:00) About the Show
(00:03:50) Intro
(00:08:13) AI Scouting
(00:14:42) Why aren't people adopting AI more quickly?
(00:18:25) Why don't people take advantage of AI?
(00:22:35) Sponsors: Oracle | Brave
(00:24:42) How to get a better understanding of AI
(00:31:16) How to handle the public discourse around AI
(00:34:02) Scaling and research
(00:43:18) Sponsors: Omneky | Squad
(00:45:03) The pause
(00:47:29) Algorithmic efficiency
(00:52:52) Red Teaming in Public
(00:55:41) Deepfakes
(01:01:02) AI safety
(01:04:00) AI moderation
(01:07:03) Why not a doomer
(01:09:10) AI understanding human values
(01:15:00) Interpretability research
(01:18:30) AI safety leadership
(01:21:55) AI safety respectability politics
(01:33:42) China
(01:37:22) Radical uncertainty
(01:39:53) P(doom)
(01:42:30) Where to find the guest
(01:44:48) Outro