With deepfakes and misinformation on the rise, distinguishing truth from lies has never been more challenging.
A recent Harvard Misinformation Review study found that GPT-fabricated papers, written with tools like ChatGPT, are increasingly showing up in academic journals, archives, and repositories. So, how can we fight back against this growing misinformation crisis?
This week, Ryan Connell sits down with Nathan Manzotti, Managing Director of Data Analytics and AI Centers of Excellence at GSA, to explore groundbreaking ideas for combating misinformation using blockchain and AI.
Learn how an AI training program is empowering thousands of federal employees and how a government-wide AI model repository could transform federal efficiency.
If you’re passionate about the future of technology, truth, and public service, this episode is a must-listen.
Tune in now!
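For listeners curious about the mechanics behind the blockchain idea discussed in the episode, here is a minimal, hypothetical sketch of hash-based content provenance: a publisher anchors a cryptographic fingerprint of the original media, and anyone can later check whether a file they received matches that record. The `registry` dictionary below is only a stand-in for an immutable ledger, and none of the names come from the episode.

```python
# Illustrative sketch of hash-based content provenance (hypothetical, not the
# approach described in the episode): a publisher registers a fingerprint of
# the original media, and a recipient can later verify a file against it.
import hashlib


def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a media file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


# Stand-in for an immutable ledger (e.g., a blockchain) mapping digest -> publisher.
registry: dict[str, str] = {}


def register(path: str, publisher: str) -> str:
    """Publisher anchors the original file's fingerprint in the registry."""
    digest = fingerprint(path)
    registry[digest] = publisher
    return digest


def verify(path: str):
    """Return the registered publisher if the file is unaltered, else None."""
    return registry.get(fingerprint(path))
```

In a real deployment the fingerprint would be written to a tamper-evident ledger rather than an in-memory dictionary; the immutability of that record is what makes a later "this file was altered" check trustworthy.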
Key Takeaways:
(00:00) Introduction
(00:59) Meet Nathan Manzotti and his journey into AI
(04:14) The rise of deepfakes and the misinformation crisis
(06:08) How to combat deepfakes with blockchain technology
(10:12) The role of personal passion in driving government innovation
(15:21) Expansion of AI training program across the federal workforce
(19:18) Building collaborative community platforms for progress
(24:22) GSA’s fee-for-service consulting model
(29:58) Nate’s vision for a government-backed model repository
Additional Resources:
👉Follow Ryan Connell on LinkedIn: https://www.linkedin.com/in/ryan-connell-8413a03a/
👉Learn, acquire, and deliver tech on Tradewinds here: https://www.tradewindai.com/
👉Visit CDAO for updates: https://www.ai.mil/
👉Follow us on Spotify: https://open.spotify.com/show/6MLAqMOVnLWbmB5yZbZ9lC
👉Subscribe to our YouTube channel: https://www.youtube.com/@DefenseMavericks
Connect with Nathan Manzotti:
🔹Follow Nathan on LinkedIn: https://www.linkedin.com/in/nathanmanzotti/
🔹Harvard’s publication on GPT-fabricated papers: https://misinforeview.hks.harvard.edu/article/gpt-fabricated-scientific-papers-on-google-scholar-key-features-spread-and-implications-for-preempting-evidence-manipulation/
🔹Visit Harvard’s Misinformation Review for more information: https://misinforeview.hks.harvard.edu/
—
Defense Mavericks is a podcast that uncovers the untapped potential of AI within the federal government through authentic and disruptive conversations with our nation’s brightest minds.
Follow us on your favorite streaming platform so you won’t miss an episode!