Lock and Code

Malwarebytes
Jun 5, 2023 • 44min

Trusting AI not to lie: The cost of truth

In May, a lawyer who was defending their client in a lawsuit against Colombia's biggest airline, Avianca, submitted a legal filing before a court in Manhattan, New York, that listed several previous cases as support for their main argument to continue the lawsuit.

But when the court reviewed the lawyer's citations, it found something curious: Several were entirely fabricated. The lawyer in question had gotten the help of another attorney who, in scrounging around for legal precedent to cite, utilized the "services" of ChatGPT.

ChatGPT was wrong. So why do so many people believe it's always right?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes security evangelist Mark Stockley and Malwarebytes Labs editor-in-chief Anna Brading to discuss the potential consequences of companies and individuals embracing natural language processing tools—like ChatGPT and Google's Bard—as arbiters of truth. Far from being understood simply as chatbots that can produce remarkable mimicries of human speech and dialogue, these tools are becoming sources of truth for countless individuals, while also gaining traction among companies that see artificial intelligence (AI) and large language models (LLMs) as the future, no matter what industry they operate in.

The future could look eerily similar to an earlier change in translation services, said Stockley, who witnessed the rapid displacement of human workers in favor of basic AI tools. The tools were far, far cheaper, but the quality of the translations—of the truth, Stockley said—was worse. "That is an example of exactly this technology coming in and being treated as the arbiter of truth in the sense that there is a cost to how much truth we want."

Tune in today.

You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
May 22, 2023 • 48min

Identity crisis: How an anti-porn crusade could jam the Internet, featuring Alec Muffett

On January 1, 2023, the Internet in Louisiana looked a little different than the Internet in Texas, Mississippi, and Arkansas—its next-door state neighbors. And on May 1, the Internet in Utah looked quite different, depending on where you looked, than the Internet in Arizona, or Idaho, or Nevada, or California or Oregon or Washington or, really, much of the rest of the United States.

The changes are, ostensibly, over pornography. In Louisiana, today, visitors to the online porn site PornHub are asked to verify their age before they can access the site, and that age verification process hinges on a state-approved digital ID app called LA Wallet. In the United Kingdom, sweeping changes to the Internet are being proposed that would similarly require porn sites to verify the ages of their users to keep kids from seeing sexually explicit material. And in Australia, similar efforts to require age verification for adult websites might come hand-in-hand with the deployment of a government-issued digital ID. But the larger problem with all these proposals is not that they would make a new Internet only for children, but a new Internet for everyone.

Look no further than Utah. On May 1, after new rules came into effect to make porn sites verify the ages of their users, the site PornHub decided to refuse to comply with the law and, instead, to block access to the site for anyone visiting from an IP address based in Utah. If you’re in Utah, right now, and connecting to the Internet with an IP address located in Utah, you cannot access PornHub. Instead, you’re presented with a message from adult film star Cheri Deville who explains that:

“As you may know, your elected officials have required us to verify your age before granting you access to our website. While safety and compliance are at the forefront of our mission, giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users, and in fact, will put children and your privacy at risk.”

Today, on the Lock and Code podcast with host David Ruiz, we speak with longtime security researcher Alec Muffett (who has joined us before to talk about Tor) to understand what is behind these requests to change the Internet, what flaws he's seen in studying past age verification proposals, and whether many members of the public are worrying about the wrong thing in trying to solve a social issue with technology.

"The battle cry of these people has always been—either directly or mocked as being—'Could somebody think of the children?' And I'm thinking about the children because I want my daughter to grow up with an untracked, secure private internet when she's an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity."

Muffett continued:

"I'm trying to protect that for her. I'd like to see more people grasping for that."

Tune in today.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)

Additional Resources and Links for today's episode:
"A Sequence of Spankingly Bad Ideas" - An analysis of age verification technology presentations from 2016. Alec Muffett.
"Adults might have to buy £10 ‘porn passes’ from newsagents to prove their age online" - The United Kingdom proposes an "adult pass" for purchase in 2018 to comply with earlier efforts at online age verification. Metro.
"Age verification won't block porn. But it will spell the end of ethical porn" - An independent porn producer explains how compliance costs for age verification could shut down small outfits that make, film, and sell ethical pornography. The Guardian.
"Minnesota’s Attempt to Copy California’s Constitutionally Defective Age Appropriate Design Code is an Utter Fail" - Age verification creeps into US proposals. Technology and Marketing Law Blog, run by Eric Goldman.
"Nationwide push to require social media age verification raises questions about privacy, industry standards" - Cyberscoop.
"The Fundamental Problems with Social Media Age Verification Legislation" - R Street Institute.
YouTube's age verification in action - Various methods and requirements shown in Google's Support center for ID verification across the globe.
"When You Try to Watch Pornhub in Utah, You See Me Instead. Here’s Why" - Cheri Deville's call for specialized phones for minors. Rolling Stone.
May 8, 2023 • 51min

The rise of "Franken-ransomware," with Allan Liska

Ransomware is becoming bespoke, and that could mean trouble for businesses and law enforcement investigators.

It wasn't always like this. For a few years now, ransomware operators have congregated around a relatively new model of crime called "Ransomware-as-a-Service." In the Ransomware-as-a-Service model, or RaaS model, ransomware itself is not delivered to victims by the same criminals that make the ransomware. Instead, it is used almost "on loan" by criminal groups called "affiliates" who carry out attacks with the ransomware and, if successful, pay a share of their ill-gotten gains back to the ransomware’s creators.

This model allows ransomware developers to significantly increase their reach and their illegal hauls. By essentially leasing out their malicious code to smaller groups of cybercriminals around the world, the ransomware developers can carry out more attacks, steal more money from victims, and avoid any isolated law enforcement action that would put their business in the ground, as the arrest of one affiliate group won't stop the work of dozens of others.

And not only do ransomware developers lean on other cybercriminals to carry out attacks, they also rely on an entire network of criminals to carry out smaller, specialized tasks. There are "Initial Access Brokers" who break into company networks and then sell that illegal method of access online. "You also have coders that you can contract out to," said Allan Liska, our guest on today's show. "You have pen testers that you can contract out to. You can contract negotiators if you want. You can contract translators if you want."

But as Liska explained, as the ransomware "business" spreads out, so do new weak points: disgruntled criminals. "This whole underground marketplace that exists to serve ransomware means that your small group can do a lot," Liska said. "But that also means that you are entrusting the keys to your kingdom to these random contractors that you're paying in Bitcoin every now and then. And that, for example, is why the LockBit code got leaked—dude didn't pay his contractor."

With plenty of leaked code now circulating online, some smaller cybercriminal gangs have taken to making minor alterations and then sending that new variant of ransomware out into the world—no affiliate model needed. "Most of what we see is just repurposed code and we see a lot of what I call 'Franken-ransomware.'"

Today, on the Lock and Code podcast with host David Ruiz, Liska explains why Franken-ransomware poses unique challenges to future victims, cybersecurity companies, and law enforcement investigators.

Tune in today.

You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Apr 24, 2023 • 41min

Removing the human: When should AI be used in emotional crisis?

In January, a mental health nonprofit admitted that it had used artificial intelligence to help talk to people in distress. Prompted first by a user's longing for personal improvement—and the difficulties involved in that journey—the AI tool generated a reply, which, with human intervention, could be sent verbatim in a chat box, or edited and fine-tuned to better fit the situation. The AI said:

“I hear you. You’re trying to become a better person and it’s not easy. It’s hard to make changes in our lives, especially when we’re trying to do it alone. But you’re not alone. There are people here who care about you and want to help you. I’m proud of you for making the decision to try to improve your life. It takes a lot of courage and strength. I send you love and support as you go through this journey.”

This was experimental work from Koko, a mental health nonprofit that briefly integrated the GPT-3 large language model into its product—an experiment that has since ended. In a video demonstration posted on Twitter earlier this year, Koko co-founder Rob Morris revealed that the nonprofit had used AI to provide "mental health support to about 4,000 people" across "about 30,000 messages." Though Koko pulled GPT-3 from its system after a reportedly short period of time, Morris said on Twitter that the experience left several questions unanswered.

"The implications here are poorly understood," Morris said. "Would people eventually seek emotional support from machines, rather than friends and family?"

Today, on the Lock and Code podcast with host David Ruiz, we speak with Courtney Brown, a social services administrator with a history in research and suicidology, to dig into the ethics, feasibility, and potential consequences of relying increasingly on AI tools to help people in distress. For Brown, the immediate implications raise several concerns.

"It disturbed me to see AI using 'I care about you,' or 'I'm concerned,' or 'I'm proud of you.' That made me feel sick to my stomach. And I think it was partially because these are the things that I say, and it's partially because I think that they're going to lose power as a form of connecting to another human."

But, importantly, Brown is not the only voice in today's podcast with experience in crisis support. For six years and across 1,000 hours, Ruiz volunteered on his local suicide prevention hotline. He, too, has a background to share.

Tune in today as Ruiz and Brown explore the boundaries for deploying AI on people suffering from emotional distress, whether the "support" offered by any AI will be as helpful and genuine as that of a human, and, importantly, whether they are simply afraid of having AI encroach on the most human experiences.

You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Apr 10, 2023 • 47min

How the cops buy a "God view" of your location data, with Bennett Cyphers

The list of people and organizations that are hungry for your location data—collected so routinely and packaged so conveniently that it can easily reveal where you live, where you work, where you shop, pray, eat, and relax—includes many of the usual suspects.

Advertisers, obviously, want to send targeted ads to you, and they believe those ads have a better success rate if they're sent to, say, someone who spends their time at a fast-food drive-through on the way home from the office, as opposed to someone who doesn't, or someone who's visited a high-end department store, or someone who, say, vacations regularly at expensive resorts. Hedge funds, interestingly, are also big buyers of location data, constantly seeking a competitive edge in their investments, which might mean understanding whether a fast food chain's newest locations are getting more foot traffic, or whether a new commercial real estate development is walkable from nearby homes. But perhaps unexpected on this list are the police.

According to a recent investigation from the Electronic Frontier Foundation and The Associated Press, a company called Fog Data Science has been gathering Americans' location data and selling it exclusively to local law enforcement agencies in the United States. Fog Data Science's tool—a subscription-based platform that charges clients for queries of the company's database—is called Fog Reveal. And according to Bennett Cyphers, one of the investigators who uncovered Fog Reveal through a series of public record requests, it's rather powerful.

"What [Fog Data Science] sells is, I would say, like a God view mode for the world... It's a map and you draw a shape on the map and it will show you every device that was in that area during a specified timeframe."

Today, on the Lock and Code podcast with host David Ruiz, we speak to Cyphers about how he and his organization uncovered a massive location data broker that seemingly works only with local law enforcement, how that broker collected Americans' data in the first place, and why that data is so easy to sell.

Tune in now.

You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use.

For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
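To make that "draw a shape on the map" description concrete, here is a minimal, illustrative sketch of the kind of bounding-box and time-window query Cyphers describes. The `Ping` type and the in-memory data it filters are hypothetical stand-ins, not anything taken from Fog Reveal itself:

```typescript
// Illustrative sketch only: once raw location pings sit in a database,
// a "show me every device in this area during this window" query is trivial.
type Ping = { deviceId: string; lat: number; lon: number; timestamp: number };

type BoundingBox = { minLat: number; maxLat: number; minLon: number; maxLon: number };

function devicesInArea(pings: Ping[], box: BoundingBox, fromMs: number, toMs: number): Set<string> {
  const seen = new Set<string>();
  for (const p of pings) {
    const inBox =
      p.lat >= box.minLat && p.lat <= box.maxLat &&
      p.lon >= box.minLon && p.lon <= box.maxLon;
    if (inBox && p.timestamp >= fromMs && p.timestamp <= toMs) {
      seen.add(p.deviceId); // every device observed in the area and window
    }
  }
  return seen;
}
```

The hard part for a data broker is amassing the pings at scale; the query itself, as the sketch suggests, is little more than a filter.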
Mar 27, 2023 • 38min

Solving the password’s hardest problem with passkeys, featuring Anna Pobletts

How many passwords do you have? If you're at all like our Lock and Code host David Ruiz, that number hovers around 200. But the important follow-up question is: How many of those passwords can you actually remember on your own? Prior studies suggest a number that sounds nearly embarrassing—probably around six.

After decades of relying on it, it turns out the password has problems, the biggest of which is that when users are forced to create a password for every online account, they resort to creating easy-to-remember passwords that are built around their pets' names, their addresses, even the word "password." Those same users then re-use those weak passwords across multiple accounts, opening them up to easy online attacks—often called credential stuffing—that rely on entering the compromised credentials from one online account to crack into an entirely separate online account. As if that weren't dangerous enough, passwords themselves are vulnerable to phishing attacks, where hackers can fraudulently pose as businesses that ask users to enter their login information on a website that looks legitimate, but isn't.

Thankfully, the cybersecurity industry has built a few safeguards around password use, such as multifactor authentication, which requires a second form of approval from a user beyond just entering their username and password. But, according to 1Password Head of Passwordless Anna Pobletts, many attempts at improving and replacing passwords have put extra work into the hands of users themselves:

"There's been so many different attempts in the last 10, 20 years to replace passwords or improve passwords and the security around [them]. But all of these attempts have been at the expense of the user."

For Pobletts, who is our latest guest on the Lock and Code podcast, there is a better option now available that does not trade security for ease-of-use. Instead, it ensures that the secure option for users is also the easy option. That option is the "passkey." Resistant to phishing attacks, secured behind biometrics, and free from any requirement that users create new ones on their own, passkeys could dramatically change our security for the better.

Today, we speak with Pobletts about whether we'll ever truly live in a passwordless future, along with what passkeys are, how they work, and which industries could see huge benefits from implementing them.

Tune in now.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
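For readers curious what this looks like under the hood, below is a minimal, illustrative sketch of passkey registration using the standard WebAuthn browser API. The relying-party details and user info are hypothetical placeholders, and a real deployment would fetch its challenge from the server rather than generating it locally:

```typescript
// A minimal sketch of passkey registration in the browser via WebAuthn.
// "example.com", the user fields, and the local challenge are assumptions
// for illustration; in production the challenge comes from your server.
async function registerPasskey(): Promise<Credential | null> {
  const options: CredentialCreationOptions = {
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in real use
      rp: { name: "Example App", id: "example.com" },        // hypothetical relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),      // hypothetical user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],   // ES256
      authenticatorSelection: {
        userVerification: "required", // gate the key behind biometrics or a device PIN
        residentKey: "required",      // discoverable credential, i.e. a passkey
      },
    },
  };
  // The browser and platform authenticator do the rest; the private key
  // is generated on, and never leaves, the user's device.
  return navigator.credentials.create(options);
}
```

Because the credential is bound to the site's origin and the private key never leaves the device, a look-alike phishing page cannot harvest it, and there is nothing re-usable to stuff into other accounts.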
Mar 13, 2023 • 48min

"Brad Pitt," a still body, ketchup, and a knife, or the best trick ever played on a romance scammer, with Becky Holmes

Becky Holmes knows how to throw a romance scammer off script—simply bring up cannibalism. In January, Holmes shared on Twitter that an account with the name "Thomas Smith" had started up a random chat with her that sounded an awful lot like the beginnins stages of a romance scam. But rather than instantly ignoring and blocking the advances—as Holmes recommends everyone do in these types of situations—she first had a little fun. "I was hoping that you'd let me eat a small part of you when we meet," Holmes said. "No major organs or anything obviously. I'm not weird lol." By just a few messages later, "Thomas Smith" had run off, refusing to respond to Holmes' follow-up requests about what body part she fancied, along with her preferred seasoning (paprika). Romance scams are a serious topic. In 2022, the US Federal Trade Commission reported that, in the five years prior, victims of romance scams had reported losing a collective $1.3 billion. In just 2021, that number was $547 million, and the average amount of money reported stolen per person was $2,400. Worse, romance scammers themselves often target vulnerable people, including seniors, widows, and the recently divorced, and they show no remorse when developing long-lasting online relationships, all bit on lies, so that they can emotionally manipulate their victims into handing over hundreds or thousands of dollars. But what would you do if you knew a romance scammer had contacted you and you, like our guest on today's Lock and Code podcast with host David Ruiz, had simply had enough? If you were Becky Holmes, you'd push back. For a couple of years now, Holmes has teased, mocked, strung along, and shut down online romance scammers, much of her work in public view as she shares some of her more exciting stories on Twitter. There's the romance scammer who she scared by not only accepting an invitation to meet, but ratcheting up the pressure by pretending to pack her bags, buy a ticket to Stockholm, and research venues for a perhaps too-soon wedding. There's the scammer she scared off by asking to eat part of his body. And, there's the story of the fake Brad Pitt:" My favorite story is Brad Pitt and the the dead tumble dryer repairman. And I honestly have to say, I don't think I'm ever going to top that. Every time ...I put a new tweet up, I think, oh, if only it was Brad Pitt and the dead body. I'm just never gonna get better."Tune in today to hear about Holmes' best stories, her first ever effort to push back, her insight into why she does what she does, and what you can do to spot a romance scam—and how to safely respond to one. You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog. And you can read our most recent report, the 2023 State of Malware, which reveals the top five cyberthreats targeting businesses this year, along with important data on how cybercriminals have responded to our industry’s increasing capabilities to keep them out. Download the report at malwarebytes.com/SoM. Show notes and credits:Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)Licensed under Creative Commons: By Attribution 4.0 Licensehttp://creativecommons.org/licenses/by/4.0/Outro Music: “Good God” by Wowa (unminus.com)
Feb 27, 2023 • 60min

Fighting censorship online, or, encryption’s latest surprise use-case, with Mallory Knodel

Government threats to end-to-end encryption—the technology that secures your messages and shared photos and videos—have been around for decades, but the most recent threats to this technology are unique in how they intersect with a broader, sometimes-global effort to control information on the Internet.

Take two efforts in the European Union and the United Kingdom. New proposals there would require companies to scan any content that their users share with one another for Child Sexual Abuse Material, or CSAM. If a company offers end-to-end encryption to its users, effectively locking itself out of being able to access the content that its users share, then it's tough luck for that company. It will still be required to find a way to do the essentially impossible—build a system that keeps everyone else out, while letting itself and the government in.

While these government proposals may sound similar to previous global efforts to weaken end-to-end encryption, like the United States' prolonged attempt to tarnish the technology by linking it to terrorist plots, they differ because of how easily they could become tools for censorship.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Mallory Knodel, chief technology officer for the Center for Democracy and Technology, about the new threats to encryption, the old, bad proposals that keep resurfacing, who encryption benefits (everyone), and how building a tool to detect one legitimate harm could, in turn, create a tool to detect all sorts of legal content that other governments simply do not like.

"In many places of the world where there's not such a strong feeling about individual and personal privacy, sometimes that is replaced by an inability to access mainstream media, news, accurate information, and so on, because there's a heavy censorship regime in place," Knodel said. "And I think that drawing that line between 'You're going to censor child sexual abuse material, which is illegal and disgusting and we want it to go away,' but it's so very easy to slide that knob over into 'Now you're also gonna block disinformation,' and you might at some point, take it a step further and block other kinds of content, too, and you just continue down that path."

Knodel continued:

"Then you do have a pretty easy way of mass-censoring certain kinds of content from the Internet that probably shouldn't be censored."

Tune in today.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
Feb 13, 2023 • 45min

What is AI ”good” at (and what the heck is it, actually), with Josh Saxe

In November of last year, the AI research and development lab OpenAI revealed its latest, most advanced language project: a tool called ChatGPT.

ChatGPT is so much more than "just" a chatbot. As users have shown with repeated testing and prodding, ChatGPT seems to "understand" things. It can give you recipes that account for whatever dietary restrictions you have. It can deliver basic essays about moments in history. It can be—and has been—used to cheat by university students who are giving a new meaning to plagiarism, stealing work that is not theirs. It can write song lyrics about X topic as though composed by Y artist. It can even have fun with language.

For example, when ChatGPT was asked to “Write a Biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR,” ChatGPT responded in part:

“And it came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR, and he knew not how to remove it. And he cried out to the Lord, saying ‘Oh Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge.’”

Is this fun? Yes. Is it interesting? Absolutely. But what we're primarily interested in, on today's episode of Lock and Code with host David Ruiz, is where artificial intelligence and machine learning—ChatGPT included—can be applied to cybersecurity, because as some users have already discovered, ChatGPT can be used with some success to analyze lines of code for flaws. It is a capability that has likely further energized the multibillion-dollar endeavor to apply AI to cybersecurity.

Today, on Lock and Code, we speak to Joshua Saxe about what machine learning is "good" at, what problems it can make worse, whether we have defenses to those problems, and what place machine learning and artificial intelligence have in the future of cybersecurity. According to Saxe, there are some areas where, under certain conditions, machine learning will never be able to compete.

"If you're, say, gonna deploy a set of security products on a new computer network that's never used your security products before, and you want to detect, for example, insider threats—like insiders moving files around in ways that look suspicious—if you don't have any known examples of people at the company doing that, and also examples of people not doing that, and if you don't have thousands of known examples of people at the company doing that, that are current and likely to reoccur in the future, machine learning is just never going to compete with just manually writing down some heuristics around what we think bad looks like."

Saxe continued:

"Because basically in this case, the machine learning is competing with the common sense model of the world and expert knowledge of a security analyst, and there's no way machine learning is gonna compete with the human brain in this context."

Tune in today.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
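As a rough illustration of the alternative Saxe describes (expert knowledge written down as rules, rather than learned from labeled examples), here is a hedged sketch of an insider-threat heuristic. The event shape, paths, and thresholds are invented for the example and not drawn from any real product:

```typescript
// A sketch of the kind of hand-written heuristic Saxe describes: flagging
// suspicious internal file movement with rules instead of a trained model.
// FileEvent, the paths, and the thresholds are assumptions for illustration.
type FileEvent = { user: string; path: string; bytesCopied: number; hour: number };

function looksSuspicious(e: FileEvent): boolean {
  const offHours = e.hour < 6 || e.hour > 22;     // activity outside working hours
  const bulkCopy = e.bytesCopied > 1_000_000_000; // roughly 1 GB moved at once
  const sensitive = e.path.includes("/finance/") || e.path.includes("/hr/");
  // Analyst expertise encoded directly; no labeled training data required.
  return (offHours && sensitive) || (bulkCopy && sensitive);
}
```

The trade-off Saxe points to is visible here: rules like these need no training examples at all, but they only catch what an analyst already thought to write down.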
Jan 30, 2023 • 46min

A private moment, caught by a Roomba, ended up on Facebook. Eileen Guo explains how

In 2020, a photo of a woman sitting on a toilet—her shorts pulled half-way down her thighs—was shared on Facebook, and it was shared by someone whose job it was to look at that photo and, by labeling the objects in it, help train an artificial intelligence system for a vacuum.

Bizarre? Yes. Unique? No.

In December, MIT Technology Review investigated the data collection and sharing practices of the company iRobot, the developer of the popular Roomba robot vacuums. In their reporting, MIT Technology Review discovered a series of 15 images that were all captured by development versions of Roomba vacuums. Those images were eventually shared with third-party contractors in Venezuela who were tasked with "annotation"—the act of labeling photos with identifying information. This work of, say, tagging a cabinet as a cabinet, or a TV as a TV, or a shelf as a shelf, would help the robot vacuums "learn" about their surroundings when inside people's homes.

In response to MIT Technology Review's reporting, iRobot stressed that none of the images found by the outlet came from customers. Instead, the images were "from iRobot development robots used by paid data collectors and employees in 2020." That meant that the images were from people who agreed to be part of a testing or "beta" program for non-public versions of the Roomba vacuums, and that everyone who participated had signed an agreement as to how iRobot would use their data.

According to the company's CEO in a post on LinkedIn: "Participants are informed and acknowledge how the data will be collected."

But after MIT Technology Review published its investigation, people who'd previously participated in iRobot's testing environments reached out. According to several of them, they felt misled.

Today, on the Lock and Code podcast with host David Ruiz, we speak with the investigative reporter of the piece, Eileen Guo, about how all of this happened, and about how, she said, this story illuminates a broader problem in data privacy today.

"What this story is ultimately about is that conversations about privacy, protection, and what that actually means, are so lopsided because we just don't know what it is that we're consenting to."

Tune in today.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)
