Lock and Code

Malwarebytes
Aug 14, 2023 • 38min

A new type of "freedom," or, tracking children with AirTags, with Heather Kelly

"Freedom" is a big word, and for many parents today, it's a word that includes location tracking. Across America, parents are snapping up Apple AirTags, the inexpensive location tracking devices that can help owners find lost luggage, misplaced keys, and—increasingly so—roving toddlers setting out on mini-adventures. The parental fear right now, according to The Washington Post technology reporter Heather Kelly, is that "anybody who can walk, therefore can walk away." Parents wanting to know what their children are up to is nothing new. Before the advent of the Internet—and before the creation of search history—parents read through diaries. Before GPS location tracking, parents called the houses that their children were allegedly staying at. And before nearly every child had a smart phone that they could receive calls on, parents relied on a much simpler set of tools for coordination: Going to the mall, giving them a watch, and saying "Be at the food court at noon." But, as so much parental monitoring has moved to the digital sphere, there's a new problem: Children become physically mobile far faster than they become responsible enough to own a mobile. Enter the AirTag: a small, convenient device for parents to affix to toddlers' wrists, place into their backpacks, even sew into their clothes, as Kelly reported in her piece for The Washington Post. In speaking with parents, families, and childcare experts, Kelly also uncovered an interesting dynamic. Parents, she reported, have started relying on Apple AirTags as a means to provide freedom, not restrictions, to their children. Today, on the Lock and Code podcast with host David Ruiz, we speak with Kelly about why parents are using AirTags, how childcare experts are reacting to the recent trend, and whether the devices can actually provide a balm to increasingly stressed parents who may need a moment to sit back and relax. 
Or, as Kelly said:"In the end, parents need to chill—and if this lets them chill, and if it doesn't impact the kids too much, and it lets them go do silly things like jumping in some puddles with their friends or light, really inconsequential shoplifting, good for them."Tune in today. You can also find us on Apple Podcasts, Spotify, and whatever preferred podcast platform you use.For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.Show notes and credits:Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)Licensed under Creative Commons: By Attribution 4.0 Licensehttp://creativecommons.org/licenses/by/4.0/Outro Music: “Good God” by Wowa (unminus.com)
Jul 31, 2023 • 40min

How Apple fixed what Microsoft hasn't, with Thomas Reed

Earlier this month, a group of hackers was spotted using a set of malicious tools—which originally gained popularity with online video game cheaters—to hide their Windows-based malware from being detected.

Sounds unique, right? Frustratingly, it isn't, as the specific security loophole abused by the hackers has been around for years, and Microsoft's response, or lack thereof, is actually a telling illustration of the competing security environments within Windows and macOS. Even more perplexing is the fact that Apple dealt with a similar issue nearly 10 years ago, locking down the way that certain external tools are given permission to run alongside the operating system's critical, core internals.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes' own Director of Core Tech Thomas Reed about everyone's favorite topic: Windows vs. Mac. But this isn't a conversation about the original iPod vs. Microsoft's Zune (we're sure you can find countless, 4-hour diatribes on YouTube for that), but instead about how the companies behind these operating systems can respond to security issues in their own products.

Because it isn't fair to say that Apple or Microsoft is wholesale "better" or "worse" about security. Instead, they're hampered by their users and their core market segments—Apple excels in the consumer market, whereas Microsoft excels with enterprises. And when your customers include hospitals, government agencies, and pretty much any business over a certain headcount, well, that comes with complications in deciding how to address security problems without leaving those same customers behind.

Still, there's little excuse for leaving open the type of loophole that Windows has, said Reed: "Apple has done something that was pretty inconvenient for developers, but it really secured their customers because it basically meant we saw a complete stop in all kernel-level malware. It just shows you [that] it can be done.
You're gonna break some eggs in the process, and Microsoft has not done that yet... They're gonna have to."

Tune in today.
Jul 17, 2023 • 39min

Spy vs. spy: Exploring the LetMeSpy hack, with maia arson crimew

The language of a data breach, no matter what company gets hit, is largely the same. There's the stolen data—be it email addresses, credit card numbers, or even medical records. There are the users—unsuspecting, everyday people who, through no fault of their own, mistakenly put their trust into a company, platform, or service to keep their information safe. And there are, of course, the criminals. Some operate in groups. Some act alone. Some steal data as a means of extortion. Others steal it as a point of pride. All of them, it appears, take something that isn't theirs.

But what happens if a cybercriminal takes something that may have already been stolen?

In late June, a mobile app that can, without consent, pry into text messages, monitor call logs, and track GPS location history warned its users that its services had been hacked. Email addresses, telephone numbers, and the content of messages were swiped, but how that information was originally collected requires scrutiny. That's because the app itself, called LetMeSpy, is advertised as a parental and employer monitoring app, to be installed on the devices of other people that LetMeSpy users want to track. Want to read your child's text messages? LetMeSpy says it can help. Want to see where they are? LetMeSpy says it can do that, too. What about employers who are interested in the vague idea of "control and safety" of their business? Look no further than LetMeSpy, of course.

While LetMeSpy's website tells users that "phone control without your knowledge and consent may be illegal in your country" (it is in the US and many, many other countries), the app also claims that it can hide itself from view from the person being tracked. And that feature, in particular, is one of the more tell-tale signs of "stalkerware."
Stalkerware is a term used by the cybersecurity industry to describe mobile apps, primarily on Android, that can access a device's text messages, photos, videos, call records, and GPS locations without the device owner knowing about said surveillance. These types of apps can also automatically record every phone call made and received by a device, turn off a device's WiFi, and take control of the device's camera and microphone to snap photos or record audio—all without the victim knowing that their phone has been compromised. Stalkerware poses a serious threat—particularly to survivors of domestic abuse—and Malwarebytes has defended users against these types of apps for years. But the hacking of an app with similar functionality raises questions.

Today, on the Lock and Code podcast with host David Ruiz, we speak with the hacktivist and security blogger maia arson crimew about the data that was revealed in LetMeSpy's hack, the almost-clumsy efforts by developers to make and market these apps online, and whether this hack—and others in the past—are "good."

As crimew said: "I'm the person on the podcast who can say 'We should hack things,' because I don't work for Malwarebytes. But the thing is, I don't think there really is any other way to get info in this industry."

Tune in today.
Jul 3, 2023 • 43min

Of sharks, surveillance, and spied-on emails: This is Section 702, with Matthew Guariglia

In the United States, when the police want to conduct a search on a suspected criminal, they must first obtain a search warrant. It is one of the foundational rights given to US persons under the Constitution, and a concept that has helped create the very idea of a right to privacy at home and online.

But sometimes, individualized warrants are never issued, never asked for, never really needed, depending on which government agency is conducting the surveillance, and for what reason. Every year, countless emails, social media DMs, and likely mobile messages are swept up by the US National Security Agency—even if those communications involve a US person—without any significant warrant requirement. Those digital communications can be searched by the FBI. The information the FBI gleans from those searches can be used to prosecute Americans for crimes. And when the NSA or FBI make mistakes—which they do—there is little oversight.

This is surveillance under a law and authority called Section 702 of the FISA Amendments Act. The law and the regime it has enabled are opaque. There are definitions for "collection" of digital communications, for "queries" and "batch queries," rules for which government agency can ask for what type of intelligence, references to types of searches that were allegedly ended several years ago, "programs" that determine how the NSA grabs digital communications—by requesting them from companies or by directly tapping into the very cables that carry the Internet across the globe—and an entire, secret court that has only rarely released its opinions to the public.
Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Senior Policy Analyst Matthew Guariglia about what the NSA can grab online, whether its agents can read that information and who they can share it with, and how a database that was ostensibly created to monitor foreign intelligence operations became a tool for investigating Americans at home.

As Guariglia explains: "In the United States, if you collect any amount of data, eventually law enforcement will come for it, and this includes data that is collected by intelligence communities."

Tune in today.
Jun 19, 2023 • 42min

Why businesses need a disinformation defense plan, with Lisa Kaplan: Lock and Code S04E13

When you think about the word "cyberthreat," what first comes to mind? Is it ransomware? Is it spyware? Maybe it's any collection of the infamous viruses, worms, Trojans, and botnets that have crippled countless companies throughout modern history. In the future, though, what many businesses might first think of is something new: disinformation.

Back in 2021, in speaking about threats to businesses, the former director of the US Cybersecurity and Infrastructure Security Agency, Chris Krebs, told news outlet Axios: "You've either been the target of a disinformation attack or you are about to be."

That same year, the consulting and professional services firm PricewaterhouseCoopers released a report on disinformation attacks against companies and organizations, and it found that these types of attacks were far more common than most of the public realized. From the report:

"In one notable instance of disinformation, a forged US Department of Defense memo stated that a semiconductor giant's planned acquisition of another tech company had prompted national security concerns, causing the stocks of both companies to fall. In other incidents, widely publicized unfounded attacks on a businessman caused him to lose a bidding war, a false news story reported that a bottled water company's products had been contaminated, and a foreign state's TV network falsely linked 5G to adverse health effects in America, giving the adversary's companies more time to develop their own 5G network to compete with US businesses."

Disinformation is here, and as much of it happens online—through coordinated social media posts and fast-made websites—it can truly be considered a "cyberthreat." But what does that mean for businesses?
Today, on the Lock and Code podcast with host David Ruiz, we speak with Lisa Kaplan, founder and CEO of Alethea, about how organizations can prepare for a disinformation attack, and what they should be thinking about in the intersection between disinformation, malware, and cybersecurity. Kaplan said:

"When you think about disinformation in its purest form, what we're really talking about is people telling lies and hiding who they are in order to achieve objectives, and doing so in a deliberate and malicious way. I think that this is more insidious than malware. I think it's more pervasive than traditional cyber attacks, but I don't think that you can separate disinformation from cybersecurity."

Tune in today.
Jun 5, 2023 • 44min

Trusting AI not to lie: The cost of truth

In May, a lawyer who was defending their client in a lawsuit against Colombia's biggest airline, Avianca, submitted a legal filing before a court in Manhattan, New York, that listed several previous cases as support for their main argument to continue the lawsuit. But when the court reviewed the lawyer's citations, it found something curious: several were entirely fabricated. The lawyer in question had gotten the help of another attorney who, in scrounging around for legal precedent to cite, utilized the "services" of ChatGPT.

ChatGPT was wrong. So why do so many people believe it's always right?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes security evangelist Mark Stockley and Malwarebytes Labs editor-in-chief Anna Brading to discuss the potential consequences of companies and individuals embracing natural language processing tools—like ChatGPT and Google's Bard—as arbiters of truth. Far from being understood simply as chatbots that can produce remarkable mimicries of human speech and dialogue, these tools are becoming sources of truth for countless individuals, while also gaining traction amongst companies that see artificial intelligence (AI) and large language models (LLMs) as the future, no matter what industry they operate in.

The future could look eerily similar to an earlier change in translation services, said Stockley, who witnessed the rapid displacement of human workers in favor of basic AI tools. The tools were far, far cheaper, but the quality of the translations—of the truth, Stockley said—was worse. "That is an example of exactly this technology coming in and being treated as the arbiter of truth in the sense that there is a cost to how much truth we want."

Tune in today.
May 22, 2023 • 48min

Identity crisis: How an anti-porn crusade could jam the Internet, featuring Alec Muffett

On January 1, 2023, the Internet in Louisiana looked a little different than the Internet in Texas, Mississippi, and Arkansas—its next-door state neighbors. And on May 1, the Internet in Utah looked quite different, depending on where you looked, than the Internet in Arizona, or Idaho, or Nevada, or California or Oregon or Washington or, really, much of the rest of the United States.

The changes are, ostensibly, over pornography. In Louisiana, today, visitors to the online porn site PornHub are asked to verify their age before they can access the site, and that age verification process hinges on a state-approved digital ID app called LA Wallet. In the United Kingdom, sweeping changes to the Internet are being proposed that would similarly require porn sites to verify the ages of their users to keep kids from seeing sexually explicit material. And in Australia, similar efforts to require age verification for adult websites might come hand-in-hand with the deployment of a government-issued digital ID.

But the larger problem with all these proposals is not that they would make a new Internet only for children, but a new Internet for everyone.

Look no further than Utah. On May 1, after new rules came into effect requiring porn sites to verify the ages of their users, PornHub decided to refuse to comply with the law and, instead, to block access to the site for anyone visiting from an IP address based in Utah. If you're in Utah right now, connecting to the Internet with an IP address located in Utah, you cannot access PornHub. Instead, you're presented with a message from adult film star Cheri Deville, who explains that: "As you may know, your elected officials have required us to verify your age before granting you access to our website.
While safety and compliance are at the forefront of our mission, giving your ID card every time you want to visit an adult platform is not the most effective solution for protecting our users, and in fact, will put children and your privacy at risk."

Today, on the Lock and Code podcast with host David Ruiz, we speak with longtime security researcher Alec Muffett (who has joined us before to talk about Tor) to understand what is behind these requests to change the Internet, what flaws he's seen in studying past age verification proposals, and whether many members of the public are worrying about the wrong thing in trying to solve a social issue with technology.

"The battle cry of these people has always been—either directly or mocked as being—'Could somebody think of the children?' And I'm thinking about the children because I want my daughter to grow up with an untracked, secure, private internet when she's an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity."

Muffett continued: "I'm trying to protect that for her. I'd like to see more people grasping for that."

Tune in today.

Additional Resources and Links for today's episode:
"A Sequence of Spankingly Bad Ideas." - An analysis of age verification technology presentations from 2016. Alec Muffett.
"Adults might have to buy £10 ‘porn passes’ from newsagents to prove their age online." - The United Kingdom proposes an "adult pass" for purchase in 2018 to comply with earlier efforts for online age verification. Metro.
"Age verification won't block porn. But it will spell the end of ethical porn." - An independent porn producer explains how compliance costs for age verification could shut down small outfits that make, film, and sell ethical pornography. The Guardian.
"Minnesota’s Attempt to Copy California’s Constitutionally Defective Age Appropriate Design Code is an Utter Fail." - Age verification creeps into US proposals. Technology and Marketing Law Blog, run by Eric Goldman.
"Nationwide push to require social media age verification raises questions about privacy, industry standards." - Cyberscoop.
"The Fundamental Problems with Social Media Age Verification Legislation." - R Street Institute.
YouTube's age verification in action. - Various methods and requirements shown in Google's Support center for ID verification across the globe.
"When You Try to Watch Pornhub in Utah, You See Me Instead. Here’s Why." - Cheri Deville's call for specialized phones for minors. Rolling Stone.
May 8, 2023 • 51min

The rise of "Franken-ransomware," with Allan Liska

Ransomware is becoming bespoke, and that could mean trouble for businesses and law enforcement investigators.

It wasn't always like this. For a few years now, ransomware operators have congregated around a relatively new model of crime called "Ransomware-as-a-Service." In the Ransomware-as-a-Service model, or RaaS model, ransomware itself is not delivered to victims by the same criminals that make the ransomware. Instead, it is used almost "on loan" by criminal groups called "affiliates" who carry out attacks with the ransomware and, if successful, pay a share of their ill-gotten gains back to the ransomware's creators.

This model allows ransomware developers to significantly increase their reach and their illegal hauls. By essentially leasing out their malicious code to smaller groups of cybercriminals around the world, the ransomware developers can carry out more attacks, steal more money from victims, and avoid any isolated law enforcement action that would put their business in the ground, as the arrest of one affiliate group won't stop the work of dozens of others.

And not only do ransomware developers lean on other cybercriminals to carry out attacks, they also rely on an entire network of criminals to carry out smaller, specialized tasks. There are "Initial Access Brokers" who break into company networks and then sell that illegal method of access online. "You also have coders that you can contract out to," Liska said. "You have pen testers that you can contract out to. You can contract negotiators if you want. You can contract translators if you want."

But as Liska explained, as the ransomware "business" spreads out, so do new weak points: disgruntled criminals. "This whole underground marketplace that exists to serve ransomware means that your small group can do a lot," Liska said. "But that also means that you are entrusting the keys to your kingdom to these random contractors that you're paying in Bitcoin every now and then.
And that, for example, is why the LockBit code got leaked—dude didn't pay his contractor."

With plenty of leaked code now circulating online, some smaller cybercriminal gangs have taken to making minor alterations and then sending that new variant of ransomware out into the world—no affiliate model needed. "Most of what we see is just repurposed code, and we see a lot of what I call 'Franken-ransomware.'"

Today, on the Lock and Code podcast with host David Ruiz, Liska explains why Franken-ransomware poses unique challenges to future victims, cybersecurity companies, and law enforcement investigators.

Tune in today.
Apr 24, 2023 • 41min

Removing the human: When should AI be used in emotional crisis?

In January, a mental health nonprofit admitted that it had used artificial intelligence to help talk to people in distress. Prompted first by a user's longing for personal improvement—and the difficulties involved in that journey—the AI tool generated a reply which, with human intervention, could be sent verbatim in a chat box, or edited and fine-tuned to better fit the situation. The AI said:

"I hear you. You're trying to become a better person and it's not easy. It's hard to make changes in our lives, especially when we're trying to do it alone. But you're not alone. There are people here who care about you and want to help you. I'm proud of you for making the decision to try to improve your life. It takes a lot of courage and strength. I send you love and support as you go through this journey."

This was experimental work from Koko, a mental health nonprofit that integrated the GPT-3 large language model into its product for a short, now-concluded period of time. In a video demonstration posted on Twitter earlier this year, Koko co-founder Rob Morris revealed that the nonprofit had used AI to provide "mental health support to about 4,000 people" across "about 30,000 messages." Though Koko pulled GPT-3 from its system after a reportedly short period of time, Morris said on Twitter that the experience left several open questions. "The implications here are poorly understood," Morris said. "Would people eventually seek emotional support from machines, rather than friends and family?"

Today, on the Lock and Code podcast with host David Ruiz, we speak with Courtney Brown, a social services administrator with a history in research and suicidology, to dig into the ethics, feasibility, and potential consequences of relying increasingly on AI tools to help people in distress. For Brown, the immediate implications draw up several concerns. "It disturbed me to see AI using 'I care about you,' or 'I'm concerned,' or 'I'm proud of you.'
That made me feel sick to my stomach. And I think it was partially because these are the things that I say, and it's partially because I think that they're going to lose power as a form of connecting to another human."

But, importantly, Brown is not the only voice in today's podcast with experience in crisis support. For six years and across 1,000 hours, Ruiz volunteered on his local suicide prevention hotline. He, too, has a background to share.

Tune in today as Ruiz and Brown explore the boundaries for deploying AI on people suffering from emotional distress, whether the "support" offered by any AI will be as helpful and genuine as that of a human, and, importantly, whether they are simply afraid of having AI encroach on the most human experiences.
Apr 10, 2023 • 47min

How the cops buy a "God view" of your location data, with Bennett Cyphers

The list of people and organizations that are hungry for your location data—collected so routinely and packaged so conveniently that it can easily reveal where you live, where you work, where you shop, pray, eat, and relax—includes many of the usual suspects.

Advertisers, obviously, want to send targeted ads to you, and they believe those ads have a better success rate if they're sent to, say, someone who spends their time at a fast-food drive-through on the way home from the office, as opposed to someone who doesn't, or someone who's visited a high-end department store, or someone who, say, vacations regularly at expensive resorts. Hedge funds, interestingly, are also big buyers of location data, constantly seeking a competitive edge in their investments, which might mean understanding whether a fast food chain's newest locations are getting more foot traffic, or whether a new commercial real estate development is walkable from nearby homes.

But perhaps unexpected on this list is police.

According to a recent investigation from Electronic Frontier Foundation and The Associated Press, a company called Fog Data Science has been gathering Americans' location data and selling it exclusively to local law enforcement agencies in the United States. Fog Data Science's tool—a subscription-based platform that charges clients for queries of the company's database—is called Fog Reveal. And according to Bennett Cyphers, one of the investigators who uncovered Fog Reveal through a series of public record requests, it's rather powerful.

"What [Fog Data Science] sells is, I would say, like a God view mode for the world...
It's a map, and you draw a shape on the map, and it will show you every device that was in that area during a specified timeframe."

Today, on the Lock and Code podcast with host David Ruiz, we speak to Cyphers about how he and his organization uncovered a massive location data broker that seemingly works only with local law enforcement, how that data broker collected Americans' data in the first place, where this data comes from, and why it is so easy to sell.

Tune in now.
