
The Cyberlaw Podcast

Latest episodes

Feb 16, 2024 • 1h 4min

Death, Taxes, and Data Regulation

On the latest episode of The Cyberlaw Podcast, guest host Brian Fleming, along with panelists Jane Bambauer, Gus Hurwitz, and Nate Jones, discuss the latest U.S. government efforts to protect sensitive personal data, including the FTC’s lawsuit against data broker Kochava and the forthcoming executive order restricting certain bulk sensitive data flows to China and other countries of concern. Nate and Brian then discuss whether Congress has a realistic path to end the Section 702 reauthorization standoff before the April expiration and debate what to make of a recent multilateral meeting in London to discuss curbing spyware abuses. Gus and Jane then talk about the big news for cord-cutting sports fans, as well as Amazon’s ad data deal with Reach, in an effort to understand some broader difficulties facing internet-based ad and subscription revenue models. Nate considers the implications of Ukraine’s “defend forward” cyber strategy in its war against Russia. Jane next tackles a trio of stories detailing challenges, of the policy and economic varieties, facing Meta on the content moderation front, as well as an emerging problem policing sexual assaults in the Metaverse. Bringing it back to data, Gus wraps the news roundup by highlighting a novel FTC case brought against Blackbaud stemming from its data retention practices. In this week’s quick hits, Gus and Jane reflect on the FCC’s ban on AI-generated voice cloning in robocalls, Nate touches on an alert from CISA and FBI on the threat presented by Chinese hackers to critical infrastructure, Gus comments on South Korea’s pause on implementation of its anti-monopoly platform act and the apparent futility of nudges (with respect to climate change attitudes or otherwise), and finally Brian closes with a few words on possible broad U.S. import restrictions on Chinese EVs and how even the abundance of mediocre AI-related ads couldn’t ruin Taylor Swift’s Super Bowl.   
Download 491st Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Feb 6, 2024 • 54min

Serious threats, unserious responses

It was a week of serious cybersecurity incidents paired with unimpressive responses. As Melanie Teplinsky reminds us, the U.S. government has been agitated for months about China’s apparent strategic decision to hold U.S. infrastructure hostage to cyberattack in a crisis. Now the government has struck back at Volt Typhoon, the Chinese threat actor pursuing that strategy. It recently claimed to have disrupted a Volt Typhoon botnet by taking over a batch of compromised routers. Andrew Adams explains how the takeover was managed through the court system. It was a lot of work, and there is reason to doubt the effectiveness of the effort. The compromised routers can be re-compromised if they are turned off and on again. And the only ones that were fixed by the U.S. seizure are within U.S. jurisdiction, leaving open the possibility of DDoS attacks from abroad. And, really, how vulnerable is our critical infrastructure to DDoS attack? I argue that there’s a serious disconnect between the government’s hair-on-fire talk about Volt Typhoon and its business-as-usual response. Speaking of cyberstuff we could be overestimating, Taiwan just had an election that China cared a lot about. According to one detailed report, China threw a lot of cyber at Taiwanese voters without making much of an impression. Richard Stiennon and I mix it up over whether China would do better in trying to influence the 2024 outcome here. While we’re covering humdrum responses to cyberattacks, Melanie explains U.S. sanctions on Iranian military hackers for their hack of U.S. water systems. For comic relief, Richard lays out the latest drama around the EU AI Act, now being amended in a series of backroom deals and informal promises. I predict that the effort to pile incoherent provisions on top of anti-American protectionism will not end in a GDPR-style triumph for Europe, whose market is now small enough for AI companies to ignore if the regulatory heat is turned up arbitrarily. The U.S.
is not the only player whose response to cyberintrusions is looking inadequate this week. Richard explains Microsoft’s recent disclosure of a Midnight Blizzard attack on the company and a number of its customers. The company’s obscure explanation of how its technology contributed to the attack and, worse, its effort to turn the disaster into an upsell opportunity earned Microsoft a patented Alex Stamos spanking.  Andrew explains the recent Justice Department charges against three people who facilitated the big $400m FTX hack that coincided with the exchange’s collapse. Does that mean it wasn’t an inside job? Not so fast, Andrew cautions. The government didn’t recover the $400m, and it isn’t claiming the three SIM-swappers it has charged are the only conspirators. Melanie explains why we’ve seen a sudden surge in state privacy legislation. It turns out that industry has stopped fighting the idea of state privacy laws and is now selling a light-touch model law that skips things like private rights of action. I give a lick and a promise to a “privacy” regulation now being pursued by CFPB for consumer financial information. I put privacy in quotes, because it’s really an opportunity to create a whole new market for data that will assure better data management while breaking up the advantage of incumbents’ big data holdings. Bruce Schneier likes the idea. So do I, in principle, except that it sounds like a massive re-engineering of a big industry by technocrats who may not be quite as smart as they think they are. Bruce, if you want to come on the podcast to explain the whole thing, send me an email! Spies are notoriously nasty, and often petty, but surely the nastiest and pettiest of American spies, Joshua Schulte, was sentenced to 40 years in prison last week. Andrew has the details. There may be some good news on the ransomware front. More victims are refusing to pay. Melanie, Richard, and I explore ways to keep that trend going. 
I continue to agitate for consideration of a tax on ransom payments. I also flag a few new tech regulatory measures likely to come down the pike in the next few months. I predict that the FCC will use the TCPA to declare the use of AI-generated voices in robocalls illegal. And Amazon is likely to find itself held liable for the safety of products sold by third parties on the Amazon platform. Finally, a few quick hits:

Amazon has abandoned its iRobot acquisition, thanks to EU “competition” regulators, with the likely result that iRobot will cease competing.

David Kahn, who taught us all the romance of cryptology, has died at 93.

Air Force Lt. Gen. Timothy Haugh is taking over Cyber Command and NSA from Gen. Nakasone.

And for those suffering from Silicon Valley Envy (lookin’ at you, Brussels), 23andMe offers a small corrective. The company is now a rare “reverse unicorn,” having fallen in value from $6 billion to practically nothing.

Download 490th Episode (mp3)
Jan 30, 2024 • 1h 12min

Going Deep on Deep Fakes—Plus a Bonus Interview with Rob Silvers on the Cyber Safety Review Board.

It was a big week for deep fakes generated by artificial intelligence. Sultan Meghji, who’s got a new AI startup, walked us through four stories that illustrate the ways AI will lead to more confusion about who’s really talking to us. First, a fake Biden robocall urged people not to vote in the New Hampshire primary. Second, a bot purporting to offer Dean Phillips’s views on the issues was sanctioned by OpenAI because it didn’t have Phillips’s consent. Third, fake nudes of Taylor Swift led to a ban on Twitter searches for her image. And, finally, podcasters used AI to resurrect George Carlin and got sued by his family. The moral panic over AI fakery meant that all of these stories were long on “end of the world” and short on “we’ll live through this.” Regulators of AI are not doing a better job of maintaining perspective. Mark MacCarthy reports that New York City’s AI hiring law, which has punitive disparate-impact disclosure requirements for automated hiring decision engines, seems to have persuaded NYC employers that they aren’t making any automated hiring decisions, so they don’t have to do any disclosures. Not to be outdone, the European Court of Justice has decided that pretty much any tool to aid in decisions is likely to be an automated decision-making technology subject to special (and mostly nonsensical) data protection rules. Is AI regulation creating its own backlash? Could be. Sultan and I report on a very plausible Republican plan to attack the Biden AI executive order on the ground that the statute its main enforcement mechanism relies on, the Defense Production Act, simply doesn’t authorize what the order calls for. Speaking of regulation, Maury Shenk covers the EU’s application of the Digital Markets Act to big tech companies like Apple and Google. Apple isn’t used to being treated like just another company, and its contemptuous response to the EU’s rules for its app market could easily lead to regulatory sanctions.
Looking at Apple’s proposed compliance with the California court ruling in the Epic case and the European Digital Markets Act, Mark says it’s time to think about price-regulating mobile app stores. Even handing out big checks to technology companies turns out to be harder than it first sounds. Sultan and I talk about the slow pace of payments to chip makers, and the political imperative to get the deals done before November (and probably before March). Senator Ron Wyden (D-Ore.) is still flogging NSA and the danger of government access to personal data. This time, he’s on about NSA’s purchases of commercial data. So far, so predictable. But this time, he’s misrepresented the facts by saying without restriction that NSA buys domestic metadata, omitting NSA’s clear statement that its netflow “domestic” data consists of communications with one end outside the country. Maury and I review an absent colleague’s effort to construct a liability regime for insecure software. Jim Dempsey’s proposal looks quite reasonable, but Maury reminds me that he and I produced something similar twenty years ago, and it’s not even close to adoption anywhere in the U.S. I can’t help but rant about Amazon’s arrogant, virtue-signaling, and customer-hating decision to drop a feature that makes it easy for Ring doorbell users to share their videos with the police. Whose data is it, anyway, Amazon? Sadly, we know the answer. It looks as though there’s only one place where hasty, ill-conceived tech regulation is being rolled back. Maury reports on the People’s Republic of China, which canned its video game regulations, and its video game regulator for good measure, and started approving new games at a rapid clip, after a proposed regulatory crackdown knocked more than $60 billion off the value of its industry.
We close the news roundup with a few quick hits:

Outside of AI, VCs are closing their wallets and letting startups run out of money.

Apple launched an expensive dud: the Vision Pro.

Quantum winter may be back, as quantum computing turns out to be harder than hoped.

Speaking of winter, self-driving cars are going to need snow tires to get through the latest market and regulatory storms overtaking companies like Cruise.

Finally, as a listener bonus, we turn to Rob Silvers, Under Secretary for Policy at the Department of Homeland Security and Chair of the Cyber Safety Review Board (CSRB). Under Rob’s leadership, DHS has proposed legislation to give the CSRB a legislative foundation. The Senate homeland security committee recently held a hearing about that idea. Rob wasn’t invited, so we asked him to come on the podcast to respond to issues that the hearing raised – conflicts of interest, subpoena power, choosing the incidents to investigate, and more.

Download 489th Episode (mp3)
Jan 23, 2024 • 45min

High Court, High Stakes for Cybersecurity

The Supreme Court heard argument last week in two cases seeking to overturn the Chevron doctrine that defers to administrative agencies in interpreting the statutes that they administer. The cases have nothing to do with cybersecurity, but Adam Hickey thinks they’re almost certain to have a big effect on cybersecurity policy. That’s because Chevron is going to take a beating, if it survives at all. That means it will be much tougher to repurpose existing law to deal with new regulatory problems. Given how little serious cybersecurity legislation has been passed in recent years, any new cybersecurity regulation is bound to require some stretching of existing law – and to be easier to challenge. Case in point: Even without a new look at Chevron, the EPA was blocked in court when it tried to stretch its authorities to cover cybersecurity rules for water companies. Now, Kurt Sanger tells us, EPA, FBI, and CISA have combined to release cybersecurity guidance for the water sector. The guidance is pretty generic, and there’s no reason to think that underfunded water companies will actually take it to heart. Given Iran’s interest in causing aggravation and maybe worse in that sector, Congress is almost certainly going to feel pressure to act on the problem. CISA’s emergency cybersecurity directives to federal agencies are a library of flaws that are already being exploited. As Adam points out, what’s especially worrying is how quickly patches are being turned into attacks and deployed. I wonder how sustainable the current patch system will prove to be. In fact, it’s already unsustainable; we just don’t have anything to replace it. The good news is that the Russians have been surprisingly bad at turning flaws into serious infrastructure problems even for a wartime enemy like Ukraine.
Additional information about Russia’s attack on Ukraine’s largest telecom provider suggests that the cost to get infrastructure back was less than the competitive harm the carrier suffered in trying to win its customers back. Companies are starting to report breaches under the new, tougher SEC rule, and Microsoft is out of the gate early, Adam tells us. Russian hackers stole the company’s corporate emails, it says, but it insists the breach wasn’t material. I predict we’ll see a lot of such hair-splitting as companies adjust to the rule. If so, Adam predicts, we’re going to be flooded with 8-Ks. Kurt notes recent FBI and CISA warnings about the national security threat posed by Chinese drones. The hard question is what’s new in those warnings. A question about whether antitrust authorities might investigate DJI’s enormous market share leads to another about the FTC’s utter lack of interest in getting guidance from the executive branch when it wanders into the national security field. Case in point: After listing a boatload of “sensitive location data” that should not be sold, the FTC had nothing to say about the personal data of people serving on U.S. military bases. Nothing “sensitive” there, the FTC seems to think, at least not compared to homeless shelters and migrant camps. Michael Ellis takes us through Apple’s embarrassing failure to protect users of its AirDrop feature. Adam is encouraged by a sign of maturity on the part of OpenAI, which has trimmed its overbroad rules on not assisting military projects. Apple, meanwhile, is living down to the worst Big Tech caricature in handling the complaints of app developers about its app store. Michael explains how Apple managed to beat 9 out of 10 claims brought by Epic and still ended up looking like the sorest of losers. Michael takes us inside a new U.S.
surveillance court just for Europeans, but we end up worrying about the risk that the Obama administration will come back to make new law that constrains the Biden team. Adam explains yet another European Court of Justice decision on GDPR. This time, though, it’s a European government in the dock. The result is the same: national security is pushed into a corner, and the data protection bureaucracy takes center stage. We end with the sad disclosure that, while bad cyber news will continue, cyber-enabled day drinking will not, as Uber announces the end of Drizly, its liquor delivery app.

Download 488th Episode (mp3)
Jan 9, 2024 • 1h 22min

Triangulating Apple

Returning from winter break, this episode of the Cyberlaw Podcast covers a lot of ground. The story I think we’ll hear the most about in 2024 is the remarkable exploit used to compromise several generations of Apple iPhone. The question I think we’ll be asking for the next year is simple: How could an attack like this be introduced without Apple’s knowledge and support? We don’t get to this question until near the end of the episode, and I don’t claim great expertise in exploit design, but it’s very hard to see how such an elaborate compromise could be slipped past Apple’s security team. The second question is which government created the exploit. It might be a scandal if it were done by the U.S. But it would be far more of a scandal if done by any other nation. Jeffery Atik and I lead off the episode by covering recent AI legal developments that simply underscore the obvious: AI engines can’t get patents as “inventors.” But it’s quite possible that they’ll make a whole lot of technology “obvious” and thus unpatentable. Paul Stephan joins us to note that the National Institute of Standards and Technology (NIST) has come up with some good questions about standards for AI safety. Jeffery notes that U.S. lawmakers have finally woken up to the EU’s misuse of tech regulation to protect the continent’s failing tech sector. Even the continent’s tech sector seems unhappy with the EU’s AI Act, which was rushed to market in order to beat the competition and is therefore flawed and likely to yield unintended and disastrous consequences, a problem that inspires this week’s Cybertoonz. Paul covers a lawsuit blaming AI for the wrongful denial of medical insurance claims. As he points out, insurers have been able to wrongfully deny claims for decades without needing AI. Justin Sherman and I dig deep into a NYTimes article claiming to have found a privacy problem in AI.
We conclude that AI may have a privacy problem, but extracting a few email addresses from ChatGPT doesn’t prove the case. Finally, Jeffery notes an SEC “sweep” examining the industry’s AI use. Paul explains the competition law issues raised by app stores – and the peculiar outcome of litigation against Apple and Google. Apple skated in a case tried before a judge, but Google lost before a jury and entered into an expensive settlement with other app makers. Yet it’s hard to say that Google’s handling of its app store monopoly is more egregiously anticompetitive than Apple’s. We do our own research in real time in addressing an FTC complaint against Rite Aid for using facial recognition to identify repeat shoplifters. The FTC has clearly learned Paul’s dictum, “The best time to kick someone is when they’re down.” And its complaint shows a lack of care consistent with that posture. I criticize the FTC for claiming without citation that Rite Aid ignored racial bias in its facial recognition software. Justin and I dig into the bias data; in my view, if FTC documents could be reviewed for unfair and deceptive marketing, this one would lead to sanctions. The FTC fares a little better in our review of its effort to toughen the internet rules on child privacy, though Paul isn’t on board with the whole package. We move from government regulation of Silicon Valley to Silicon Valley regulation of government. Apple has decided that it will now require a judicial order to give governments access to customers’ “push notifications.” And, giving the back of its hand to crime victims, Google decides to make geofence warrants impossible by blinding itself to the necessary location data. Finally, Apple decides to regulate India’s hacking of opposition politicians and runs into a Bharatiya Janata Party (BJP) buzzsaw. Paul and Jeffery decode the EU’s decision to open a DSA content moderation investigation into X.
We also dig into the welcome failure of an X effort to block California’s content moderation law. Justin takes us through the latest developments in Cold War 2.0. China is hacking our ports and utilities with intent to disrupt (as opposed to spy on) them. The U.S. is discovering that derisking our semiconductor supply chain is going to take hard, grinding work. Justin looks at a recent report presenting actual evidence on the question of TikTok’s standards for boosting content of interest to the Chinese government. And in quick takes:

I celebrate the end of the Reign of Mickey Mouse in copyright law.

Paul explains why Madison Square Garden is still able to ban lawyers who have sued the Garden.

I note the new short-term FISA 702 extension.

Paul predicts that the Supreme Court will soon decide whether police can require suspects to provide police with phone passcodes.

And Paul and I quickly debate Daphne Keller’s amicus brief for Francis Fukuyama in the Supreme Court’s content moderation cases.

Download 486th Episode (mp3)
Dec 12, 2023 • 1h 18min

Do AI Trust and Safety Measures Deserve to Fail?

It’s the last and probably longest Cyberlaw Podcast episode of 2023. To lead off, Megan Stifel takes us through a batch of stories about ways that AI, and especially AI trust and safety, manage to look remarkably fallible. Anthropic released a paper showing that race, gender, and age discrimination by AI models was real but could be dramatically reduced by instructing the model to “really, really, really” avoid such discrimination. (Buried in the paper was the fact that the original, severe AI bias disfavored older white men, as did the residual bias that asking nicely didn’t eliminate.) Bottom line from Anthropic seems to be, “Our technology is a really cool toy, but don’t use it for anything that matters.” In keeping with that theme, Google’s highly touted OpenAI competitor Gemini was released to mixed reviews when the model couldn’t correctly identify recent Oscar winners or a French word with six letters (it offered “amour”). There was good news for people who hate AI’s ham-handed political correctness; it turns out you can ask another AI model how to jailbreak your model, a request that can make the task go 25 times faster. This could be the week that determines the fate of FISA section 702, David Kris reports. It looks as though two bills will go to the House floor, and only one will survive. Judiciary’s bill is a grudging renewal of 702 for a mere three years, full of procedures designed to cripple the program. The intelligence committee’s bill beats the FBI around the head and shoulders but preserves the core of 702. David and I explore the “queen of the hill” procedure that will allow members to vote for either bill, both, or none, and will send to the Senate the version that gets the most votes. Gus Hurwitz looks at the FTC’s last-ditch appeal to stop the Microsoft-Activision merger. The best case, he suspects, is that the appeal will be rejected without actually repudiating the pet theories of the FTC’s hipster antitrust lawyers.
Megan and I examine the latest HHS proposal to impose new cybersecurity requirements on hospitals. David, meanwhile, looks for possible motivations behind the FBI’s procedures for companies who want help in delaying SEC cyber incident disclosures. Then Megan and I consider the tough new UK rules for establishing the age of online porn consumers. I think they’ll hurt Pornhub’s litigation campaign against states trying to regulate children’s access to porn sites. The race to 5G is over, Gus notes, and it looks like even the winners lost. Faced with the threat of Chinese 5G domination and an industry sure that 5G was the key to the future, many companies and countries devoted massive investments to the technology, but it’s now widely deployed and no one sees much benefit. There is more than one lesson here for industrial policy and the unpredictable way technologies disseminate. 23andMe gets some time in the barrel, with Megan and me both dissing its “lawyerly” response to a history of data breaches – namely, changing its terms of service to make it harder for customers to sue for data breaches. Gus reminds us that the Biden FCC only took office in the last month or two, and it is determined to catch up with the FTC in advancing foolish and doomed regulatory initiatives. This week’s example, remarkably, isn’t net neutrality. It’s worse. The Commission is building a sweeping regulatory structure on an obscure section of the 2021 infrastructure act that calls for the FCC to “facilitate equal access to broadband internet access service...” Think we’re hyperventilating? Read Commissioner Brendan Carr’s eloquent takedown of the whole initiative. Senator Ron Wyden (D-OR) has a bee in his bonnet over government access to smartphone notifications. Megan and I do our best to understand his concern and how seriously to take it. Wrapping up, Gus offers a quick take on Meta’s broadening attack on the constitutionality of the FTC’s current structure.
David takes satisfaction from the Justice Department’s patient and successful pursuit of Russian hacker Vladimir Dunaev for his role in creating TrickBot. Gus notes that South Korea’s law imposing internet costs on content providers is no match for the law of supply and demand. Finally, in quick hits we cover:

The guilty plea of the founder of a cryptocurrency exchange accused of money laundering.

Rumors that the ALPHV ransomware site has been taken down by law enforcement.

IBM’s long-term quantum computing research milestones.

The UK’s antitrust throat-clearing about the OpenAI-Microsoft tie-up.

And Europe’s low-on-details announcement of a deal on the world’s first comprehensive AI rules.

Download 485th Episode (mp3)
Dec 5, 2023 • 1h 2min

Making the Rubble Bounce in Montana

In this episode, Paul Stephan lays out the reasoning behind U.S. District Judge Donald W. Molloy’s decision enjoining Montana’s ban on TikTok. There are some plausible reasons for such an injunction, and the court adopts them. There are also less plausible and redundant grounds for an injunction, and the court adopts those as well. Asked to predict the future course of the litigation, Paul demurs. It will all depend, he thinks, on how the Supreme Court begins to sort out social media and the First Amendment in the upcoming term. In the meantime, watch for bouncing rubble in the District of Montana courthouse. (Grudging credit for the graphics goes to Bing’s Image Creator, which refused to create the image until I attributed the bouncing rubble to a gas explosion. Way to discredit trust and safety, Bing!) Jane Bambauer and Paul also help me make sense of the litigation between Meta and the FTC over children’s privacy and previous consent decrees. A recent judicial decision opened the door for the FTC to pursue modification of a prior FTC order – on the surprising ground that the order had not been incorporated into a judicial order. But that decision simply gave Meta a chance to make an existential constitutional challenge to the FTC’s fundamental organization, a challenge that Paul thinks the Supreme Court is bound to take seriously. Maury Shenk and Paul analyze an “AI security by design” set of principles drafted by the U.K. and adopted by an ad hoc group of nations that pointedly split the EU’s membership and pulled in parts of the Global South. As diplomacy, it was a coup. As security policy, it’s mostly unsurprising. I complain that there’s little reason for special security rules to protect users of AI, since the threats are largely unformed, with Maury pushing back. What governments really seem to want is not security for users but security from users, a paradigm that totally diverges from the direction of technology policy in past decades.
Maury, who requested listener comments on his recent AI research, notes Meta’s divergent view on open source AI technology and offers his take on why the company’s path might be different from Google’s or Microsoft’s. Jane and I are in accord in dissing California’s aggressive new AI rules, which appear to demand public notices every time a company uses spreadsheets containing personal data to make a business decision. I call it the most toxic fount of unanticipated tech liability since Illinois’s Biometric Information Privacy Act. Maury, Jane, and I explore the surprisingly complicated questions raised by Meta’s decision to offer an ad-free service for around $10 a month. We explore what Paul calls the decline of global trade interdependence and the rise of a new mercantilism. Two cases in point: the U.S. decision not to trust the Saudis as partners in restricting China’s AI ambitions and China’s weirdly self-defeating announcement that it intends to be an unreliable source of graphite exports to the United States in future. Jane and I puzzle over a rare and remarkable conservative victory in tech policy: the collapse of Biden administration efforts to warn social media companies about foreign election meddling. Finally, in quick hits:

I cover the latest effort to extend section 702 of FISA, if only for a short time.

Jane notes the difficulty Meta faces in trying to boot pedophiles off its platforms.

Maury and I predict that the EU’s IoT vulnerability reporting requirements will raise the cost of IoT.

I comment on the Canadian government’s deal with Google implementing the Online News Act.

Download 484th Episode (mp3)
Nov 28, 2023 • 58min

Rorschach AI

The OpenAI corporate drama came to a sudden end last week. So sudden, in fact, that the pundits never quite figured out What It All Means. Jim Dempsey and Michael Nelson take us through some of the possibilities. It was all about AI accelerationists v. decelerationists. Or it was all about effective altruism. Or maybe it was Sam Altman’s slippery ambition. Or perhaps a new AI breakthrough – a model that can actually do more math than the average American law student. The one thing that seems clear is that the winners include Sam Altman and Microsoft, while the losers include illusions about using corporate governance to engage in AI governance. The Google antitrust trial is over – kind of. Michael Weiner tells us that all the testimony and evidence has been gathered on whether Google is monopolizing search, but briefs and argument will take months more – followed by years more fighting about remedy if Google is found to have violated the antitrust laws. He sums up the issues in dispute and makes a bold prediction about the outcome, all in about ten minutes. Returning to AI, Jim and Michael Nelson dissect the latest position statement from Germany, France, and Italy. They see it as a repudiation of the increasingly kludgey AI Act pinballing its way through Brussels, and a big step in the direction of the “light touch” AI regulation that is mostly being adopted elsewhere around the globe. I suggest that the AI Act be redesignated the OBE Act in recognition of how thoroughly and frequently it’s been overtaken by events. Meanwhile, cyberwar is posing an increasing threat to civil aviation. Michael Ellis covers the surprising ways in which GPS spoofing has begun to render even redundant air navigation tools unreliable. Iran and Israel come in for scrutiny. And it won’t be long before Russia and Ukraine develop similarly disruptive drone and counterdrone technology. It turns out, Michael Ellis reports, that Russia is likely ahead of the U.S. 
in this war-changing technology.  Jim brings us up to date on the latest cybersecurity amendments from New York’s Department of Financial Services. On the whole, they look incremental and mostly sensible. Senator Ron Wyden (D-OR) is digging deep into his Golden Oldies collection, sending a letter to the White House expressing shock at having discovered a law enforcement data collection that the New York Times (and the rest of us) discovered in 2013. The program in question allows law enforcement to get call data but not content from AT&T with a subpoena. The only surprise is that AT&T has kept this data for much more than the industry-standard two or three years and that federal funds have helped pay for the storage. Michael Nelson, on his way to India for cyber policy talks, touts that nation’s creative approach to the field, as highlighted in Carnegie’s series on India and technology. He’s less impressed by the UK’s enthusiasm for massive new legislative initiatives on technology. I think this is Prime Minister Rishi Sunak trying to show that Brexit really did give the UK new running room to the right of Brussels on data protection and law enforcement authority. Download 483rd Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Nov 21, 2023 • 43min

Defenestration at OpenAI

Paul Rosenzweig brings us up to date on the debate over renewing section 702, highlighting the introduction of the first credible “renew and reform” measure by the House Intelligence Committee. I’m hopeful that a similarly responsible bill will come soon from Senate Intelligence and that some version of the two will be adopted. Paul is less sanguine. And we all recognize that the wild card will be House Judiciary, which is drafting a bill that could change the renewal debate dramatically. Jordan Schneider reviews the results of the Xi-Biden meeting in San Francisco and speculates on China’s diplomatic strategy in the global debate over AI regulation. No one disagrees that it makes sense for the U.S. and China to talk about the risks of letting AI run nuclear command and control; perhaps more interesting (and puzzling) is China’s interest in talking about AI and military drones. Speaking of AI, Paul reports on Sam Altman’s defenestration from OpenAI and soft landing at Microsoft. Appropriately, Bing Image Creator provides the artwork for the defenestration but not the soft landing. Nick Weaver covers Meta’s not-so-new policy on political ads claiming that past elections were rigged. I cover the flap over TikTok videos promoting Osama bin Laden’s letter justifying the 9/11 attack. Jordan and I discuss reports that Applied Materials is facing a criminal probe over shipments to China's SMIC. Nick reports on the most creative ransomware tactic to date: compromising a corporate network and then filing an SEC complaint when the victim doesn’t disclose it within four days. This particular gang may have jumped the gun, he reports, but we’ll see more such reports in the future, and the SEC will have to decide whether it wants to foster this business model. I cover the effort to disclose a bitcoin wallet security flaw without helping criminals exploit it. 
And Paul recommends the week’s long read: The Mirai Confession – a detailed and engaging story of the kids who invented Mirai, foisted it on the world, and then worked for the FBI for years, eventually avoiding jail, probably thanks to an FBI agent with a paternal streak. Download 482nd Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
Nov 14, 2023 • 1h 1min

The Brussels Defect: Too Early is Worse Than Too Late. Plus: Mark MacCarthy’s Book on “Regulating Digital Industries”

That, at least, is what I hear from my VC friends in Silicon Valley. And they wouldn’t get an argument this week from EU negotiators facing what looks like a third rewrite of the much-too-early AI Act. Mark MacCarthy explains that negotiations over an overhaul of the act demanded by France and Germany led to a walkout by EU parliamentarians. The cause? In their enthusiasm for screwing American AI companies, the drafters inadvertently screwed a French and a German AI aspirant. Mark is also our featured author for an interview about his book, "Regulating Digital Industries: How Public Oversight Can Encourage Competition, Protect Privacy, and Ensure Free Speech." I offer to blurb it as “an entertaining, articulate and well-researched book that is egregiously wrong on almost every page.” Mark promises that at least part of my blurb will make it to his website. I highly recommend it to Cyberlaw listeners who mostly disagree with me – a big market, I’m told. Kurt Sanger reports on what looks like another myth about Russian cyberwarriors – that they can’t coordinate with kinetic attacks to produce a combined effect. Mandiant says that’s exactly what Sandworm hackers did in Russia’s most recent attack on Ukraine’s grid. Adam Hickey, meanwhile, reports on a lawsuit over internet sex that drove an entire social media platform out of business. Meta, too, is getting beat up on the Hill and in the press for failing to protect teens from sexual and other harms. I ask the obvious question: Who the heck is trying to get naked pictures of Facebook’s core demographic? Mark explains the latest EU rules on targeted political ads – which consist of several perfectly reasonable provisions combined with a couple designed to cut the heart out of online political advertising. Adam and I puzzle over why the FTC is telling the U.S. Copyright Office that AI companies are a bunch of pirates who need to be pulled up short. 
I point out that copyright is a multi-generational monopoly on written works. Maybe, I suggest, the FTC has finally combined its unfairness and its anti-monopoly authorities to protect copyright monopolists from the unfairness of Fair Use. Taking an indefensible legal position out of blind hatred for tech companies? Now that I think about it, that is kind of on-brand for Lina Khan’s FTC. Adam and I disagree about how seriously to take press claims that AI generates images that are biased. I complain about the reverse: AI that keeps pretending that there are a lot of black and female judges on the European Court of Justice. Kurt and Adam reprise the risk to CISOs from the SEC's SolarWinds complaint – and all the dysfunctional things companies and CISOs will soon be doing to save themselves. In updates and quick hits: Adam and I flag some useful new reports from Congress on the disinformation excesses of 2020. We both regret the fact that those excesses now make it unlikely the U.S. will do much about foreign government attempts to influence the 2024 election. I mourn the fact that we won’t be covering Susannah Gibson again. Gibson raised campaign funds by doing literally what most politicians only do metaphorically. She has gone down to defeat in her Virginia legislative race. In Cyberlaw Podcast alumni news, Alex Stamos and Chris Krebs have sold their consulting firm to SentinelOne. They will only be allowed back on the podcast if they bring the Gulfstream. I also note that Congress is finally starting to put some bills to renew section 702 of FISA into the hopper. Unfortunately, the first such bill, a merger of left and right extremes called the Government Surveillance Reform Act, probably should have gone into the chipper instead. Download 481st Episode (mp3) You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. 
Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.  
