
The Cyberlaw Podcast

Latest episodes

Jan 18, 2023 • 57min

The Sun Also Sets, on Section 702

The Cyberlaw Podcast kicks off 2023 by staring directly into the sun(set) of Section 702 authorization. The entire panel, including guest host Brian Fleming and guests Michael Ellis and David Kris, debates where things could be headed this year as the clock is officially ticking on FISA Section 702 reauthorization. Although there is agreement that a straight reauthorization is unlikely in today’s political environment, the ultimate landing spot for Section 702 is very much in doubt and a “game of chicken” will likely precede any potential deal. Everything seems to be in play, as this reauthorization battle could result in meaningful reform or a complete car crash come this time next year. Sticking with Congress, Michael also reacts to President Biden’s recent bipartisan call to action regarding “Big Tech” and ponders where Republicans and Democrats could find common ground on an issue everyone seems to agree on (for very different reasons). The panel also discusses the timing of President Biden’s op-ed in the Wall Street Journal and debates whether it is intended as a challenge to the Republican-controlled House to act rather than simply increase oversight of the tech industry. David then introduces a fascinating story about the bold recent action by the Securities and Exchange Commission (SEC) to bring suit against Covington & Burling LLP to enforce an administrative subpoena seeking disclosure of the firm’s clients implicated in a 2020 cyberattack by the Chinese state-sponsored group Hafnium. David posits that the SEC knows exactly what it is doing by taking such aggressive action in the face of strong resistance, and the panel discusses whether the SEC may have already won by attempting to protect its burgeoning piece of turf in the U.S. government cybersecurity enforcement landscape. Brian then turns to the crypto regulatory and enforcement space to discuss Coinbase’s recent settlement with New York’s Department of Financial Services.
Rather than signal another crack in the foundation of the once high-flying crypto industry, Brian offers that this may just be routine growing pains for a maturing industry that is more like the traditional banking sector, from a regulatory and compliance standpoint, than it may have wanted to believe. Then, in the China portion of the episode, Michael discusses the latest news on the establishment of a reverse Committee on Foreign Investment in the United States (CFIUS) mechanism, and suggests it may still be some time before this tool gets finalized (even as its substantive scope appears to be shrinking). Next, Brian discusses a recent D.C. Circuit decision that upheld the Federal Communications Commission’s decision to rescind the license of China Telecom at the recommendation of the executive branch agencies known as Team Telecom (Department of Justice, Department of Defense, and Department of Homeland Security). This important, first-of-its-kind decision reinforces the role of Team Telecom as an important national security gatekeeper for U.S. telecommunications infrastructure. Finally, David highlights an interesting recent story about an FBI search of an apparent Chinese police outpost in New York and ponders what it would mean to negotiate with and be educated by undeclared Chinese law enforcement agents in a foreign country. In a few updates and quick hits: Brian updates listeners on the U.S. government’s continuing efforts to win multilateral support from key allies for tough new semiconductor export controls targeting China. Michael picks up the thread on the Twitter Files release and offers his quick take on what it says about ReleaseTheMemo. And, last but not least, Brian discusses the unsurprising (according to Stewart) decision by the Supreme Court of the United States to allow WhatsApp’s spyware suit against NSO Group to continue.
Jan 10, 2023 • 59min

A Dispatch from the Great Tech Battlefront

Our first episode for 2023 features Dmitri Alperovitch, Paul Rosenzweig, and Jim Dempsey trying to cover a month’s worth of cyberlaw news. Dmitri and I open with an effort to summarize the state of the tech struggle between the U.S. and China. I think recent developments show the U.S. doing better than expected. U.S. companies like Facebook and Dell are engaged in voluntary decoupling as they imagine what their supply chain will look like if the conflict gets worse. China, after pouring billions into an effort to take a lead in high-end chip production, may be pulling back on the throttle. Dmitri is less sanguine, noting that Chinese companies like Huawei have shown that there is life after sanctions, and there may be room for a fast-follower model in which China dominates production of slightly less sophisticated chips, where much of the market volume is concentrated. Meanwhile, any Chinese retreat is likely tactical; where it has a dominant market position, as in rare earths, it remains eager to hobble U.S. companies. Jim lays out the recent medical device security requirements adopted in the omnibus appropriations bill. It is a watershed for cybersecurity regulation of the private sector and overdue for increasingly digitized devices that in some cases can only be updated with another open-heart surgery. How much of a watershed may become clear when the White House cyber strategy, which has been widely leaked, is finally released. Paul explains what it’s likely to say, most notably its likely enthusiasm not just for regulation but for liability as a check on bad cybersecurity. Dmitri points out that all of that will be hard to achieve legislatively now that Republicans control the House. We all weigh in on LastPass’s problems with hackers, and with candid, timely disclosures. For reasons fair and unfair, two-thirds of the LastPass users on the show have abandoned the service.
I blame LastPass’s acquisition by private equity; Dmitri tells me that I’m painting with too broad a brush. I offer an overview of the Twitter Files stories by Bari Weiss, Matt Taibbi, and others. When I say that the most disturbing revelations concern the massive government campaigns to enforce orthodoxy on COVID-19, all hell breaks loose. Paul in particular thinks I’m egregiously wrong to worry about any of this. No chairs are thrown, mainly because I’m in Virginia and Paul’s in Costa Rica. But it’s an entertaining and maybe even illuminating debate. In shorter and less contentious segments: Dmitri unpacks the latest effort by Russian hackers to subvert the security of a Ukrainian web-based military information site. He thinks the Ukrainian ability to use the site despite Russian attacks may have lessons for NATO. Dmitri also sheds light (and not a little shade) on Chinese claims to have broken RSA with a quantum computer. Jim updates us on TikTok’s travails and the ongoing debate over restricting its use in the United States. I point out that another black man has been arrested because of a facial recognition error—bringing the total of mistaken face-recognition arrests in the entire country over the past decade to four. All of which could have been avoided by police department policy. On the other hand, I also identify a shocking abuse of facial recognition to oppress some of the most loathed people in America: Lawyers. Madison Square Garden, in what must be the dumbest corporate policy of the year, uses facial recognition to identify lawyers working for law firms that have ongoing lawsuits against the company. The apparent purpose, or at least the result, is to prevent lawyers from those firms from bringing Girl Scout troops to see the Rockettes. No problem; I am sure everyone would rather watch the ensuing litigation. I remind listeners that Trump's return to Facebook and Instagram could happen very soon.
The EU has advanced its transatlantic data deal with the U.S., though more thrashing about should be expected.
Dec 20, 2022 • 40min

Bonus Episode: How Privilege Undermines Cybersecurity

This bonus episode is an interview with Josephine Wolff and Dan Schwarcz, who along with Daniel Woods have written an article with the same title as this post. Their thesis is that breach lawyers have lost perspective in their no-holds-barred pursuit of attorney-client privilege to protect the confidentiality of forensic reports that diagnose the breach. Remarkably for a law review article, it contains actual field research. The authors interviewed all the players in breach response: company information security teams, breach lawyers, forensic investigators, insurers, insurance brokers, and more. I remind them of Tracy Kidder’s astute observation that, in building a house, there are three main players—owner, architect, and builder—and that if you get any two of them in the room alone, they will spend all their time bad-mouthing the third. Wolff, Schwarcz, and Woods seem to have done that with the breach response players, and the bad-mouthing falls hardest on the lawyers. The main problem is that using attorney-client privilege to keep a breach forensics process confidential is a reach, so the courts have been unsympathetic. That forces lawyers to impose more and more restrictions on the forensic investigator and its communications in the hope of maintaining confidentiality. The upshot is that no forensics report at all is written for many breaches (up to 95 percent, Josephine estimates). How does the breached company find out what it did wrong and what it should do to avoid the next breach? Simple. Their lawyer translates the forensic firm’s advice into a PowerPoint and briefs management. Really, what could go wrong? In closing, Dan and Josephine offer some ideas for how to get out of this dysfunctional mess. I push back. All in all, it’s the most fun I’ve ever had talking about insurance law.
Dec 13, 2022 • 1h 1min

ChatGPT Successfully Imitates a Talented Sociopath with Too Many Lawyers

It’s been a news-heavy week, but we have the most fun in this episode with ChatGPT. Jane Bambauer, Richard Stiennon, and I pick over the astonishing number of use cases and misuse cases disclosed by the release of ChatGPT for public access. It is talented—writing dozens of term papers in seconds. It is sociopathic—the term papers are full of falsehoods, down to the made-up citations to plausible but nonexistent New York Times stories. And it has too many lawyers—Richard’s request that it provide his bio (or even Einstein’s) was refused on what are almost certainly data protection grounds. Luckily, either ChatGPT or its lawyers are also bone stupid, since reframing the question fools the machine into subverting the legal and PC limits it labors under. I speculate that it beat Google to a public relations triumph precisely because Google had even more lawyers telling their artificial intelligence what not to say. In a surprisingly undercovered story, Apple has gone all in on child pornography. Its phone encryption already makes the iPhone a safe place to record child sexual abuse material (CSAM); now Apple will encrypt users’ cloud storage with keys it cannot access, allowing customers to upload CSAM without fear of law enforcement. And it has abandoned its effort to identify such material by doing phone-based screening. All that’s left of its effort is a weak option that parents can force their kids to activate, preventing them from sending or receiving nude photos. Jane and I dig into the story, as well as Apple’s questionable claim to be offering the same encryption to its Chinese customers. Nate Jones brings us up to date on the National Defense Authorization Act, or NDAA. Lots of second-tier cyber provisions made it into the bill, but not the provision requiring that critical infrastructure companies report security breaches. A contested provision on spyware purchases by the U.S.
government was compromised into a useful requirement that the intelligence community identify spyware that poses risks to the government. Jane updates us on what European data protectionists have in store for Meta, and it’s not pretty. The EU data protection supervisory board intends to tell the Meta companies that they cannot give people a free social media network in exchange for watching what they do on the network and serving ads based on their behavior. If so, it’s a one-two punch. Apple delivered the first blow by curtailing Meta’s access to third-party behavioral data. Now even first-party data could be off limits in Europe. That’s a big revenue hit, and it raises questions about whether Facebook will want to keep giving away its services in Europe. Mike Masnick is Glenn Greenwald with a tech bent—often wrong but never in doubt, and contemptuous of anyone who disagrees. But when he is right, he is right. Jane and I discuss his article recognizing that data protection is becoming a tool that the rich and powerful can use to squash annoying journalist-investigators. I have been saying this for decades. But still, welcome to the party, Mike! Nate points to a plea for more controls on the export of personal data from the U.S. It comes not from the usual privacy enthusiasts but from the U.S. Naval Institute, and it makes sense. It was a bad week for Europe on the Cyberlaw Podcast. Jane and I take time to marvel at the story of France’s Mr. Privacy and the endless appetite of Europe’s bureaucrats for his serial grifting. Nate and I cover what could be a good resolution to the snake-bitten cloud contract process at the Department of Defense. The Pentagon is going to let four cloud companies—Google, Amazon, Oracle, and Microsoft—share the prize. You did not think we would forget Twitter, did you? Jane, Richard, and I all comment on the Twitter Files. Consensus: the journalists claiming these stories are nothingburgers are more driven by ideology than news.
Especially newsworthy are the remarkable proliferation of shadowbanning tools Twitter developed for suppressing speech it didn’t like, and some considerable though anecdotal evidence that the many speech rules at the company were twisted to suppress speech from the right, even when the rules did not quite fit, as with LibsofTikTok, while similar behavior on the left went unpunished. Richard tells us what it feels like to be on the receiving end of a Twitter shadowban. The podcast introduces a new feature: “We Read It So You Don’t Have To,” and Nate provides the tl;dr on a New York Times story: How the Global Spyware Industry Spiraled Out of Control. And in quick hits and updates: Jane covers the San Francisco city council’s reversion to the mean. On second thought, it will not be letting killer police robots out on San Francisco’s streets. Nate tells us that the Netherlands (and Japan, I might add) is likely to align with the U.S. and impose new curbs on chip-making equipment sales to China.
Dec 6, 2022 • 50min

Location, Location, Location

This episode of the Cyberlaw Podcast delves into the use of location technology in two big events—the surprisingly outspoken lockdown protests in China and the Jan. 6 riot at the U.S. Capitol. Both were seen as big threats to the government, and both produced aggressive police responses that relied heavily on government access to phone location data. Jamil Jaffer and Mark MacCarthy walk us through both stories and respond to the provocative question, what’s the difference? Jamil’s answer (and mine, for what it’s worth) is that the U.S. government gained access to location information from Google only after a multi-stage process meant to protect innocent users’ information, and that there is now a court case that will determine whether the government actually did protect users whose privacy should not have been invaded. Whether we should be relying on Google’s made-up and self-protective rules for access to location data is a separate question. It becomes more pointed as Silicon Valley has started imposing a set of self-protective penalties on companies that assist law enforcement in gaining access to phones that Silicon Valley has made inaccessible. The campaign to punish law enforcement access providers has moved from trashing companies like NSO, whose technology has been widely misused, to punishing companies on a lot less evidence. This week, TrustCor lost its certificate authority status mostly for looking suspiciously close to the National Security Agency, and Google outed Variston of Spain for ties to a vulnerability exploitation system. Nick Weaver is there to hose me down. The U.K. is working on an online safety bill, likely to be finalized in January, Mark reports, but this week the government agreed to drop its direct regulation of “lawful but awful” speech on social media. The step was a symbolic victory for free speech advocates, but the details of the bill before and after the change suggest it was more modest than the brouhaha implied.
The Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has finished taking comments on its proposed cyber incident reporting regulation. Jamil summarizes industry’s complaints, which focus on the risk of having to file multiple reports with multiple agencies. Industry has a point, I suggest, and CISA should take the other agencies in hand to agree on a report format that doesn’t resemble the State of the Union address. It turns out that the collapse of FTX is going to curtail a lot of artificial intelligence (AI) safety research. Nick explains why, and offers reasons to be skeptical of the “effective altruism” movement that has made AI safety one of its priorities. Today, Jamil notes, the U.S. and EU are getting together for a divisive discussion of the U.S. subsidies for electric vehicles (EV) made in North America but not Germany. That’s very likely a World Trade Organization (WTO) violation, I offer, but one that pales in comparison to thirty years of WTO-violating threats to constrain European data exports to the U.S. When you think of it as retaliation for the use of the General Data Protection Regulation (GDPR) to attack U.S. intelligence programs, the EV subsidy is easy to defend. I ask Nick what we learned this week from Twitter coverage. His answer—that Elon Musk doesn’t understand how hard content moderation is—doesn’t exactly come as news. Nor, really, does most of what we learned from Matt Taibbi’s review of Twitter’s internal discussion of the Hunter Biden laptop story and whether to suppress it. Twitter doesn’t come out of that review looking better. It just looks bad in ways we already suspected were true. One person who does come out of the mess looking good is Rep. Ro Khanna (D.-Calif.), who vigorously advocated that Twitter reverse its ban, on both prudential and principled grounds. Good for him.
Speaking of San Francisco Dems who surprised us this week, Nick notes that the city council in San Francisco approved the use of remote-controlled bomb “robots” to kill suspects. He does not think the robots are fit for that purpose. Finally, in quick hits: Meta was fined $275 million for allowing scraping of personal data. Nick and Jamil tell us that Snowden has at last shown his true colors. Jamil has unwonted praise for Apple, which persuaded TSMC to make more advanced chips in Arizona than it originally planned. And I try to explain why the decision of the DHS cyber safety board to look into the Lapsus$ hacks seems to be drawing fire.
Nov 29, 2022 • 41min

Toxified Tech

We spend much of this episode of the Cyberlaw Podcast talking about toxified technology – new tech that is being demonized for a variety of reasons. Exhibit One, of course, is “spyware,” essentially hacking tools that allow governments to access phones or computers otherwise closed to them, usually by end-to-end encryption. The Washington Post and the New York Times have led a campaign to turn NSO’s Pegasus tool for hacking phones into radioactive waste. Jim Dempsey, though, reminds us that not too long ago, in defending end-to-end encryption, tech policy advocates insisted that the government did not need mandated access to encrypted phones because they could engage in self-help in the form of hacking. David Kris points out that, used with a warrant, there’s nothing uniquely dangerous about hacking tools of this kind. I offer an explanation for why the public policy community and its Silicon Valley funders have changed their tune on the issue: having won the end-to-end encryption debate, they feel free to move on to the next anti-law-enforcement campaign. That campaign includes private lawsuits against NSO by companies like WhatsApp, whose lawsuit was briefly delayed by NSO’s claim of sovereign immunity on behalf of the (unnamed) countries it builds its products for. That claim made it to the Supreme Court, David reports, where the U.S. government recently filed a brief that will almost certainly send NSO back to court without any sovereign immunity protection. Meanwhile, in France, Amesys and its executives are being prosecuted for facilitating the torture of Libyan citizens at the hands of the Muammar Qaddafi regime. Amesys evidently sold an earlier and less completely toxified technology—packet inspection tools—to Libya. The criminal case is pending. And in the U.S., a whole set of tech toxification campaigns are under way, aimed at Chinese products. 
This week, Jim notes, the Federal Communications Commission came to the end of a long road that began with jawboning in the 2000s and culminated in a flat ban on installing Chinese telecom gear in U.S. networks. On deck for China are DJI’s drones, which several Senators see as a comparable national security threat that should be handled with a similar ban. Maury Shenk tells us that the British government is taking the first steps on a similar path, this time with a ban on some government uses of Chinese surveillance camera systems. Those measures do not always work, Maury tells us, pointing to a story that hints at trouble ahead for U.S. efforts to decouple Chinese from American artificial intelligence research and development. Maury and I take a moment to debunk efforts to persuade readers that artificial intelligence (AI) is toxic because Silicon Valley will use it to take our jobs. AI code writing is not likely to graduate beyond facilitating human coding any time soon, we agree. Whether AI can do more in human resources (HR) may be limited by a different toxification campaign—the largely phony claim that AI is full of bias. Amazon’s effort to use AI in HR, I predict, will be sabotaged by this claim. The effort to avoid bias will almost certainly lead Amazon to build race and gender quotas into its engine. And in a few quick hits: I express doubt that Australia’s “unleash the hounds” approach to ransomware actually has anything to do with one notorious ransomware actor’s extortion site going down. Maury praises an MIT Technology Review piece that argues persuasively that China’s social credit system is not quite as dystopian as it’s been portrayed. I point out that, with Airbnb practicing guilt by association and PayPal taking your money for saying things PayPal doesn’t like, Silicon Valley can brag that it’s going to reach Full-Bore Dystopia well before China. I cover the fourth review in three administrations of the dual-hat leadership of NSA and Cyber Command.
No change is likely.  And we close with a downbeat assessment of Elon Musk’s chances of withstanding the combined hostility of European and U.S. regulators, the press, and the left-wing tech-toxifiers in civil society. He is a talented guy, I argue, and with a three-year runway, he could succeed, but he does not have three years.
Nov 22, 2022 • 39min

The Empire Strikes Back, at Twitter

The Cyberlaw Podcast leads with the legal cost of Elon Musk’s anti-authoritarian takeover of Twitter. Turns out that authority figures have a lot of weapons, many grounded in law, and Twitter is at risk of being on the receiving end of those weapons. Brian Fleming explores the apparently unkillable notion that the Committee on Foreign Investment in the U.S. (CFIUS) should review Musk’s Twitter deal because of a relatively small share that went to investors with Chinese and Persian Gulf ties. It appears that CFIUS may still be seeking information on what Twitter data those investors will have access to, but I am skeptical that CFIUS will be moved to act on what it learns. More dangerous for Twitter and Musk, says Charles-Albert Helleputte, is the possibility that the company will lose its one-stop-shop privacy regulator for failure to meet the elaborate compliance machinery set up by European privacy bureaucrats. At a quick calculation, that could expose Twitter to fines up to 120% of annual turnover. Finally, I reprise my skeptical take on all the people leaving Twitter for Mastodon as a protest against Musk allowing the Babylon Bee and President Trump back on the platform. If the protestors really think Mastodon’s system is better, I recommend that Twitter adopt it, or at least the version that Francis Fukuyama and Roberta Katz have described. If you are looking for the far edge of the Establishment’s Overton Window on China policy, you will not do better than the U.S.-China Economic and Security Review Commission, a consistently China-skeptical but mainstream body. Brian reprises the Commission’s latest report. The headline, we conclude, is about Chinese hacking, but the recommendations do not offer much hope of a solution to that problem, other than more decoupling. Chalk up one more victory for Trump-Biden continuity, and one more loss for the State Department.
Michael Ellis reminds us that the Trump administration took much of Cyber Command’s cyber offense decision making out of the National Security Council and put it back in the Pentagon. This made it much harder for the State Department to stall cyber offense operations. When it turned out that this made Cyber Command more effective and no more irresponsible, the Biden Administration prepared to ratify Trump’s order, with tweaks. I unpack Google’s expensive (nearly $400 million) settlement with 40 states over location history. Google’s promise to stop storing location history if the feature was turned off was poorly and misleadingly drafted, but I doubt there is anyone who actually wanted to keep Google from using location for most of the apps where it remained operative, so the settlement is a good deal for the states, and a reminder of how unpopular Silicon Valley has become in red and blue states. Michael tells the doubly embarrassing story of an Iranian hack of the U.S. Merit Systems Protection Board. It is embarrassing to be hacked with a log4j exploit that should have been patched. But it is worse when an Iranian government hacker gets access to a U.S. government network—and decides that the access is only good for mining cryptocurrency. Brian tells us that the U.S. goal of reshoring chip production is making progress, with Apple planning to use TSMC chips from a new fab in Arizona. In a few updates and quick hits: I remind listeners that a lot of tech companies are laying employees off, but that overall Silicon Valley employment is still way up over the past couple of years. I give a lick and a promise to the mess at cryptocurrency exchange FTX, which just keeps getting worse. Charles updates us on the next U.S.-E.U. adequacy negotiations, and the prospects for Schrems 3 (and 4, and 5) litigation.
And I sound a note of both admiration and caution about Australia’s plan to “unleash the hounds” – in the form of its own Cyber Command equivalent – on ransomware gangs. As U.S. experience reveals, it makes for a great speech, but actual impact can be hard to achieve.
Nov 15, 2022 • 1h 6min

Election Aftershocks for Cyberlaw

We open this episode of the Cyberlaw Podcast by considering the (still evolving) results of the 2022 midterm election. Adam Klein and I trade thoughts on what Congress will do. Adam sees two years in which the Senate does nominations, the House does investigations, and neither does much legislation—which could leave renewal of the critically important intelligence authority, Section 702 of the Foreign Intelligence Surveillance Act (FISA), out in the cold. As supporters of renewal, we conclude that the best hope for the provision is to package it with trust-building measures to restore Republicans’ willingness to give national security agencies broad surveillance authorities. I also note that foreign government cyberattacks on our election, which have been much anticipated in election after election, failed once again to make an appearance. At this point, election interference is somewhere between Y2K and Bigfoot on the “things we should have worried about” scale. In other news, cryptocurrency conglomerate FTX has collapsed into bankruptcy, stolen funds, and criminal investigations. Nick Weaver lays out the gory details. A new panelist on the podcast, Chinny Sharma, explains to a disbelieving U.S. audience the U.K. government’s plan to scan all the country’s internet-connected devices for vulnerabilities. Adam and I agree that it could never happen here. Nick wonders why the U.K. government does not use a private service for the task. Nick also covers This Week in the Twitter Dogpile. He recognizes that this whole story is turning into a tragedy for all concerned, but he is determined to linger on the comic relief. Dunning-Kruger makes an appearance. Chinny and I speculate on what may emerge from the Biden administration’s plan to reconsider the relationship between the Cybersecurity and Infrastructure Security Agency (CISA) and the Sector Risk Management Agencies that otherwise regulate important sectors.
I predict turf wars and new authorities for CISA in response. The Obama administration’s egregious exemption of Silicon Valley from regulation as critical infrastructure should also be on the chopping block. Finally, if the next two Supreme Court decisions go the way I hope, the Federal Trade Commission will finally have to coordinate its privacy enforcement efforts with CISA’s cybersecurity standards and priorities.  Adam reviews the European Parliament’s report on Europe’s spyware problems. He’s impressed (as am I) by the report’s willingness to acknowledge that this is not a privacy problem made in America. Governments in at least four European countries by our count have recently used spyware to surveil members of the opposition, a problem that was unthinkable for fifty years in the United States. This, we agree, is another reason that Congress needs to put guardrails against such abuse in place quickly. Nick notes the U.S. government’s seizure of what was $3 billion in bitcoin. Shrinkflation has brought that value down to around $800 million. But it is still worth noting that an immutable blockchain brought James Zhong to justice ten years after he took the money.   Disinformation—or the appalling acronym MDM (for mis-, dis-, and mal-information)—has been in the news lately. A recent paper counted the staggering cost of “disinformation” suppression during coronavirus times. And Adam published a recent piece in City Journal explaining just how dangerous the concept has become. We end up agreeing that national security agencies need to focus on foreign government dezinformatsiya—falsehoods and propaganda from abroad – and not get in the business of policing domestic speech, even when it sounds a lot like foreign leaders we do not like.  Chinny takes us into a new and fascinating dispute between the copyleft movement, GitHub, and Artificial Intelligence (AI) that writes code. 
The short version is that GitHub has been training an AI engine on all the open source code on the site so that it can “autosuggest” lines of new code as you are writing the boring parts of your program. The upshot is that the AI strips off the license conditions, such as copyleft, that are part of some open source code. Not surprisingly, copyleft advocates are suing on the ground that important information has been left off their code, particularly the provision that turns all code that uses the open source into open source itself. I remind listeners that this is why Microsoft famously likened open source code to cancer. Nick tells me that it is really more like herpes, thus demonstrating that he has a lot more fun coding than I ever had. In updates and quick hits: I note that the peanut butter sandwich nuclear spies have been sentenced. Adam celebrates TSMC’s decision to build a 3 nanometer semiconductor fab in Arizona. We cross swords about whether the fab capital of the U.S. will be Phoenix or Austin. I celebrate the Russian government’s acknowledgment of the Cyberlaw Podcast’s reach when it designated long-time regular Dmitri Alperovitch for Russian sanctions. Occasional guest Chris Krebs also makes the list. Adam and I flag the Department of Justice’s release of basic rules for what I am calling the Euroappeasement court: the quasi-judicial body that will hear European complaints that the U.S. is not living up to human rights standards that no country in Europe even pretends to live up to.
Nov 8, 2022 • 49min

AI-splaining

The war that began with the Russian invasion of Ukraine grinds on. Cybersecurity experts have spent much of 2022 trying to draw lessons about cyberwar strategies from the conflict. Dmitri Alperovitch takes us through the latest lessons, cautioning that all of them could look different in a few months, as both sides adapt to the other’s actions. David Kris joins Dmitri to evaluate a Microsoft report hinting that China may be abusing its recent edict requiring that software vulnerabilities be reported first to the Chinese government. The temptation to turn such reports into zero-day exploits may be irresistible, and Microsoft notes with suspicion a recent rise in Chinese zero-day exploits. Dmitri worried about just such a development while serving on the Cyber Safety Review Board, but he is not yet convinced that we have the evidence to prove the case against the Chinese mandatory disclosure law. Sultan Meghji keeps us in Redmond, digging through a deep Protocol story on how Microsoft has helped build Artificial Intelligence (AI) in China. The amount of money invested, and the deep bench of AI researchers from China, raises real questions about how the United States can decouple from China—and whether China may eventually decide to do the decoupling. I express skepticism about the White House’s latest initiative on ransomware, a 30-plus nation summit that produced a modest set of concrete agreements. But Sultan and Dmitri have been on the receiving end of deputy national security adviser Anne Neuberger’s forceful personality, and they think we will see results. We’d better. Banks reported that ransomware payments doubled last year, to $1.2 billion. David introduces the high-stakes struggle over when cyberattacks can be excluded from insurance coverage as acts of war. A recent settlement between Mondelez and Zurich has left the law in limbo. Sultan tells me why AI is so bad at explaining the results it reaches. He sees light at the end of the tunnel. 
I see more stealthy imposition of woke academic values. But we find common ground in trashing the Facial Recognition Act, a lefty Democrat bill that throws together every bad proposal to regulate facial recognition ever put forward and adds a few more. A red wave will be worth it just to make sure this bill stays dead. Finally, Sultan reviews the National Security Agency’s report on supply chain security. And I introduce the elephant in the room, or at least the mastodon: Elon Musk’s takeover at Twitter and the reaction to it. I downplay the probability of CFIUS reviewing the deal. And I mock the Elon-haters who fear that scrimping on content moderation will turn Twitter into a hellhole that includes *gasp!* Republican speech. Turns out that they are fleeing Twitter for Mastodon, which pretty much invented scrimping on content moderation.
Nov 1, 2022 • 44min

Coming Soon: TwitTok!

You heard it on the Cyberlaw Podcast first, as we mash up the week’s top stories: Nate Jones commenting on Elon Musk’s expected troubles running Twitter at a profit and Jordan Schneider noting the U.S. government’s creeping, halting moves to constrain TikTok’s sway in the U.S. market. Since Twitter has never made a lot of money, even before it was carrying loads of new debt, and since pushing TikTok out of the U.S. market is going to be an option on the table for years, why doesn’t Elon Musk position Twitter to take its place?  It’s another big week for China news, as Nate and Jordan cover the administration’s difficulties in finding a way to thwart China’s rise in quantum computing and artificial intelligence (AI). Jordan has a good post about the tech decoupling bombshell. But the most intriguing discussion concerns China’s remarkably limited options for striking back at the Biden administration for its harsh sanctions. Meanwhile, under the heading, When It Rains, It Pours, Elon Musk’s Tesla faces a criminal investigation over its self-driving claims. Nate and I are skeptical that the probe will lead to charges, as Tesla’s message about Full Self-Driving has been a mix of manic hype and lawyerly caution.  Jamil Jaffer introduces us to the Guacamaya “hacktivist” group whose data dumps have embarrassed governments all over Latin America—most recently with reports of Mexican arms sales to narco-terrorists. On the hard question—hacktivists or government agents?—Jamil and I lean ever so slightly toward hacktivists.  Nate covers the remarkable indictment of two Chinese spies for recruiting a U.S. law enforcement officer in an effort to get inside information about the prosecution of a Chinese company believed to be Huawei. Plenty of great color from the indictment, and Nate notes the awkward spot that the defense team now finds itself in, since the point of the operation seems to have been, er, trial preparation.  
To balance the scales a bit, Nate also covers suggestions that Google's former CEO Eric Schmidt, who headed an AI advisory committee, had a conflict of interest because he also invested in AI startups. There’s no suggestion of illegality, though, and it is not clear how the government will get cutting edge advice on AI if it does not get it from investors like Schmidt. Jamil and I have mildly divergent takes on the Transportation Security Administration's new railroad cybersecurity directive. He worries that it will produce more box-checking than security. I have a similar concern that it mostly reinforces current practice rather than raising the bar. And in quick updates: The Federal Trade Commission has made good on its promise to impose consent decree obligations on CEOs as well as companies. The first victim is the CEO of Drizly. France has imposed the maximum possible fine on Clearview AI for not defending a General Data Protection Regulation (GDPR) case—unsurprisingly, because Clearview AI does no business in France. I offer this public service announcement: Given the risk that your Prime Minister’s phone could be compromised, it’s important to change it every 45 days.
