

LW - SB 1047: Final Takes and Also AB 3211 by Zvi
Aug 28, 2024
34:02
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SB 1047: Final Takes and Also AB 3211, published by Zvi on August 28, 2024 on LessWrong.
This is the endgame. Very soon the session will end, and various bills either will or won't head to Newsom's desk. Some will then get signed and become law.
Time is rapidly running out to have your voice impact that decision.
Since my last weekly, we got a variety of people coming in to stand for or against the final version of SB 1047. There could still be more, but probably all the major players have spoken at this point.
So here, today, I'm going to round up all that rhetoric, all those positions, in one place. After this, I plan to be much more stingy about talking about the whole thing, and only cover important new arguments or major news.
I'm not going to get into the weeds arguing about the merits of SB 1047 - I stand by my analysis in the Guide to SB 1047, and the reasons I believe it is a good bill, sir.
I do however look at the revised AB 3211. I was planning on letting that one go, but it turns out it has a key backer, and thus seems far more worthy of our attention.
The Media
I saw two major media positions taken, one pro and one anti.
Neither worried itself about the details of the bill contents.
The Los Angeles Times Editorial Board endorses SB 1047, on the grounds that the Federal Government is not going to step up, relying on an outside view and big-picture analysis. I doubt they thought much about the bill's implementation details.
The Economist is opposed, in a quite bad editorial that calls belief in the possibility of catastrophic harm 'quasi-religious' without argument, uses that to dismiss the bill, and instead calls for regulations that address mundane harms. That's actually it.
OpenAI Opposes SB 1047
The first half of the story is that OpenAI came out publicly against SB 1047.
They took four pages to state their only criticism, in what could have and should have been a Tweet: that it is a state bill, and they would prefer this be handled at the Federal level. To which I say: okay, I agree that would have been first best, and that is one of the best real criticisms.
I strongly believe we should pass the bill anyway, because I am a realist about Congress. I do not expect them to act in similar fashion any time soon, even if Harris wins and certainly if Trump wins. If they pass a similar bill that supersedes this one, I will be happily wrong.
Except the letter is four pages long, so they can echo various industry talking points, and echo their echoes. In it, they say: look at all the things we are doing to promote safety, and the bills before Congress, as if to imply the situation is being handled. Once again, we see the argument 'this might prevent CBRN risks, but it is a state bill, so doing so would not only not be first best, it would be bad, actually.'
They say the bill would 'threaten competitiveness' but provide no evidence or argument for this. They echo, once again without offering any mechanism, reason or evidence, Rep. Lofgren's unsubstantiated claims that this risks companies leaving California. The same with 'stifle innovation.'
In four pages, there is no mention of any specific provision that OpenAI thinks would have negative consequences. There is no suggestion of what the bill should have done differently, other than to leave the matter to the Feds. A duck, running after a person, asking for a mechanism.
My challenge to OpenAI would be to ask: if SB 1047 were a Federal law that left all responsibilities in the bill to the US AISI, NIST and the Department of Justice, funded a national rather than state Compute fund, and was otherwise identical, would OpenAI then support it? Would they say their position is Support if Federal?
Or, would they admit that the only concrete objection is not their True Objection?
I would also confront them with AB 3211, b...