Summary
X, owned by Elon Musk, is suing to block California’s AB 2655, a law requiring social media platforms to remove “materially deceptive content” like deepfakes about politicians within 120 days of an election.
The lawsuit argues the law violates the First Amendment and Section 230 of the Communications Decency Act, claiming it would lead to broad censorship of political speech to avoid enforcement costs.
A similar California law, AB 2839, was blocked last month for overreaching into constitutionally protected speech, including parody and satire.
AB 2655 is set to take effect in 2024.
Hmm.
I’d think that would also affect Lemmy instance operators if it entered into force.
The text and its scope would also be interesting, because I can’t see a practical way for, say, an instance operator in Bakersfield, California, to evaluate the truth of claims about an election in, say, Malaysia, if the law extends to all elections. I suspect that even for California elections alone, acting as an arbiter of truth would be tough to do reasonably.
EDIT: Looking at the bill text, it probably does not currently apply, as the law looks like it has a floor on the number of California users, and there aren’t yet enough users on the Threadiverse:
It’s also interesting that traditional media apparently is not covered:
It is apparently specific to elections in California.
My guess is that it’ll probably get overturned on some First Amendment challenge, but we shall see…
The EU’s Digital Services Act already imposes similar obligations on platforms to combat disinformation.
https://chambers.com/articles/the-digital-services-act-dsa-and-combating-disinformation-10-key-takeaways
I don’t think it’s practical or even really possible to remove all fake content. The safer and far more effective solution IMO is to digitally sign authentic content instead, and assume that anything unsigned is fake.
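The sign-and-verify idea above can be sketched in a few lines. This is only an illustration: real content-provenance schemes (e.g. C2PA) use asymmetric signatures and certificate chains so that verifiers never hold the signing secret, whereas this sketch uses a symmetric HMAC and a made-up publisher key purely to show the "unsigned or tampered means untrusted" rule.

```python
import hmac
import hashlib
from typing import Optional

# Hypothetical publisher key, for illustration only. A real provenance
# scheme would use an asymmetric key pair (e.g. Ed25519), not a shared secret.
PUBLISHER_KEY = b"example-publisher-secret"

def sign(content: bytes) -> str:
    """Signature the publisher attaches at publication time."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()

def is_authentic(content: bytes, signature: Optional[str]) -> bool:
    """Treat anything unsigned, or whose signature fails, as untrusted."""
    if signature is None:
        return False
    expected = hmac.new(PUBLISHER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"Official statement from the campaign."
sig = sign(original)
print(is_authentic(original, sig))               # True: signed and untouched
print(is_authentic(b"Doctored statement.", sig)) # False: content altered
print(is_authentic(original, None))              # False: unsigned, assumed fake
```

The key design point is the default: verification failure and absence of a signature are treated identically, so fakes don’t need to be detected, only authentic content needs to be provable.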
Technology is not the solution to a social problem. Big tech companies have an obligation to make it more difficult for state actors and extremists to amplify obviously false claims about elections and protected minorities.
That’s great in theory. How are you going to force foreign actors to do it?
“Russia, stop spamming Facebook with all that manipulative fake garbage”
“Oh OK you only had to ask. Sorry about that”
Tweak algorithms to limit the reach of new accounts, don’t allow Russians to buy ads or blue checkmarks, and have a team of moderators that moderate based on known bad images, known bad IP addresses, and known bad account-creation patterns. If non-profit researchers are able to uncover botnets, there’s no reason why billion-dollar companies can’t. It’s a cat-and-mouse game, but it’s not acceptable for these companies to put in zero effort. These companies are better funded than the Internet Research Agency.
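The signals listed above (known bad IPs, known bad images, suspicious account-creation patterns) can be combined into a simple risk score that throttles reach rather than hard-banning on any single signal. A minimal sketch follows; the signal names, weights, blocklist entries, and threshold are all hypothetical.

```python
# Example blocklists; real platforms would feed these from moderation
# teams and external researchers. Values here are placeholders.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.24"}
KNOWN_BAD_IMAGE_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}

def risk_score(account: dict) -> int:
    """Sum weighted signals; each one alone is weak, together they add up."""
    score = 0
    if account["ip"] in KNOWN_BAD_IPS:
        score += 3
    if account["avatar_hash"] in KNOWN_BAD_IMAGE_HASHES:
        score += 2
    # Brand-new account posting at bot-like volume.
    if account["age_days"] < 1 and account["posts_per_hour"] > 20:
        score += 2
    return score

def limit_reach(account: dict, threshold: int = 3) -> bool:
    """Throttle distribution once the combined score crosses a threshold."""
    return risk_score(account) >= threshold

bot = {"ip": "203.0.113.7", "avatar_hash": "aabbcc",
       "age_days": 0, "posts_per_hour": 50}
human = {"ip": "192.0.2.10", "avatar_hash": "ddeeff",
         "age_days": 400, "posts_per_hour": 1}
print(limit_reach(bot))    # True: bad IP plus bot-like posting pattern
print(limit_reach(human))  # False: no signals fire
```

Throttling reach instead of banning outright keeps the cost of a false positive low, which matters when the individual signals are noisy.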
How would you rate the odds of all those things happening in order for your scenario to come true?