- cross-posted to:
- aboringdystopia
cross-posted from: https://feddit.uk/post/18601017
The truth is, it’s getting harder to describe the extent to which a meaningful percentage of Americans have dissociated from reality. As Hurricane Milton churned across the Gulf of Mexico last night, I saw an onslaught of outright conspiracy theorizing and utter nonsense racking up millions of views across the internet. The posts would be laughable if they weren’t taken by many people as gospel. Among them: Infowars’ Alex Jones, who claimed that Hurricanes Milton and Helene were “weather weapons” unleashed on the East Coast by the U.S. government, and “truth seeker” accounts on X that posted photos of condensation trails in the sky to baselessly allege that the government was “spraying Florida ahead of Hurricane Milton” in order to ensure maximum rainfall, “just like they did over Asheville!”
As Milton made landfall, causing a series of tornadoes, a verified account on X reposted a TikTok video of a massive funnel cloud with the caption “WHAT IS HAPPENING TO FLORIDA?!” The clip, which was eventually removed but had been viewed 662,000 times as of yesterday evening, turned out to be from a video of a CGI tornado that was originally published months ago. Scrolling through these platforms, watching them fill with false information, harebrained theories, and doctored images—all while panicked residents boarded up their houses, struggled to evacuate, and prayed that their worldly possessions wouldn’t be obliterated overnight—offered a portrait of American discourse almost too bleak to reckon with head-on.
Even in a decade marred by online grifters, shameless politicians, and an alternative right-wing-media complex pushing anti-science fringe theories, the events of the past few weeks stand out for their depravity and nihilism. As two catastrophic storms upended American cities, a patchwork network of influencers and fake-news peddlers have done their best to sow distrust, stoke resentment, and interfere with relief efforts. But this is more than just a misinformation crisis. To watch as real information is overwhelmed by crank theories and public servants battle death threats is to confront two alarming facts: first, that a durable ecosystem exists to ensconce citizens in an alternate reality, and second, that the people consuming and amplifying those lies are not helpless dupes but willing participants…
… “The primary use of ‘misinformation’ is not to change the beliefs of other people at all. Instead, the vast majority of misinformation is offered as a service for people to maintain their beliefs in face of overwhelming evidence to the contrary”…
… As one dispirited meteorologist wrote on X this week, “Murdering meteorologists won’t stop hurricanes.” She followed with: “I can’t believe I just had to type that”…
Hot take/unpopular opinion/downvote me to oblivion incoming: Section 230 is actively harmful and should be repealed.
Attach liability and the tech bros will change course on moderation so fast you’ll end up with a TBI.
While I wasn’t directly involved with content moderation, the abuse/fraud circle overlaps quite a lot since it’s the same group of assholes doing sketchy shit. (And then not paying for it, which is where I got involved.)
The really dirty secret is that the tech bros could absolutely get rid of state-sponsored propaganda bots if they wanted to; it’s just not in their interests to do so. Finding bot accounts is usually not THAT hard, since any given bot will expose a variety of fingerprints. The good ones (which I always assumed were state-sponsored) are harder to find because they don’t do stupid shit like reuse the same user agent, follow a naming pattern you can match with a regex, sign up from the same email domain, or all connect from a single ASN or VPN provider, etc. But you can still find them if you have enough determination, enough logs, and the ability to correlate a large enough data set.
(My qualifications to even make that statement: ~8 years chasing botnets around a major cloud hosting provider and booting somewhere in the rough area of 1,000,000 malicious user accounts. It absolutely wasn’t all of them, and they popped back up as soon as they figured out how they were being caught, with a new set of TTPs to get around whatever changes had been made to keep them out, so it was fun if never-ending.)
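To make the fingerprint-correlation idea concrete, here’s a minimal sketch. Everything in it is hypothetical: the `signups.csv` export, its columns, the username pattern, the domain list, and the thresholds are stand-ins for whatever telemetry a real provider keeps, not any actual platform’s tooling.

```python
# Hedged sketch: correlate cheap bot fingerprints from a hypothetical signup export
# with columns: username, email, user_agent, asn.
import pandas as pd

signups = pd.read_csv("signups.csv")  # hypothetical export of recent account signups

# Flag usernames that match a suspected generator pattern, e.g. word_word1234
# (assumption: pattern found by eyeballing a handful of known-bad samples).
signups["name_hit"] = signups["username"].str.match(r"^[a-z]+_[a-z]+\d{4}$")

# Flag disposable / single-use email domains seen in earlier abuse waves (placeholder list).
bad_domains = {"mailinator.com", "tempmail.example"}
signups["email_hit"] = signups["email"].str.split("@").str[-1].isin(bad_domains)

# Surface ASN + user-agent combinations with an improbable number of signups.
clusters = (
    signups.groupby(["asn", "user_agent"])
    .size()
    .reset_index(name="count")
    .query("count > 50")  # arbitrary threshold: one network, one client, many accounts
)

suspects = signups[signups["name_hit"] | signups["email_hit"]]
print(f"{len(suspects)} accounts hit a fingerprint; {len(clusters)} dense ASN/UA clusters")
```

The specific checks don’t matter much; the point is that once usernames, emails, and network data can be joined in one place, the lazy bots light up immediately, and the determined ones require exactly the kind of large-scale correlation described above.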
Also, fun thing: those shitty AI chatbots everyone hates? Before they were “AI” they were “machine learning,” and you can absolutely feed a proper ML model a whole pile of, say, Nazi propaganda posts, then dump a giant dataset on it and tell it to go find all the Nazis based on content and phrasing. It’s a shockingly simple trick, and it’s why Elon immediately killed Twitter’s API and firehose: with the data Twitter was providing, one guy who knows Python and has a laptop could find all the Nazis, all the friends of the Nazis, and all the bots boosting the Nazis.
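For the curious, that “one guy with a laptop” trick is roughly a bog-standard text classifier. This is a hedged sketch under made-up assumptions, not anyone’s production system: `labeled_posts.csv`, `firehose.csv`, and the 0.9 cutoff are all illustrative.

```python
# Hedged sketch: train a simple text classifier on labeled propaganda posts,
# then score a large unlabeled dump and flag high-confidence hits for review.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled = pd.read_csv("labeled_posts.csv")   # hypothetical: columns text, label (1 = propaganda)
firehose = pd.read_csv("firehose.csv")       # hypothetical: column text, plus author ids, etc.

# Turn post text into word/phrase features; bigrams pick up characteristic phrasing.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=5, max_features=100_000)
X_train = vectorizer.fit_transform(labeled["text"])

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, labeled["label"])

# Score everything in the dump and keep only the high-confidence hits.
firehose["score"] = clf.predict_proba(vectorizer.transform(firehose["text"]))[:, 1]
flagged = firehose[firehose["score"] > 0.9]  # arbitrary cutoff; tune against a holdout set
print(f"flagged {len(flagged)} of {len(firehose)} posts for review")
```

From the flagged accounts you can then walk follower and amplification graphs outward, which is the “friends of the Nazis and bots boosting the Nazis” part; that only works if you still have firehose-level access to the data.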
I’ll also push back just slightly on the “there used to be responsibility!” point: it’s true, but only just barely. When Limbaugh was cheering and thanking god for dead gays and reading their names on the radio, who exactly was going to stop him? Reagan’s FCC?
That reasoning only works if you think the government can be trusted to act in your best interests and isn’t subject to a rug pull every 4 years, which, unfortunately, it mostly is.
That’s not to say you couldn’t build a legitimate and viable regulatory structure; it’s more that the agencies the US has that could regulate this (the FCC and maybe the FTC) are utter crap right now, though the FTC is making some progress on antitrust for the first time in decades… unless Diaper Don wins, and then welp, so much for that.
100% agree.
I think something needs to replace Section 230 (like you said, building such a regulatory structure is hard), but it needs to go before then. It was basically put forth with the trust that fledgling tech companies wouldn’t abuse it to this extent… but they are no longer fledgling, and they have abused it.