- cross-posted to:
- shockingnews
Elon Musk, the owner of X, criticized advertisers with expletives on Wednesday at The New York Times’s DealBook Summit.
By following those questionable feeds, and only those feeds, on a brand-new account until they were able to get ads to show up alongside that content, and then claiming that ads always show up beside those feeds.
Yes, so they were able to get them to show up. That means there are no mechanisms in place at Twitter that would prevent those ads from showing up next to Nazi posts. Which means the companies absolutely had a reason to pull ad funding. If you owned a company and were spending millions on ads, would you be okay knowing that it's possible your ad shows up next to Nazi posts or Holocaust denial? Would it matter that it doesn't happen most of the time? If it's possible, then Twitter has massively dropped the ball.
Where in the article do they say those ads "always" show up beside Nazi posts? They outlined their methods and showed screenshots for proof. Even the CEO confirmed that those ads did show up next to Nazi posts; she just claimed it didn't happen often. Media Matters never claimed it happened every time with every ad. If you had above a 5th-grade reading level or had read the original article, you'd know better.
With or without people monitoring Twitter, you'll still get that type of content on any platform. You can only reduce the chance, never completely stop it. The point is you would have to be in those groups, following those feeds, to see that content (allegedly). If it took Media Matters following those groups for hours and then following Disney or any other company to show that, then Twitter is working to make it harder for that to happen. If this is about a company's image, even if a Nazi account did happen to see ads in their feed, unless they were out there telling you, you would be none the wiser. I highly doubt there would be the same reaction if they did the same thing on any left-leaning platform that toed the line.
There would be the same reaction if FB or Instagram or any other big platform was found to be allowing ads next to objectionable content (content the company behind the ads would not want associated with their brand) AND that platform said it wasn't an issue, refused to change policies to prevent it, and told advertisers to go fuck themselves.
Twitter could absolutely have filters in place to prevent ads from showing up next to literal Nazi posts with a simple word list. The posts Media Matters showed were not subtle or underhanded, they were saying the quiet parts out loud. It would be trivial to prevent ads entirely from those posts, but then they’d lose ad space. It would mean less if this had happened with borderline posts or posts using coded language.
That isn't gaming the system. That means if someone follows mostly far-right accounts, they'll see the ads show up next to far-right content.
If they make no effort to deprioritize Nazi content or treat it differently, then ads will run with that content. They have to purposely sandbag that content so it doesn’t appear.
Honestly, the methodology here just confirms the argument. If someone is following mostly Nazis, they'll be suggested content that is mostly Nazis, and ads are going to run alongside it. I suspect that's not a negligible share of accounts.