He shows most of the signs of the misinformation accounts:

  • Wants to repeatedly tell basically the same narrative and nothing else
  • Narrative is fundamentally false
  • Not interested in any kind of conversation, or in learning that what he’s posting runs counter to the values he claims to profess

I also suspect that it’s not a coincidence that this is happening just as the Elon Musks of the world are ramping up attacks on Wikipedia, especially because it is a force for truth in the world that’s less corruptible than a lot of the others, and it tends to fight back legally if someone tries to interfere with the free speech or safety of its editors.

Anyway, YSK. I reported him as misinformation, but who knows if that will lead to any result.

Edit: Number of people real salty that I’m talking about this: Lots

  • @kava · 10 points · 15 hours ago

    To be fair, it’s virtually impossible to tell whether a text was written by an AI or not. If some motivated actor is willing to spend money to generate quality LLM output, they can post as much as they want on virtually all social media sites.

    The internet is in the process of eating itself as we speak.

    • @douglasg14b · 5 points · 8 hours ago

      You don’t analyze the text, necessarily; you analyze heuristics, behavioral patterns, sentiment, etc. It’s data analysis and signal processing.

      You, as a user, probably can’t, because you lack information that only the platform itself is in a position to gather and aggregate.

      There’s a science to it, and it’s not perfect. Some companies keep their solutions guarded because of the time and money required to mature their systems & ML models to identify artificial behavior.

      But it requires mature tooling at the very least, and Lemmy has essentially none of that.
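The kind of behavioral heuristic described above can be sketched in a few lines. This is a toy illustration only: the coefficient-of-variation signal and the idea of flagging overly regular posting gaps are assumptions made for the example, not anything Lemmy or any platform is known to run.

```python
from statistics import mean, stdev

def interval_regularity(post_times: list[float]) -> float:
    """Coefficient of variation of the gaps between posts (in hours).

    Working hypothesis: human posting is bursty (gaps vary a lot),
    while naive schedulers post at near-constant gaps, so a very low
    value is suspicious.
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # too little data to judge either way
    return stdev(gaps) / mean(gaps)

# a bot posting every 2 hours on the dot vs. a bursty human:
bot_score = interval_regularity([0, 2, 4, 6, 8, 10])          # 0.0, perfectly regular
human_score = interval_regularity([0, 0.2, 0.5, 9, 9.1, 26])  # well above 1
```

A real system would combine many such signals (timing, sentiment variance, reply graphs) and feed them into a model, rather than thresholding one number.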

      • @kava · 2 points · edited · 8 hours ago

        yes of course there are many different data points you can use. along with complex math, you can feed a lot of these data points into machine learning models and get useful systems that can red-flag certain accounts, then route them into processes with more scrutiny that require more resources (such as a human reviewing)

        websites like chess.com do similar things to find cheaters. and they (along with lichess) have put out some interesting material going over what their process looks like

        here i have two things. one is that lichess, which is mostly developed and maintained by a single individual, manages to run an effective anti-cheat system. so I don’t think it’s impossible for lemmy to build these types of heuristics and behavioral tracking

        the second thing is that these new AIs are really good. it’s not just the text, but the other items you mentioned. for example, say I train a machine learning model and a separate LLM on all of reddit’s history. the first model is meant to emulate all of the “normal” human signals: make it post at hours that match the trends, vary the sentiment in a natural way, and post not at random intervals of time but at intervals that look like a natural distribution. the model will find patterns that we can’t imagine and use those to blend in
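The point about intervals that “look like a natural distribution” can be made concrete. A heavy-tailed log-normal is a common stand-in for human inter-post times; treating it as the “natural” shape here is an assumption for illustration, not a description of any real botnet:

```python
import math
import random

def humanlike_gaps(n: int, median_hours: float = 3.0, sigma: float = 1.0) -> list[float]:
    """Sample n inter-post gaps from a log-normal distribution.

    Unlike uniform or fixed gaps, log-normal gaps are heavy-tailed:
    many short bursts plus occasional long silences, which is closer
    to how real accounts behave than a fixed posting schedule.
    """
    mu = math.log(median_hours)  # the log-normal median is exp(mu)
    return [random.lognormvariate(mu, sigma) for _ in range(n)]
```

Gaps sampled this way mix short bursts with occasional long silences, so they don’t show the clockwork regularity that a fixed posting schedule does.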

        so you not only spread the content you want (whether it’s subtle product promotion or nation-state propaganda), you also have a separate model trained to disguise that text as something real

        that’s the issue: it’s not just the text. if you really want to do this right (and people with $$$ have that incentive), then as of right now it’s virtually impossible to prevent a motivated actor from doing it. and we are starting to see this with lichess and chess.com.

        the next generation of cheaters aren’t just using chess engines like Stockfish, but AIs trained to play like humans. it’s becoming increasingly difficult.

        the only reason it hasn’t completely taken over the platform is because it’s expensive. you need a lot of computing power to do this effectively. and most people don’t have the resources or the technical ability to make this happen.

    • @[email protected] · 1 point · edited · 8 hours ago

      But something like Reddit at least potentially has the resources to throw some money at the problem. They can employ advanced firewalls and other anti-bot/anti-AI thingies. It’s very possible that they’re pioneering some state-of-the-art stuff in that area.

      Lemmy is a few commies and their pals. Unless China is bankrolling them, they’re out of their league.

    • @[email protected] · 4 points · edited · 11 hours ago

      > …spend money to generate quality LLM output, they can post as much as they want on virtually all social media sites.

      $20 for a chatgpt pro account and fractions of pennies to run a bot server. It’s really extremely cheap to do this.

      I don’t have an answer for how to stop the “motivated actor” beyond mass tagging / community effort.

      • @kava · 5 points · 11 hours ago

        > $20 for a chatgpt pro account and fractions of pennies to run a bot server. It’s really extremely cheap to do this.

        OpenAI has checks for this type of thing: they limit the number of requests per hour on the regular $20 subscription

        you’d have to use the API, and that comes at a cost per request depending on which model you’re using. it can get expensive very quickly depending on what scale of bot manipulation you’re going for
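As a rough illustration of how that per-request cost scales (the token count and price below are placeholder numbers, not current OpenAI rates):

```python
def api_cost_usd(posts: int, tokens_per_post: int, usd_per_million_tokens: float) -> float:
    """Back-of-envelope output-token cost for a posting campaign."""
    return posts * tokens_per_post * usd_per_million_tokens / 1_000_000

# 10,000 posts of ~200 tokens at a hypothetical $10 per million tokens:
print(api_cost_usd(10_000, 200, 10.0))      # 20.0  -- trivial at small scale
# the same campaign at a million posts:
print(api_cost_usd(1_000_000, 200, 10.0))   # 2000.0 -- scale is where cost bites
```

The point of the sketch: per-request pricing is negligible for a hobbyist but compounds quickly at the volumes a nation-state or marketing operation would want.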

      • @douglasg14b · 1 point · edited · 8 hours ago

        Heuristics, data analysis, signal processing, ML models, etc.

        It’s about identifying artificial behavior, not artificial text. We can’t really identify artificial text, but behavioral patterns are a higher bar for botters to clear.

        The community isn’t in a position to do anything about it; the platform itself is the only one in a position to gather the data needed to even start targeting the problem.

        You can’t target the problem without first collecting and aggregating that data, and Lemmy currently does little to enable that.