cross-posted from: https://lemmings.world/post/21993947

Since I suggested that I’m willing to hook my computer to an LLM model and to a Mastodon account, I’ve gotten vocal anti-AI sentiments. I’m wondering if the fediverse has made a plug-in to find bots larping as people. As of now I haven’t made the bot, and I won’t disclose when I do make it.

  • @[email protected]
    10 · 23 hours ago

    I’m starting to think this account is the LLM experiment. A poor one, though. It just keeps saying the same thing and not accepting new information.

  • @[email protected]
    11 · edit-2 · 1 day ago

    What I am failing to understand is: why?

    Is this just for some petty motivation, like “proving” that people cannot easily detect the difference between text from an LLM and text from an actual person? If that is the case, can’t you just spare yourself all this work and look at the extensive studies that measure exactly this?

    Or perhaps it is something more practical, and you’ve already built something that you think is useful and it would require lots of LLM bots to work?

    Or is it that you fancy yourself too smart for the rest of us, and you will feel superior by having something that can show us for fools for thinking we can discern LLMs from “organic” content?

    • FaceDeer
      1 · 23 hours ago

      What I am failing to understand is: why?

      People do things for fun sometimes. You could ask this about almost anything that people do that isn’t directly and immediately related to survival. Why do people play basketball? It’s just pointlessly bouncing a ball around in a room, following arbitrary rules that only serve to make the apparent goal of getting it through the hoop harder.

      • @[email protected]
        5 · 23 hours ago

        People do things for fun sometimes.

        This is not the same as playing basketball. Unleashing AI bots “just for the fun of it” ends up effectively poisoning the well.

        • FaceDeer
          -1 · 23 hours ago

          No, it’s not the same. I was using basketball as an analogy. Someone who doesn’t enjoy basketball wouldn’t “get it”, just as you’re not “getting” the fun that can come from building and playing around with AI bots. Different people find different things to be fun.

          • SharkAttak
            1 · 16 hours ago

            Yeah but this would be like having fun building and playing around with basket-bots, and then unleashing them in the middle of a game. I don’t think anybody would find it funny…

            • FaceDeer
              1 · 13 hours ago

              I’m not making any statements about what other people may think about it. The question was why this guy is doing this thing, and I expect it’s because he finds it fun.

              I’ve seen plenty of weird bots on Reddit over the years that had no point to their existence other than, presumably, having been fun to code. This is probably one such.

          • @[email protected]
            2 · 23 hours ago

            I completely understood your analogy, and I certainly understand the fun in tinkering with technology. What you might be missing is that it seems that OP is planning to write a bunch of bots in here and then test how well people can detect them, and that affects other people.

            • FaceDeer
              0 · 19 hours ago

              Right, and this is presumably something he finds fun. You were asking why, I was explaining why.

    • @[email protected]OP
      -8 · 24 hours ago

      Why would be easy to understand. To implement counter AI measures, best way to counter AI is to implement it.

      Btw I like the Lemmy hostility

      • @[email protected]
        4 · 23 hours ago

        To implement counter AI measures, best way to counter AI is to implement it.

        You are jumping to this conclusion with no real indication that it’s actually true. The best we get in any type of arms race is a forced stalemate due to Mutually Assured Destruction. With AI/“counter” AI, you are bringing a medicine that is worse than the disease.

        Feel free to go ahead, though. The more polluted you make this environment, the more people will realize that it is not sustainable unless we start charging everyone and/or adopt a very strict Web of Trust.

        • @[email protected]OP
          -5 · 23 hours ago

          Well, the only way to know if I’m right or wrong with certainty is to test myself. It’s like we both don’t know if I’m right. I’m going to figure it out, because I’ll investigate and I’ll have fun investigating.

          The alternative is we don’t do anything about the LLM bots on the fediverse and they just integrate in. Also, there’s only one way to see if your MAD is correct.

          • @[email protected]
            8 · 23 hours ago

            You don’t do tests in an actual production environment. It is unethical and irresponsible.

            Feel free to do your experiments on your servers, with people who are aware that they are being subject to some type of experiment. Anything else and I will make sure to get as many admins as possible to ban you and your bots from the federation.

              • @[email protected]
                4 · 21 hours ago

                You want to write software that subverts the expectations of users (who come here expecting to chat with other people) and abuses resources provided by others who did not ask to help you with any sort of LLM detection.

                • @[email protected]OP
                  -3 · 21 hours ago

                  That doesn’t answer my question, and it’s not coherent. Like, I’m apparently “abusing resources” when I use a bot, but not when I use a bot to make a leaderboard that tracks fediverse streamers’ stats, or when I make the content with my fleshy brain, using just my own resources.

  • Corgana
    4 · edit-2 · 23 hours ago

    Moderation on the Fediverse is different than on commercial platforms because it’s context-dependent instead of rules-dependent. That means that a user account (bot or otherwise) that does not contribute to the spirit of a community will not be welcomed.

    There is largely no incentive to run an LLM that is a constructive member of a community; bots are built to push an agenda or product, or to exhibit generally disruptive behavior. Those things are unwelcome in spaces built for discussion. So mods/admins don’t need to know “how to identify a bot”, they need to know “how to identify unwanted behavior”.

    • @[email protected]OP
      -2 · 21 hours ago

      There is, and it’s propaganda. Even I knew AI has been used in propaganda for months now.

  • @[email protected]
    16 · 1 day ago

    Why don’t you make something like r/SubredditSimulator? It would be cool to see what modern LLMs can do in this respect.

    • FaceDeer
      5 · 23 hours ago

      I actually wandered away from the SubredditSimulator successor subreddits because even with GPT2 they were “too good”, they lost their charm. Back when SubredditSimulator was still active it was using simple Markov chain based text generators and they produced the most wonderfully bonkers nonsense, that was hilarious. Modern AIs just sound like regular people, and I get that everywhere already.

    • @Valmond
      12 · 1 day ago

      A fun twist could be letting people post and LLMs answer, each with its specific angle.

  • hendrik
    11 · edit-2 · 1 day ago

    Lemmy and other Fediverse software have a box to tick in the profile settings that shows an account is a bot. Other people can then choose to filter those accounts out or read the stuff. Usually we try to cooperate. Open warfare between bots and counter-bots isn’t really a thing. We do this for spam and ban-evasion, though.
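
The box mentioned above maps to an ordinary account setting in Lemmy's HTTP API; below is a minimal sketch of building the request body for it, assuming the `bot_account` field of the `save_user_settings` endpoint (field names and auth handling differ across Lemmy versions, so verify against your instance before relying on this):

```python
import json

# Hypothetical request body for Lemmy's save_user_settings endpoint
# (PUT /api/v3/user/save_user_settings). "bot_account" is assumed from
# Lemmy's SaveUserSettings API and may differ by version; authentication
# (bearer token vs. body field) also varies and is omitted here.
def save_settings_body(is_bot: bool = True) -> str:
    """Build the JSON body that marks the account as a bot."""
    return json.dumps({"bot_account": is_bot})

print(save_settings_body())  # → {"bot_account": true}
```

Mastodon exposes the same idea as a "this is a bot" toggle on the account; either way it is a cooperative flag that readers can filter on, not an enforcement mechanism.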

    • @[email protected]OP
      -1 · 24 hours ago

      I know, and it’s up to the account’s author to do that. I did when I made my bot on Mastodon. And countermeasures against AI should start.

      • hendrik
        2 · edit-2 · 22 hours ago

        I really don’t think this place is about bot warfare. Usually our system works well. I’ve met one person who used ChatGPT as a form of experiment on us, and I talked a bit to them. Most people come here to talk, share the daily news, or argue politics, with the regular Linux question in between. It’s mostly genuine. Occasionally I have to remind someone to tick the correct boxes, mostly for nsfw, because the bot owners generally behave and set this correctly on their own. And for people who like bot content, we already have X, Reddit and Facebook… I think that would be a good place for this, since they already have a good amount of synthetic content.

        • @[email protected]OP
          -3 · 21 hours ago

          Lol, not the place for bot warfare 😆. That’s like saying America isn’t a place for class warfare, and yet the rich already mobilized. Plus someone is probably doing the same thing as me and hasn’t disclosed it.

          • hendrik
            1 · edit-2 · 20 hours ago

            It is like I said. People on platforms like Reddit complain a lot about bots. This platform, on the other hand, is kind of supposed to be the better version of that, hence not about the same negative dynamics. And I can still tell ChatGPT’s unique style apart from a human. And once you go into detail, you’ll notice the quirks or the intelligence of your conversational partner. So yeah, some people use ChatGPT without disclosing it. You’ll stumble across that when reading AI-generated article summaries and so on. You’re definitely not the first person with that idea.

            • @[email protected]OP
              0 · 19 hours ago

              Reddit is different than the fediverse. They work on different principles, and I’d argue the fediverse is very libertarian.

              Is there any way you can rule out survivorship bias? Plus I’m already doing preliminary stuff: I’m looking into making responses shorter so that there’s less information to go on, and trying different models.

              • hendrik
                1 · 18 hours ago

                What kind of models are you planning to use? Some of the LLMs you run yourself? Or the usual ChatGPT/Grok/Claude?

                • @[email protected]OP
                  2 · 15 hours ago

                  So far I’ve experimented with Llama 3.2 via Ollama (I don’t have enough RAM for 3.3) and DeepSeek-R1 7B (discovered that it’s verbose and asks a lot of questions), and I’ll try Phi-4 later. I could use the ChatGPT models since I have tokens. Ironically, I’m thinking about making a genetic algorithm of prompt templates and a confidence check. It’s oddly meta.
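
The "genetic algorithm of prompt templates" idea above can be sketched minimally as follows. The fitness function here is a placeholder (it just prefers shorter instructions, echoing the stated goal of giving readers less to go on); a real run would score actual model output, e.g. from Ollama's local API (POST http://localhost:11434/api/generate), and all template strings are illustrative:

```python
import random

# Toy evolution over prompt templates. Fitness is a stand-in for
# "score the model's reply"; real scoring would call a local model.
TEMPLATES = [
    "Reply casually in one sentence: {post}",
    "Answer briefly, no lists, lowercase: {post}",
    "Respond like a tired forum commenter: {post}",
]

def fitness(template: str) -> float:
    # Placeholder: shorter instructions score higher.
    return 1.0 / len(template)

def evolve(population: list, generations: int = 5, seed: int = 0) -> str:
    rng = random.Random(seed)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parent = population[0]
        # "Mutation": reshuffle the instruction words, keep the {post} slot.
        head, _, tail = parent.partition("{post}")
        words = head.split()
        rng.shuffle(words)
        population[-1] = " ".join(words) + " {post}" + tail
    return max(population, key=fitness)

best = evolve(list(TEMPLATES))
print("{post}" in best)  # → True
```

The "confidence check" would slot into `fitness`: generate a reply, ask a second model (or the same one) how bot-like the reply sounds, and feed that score back into selection.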

  • @TootSweet
    13 · edit-2 · 1 day ago

    What would the “bot that finds bots larping as people” do exactly? Ban them? Block or mute them? File reports? DM an admin about them?

    If it’s just for pointing out suspected LLM-generated material, I think humans would be better at that than bots would be, and could block, mute, or file reports as necessary.

    Also, are you saying you intend to make a bot that posts LLM-generated drivel or a bot that detects LLM-generated drivel?

    • @[email protected]OP
      0 · 1 day ago

      At minimum, flag them. Think of something like Amazon fake-review detection, or SponsorBlock: build a database, and the posts matching its entries get eliminated.

      I’ll see if people can pick up on bots. If they can, I’ll see whether they do anything about that.

      I won’t say exactly what I intend, but it will involve LLMs.

      • @[email protected]
        13 · 1 day ago

        Please don’t create a bot account that is not flagged as a bot. There is enough malicious activity that you might not see because mods/admins are doing their job.

        There is no need to increase the volunteer work these people do.

          • @Agrivar
            6 · 24 hours ago

            Well, that’s all the nonsense I needed from you. Blocked.

          • @[email protected]
            6 · 24 hours ago

            This… This is how fediverse works though… You are also on an instance run basically entirely on volunteer work.

            If “we quit” there is no fediverse. On top of it all moderation tools are not mature enough yet.

            You’d rather have your data scraped and sold? Or pay to use a platform like this?

            • @[email protected]OP
              -3 · 24 hours ago

              Lol, nice: apparently stopping moderation means closing instances. Plus, how the fediverse works is redundancy: someone can make an instance, make it complete anarchy, and people can join it.

              • @[email protected]
                4 · 23 hours ago

                Instance administration is also quite time consuming and is also based on volunteer work.

                The vast majority of content on Lemmy will be inaccessible to an instance that is under complete anarchy / unmodded.

                There is the Fediseer project that helps instances block such instances.

                If you spin up an LLM farm instance it’s guaranteed to be blocked in many of the big ones - making your instance a lone island.

                • @[email protected]OP
                  0 · 23 hours ago

                  So they won’t do LLMs because it will lead to instances that aren’t connected to many other instances. Better than the fediverse being gone.

      • @JustAnotherKay
        12 · 1 day ago

        Forcing the Fediverse into your experiment isn’t going to get you into the position you think it will.

  • Rikudou_Sage
    5 · 1 day ago

    Feel free to try, though I don’t trust the LLMs enough to detect other LLMs.

  • @[email protected]
    8 · 1 day ago

    What I would expect to happen is: their posts quickly start getting many downvotes and comments saying they sound like an AI bot. This, in turn, will make it easy for others to notice and block them individually. Other than that, I’ve never heard of automated solutions to detect LLM posting.

    • @[email protected]OP
      -17 · 1 day ago

      Ahhhhh I doubt average Lemmy users are smart enough to detect LLM content. I already thought of a few ways to find LLM bots

      • @Docus
        2 · 20 hours ago

        The further I get down this thread, the more you sound like a person I don’t want to deal with. And looking at the downvotes, I’m not the only one.

        If you want people blocking you, perhaps followed by communities and instances blocking you as well, carry on.

        • @[email protected]OP
          -1 · 19 hours ago

          That’s fine if people don’t want to deal with me; I never interacted with them before this thread (most likely).

      • @[email protected]
        5 · 1 day ago

        Imo their style of writing is very noticeable. You can obscure that by prompting the LLM to deliberately change it, but I think it’s still often noticeable; not only specific wordings, but the higher-level structure of replies as well. At least, that’s always been the case for me with ChatGPT. I don’t have much experience with other models.
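
The "noticeable wordings" point can be made concrete with a toy heuristic. The marker phrases below are anecdotal guesses, not a validated detector; real LLM detection is considerably harder and often inconclusive:

```python
# Toy stylometry: count phrases that (anecdotally) appear more often in
# chat-model output than in casual forum posts. Purely illustrative; a
# real classifier needs labelled data and still yields uncertain verdicts.
MARKERS = ("delve", "tapestry", "as an ai", "it's important to note")

def llm_ish_score(text: str) -> int:
    """Number of marker-phrase occurrences in the (lowercased) text."""
    lowered = text.lower()
    return sum(lowered.count(marker) for marker in MARKERS)

print(llm_ish_score("It's important to note that we must delve deeper."))  # → 2
```

Word-level markers miss the higher-level structural tells (intro-body-conclusion replies, bullet lists, relentless hedging), which is exactly why human readers often outperform simple filters here.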

        • @Docus
          1 · 20 hours ago

          That’s not entirely true. University assignments are scanned for signs of LLM use, and even with several thousand words per assignment, a not insignificant proportion comes back with an ‘undecided’ verdict.

          • @[email protected]
            1 · 19 hours ago

            With human post-processing it’s definitely more complicated. Bots usually post fully automated content, without human supervision and editing.