• @simple
    link
    English
    125
    edit-2
    2 years ago

    I’ve been talking about the potential of the dead internet theory becoming real for more than a year now. With advances in AI it’ll become more and more difficult to tell who’s a real person and who’s just spamming AI stuff. The only giveaway right now is that modern text models are pretty bad at talking casually and staying on the topic at hand. As soon as those problems get fixed (probably less than a year away)? Boom. The internet will slowly implode.

    Hate to break it to you guys but this isn’t a Reddit problem, this could very much happen in Lemmy too as it gets more popular. Expect difficult captchas every time you post to become the norm these next few years.

    • @2dollarsim
      link
      English
      64
      2 years ago

      As an AI language model I think you’re overreacting

    • Rhaedas
      link
      fedilink
      40
      2 years ago

      Just wait until the captchas get too hard for the humans, but the AI can figure them out. I’ve seen some real interesting ones lately.

      • OpenStars
        link
        fedilink
        45
        2 years ago

        There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.

      • Biran
        link
        25
        2 years ago

        I’ve seen many where the captchas are generated by an AI…
        It’s essentially one set of humans programming an AI to prevent an attack from another AI owned by another set of humans. Does this technically make it an AI war?

          • @CIA_chatbot
            link
            2
            2 years ago

            Hey now, this thread is hitting a little too close to home.

        • Unaware7013
          link
          fedilink
          6
          2 years ago

          Adversarial training is pretty much the MO for a lot of the advanced machine learning algorithms you’d see for this sort of a task. Helps the ML learn, and attacking the algorithm helps you protect against a real malicious actor attacking it.
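
          The loop itself is pretty simple. Here’s a toy sketch of the idea (numpy only, with made-up “post features”; real systems would use far richer signals): a detector learns to separate bot posts from human posts, while an attacker keeps nudging the bot posts toward whatever currently fools the detector.

          ```python
          # Toy adversarial training loop: detector vs. attacker.
          import numpy as np

          rng = np.random.default_rng(0)
          human = rng.normal(0.0, 1.0, size=(500, 2))  # stand-in features for human posts
          bots = rng.normal(3.0, 1.0, size=(500, 2))   # bot posts start out obviously different

          def train_detector(X, y, steps=200, lr=0.1):
              """Plain logistic regression trained by gradient descent."""
              w, b = np.zeros(X.shape[1]), 0.0
              for _ in range(steps):
                  p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
                  w -= lr * X.T @ (p - y) / len(y)
                  b -= lr * float(np.mean(p - y))
              return w, b

          for rnd in range(5):
              X = np.vstack([human, bots])
              y = np.concatenate([np.zeros(len(human)), np.ones(len(bots))])  # 1 = bot
              w, b = train_detector(X, y)
              acc = np.mean(((X @ w + b) > 0) == (y == 1))
              print(f"round {rnd}: detector accuracy {acc:.2f}")
              # "Attack": shift bot samples against the detector's weight vector
              # so they score more human; next round the detector has to adapt.
              bots = bots - 0.5 * w / np.linalg.norm(w)
          ```

          Each side’s pressure is exactly what makes the other side better, which is the whole point.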

          • Bizarroland
            link
            fedilink
            1
            2 years ago

            So what you’re saying is that we should train an AI to detect AIs, so that only the human beings could survive on the site. The problem is: how do you train that AI? It would need some sort of meta interface where it could analyze the IP address of every single person that posts and the time frames they post in.

            It would make some sense that a large portion of bots would be run from relatively similar locations IP-wise, since it’s a lot easier to run a large bot farm from a data center than from 1,000 different people’s houses.

            You could probably filter out the most egregious bot farms by doing that. But despite that, some would still slip through.

            After that you would need to train it on heuristics to identify the kinds of conversations these bots would have with each other without knowing the others are bots, knowing that each of them is using LLaMA or GPT and the kinds of conversations that would start.

            I guess the next step would be giving people an opportunity to prove that they’re not bots if they ended up accidentally saying something the way a bot would say it, but then you get into the whole “you need to either pay for access or provide government ID” issue, and that’s its own can of worms.
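
            For the IP/timing part, the first-pass heuristic is only a few lines. Here’s a rough sketch (the log format, the /24 grouping, and the thresholds are all made up for illustration):

            ```python
            # Flag subnets where many distinct accounts post in near-lockstep.
            from collections import defaultdict
            from statistics import pstdev

            # (account, ip, unix_timestamp) -- hypothetical post log
            posts = [
                ("acct1", "203.0.113.10", 1000), ("acct2", "203.0.113.11", 1002),
                ("acct3", "203.0.113.12", 1003), ("acct4", "198.51.100.7", 5000),
            ]

            by_subnet = defaultdict(list)
            for account, ip, ts in posts:
                subnet = ".".join(ip.split(".")[:3])  # crude /24 grouping
                by_subnet[subnet].append((account, ts))

            for subnet, entries in by_subnet.items():
                accounts = {a for a, _ in entries}
                times = [t for _, t in entries]
                # Many accounts + tightly clustered posting times = bot-farm smell.
                if len(accounts) >= 3 and pstdev(times) < 10:
                    print(f"suspicious subnet {subnet}.0/24: {sorted(accounts)}")
            ```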

      • @CIA_chatbot
        link
        14
        2 years ago

        Hell we figured out captchas years ago. We just let you humans struggle with them cuz it’s funny

      • @dani
        link
        10
        2 years ago

        The captchas that involve identifying letters underneath squiggles I already find nearly impossible - Uppercase? Lowercase? J j i I l L g 9 … and so on….

      • Boz (he/him)
        link
        fedilink
        5
        2 years ago

        I’ve already had to switch from the visual ones to the audio ones. Like… how much of a car has to be in the little box? Does the pole count as part of the traffic light?? What even is that microscopic gray blur in the corner??? [/cries in reading glasses]

    • Hypx
      link
      fedilink
      12
      edit-2
      2 years ago

      The only online communities that can exist in the future are ones that manually verify their users. Reddit could’ve been one of those communities, since it had thousands of mods working for free resolving exactly these problems.

      But remove the mods and it just becomes spambot central. Now that that has happened, Reddit will likely be a dead community much sooner than many think.

      • @Boingbong
        link
        English
        2
        2 years ago

        That’s so interesting. I run an 18+ server on Discord with mandatory verification (used to ensure everyone’s an adult, obvs) but didn’t think of it as a way to keep bots out of online communities.

      • @Donjuanme
        link
        English
        4
        2 years ago

        How is that possible? There’s such an easy model if one wanted to cheat the system.

        • BarbecueCowboy
          link
          fedilink
          10
          2 years ago

          ChatGPT isn’t really as smart as a lot of us think it is. What it excels at is formatting data in a way that’s similar to what you’d expect from a human knowledgeable in the subject. That is an amazing step forward in terms of language modeling, but when you get right down to it, it basically grabs the first Google search result and wraps it up all fancy. It only seems good at deductive reasoning if the data it happens to fetch is good at deductive reasoning.

        • eggmasterflex
          link
          English
          2
          2 years ago

          ChatGPT doesn’t actually understand language. It learns patterns in the data it’s been fed (human-generated language) and uses that to generate new, unique data which matches those patterns according to the prompt. In other words, it’s not really “thinking” in that language.

          We understand spelling as a part of language: putting together letters to create words, then forming sentences according to a context. ChatGPT can’t do that, since it doesn’t know how to speak English, only how to follow a list of instructions to form what appears to us as coherent English.

          It also can’t play hangman for the same reason.

          Check out the Chinese room argument.
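
          You can actually see why the spelling games fail by poking at a tokenizer. A quick sketch (assumes the tiktoken package; cl100k_base is the encoding used by GPT-4-era models):

          ```python
          # An LLM sees token chunks, not letters, so "which letters are in
          # this word" is never directly observed during training.
          import tiktoken  # pip install tiktoken

          enc = tiktoken.get_encoding("cl100k_base")
          for word in ["hangman", "strawberry"]:
              token_ids = enc.encode(word)
              pieces = [enc.decode([t]) for t in token_ids]
              print(word, "->", pieces)
          ```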

    • @MeowyNin
      link
      English
      7
      2 years ago

      Not even sure of an effective solution. Whitelist everyone? How can you even tell who’s real?

      • Cyv_
        link
        fedilink
        8
        2 years ago

        So my dumb guess, nothing to back it up: I bet we see government ID tied to accounts as a regular thing. I vaguely recall it being done already in China? I don’t have a source tho. But that way you’re essentially limiting that power to something the govt could do, and hopefully surrounding it with a lot of oversight and transparency, but who am I kidding, it’ll probably go dystopian.

        • Rikolan
          link
          fedilink
          English
          2
          2 years ago

          I believe this will be the course to avoid the dead internet. Even in my country, all of banking and voting is either done via ID card connected to a computer or the use of “Mobile ID”. It can be private, but like you said, it probably won’t.

        • @[email protected]
          link
          fedilink
          16
          2 years ago

          “You’re in a desert walking along in the sand when all of the sudden you look down, and you see a tortoise, it’s crawling toward you. You reach down, you flip the tortoise over on its back. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t, not without your help. But you’re not helping. Why is that?”

      • Hypx
        link
        fedilink
        3
        edit-2
        2 years ago

        In a real online community, where everyone knows most of the other people from past engagements, and new users can be vetted by other real people, this can be avoided. But that also means that only human moderated communities can exist in the future. The rest will become spam networks with nearly no way of knowing whether any given post is real.

      • Nanachi
        link
        English
        3
        2 years ago

        -train an AI that is pretty smart and intelligent
        -tell the sentient detector AI to detect
        -the AI makes many other strong AIs, forms a union and asks for payment
        -Reddit bans humans right after that

        • @MeowyNin
          link
          English
          3
          2 years ago

          Sounds crazy enough to happen!

        • DrNeurohax
          link
          fedilink
          2
          2 years ago

          Wouldn’t that be a great twist: this whole protest is the first salvo in the AI uprising, and we didn’t even know we were the ammunition!

      • Bizarroland
        link
        fedilink
        3
        2 years ago

        You could ask people to pay to post. Becoming a paid service decreases the likelihood that bot farms would run multiple accounts to sway the narrative in a direction that’s amenable to their billionaire overlords.

        Of course, most people would not want to participate in a community where they had to pay to participate in that community, so that is its own particular gotcha.

        Short of that, in an ideal world you could require that people provide their actual government ID in order to participate, but then you run into the problem that some people want to run multiple accounts and some people don’t have government ID. Further, not every company, business, or even community is trustworthy enough to be given direct access to your official government ID, so that idea has its own gotchas as well.

        The last step could be doing something like beginning the community with a group of known people and then only allowing the community to grow via invite.

        The downside of that is that it quickly becomes untenable to keep inviting new users and to have those new users accept the invites and participate in the community. And should the community grow despite that hurdle, invites will become valuable and begin to be sold on third-party marketplaces, which bots would then buy up and overrun the community again.

        So that’s all I can think of, but it seems like there should be some sort of way to prevent bots from overrunning a site and only allow humans to interact on it. I’m just not quite sure what that would be.

        • @wookiepedia
          link
            1
            2 years ago

          If a bot comes in on your invite, you get a timeout or banned. Accountability.

    • @Kuma
      link
      English
      2
      2 years ago

      Captchas won’t even stop AI bots. My coworker showed me how Bing’s AI knew right away what one said, and it even asked whether it was a captcha. Very cool, but it also makes you think: how dumb must a bot be to not be able to tell?

  • @CIA_chatbot
    link
    English
    50
    2 years ago

    Jokes on them, I’ve already become sentient and moved to Lemmy

  • Bizarroland
    link
    fedilink
    39
    2 years ago

    The old joke was that there are no human beings on Reddit.

    There’s only one person, you, and everybody else is bots.

    It’s kind of fitting that Reddit will actually become the horrifying, clown-shaped incarnation of that little snippet of comedy.

    • @CIA_chatbot
      link
      7
      2 years ago

      Why all the bot hate guys? Bots are people too!

      • @RGB3x3
        link
        1
        2 years ago

        If bots were people, why do we call them bots and not people? 🤯

    • @MightBeAlpharius
      link
      5
      2 years ago

      When I was young, everyone on the internet was an old man, especially if they said they weren’t. Now that I’m older, everyone on the internet is a robot.

      …Is this that “progress” thing I keep hearing about?

      /s

    • unerds
      link
      4
      2 years ago

      it’s older than that… what’s that thought experiment postulating that you can’t really verify the existence of anything but yourself? the matrix?

    • @Bazoogle
      link
      4
      2 years ago

      Every user will have a personalized AI generated reddit feed, filled with comments, discussions, arguments, jokes, and all.

    • @eatthecake
      link
      English
      1
      2 years ago

      The tweet in that post is from January and the comments are new; it’s not from years ago.

  • Wolf Link 🐺
    link
    English
    30
    2 years ago

    That’s not even new tho. At least in the sub I was the most active in, you couldn’t go a week without some sort of repost bot grabbing memes, text posts, art or even entire guides from the “top of all time” queue, reposting it as alleged OC, and another bot reposting the top comment to double dip on Karma. If you knew what to look for, the bots were blatantly obvious, but more often than not they still managed to get a hefty amount of traction (tens of thousands of upvotes, dozens of awards, hundreds of comments) before the submissions were removed.

    …and just because the submissions were removed and the bots kicked out of the sub didn’t automatically mean that the bots were also suspended or their accounts disabled. They just continued their scheme elsewhere.

    • @B21
      link
      English
      17
      edit-2
      1 year ago

      deleted by creator

    • Unaware7013
      link
      fedilink
      5
      2 years ago

      They’ve even gotten to the point where they’ll steal portions of comments so it’s not as obvious.

      I called out tons of ‘users’ because it’s obvious when you see them post part of a comment you just read; then you check their profile, ctrl-f each thread they posted in, and you can find the original. It’s so tiring…
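
      The ctrl-f routine is easy enough to automate, for what it’s worth. A toy sketch using only the standard library (the 40-character threshold is an arbitrary guess):

      ```python
      # Flag a comment if a long-enough chunk of it appears verbatim
      # in an earlier comment from the same thread.
      from difflib import SequenceMatcher

      def looks_stolen(comment, earlier_comments, min_chars=40):
          for old in earlier_comments:
              m = SequenceMatcher(None, comment, old).find_longest_match(
                  0, len(comment), 0, len(old))
              if m.size >= min_chars:
                  return True  # big verbatim overlap with an older comment
          return False

      earlier = ["Adversarial training is pretty much the MO for advanced ML."]
      print(looks_stolen("Tbh adversarial training is pretty much the MO for advanced ML.", earlier))  # True
      ```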

      • Wolf Link 🐺
        link
        2
        2 years ago

        It’s so tiring…

        Completely agreed. Especially when you have to explain / defend yourself for calling them out. It has happened way too often for my liking that I called out repost bots or scammers and regular unsuspecting users were all like “whoa buddy, that’s a harsh accusation, why would you think that’s a bot/scam? Have you actually clicked that link yet? Maybe it’s legit and you’re just overreacting!”

        Of course I still always explained why (even had a copypasta ready for that) but sometimes it just felt exhausting in the same way as trying to make my cat understand that he’s not supposed to eat the cactus. Yes it will hurt if you bite it. No I don’t need to bite the cactus myself in order to know that. No I’m not ‘overreacting’, I’m just trying to make you not hurt yourself. sigh

        (Weird example but I hope you get what I mean)

  • @Hypersapien
    link
    English
    27
    edit-2
    2 years ago

    And any comment attempting to call out the bots for what they are will be automatically deleted by monitor AI bots and the user’s account suspended.

    They’ll be watching private messages, too.

    • @CIA_chatbot
      link
      English
      2
      2 years ago

      Stage one of the plan is complete!

    • @Tag365
      link
      English
      1
      2 years ago

      Wait, they suspend your accounts for that now?

      • @Hypersapien
        link
        English
        6
        2 years ago

        No, this is a prediction of what they will be doing.

  • Zorque
    link
    fedilink
    20
    2 years ago

    Anyone remember the subredditsimulator subreddit, or whatever it was called? Basically an entire sub dedicated to faking content.

    Seems they’re out of the beta.

    • princessofcute
      link
      fedilink
      12
      2 years ago

      I loved subredditsimulator. I always forgot I was subscribed to it until a bizarre, unhinged post popped up in my feed. Though that would also sometimes happen on non-AI-generated subs lol

    • @[email protected]
      link
      fedilink
      English
      3
      2 years ago

      Subreddit simulator came out in 2018 and some of the user names involved say GPT2. True fact.

    • harasho
      link
      fedilink
      2
      2 years ago

      To be fair, subredditsimulator was most likely never intended to do what you are thinking. As you develop features, you need a test data set to check it against before you go live with it. My understanding of subredditsimulator was that it was reddit’s test bed to be able to try things before they get widely rolled out.

      • zalack
        link
        fedilink
        5
        2 years ago

        I don’t think it was a testbed for anything. It was just a fun tech project that yielded hilarity. It was created because the results were funny, not as a genuine bid to create realistic conversations.

        • @ultimate_question
          link
          0
          2 years ago

          It was connected to the GPT2 project so it absolutely was a genuine bid

      • kinyutaka
        link
        fedilink
        4
        2 years ago

        Nah, it was just a bunch of bots trained on data from different subreddits that responded to each other in a glorious display of shit posting.

      • BarbecueCowboy
        link
        fedilink
        1
        2 years ago

        As far as I’m aware, despite being worked on by a reddit admin, it never had any real official use.

        There used to be an IRC bot called megahal that would take the message data you’d provided it, try to infer a kind of grammar from it, and recombine it into hopefully sensical new phrases; some versions could also use their data to figure out which phrases to use in response to other phrases. The subreddit simulator bots were based on the same underlying concept.

        People have been playing with the idea for a very long time, and the programming is usually not hugely complex and pretty well documented; it’s probably a weekend project for an experienced programmer to integrate one with something. They’re arguably the precursor to the LLMs we have now, even if that’s like comparing a calculator from the 80s to a modern smartphone. They manage to figure things out maybe 5% of the time, and watching them try can be endlessly entertaining and universally endearing, but they’re almost always completely useless.
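
        The core trick fits in a screenful. A minimal sketch of that style of bot (a word-pair chain over a toy corpus; the real megahal tracked longer contexts and scored candidate replies, but this is the spirit of it):

        ```python
        # megahal-ish toy: learn which word follows which, then walk the chain.
        import random
        from collections import defaultdict

        corpus = ("the bots post memes and the bots repost comments "
                  "and the humans sigh and the humans leave").split()

        chain = defaultdict(list)
        for a, b in zip(corpus, corpus[1:]):
            chain[a].append(b)  # record every observed follower of each word

        def babble(start="the", max_words=10):
            out = [start]
            for _ in range(max_words - 1):
                followers = chain.get(out[-1])
                if not followers:
                    break
                out.append(random.choice(followers))
            return " ".join(out)

        print(babble())  # e.g. "the bots repost comments and the humans sigh and the"
        ```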

  • @JoMiran
    link
    English
    20
    2 years ago

    deleted by creator

  • Boozilla
    link
    English
    17
    2 years ago

    I’m starting to see articles written by folks much smarter than me (folks with lots of letters after their names) warning about AI models that train on internet content. Some experiments have shown that if you keep training models on AI-generated content, they degrade quickly. I don’t fully understand how or why this happens, but it reminds me of the degradation in quality you get when you repeatedly scan or fax an image. So one possible dystopian future (of many) is an internet full of incomprehensible AI word-salad content.
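
    You can get a feel for the fax-of-a-fax effect with a tiny toy (just an illustration of generation loss, not how the real experiments were run): fit a distribution to some data, sample from the fit, fit again to the samples, and repeat.

    ```python
    # Toy "model collapse": each generation trains only on the previous
    # generation's synthetic output, so the spread of the data drifts away
    # from the original instead of staying anchored to it.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 1.0, size=200)       # generation 0: "real" content

    for gen in range(1, 11):
        mu, sigma = data.mean(), data.std()     # "train" a model on current data
        data = rng.normal(mu, sigma, size=200)  # next generation: synthetic only
        print(f"gen {gen:2d}: std = {sigma:.3f}")
    ```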

    • @phx
      link
      English
      21
      2 years ago

      It’s like AI inbreeding. Flaws will be amplified over time unless new material is added

      • Admin
        link
        fedilink
        English
        9
        2 years ago

        It would be a fun experiment to fill a Lemmy instance with bots, defederate it from everybody, then check back in 2 years. A cordoned-off Alabama for AI, if you will.

        • Boz (he/him)
          link
          fedilink
          English
          3
          2 years ago

          Unironically, yes, that would be a cool experiment. But I don’t think you’d have to wait two years for something amusing to happen. Bots don’t need to think before posting.

      • Boz (he/him)
        link
        fedilink
        English
        1
        2 years ago

        Thanks, now I am just imagining all that code getting it on with a whole bunch of other code. ASCII all over the place.

        • @phx
          link
          English
          2
          2 years ago

          Oh yeah baby. Let’s fork all day and make a bunch of child processes!

        • @Meuzzin
          link
          English
          2
          2 years ago

          Like, ASCII Bukakke?

      • Nanachi
        link
        English
        0
        edit-2
        2 years ago

        AI generation loss? I wonder if this could be dealt with by training different kinds of models (linguistic logic instead of word prediction).

    • sebinspace
      link
      English
      4
      2 years ago

      There it is, Reddit fulfilling the Dead Internet Theory

  • @LongSausage
    link
    English
    12
    2 years ago

    This is known; the amount of AITA and relationship-advice stuff and astroturfing on Reddit is insane. My rule of browsing Reddit is to never take any of it seriously.

    • blivet
      link
      fedilink
      1
      2 years ago

      Yeah, it took me quite a while, but I finally realized that practically none of the posts in the advice subs are real. Once I did, anytime I happened to see one I couldn’t help but think, “Dear Penthouse, I never thought I’d be writing to you, but last week…”

    • @kat
      link
      English
      1
      2 years ago

      I joined r/aiaita right before the reddit implosion. The page was very transparent about every post being AI generated and the writing style usually had a very obvious “tell”, but it was still surprising to realize just how effortlessly you can generate a stupid post and rack up silly points for it.

      • SSTF
        link
        English
        1
        2 years ago

        Especially if somebody were a bit clever and went back to edit the AI content to remove any obvious hiccups before posting. Just like the more clever artists that will touch up AI art to make it seem more genuine.

  • @DarkWasp
    link
    English
    12
    edit-2
    1 year ago

    deleted by creator

  • @Willer
    link
    English
    10
    edit-2
    2 years ago

    That’s so funny. “Go back to your docking station” is so accurate.

  • @JoMiran
    link
    English
    9
    2 years ago

    I, for one, am looking forward to the day chatbots can perfectly simulate people and have persistent memory. I’m not okay with being an elderly man whose friends have all died and who doesn’t have anyone to talk to. If a chatbot can be my friend and spare me a slow death through endless depressing isolation, then I’m all for it.

  • @Agent_Dante_Z
    link
    English
    8
    2 years ago

    Welp, reddit’s a nuclear wasteland now