Reddit’s CEO says facial verification may be introduced, ostensibly to prevent bots.

We all know how dangerous this can be, but most likely Reddit users will just accept it.

Although they have a great free alternative right under their noses - Lemmy, which is many times better than its competitor.

I wish more people would discover Lemmy, but that’s unlikely.

  • Melvin_Ferd · 2 days ago

    So a computer can’t ever truly generate a random number, but it can generate totally random dialogue?

    I think we just didn’t make an effort to catalog these patterns or look hard enough to identify them.

    I feel like if we started to truly look in the most obvious places, we could see a lot of things that could be used to identify bots.
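The randomness point above is worth making concrete: a computer’s “random” numbers come from a deterministic pseudo-random generator, so the same seed reproduces the same sequence every time. A minimal Python sketch (the `sample` helper is illustrative, not from any library):

```python
import random

# A pseudo-random number generator is fully deterministic:
# seeding it with the same value reproduces the same "random" draws.
def sample(seed: int, n: int = 5) -> list[int]:
    rng = random.Random(seed)
    return [rng.randint(0, 99) for _ in range(n)]

run_a = sample(seed=42)
run_b = sample(seed=42)
assert run_a == run_b  # identical sequences: nothing truly random here
```

The same determinism applies to generated text: given the same model, sampling parameters, and seed, the “random dialogue” is reproducible too.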

    • Alaknár@sopuli.xyz · 2 days ago

      Think of it this way: there are billions of types of online interactions where detection is either impossible (an image or link post) or extremely difficult (general conversation where sometimes even humans don’t sound like humans due to slang/education/etc).

      Not only that, you’d end up with a “tug of war” where the existence of such detectors would power the improvement of the bots, which would require the improvement of the detector (which is always more difficult).

      And the other option is an anonymous token that defines you as a human user. Which is simpler and cheaper to implement?

      • Melvin_Ferd · 1 day ago (edited)

        I think you’re wrong. Completely wrong. There are billions of messages, but any intelligent person knows you don’t go through them one by one.

        Do you ever see those guys who can look at a picture of a tree in a field and identify where in the world it is? There are billions of trees. How do they do that with precision?

        Classification.

        They can train their bots on whatever they want. The human is smarter. If they tune them, you use the methods and knowledge learned and adapt. Like with generating random numbers, there are almost always limits. You don’t find them by inaction.

        4chan, a website full of racists, neo-Nazis, and pedophiles, was just goofing around and they do insane OSINT research. Lemmy is fucking around with beans and moths. I don’t know. I’m just saddened by the state of things and how much better everybody else is at things I always thought the left was good at. It’s just been eye-opening to see.

        • AnotherUsername@lemmy.ml · 1 day ago

          “The left” is not a monolith. A lot of people have various relationships with “the left”. I don’t identify with “the left” much, because I disagree, strongly, with being put in a groupthink box. I happen to have some beliefs that the groupthink box on the left has vehemently and painfully attacked me for. But I also REALLY, REALLY, REALLY think the right is wildly wrong, because they’re being simplistic, stupid, and very, very corrupt. And I STRONGLY object to corruption. I happen to genuinely and unironically love my country and the promise of a fair chance for all. I love clean water. Clean air. A healthy wilderness ecosystem you can spend time in. Hard work that does good for the world. I love the freedom to research honestly, and to speak freely and without fear.

          I don’t see the right sharing some key parts of those values these days. But I’m concerned the left has become too inactive, too sullen, too much a party of victims who lash out reactively.

          Don’t have a solution yet except to vote.

        • Alaknár@sopuli.xyz · 1 day ago

          The human is smarter

          So, you want to hire hundreds of thousands of moderators? The human is smarter, yeah, but not the bot doing the detection.

          If they tune them, you use the methods and knowledge learned and adapt

          You say it like “tuning them” is a magic trick, where they wave their hands a couple of times, and now the detection algorithms are smarter than the bots writing the comments. SOMEONE has to go in, and figure out the maths to make the detection algorithms smarter and better at detecting. That takes time and resources.

          You’re also forgetting that “tuning them” works both ways. The people writing the shit-post bots also work on improving their tools, to make them indistinguishable from human posts.

          Also: how can you tell whether “lol, kys noob” was written by a human or by a bot? The vast majority of comments online are these short shit-comments.

          I’m just saddened by the state of things and how much better everybody else is at things I always thought the left was good at

          1. 4chan is not “magically” “good” at “OSINT”. They fuck up a lot of things too. It just so happens that what they’re most famous for required one dude who wrote a script and a bunch of kids with bandwidth to spare.
          2. Their OSINT is super iffy, hit-and-miss. Much like Reddit’s. Or any other large enough community’s.
          3. What @[email protected] said.
          • Melvin_Ferd · 11 hours ago (edited)

            So, you want to hire hundreds of thousands of moderators? The human is smarter, yeah, but not the bot doing the detection.

            I don’t know where this is coming from. Nobody is being hired. If anything, I’m becoming more anti-mod lately. I feel like mods put boxes on things and rapidly suck the oxygen out of the room. But that’s a different discussion.

            Maybe I’m reading this wrong, but to clarify: I am not saying we need to build our own bot detection, though it would be a nice-to-have eventually. I am saying we should be crowd sourcing our collective anger and ADHD or Autism or whatever drives us to post bean-moth Lemmy slop and instead focus on collection of the worst bot infestations. There are patterns. Bots are not random enough that they can’t be identified with large crowd sourced efforts. They’re also in their infancy, which means it will only get harder going forward.

            You or I aren’t able to accurately tell right now. Have you ever seen the Sinclair news video? The one where every news station repeats the same dialogue. Could you or I flip on the news any day of the week and call that out? Unlikely. But we can logically understand it is something that happens. It becomes obvious there is a script only when you collect the data and begin to analyze it. That is what I’m saying we need to figure out and gamify.

            Name generation, text, patterns. At the start it won’t be accurate. But as more data is collected it’ll become obvious. If the bots were that good, these websites would have left their APIs open. But they closed them so we can’t collect this data. I’m the type of person who, when powerful people do something like that, wants to know why and work around it. It’s not a coincidence that they locked their sites up when people were given tools where anyone could collect data and feed it into AI for analysis.
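The “collect the data, then the script becomes obvious” idea above can be sketched as a toy tally: flag any phrase that many distinct accounts post verbatim, Sinclair-style. All account names and posts below are made up for illustration, and real detection would need far fuzzier matching than exact strings:

```python
from collections import defaultdict

# Toy crowd-sourced detector: flag phrases that several distinct
# accounts have posted word-for-word. (All data here is hypothetical.)
posts = [
    ("user_a", "this is extremely dangerous to our democracy"),
    ("user_b", "this is extremely dangerous to our democracy"),
    ("user_c", "this is extremely dangerous to our democracy"),
    ("user_d", "i just really like beans"),
]

accounts_by_phrase = defaultdict(set)
for account, text in posts:
    accounts_by_phrase[text].add(account)

# A phrase repeated verbatim by 3+ distinct accounts is suspicious;
# one person saying something once is not.
suspicious = {p for p, who in accounts_by_phrase.items() if len(who) >= 3}
```

No single reader scrolling their feed would catch this, but the aggregate view makes the shared script stand out, which is the crowdsourcing argument in miniature.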

            Our inaction to do anything when the greatest opportunities are right in front of us but slipping away is a tragedy of this generation.

            • Alaknár@sopuli.xyz · 10 hours ago

              I am saying we should be crowd sourcing our collective anger and ADHD or Autism or whatever (…) and instead focus on collection of the worst bot infestations.

              That’s what “being a moderator” is, mate. You want hundreds of thousands of moderators.

              There are patterns. Bots are not random enough that they can’t be identified with large crowd sourced efforts

              You’re wrong.

              It becomes obvious there is a script only when you collect the data and begin to analyze it.

              You just said:

              I am not saying we need to build our own bot detection

              So, which is it?

              It becomes obvious there is a script only when you collect the data and begin to analyze it.

              There’s a massive difference between local news stations receiving a script to read out, and a bot farm having a “be negative, unfriendly, sow chaos” instruction.

              At the start it won’t be accurate

              So, it just won’t work? Got it.

              But as more data is collected it’ll become obvious

              I don’t think you understand what you’re talking about. Don’t get me wrong, I’m not trying to be contrarian here; I just honestly think that your idea of “AI bots” is kind of like “we have prepared one million sentences, and now our bots will be picking between them to generate whole posts on social networks”.

              I mean, sure, there can be patterns - like the whole “LinkedIn post” style, where most of the time it’s fairly obvious that you’re reading AI-generated slop… But that’s not what state entities or even just hackers use. They have access to much more sophisticated content.

              If the bots were that good, these websites would have left their APIs open.

              Reddit’s API is no longer open. Didn’t do a thing to stop bots.

              But they closed them so we can’t collect this data

              You don’t need however many API keys to collect that kind of data. At least not from Reddit.

              Our inaction to do anything when the greatest opportunities are right in front of us but slipping away is a tragedy of this generation.

              Your proposed action is the equivalent of Sisyphus and his stone. Because you really seem to be forgetting that the AI tech is getting better all the time. And that any AI-detection actions you take feed that process. “Oh, they’ve detected these posts? OK, let’s tweak the algo until we get through and then flood them with our content”.

              Let’s even assume that you somehow pull it off and get a 100% detection rate as of right now. Six months down the line that will go down to 20%. Etc. etc. And you’ll be catching thousands of legitimate users in the crossfire.

              An anonymous “proof of humanity” token solves all AI issues without anyone having to spend billions on research and manpower.
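A hedged sketch of what such a token could look like: an issuer verifies a person once and hands out a signed random nonce that carries no identity; any site can then check the signature. Real proposals (e.g. Privacy Pass) use blind signatures so even the issuer can’t link a token back to the person it was issued to; a plain HMAC stands in for the signature here, purely for illustration:

```python
import hashlib
import hmac
import secrets

# Hypothetical "proof of humanity" token scheme (illustrative only).
# The issuer verifies a person once, then signs a random nonce.
# The nonce carries no identity, so presenting it proves only
# "some issuer vouched that a human was present".
ISSUER_KEY = secrets.token_bytes(32)

def issue_token() -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)  # random, contains no user data
    tag = hmac.new(ISSUER_KEY, nonce, hashlib.sha256).digest()
    return nonce, tag

def verify_token(nonce: bytes, tag: bytes) -> bool:
    expected = hmac.new(ISSUER_KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce, tag = issue_token()
assert verify_token(nonce, tag)            # genuine token passes
assert not verify_token(nonce, b"\x00" * 32)  # forged tag fails
```

The design point is that the expensive check (is this a human?) happens once at issuance, not per post, and sites verifying tokens never learn who the human is.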