• @TommySoda
    19 points · 5 months ago

    Funny how the average person figured this out almost immediately while Google needed half a year to figure it out with their researchers. Almost like they were ignoring it as long as they could for the sake of profit. Fuck around and find out, I guess.

  • Optional
    7 points · 5 months ago

    If only anyone - anyone at all - could have foreseen this horrible outcome

  • AwkwardLookMonkeyPuppet
    3 points · 5 months ago

    They’re admitting that they are the source of a massive problem. But are they going to do anything about it, or keep pushing their shitty, half-baked AI? It’s crazy to me how much worse their AI is than ChatGPT, considering all of the financial and engineering resources available to Google.

  • @[email protected]
    3 points · 5 months ago

    It’s alright guys, I just looked up a solution and Google suggests eating glue and a few small pebbles will solve the issue.

  • @requiem
    3 points · 5 months ago

    Google Researchers Now Also Say We All Should Use Their Shit AI Search That Tells Us To Eat Glue

  • @darthelmet
    3 points · 5 months ago

    With their shitty AI, this belongs on Not The Onion.

  • cobysev
    2 points · 5 months ago

    Ahh, just in time for the election season.

  • mozz
    1 point · 5 months ago

    I think it’s almost certain that disinformation based on fake accounts simply posting memes or targeted viewpoints, hoping to send the message through sheer repetition, is still a lot more common than doctored factual information. (Not that faked-up disinformation isn’t a problem - just saying I think it’s still relatively rare as a vehicle for disinformation.)

    Why would you even open yourself up to “see, the underlying citation for this thing they’re saying is not true” when you might as well not even enter into the sphere of backing up what you’re saying with facts, and instead just state your assertions as if they were facts?