• @[email protected]
    link
    fedilink
    English
    2184 months ago

    I remember seeing a comment on here that said something along the lines of “for every dangerous or wrong response that goes public there’s probably 5, 10 or even 100 of those responses that only one person saw and may have treated as fact”

  • @Nobody · 139 points · 4 months ago

    Tech company creates the best search engine -> world domination -> becomes a VC company in a tech trench coat -> destroys the search engine to prop up bad investments in artificial intelligence advanced chatbots

    • @stellargmite · 62 points · 4 months ago

      Then hire cheap human intelligence to correct the AI’s hallucinatory trash, which was trained on actual human-generated content whose original intended audience did understand the nuanced context and meaning in the first place. Wow, it’s more like they’ve shovelled a bucket of horse manure onto the pizza as well as the glue. Added value for the advertisers. AI my arse. I think calling these things language models is being generous. More like energy- and data-hungry vomitrons.

      • @WhatAmLemmy · 21 points · 4 months ago (edited)

        Calling these things Artificial Intelligence should be a crime. It’s false advertising! Intelligence requires critical thought. They possess zero critical thought. They’re stochastic parrots, whose only skill is mimicking human language, and they can only mimic convincingly when fed billions of examples.

        • @SergeantSushi · 3 points · 4 months ago

          It’s more of a Reddit Collective Intelligence (CI) than an AI.

    • @cm0002 · 17 points · 4 months ago

      You either die a hero or live long enough to become the villain

    • @normalexit · 5 points · 4 months ago

      Stealing advanced chat bots, that’s a great way to describe it.

  • @iAvicenna · 137 points · 4 months ago

    “Many of the examples we’ve seen have been uncommon queries,”

    Ah, the good old “the problem is with the user, not with our code” argument. The sign of a truly successful software maker.

    • @voluble · 51 points · 4 months ago

      “We don’t understand. Why aren’t people simply searching for Taylor Swift”

    • Neo · 5 points · 4 months ago

      You’re holding typing it wrong!

    • @[email protected]
      link
      fedilink
      English
      44 months ago

      I mean… I guess you could paraphrase it that way. I took it more as “Look, you probably aren’t going to run into any weird answers,” which seems like a valid thing for them to try to convey.

      (That being said, fuck AI, fuck Google, fuck reddit.)

      • @radicalautonomy · 29 points · 4 months ago (edited)

        “I’m feeling depressed” is not an uncommon query under capitalism run amok. “One Reddit user recommends jumping off the Golden Gate Bridge” is not just a weird answer, it is a wholly irresponsible one.

        So, no, their response is not valid. It is entirely user-blaming in order to avoid culpability.

        • @Grimy · 7 points · 4 months ago (edited)

          There are currently a lot of fake screenshots, since it quickly became a meme; I’m pretty sure this is one of them.

          Still a fuck up in general on their part.

          • @radicalautonomy · 5 points · 4 months ago

            Fair enough. I know how easy it is to fake a Google search with inspect element. I’ve been trying to verify for myself how shitty it is, but AI Overviews don’t seem to be showing up for me (I’ve done all the correct steps to enable it, but no searches generate results).

          • @ilinamorato · 2 points · 4 months ago

            The fact that it’s hard to tell is pretty damning, for the public perception of SGE if not for its actual capabilities.

  • Lvxferre · 84 points · 4 months ago (edited)

    The reason why Google is doing this is simply PR. It is not to improve its service.

    The underlying tech is likely Gemini, a large language model (LLM). LLMs handle chunks of words, not what those words convey, so they have no way to tell accurate info apart from inaccurate info, jokes, “technical truths”, etc. As a result their output is often garbage. (A toy sketch of this point follows at the end of this comment.)

    You might manually prevent the LLM from outputting a certain piece of garbage, perhaps even a thousand pieces. But in the big picture it won’t matter, because it’s outputting a million different pieces of garbage; it’s like trying to empty the ocean with a small bucket.

    I’m not making the above up; look at the article. It’s basically what Gary Marcus is saying, in different words.

    And I’m almost certain that the decision makers at Google know this. However, they want to compete with the other tendrils of the GAFAM cancer for a turf called “generative models” (which includes tech like LLMs). And if their search gets wrecked in the process, who cares? That turf is safe anyway, as long as you can keep it up with enough PR.

    Google continues to say that its AI Overview product largely outputs “high quality information” to users.

    There’s a three-letter word that accurately describes what Google said here: lie.
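
    A toy sketch of the “chunks of words” point above, in plain Python (it has nothing to do with Gemini’s actual architecture, and the three-line corpus is made up purely for illustration): a bigram sampler only records which token tends to follow which, so a shitpost and a fact are statistically indistinguishable to it.

    ```python
    import random
    from collections import defaultdict

    # Toy "stochastic parrot": it learns only which token follows which,
    # with no notion of whether the source text was true, a joke, or satire.
    corpus = (
        "cheese slides off pizza . "
        "add glue to the pizza sauce . "
        "glue keeps cheese on pizza ."
    ).split()

    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def continue_text(start, length=8):
        """Sample a continuation purely from co-occurrence statistics."""
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(continue_text("cheese"))  # may cheerfully end up recommending the glue
    ```

    Scaling the same idea up to billions of examples makes the mimicry far more convincing, but it adds no mechanism for separating accurate sources from jokes, which is the point above.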

    • lemmyvore · 22 points · 4 months ago

      At some point no amount of PR will hide the fact that search has become useless. They know this, but they’re getting desperate and will try anything.

      I’m waiting for Yahoo to revive their link directory or for Mozilla to revive DMOZ. That will be the sign that the shit level is officially chin-height.

  • @SlopppyEngineer · 74 points · 4 months ago

    Correcting over a decade of Reddit shitposting in what, a few weeks? They’re pretty ambitious.

    • @atrielienz · 26 points · 4 months ago

      This is perhaps the most ironic thing about the whole Reddit data-scraping thing and Spez selling out Reddit’s user data to LLMs. Like. We spent so much time posting nonsense. And then a bunch of people became mods to course-correct subreddits where that nonsense could be potentially fatal. And then they got rid of those mods because they protested. And now it’s bots on bots on bots posting nonsense. And they want their LLMs trained on that nonsense because reasons.

      • @[email protected]
        link
        fedilink
        English
        14 months ago

        The reason being to attract investment dollars. Fuck making a good product; you just gotta make a product that’s got all the hot buzzwords so idiot billionaires will buy shares and make line go up.

    • @[email protected]
      link
      fedilink
      English
      24 months ago

      Well, they’ve got the people for it! It’s not like they recently downsized to provide their rich executives with more money or anything…

    • @Moreless · 3 points · 4 months ago

      “After enough time and massaging the data, it could all work out” - Google’s head of search, a.k.a. Yahoo’s former search exec

  • @[email protected]
    link
    fedilink
    English
    644 months ago

    Good, remove all the weird Reddit answers, leaving only the “14-year-old neo-Nazi” Reddit answers, the “cop pretending to be a leftist” Reddit answers, and the “39-year-old pedophile” Reddit answers. This should fix the problem and restore Google to its defaults.

    • @Spider2013 · 8 points · 4 months ago

      Let’s remove all the /s and /jk comments, the r/jokes and greentext posts, etc.

  • Maxnmy's · 60 points · 4 months ago

    Isn’t the model fundamentally flawed if it can’t appropriately present arbitrary results? It is operating at a scale where human workers cannot catch all the concerning results before users see them.

    The ethical thing to do would be to discontinue this failed experiment. The way it presents results is demonstrably unsafe. It will continue to present satire and shitposts as suggested actions.

    • @[email protected]
      link
      fedilink
      English
      44 months ago

      It won’t get people killed very often at all. Statistically, there’s like no way you’ll know anybody who dies from taking a hallucinated suggestion. Give some thought to the investors who thought long and hard about how much money to put in. They worked hard, and if a couple of people a year have to die because of it, how is that a bad trade-off?

      -kinda how it literally is almost unless the hubris is stronger than I imagine

  • originalucifer · 59 points · 4 months ago (edited)

    kinda reads like ‘Weird Al’ answers… like, Yankovic seems like a nice guy and I like his music, but how many answers could he have?

    • Anas · 8 points · 4 months ago

      that’s the point of phrasing the title that way; they get engagement from comments pointing it out

    • Nightwatch Admin · 6 points · 4 months ago

      Well, he did lose on Jeopardy, so I guess to balance it out he must have all the answers?

      • @NotMyOldRedditName · 2 points · 4 months ago (edited)

        It was all that time he spent in a closet with Vanna White. He would’ve won otherwise.

        • @[email protected]
          link
          fedilink
          English
          24 months ago

          He once told me that Everything I Know Is Wrong, and then started calling me Dumb and Ugly.

  • @[email protected]
    link
    fedilink
    English
    524 months ago

    Don’t worry, they’ll insert it all into captchas and make us label all their data soon.

    • @Wappen · 41 points · 4 months ago

      “Select the URL that answers the question most appropriately”

    • @btaf45 · 3 points · 4 months ago

      I still can’t figure out what captcha wants. When it tells me to select all squares with a bus, I can never get it right unless every square is a separate picture.

      • Karyoplasma · 1 point · 4 months ago

        Captchas were implemented to stop bots, and now they are so fucked up that bots are better at solving them than humans are.

        For example, the captcha on this site (it’s a Google search proxy): it took me 4 tries, and last time I checked, I was human. To be fair, they call it an intelligence check, so maybe that was the problem.

  • @[email protected]
    link
    fedilink
    English
    514 months ago

    This thing is way too half-baked to be in production. A day or two ago, somebody asked Google how to deal with depression, and the stupid AI recommended they jump off the Golden Gate Bridge, because apparently some redditor had said that at some point. The answers are so hilariously wrong as to go beyond funny and into dangerous.

    • kamen · 4 points · 4 months ago

      Hopefully this pushes people into critical thinking, although I agree that being suicidal and getting such a suggestion is not the right time for that.

      “Yay! 1st of April has passed, now everything on the Internet is right again!”

      • @[email protected]
        link
        fedilink
        English
        94 months ago

        Hi everyone, JP here. This person is making a reference to the Weird Al biopic, and if you haven’t seen it, you should.

        Weird Al is an incredible person and has been through so much. I had no idea what a roller coaster his life has been! I always knew he was talented, but I definitely didn’t know how strong he is.

        His autobiography will go down in history as one of the most powerful and compelling and honest stories ever told. If you haven’t seen it, you really, really should.

        ITT NO SPOILERS PLS

        • @Pretzilla · 3 points · 4 months ago

          Not a spoiler but FYI, that biopic was 100% Al generated

    • gregorum · 2 points · 4 months ago (edited)

      either this joke has about 2 days of life left in it, or it’ll go “too many chefs” and endure for years

  • @flop_leash_973 · 42 points · 4 months ago

    If you have to constantly manually intervene in what your automated solutions are doing, then they are probably not doing a very good job, and it might be a good idea to go back to the drawing board.

    • @jeffwOP · 15 points · 4 months ago (edited)

      That can’t answer most questions, though. For example, I hung a door recently and had some questions that it answered (mostly) accurately. An encyclopedia can’t tell me how to hang a door.

      • Balder · 6 points · 4 months ago

        Yeah, there’s a reason this wasn’t done before generative AI. It couldn’t handle anything slightly more specific.

      • @linearchaos · 5 points · 4 months ago

        Same. I was dealing with a strange piece of software; I searched configs and samples for hours and couldn’t find anything about anybody having problems with the weird language it uses. I finally gave up and asked GPT, and it explained exactly what was going wrong and gave me half a dozen answers to try to fix it.

      • @btaf45 · 2 points · 4 months ago

        That can’t answer most questions though.

        It would make AI much more trustworthy. You cannot trust ChatGPT on anything related to science, because it tells you stuff like the Andromeda galaxy being inside the Milky Way. The only way to fix that is to directly program basic known science into the AI.

          • @btaf45 · 1 point · 4 months ago (edited)

            It depends on how you ask the question, and there is also randomization applied to the AI answer that you get. The point is that you would be foolish to trust AI to accurately answer science questions. Why the f*ck would you want a randomized answer to a science question?

            ME: how far is andromeda from caldwell 70?

            ChatGPT: Caldwell 70, also known as NGC 7000 or the North America Nebula, is an emission nebula located in the constellation Cygnus. It is much closer to us than the Andromeda Galaxy. The North America Nebula is estimated to be around 1,600 light-years away from Earth. Therefore, in comparison to the distance between the Milky Way and Andromeda, Caldwell 70 is significantly closer to us.

            In fact, Caldwell 70 is over twice as far away as Andromeda, because Caldwell 70 is NGC 300, not NGC 7000 (which is Caldwell 20). Also, the AI didn’t even answer the question that I actually asked.

            • @btaf45 · 1 point · 4 months ago (edited)

              LMFAO. This time the AI doesn’t even know that 6.12 > 2.537. It doesn’t even know to use the computer to compute things, even though it literally runs inside a computer. (A trivial check, sketched after the quote below, catches this.)

              Me: is andromeda or ngc300 closer to us?

              ChatGPT: Between the Andromeda Galaxy (M31) and NGC 300, NGC 300 is closer to us. NGC 300 is a spiral galaxy located in the constellation Sculptor, and it’s approximately 6.12 million light-years away from Earth. In contrast, the Andromeda Galaxy (M31) is much farther, at around 2.537 million light-years away. Therefore, NGC 300 is closer to us than the Andromeda Galaxy.
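
              A trivial sanity check the model failed to apply, sketched in plain Python using only the two distances quoted in its own answer above (whether those figures are themselves accurate is a separate question):

              ```python
              # Distances quoted by ChatGPT above, in millions of light-years.
              ngc300_mly = 6.12
              andromeda_mly = 2.537

              # The smaller distance is the closer object.
              closer = "NGC 300" if ngc300_mly < andromeda_mly else "Andromeda"
              print(closer)  # prints "Andromeda", contradicting the model's own conclusion
              ```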

    • @ilinamorato · 5 points · 4 months ago

      Google wants that to work. That’s why the “knowledge panels” kept popping up at the top of search before now, with links to Wikipedia. They only want to answer the easy questions: definitions, math problems, things they can give you the Wikipedia answer for, Yelp reviews, “Thai Food Near Me,” etc. They don’t want to answer the hard questions, presumably because it’s harder to sell ads for more niche questions and topics. And “harder” means you have to get humans involved. Which is why they’re complaining now that users are asking questions that are “too hard for our poor widdle generative AI to handle :-(”. They don’t want us to ask hard questions.