• DarkThoughts
    3 points
    2 months ago

    I’m sorry, but this has been a thing since long before “AI”-based results. Scammers have always used tricks to end up at the top of search results.

    • Flying SquidOP
      27 points
      edited
      2 months ago

      Scammers have been a thing long before writing. That doesn’t mean people shouldn’t be made aware of new ways to be scammed.

        • @[email protected]
          12 points
          2 months ago

          One could argue there is a new aspect. When it comes to retraining the public on what to trust, there’s a likely blind spot: a person may know to only call a number listed on a trusted website, so they’ll check that they’re on the bank’s domain before picking up the phone. If Google, being a big name, presents the number in an official-looking way at the top of its pages, it may pass the sniff test and get people into trouble.

          Featured snippets would prominently display source URLs.

          But AI summaries? More opaque.

          • @markstos
            8 points
            2 months ago

            Right. Scammers were in the listings before, and AI has helped them appear more trustworthy.

          • DarkThoughts
            1 point
            2 months ago

            “Featured snippets would prominently display source URLs”

            That’s meaningless given how easy it is to register legit-looking faux domains, and how it is even easier to create legit-looking subdomains. People who fall for those types of scams will likely not even understand what a domain is.
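            A minimal sketch of that point (all names here are hypothetical, not any real registrar or bank): the address bar shows the whole hostname, but only the rightmost registrable part says who actually owns the site, so a scammer who owns one cheap domain can prefix any trusted brand as a subdomain.

```python
def registrable_domain(hostname: str) -> str:
    """Naive sketch: take the last two labels as the owned domain.
    (Assumes a simple one-label TLD; real code would consult the
    Public Suffix List to handle cases like .co.uk.)"""
    labels = hostname.lower().rstrip(".").split(".")
    return ".".join(labels[-2:])

# Looks like the bank, but the scammer only owns "secure-helpdesk.example":
print(registrable_domain("www.bigbank.com.secure-helpdesk.example"))
# → secure-helpdesk.example

# The real thing:
print(registrable_domain("support.bigbank.com"))
# → bigbank.com
```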

    • @njm1314
      4 points
      2 months ago

      At the top of the results, yes, but that isn’t what this post is saying. It’s saying that Google’s AI gave the scammer’s answer. Not that Google provided a link you could click on, but that Google itself said this is the number.

      • DarkThoughts
        -1 points
        2 months ago

        It’s not an AI, it’s just word prediction, which also just follows stupid algorithms, much like the ones that determine search results. Both can be tricked / manipulated if you understand how they work. It’s the same principle in both cases.

        • @njm1314
          2 points
          2 months ago

          Regardless of what they call it, they’re the ones presenting it. I’m not arguing they can’t be tricked. I’m arguing they are fundamentally different concepts: one offers you a choice of sources, the other makes a claim. That’s a pretty big distinction in a whole mess of different ways, not the least of which is legal.

          • DarkThoughts
            0 points
            2 months ago

            I’m sorry, but no. It’s not Google making that claim; it’s just the LLM replying in a confident way, because that’s how they are expected to work. As I said, word prediction. You can install the tiniest, dumbest model on your local PC and ask the same question. It will give you some random hallucinated number and act like that’s what you’re looking for, because its default system prompt tells it to sound like an AI assistant.

            In the case of search engines, the LLM is hooked directly into the search engine itself and just does the same thing you’d do: search for a hopefully fitting result. So scammers gaming those search algorithms to get a good spot end up becoming the recommendation the LLM passes on to the user. It’s the same thing, just displayed slightly differently.

            All the cool AI-assistant stuff they try to present this as is just an illusion, a word-based roleplay. The only real benefit is that they can somewhat understand abstract questions, which is helpful for certain search queries, but in the end it is always the user’s responsibility to check the actual search result.
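            The mechanism described above can be sketched in a few lines (everything here is a mock stand-in, not any real Google or search API): whatever page ranks first gets handed to the model as context, and the model confidently restates it.

```python
def web_search(query: str) -> list[dict]:
    # Mock search: whatever ranks first in ordinary results — including
    # an SEO'd scam page — is what the model gets handed as "context".
    return [
        {"url": "totally-real-support.example",
         "text": "Call 1-800-SCAM for Airline support"},
        {"url": "airline.example",
         "text": "Contact us via our official site"},
    ]

def build_prompt(query: str, results: list[dict]) -> str:
    context = "\n".join(r["text"] for r in results)
    return f"Answer using the context below.\nContext:\n{context}\nQuestion: {query}"

def llm(prompt: str) -> str:
    # Stand-in for the language model: it just predicts plausible words
    # from the context — it has no notion of which source is trustworthy.
    for line in prompt.splitlines():
        if "Call" in line:
            return line  # confidently parrots the scammer's line
    return "No answer found."

query = "airline support phone number"
answer = llm(build_prompt(query, web_search(query)))
print(answer)  # the scam number, presented as *the* answer
```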