I suspect this is the direct result of AI-generated content just overwhelming any real content.

I tried ddg, google, bing, qwant, and none of them really help me find the information I want these days.

Perplexity seems to work, but I don’t like the idea of an AI giving me “facts”, since those are mostly based on other AI-generated posts.

ETA: someone suggested SearXNG, and after using it a bit it seems much better than ddg and the rest.
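
For anyone who wants to poke at it programmatically, here’s a minimal sketch of querying a self-hosted SearXNG instance over its JSON API. It assumes an instance running at localhost:8080 with the “json” output format enabled in settings.yml (it’s off by default); the URL and query are placeholders, not anything specific to my setup.

```python
import json
import urllib.parse
import urllib.request

# Assumed: a self-hosted SearXNG instance at this URL, with "json"
# added under search -> formats in settings.yml (off by default).
BASE_URL = "http://localhost:8080/search"

def searxng_search(query: str, max_results: int = 5) -> list[dict]:
    """Hit the instance's search endpoint and return the top results."""
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    with urllib.request.urlopen(f"{BASE_URL}?{params}") as resp:
        data = json.load(resp)
    return data.get("results", [])[:max_results]

if __name__ == "__main__":
    for hit in searxng_search("lemmy federation"):
        print(f"{hit['title']}\n  {hit['url']}")
```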

  • Lvxferre
    13 points • 1 month ago

    > Stable Diffusors are pretty good at regurgitating information that’s widely talked about.

    Stable Diffusion is an image generator. You probably meant a language model.

    And no, it’s not just OP. This shit has been going on for a while, well before LLMs were deployed. Cue the old “reddit” trick that some people used (appending “reddit” to a query to surface human-written answers).

    • Flying Squid
      6 points • 1 month ago

      Also, they’re pretty good at regurgitating bullshit, like the famous ‘glue on pizza’ answer.

      • Lvxferre
        2 points • 1 month ago

        Or, at a deeper level: they’re pretty good at regurgitating what we interpret as bullshit. They simply don’t care about the truth value of their statements at all.

        That’s part of the problem: you can’t prevent them from doing it; it’s like trying to drain the ocean with a small bucket. They shouldn’t be used as a direct source of information for anything you won’t check afterwards. At least in kitnaht’s use case, if the LLM is bullshitting it should be obvious; go past that and you’ll have a hard time.