I suspect this is the direct result of AI-generated content simply overwhelming any real content.

I've tried DDG, Google, Bing, and Qwant, and none of them really helps me find the information I want these days.

Perplexity seems to work, but I don't like the idea of an AI handing me "facts" when they're mostly based on other AI posts.

ETA: someone suggested SearXNG, and after using it a bit it seems much better than DDG and the rest.

  • @j4k3
    75 days ago

    I don't use Perplexity, but AI is generally 60-80% effective with a larger-than-average open-weights model running offline on your own hardware.
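
    If you want to try that yourself, here's a minimal sketch using llama-cpp-python; the model path and settings are placeholders, not a specific recommendation:

        # pip install llama-cpp-python; point model_path at whatever GGUF you run.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/your-model.Q4_K_M.gguf",  # hypothetical filename
            n_ctx=4096,  # context window; raise it if your hardware allows
        )

        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Explain how DNS resolution works."}],
            max_tokens=256,
        )
        print(out["choices"][0]["message"]["content"])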

    DDG offers the ability to use some of these. I still use a modified Mistral model, even though its base model(s) are Llama 2. Llama 3 can be better in some respects, but it has terrible alignment bias: the alignment is so heavy-handed that the model can't reason through edge cases like creative writing for sci-fi futurism, and that bias bleeds over into everything else. If you get on DDG and use Mistral's Mixtral 8×7B, it is pretty good.

    The thing with models is not to talk to them like humans. Everything must be explicitly described. Humans rely heavily on implied context; we generally assume people understand what we are talking about. Talking to an AI is like appearing in court before a judge: every word matters. An LLM is also basically a reflection of all human language, so if the majority of humans are wrong about something, the AI is too.

    If you just ask a simple question, you're not going to get very far into what the model knows. Models have a very limited scope of focus. If you don't build prompt momentum by describing a lot of detail, the scope of focus stays large but the depth stays shallow. The more momentum you build by describing what you're asking in detail, the more the scope narrows and the deeper the connections the model can make.
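
    To make that concrete, here are two versions of the same request; the wording is my own illustration, not a canned recipe:

        # A bare question keeps the scope wide and the answer shallow.
        shallow = "How do I back up a database?"

        # Spelling out the context a human would leave implied narrows the
        # scope so deeper connections can be made.
        detailed = (
            "I run a self-hosted PostgreSQL 15 instance on Debian with about "
            "20 GB of data. I want nightly logical backups to a separate disk, "
            "kept for 14 days, and I need to be able to restore a single table. "
            "Walk me through the pg_dump flags and a cron entry for this."
        )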

    It is hard to tell what a model really knows unless you can observe the perplexity output. This is more advanced, but the perplexity score of each generated token is how you infer that the model does not know something: tokens the model is unsure about come out with high perplexity.
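
    Roughly, "observing the perplexity" means looking at the log-probability the model assigns to each token it emits. A sketch, again with llama-cpp-python (path is hypothetical; the logprobs field follows the OpenAI completion format):

        import math
        from llama_cpp import Llama

        llm = Llama(model_path="./models/your-model.gguf")  # hypothetical path

        out = llm.create_completion(
            "The capital of Australia is",
            max_tokens=8,
            logprobs=1,  # request the log-probability of each generated token
        )

        lp = out["choices"][0]["logprobs"]
        for tok, logprob in zip(lp["tokens"], lp["token_logprobs"]):
            if logprob is None:
                continue
            # Per-token perplexity = exp(-logprob). High values suggest the
            # model is guessing rather than recalling something it knows.
            print(f"{tok!r}: perplexity ~ {math.exp(-logprob):.1f}")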

    Search sucks because it is a monopoly. There are only two relevant web crawlers, M$ and the Goo, and all search queries go through one of them either directly or indirectly. No search provider is deterministic any more; your results are uniquely packaged to manipulate you. They are also obfuscated to block others from using them to train better or competing models. Then there is the US government's antitrust case, which makes temporarily obfuscating their market position and pushing people onto other platforms their best path forward. Criminal manipulators are going to manipulate.