- cross-posted to:
- nottheonion
- [email protected]
Archive link: https://archive.ph/GtA4Q
The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, “you can also add about 1/8 cup of non-toxic glue” to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves “Fucksmith” and posted about putting glue on pizza 11 years ago.
A joke that people made when Google and Reddit announced their data sharing agreement was that Google’s AI would become dumber and/or “poisoned” by scraping various Reddit shitposts and would eventually regurgitate them to the internet. (This is the same joke people made about AI scraping Tumblr). Giving people the verbatim wisdom of Fucksmith as a legitimate answer to a basic cooking question shows that Google’s AI is actually being poisoned by random shit people say on the internet.
Because Google is one of the largest companies on Earth and operates with near impunity and because its stock continues to skyrocket behind the exciting news that AI will continue to be shoved into every aspect of all of its products until morale improves, it is looking like the user experience for the foreseeable future will be one where searches are random mishmashes of Reddit shitposts, actual information, and hallucinations. Sundar Pichai will continue to use his own product and say “this is good.”
I’ve been trying out SearX and I’m really starting to like it. It reminds me of early Internet search results, before Google started adding crap to theirs. There are currently 82 instances to choose from, here:
https://searx.space/
You can also easily run your own via docker. https://github.com/searxng/searxng-docker
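For anyone curious what self-hosting looks like, here is a minimal sketch of a Compose file using the official `searxng/searxng` image. This is an assumption-laden simplification: the searxng-docker repo linked above ships a fuller setup (reverse proxy, cache, env file), so treat its README as the real instructions.

```yaml
# Minimal sketch, not the repo's full stack (which also runs a
# reverse proxy and a cache). Assumes the official searxng/searxng
# image; paths and env vars here are illustrative.
services:
  searxng:
    image: searxng/searxng:latest
    ports:
      - "8080:8080"          # expose the web UI on localhost:8080
    volumes:
      - ./searxng:/etc/searxng   # persist settings.yml between restarts
    environment:
      - SEARXNG_BASE_URL=http://localhost:8080/
    restart: unless-stopped
```

Then `docker compose up -d` and point your browser at http://localhost:8080.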
it literally just proxies/aggregates google/bing search results tho?
So does pretty much every search engine. Running your own web crawler requires a staggering amount of resources.
Mojeek is one you can check out if that’s what you’re looking for, but its index is noticeably constrained compared to other search engines. They just don’t have the compute power or bandwidth to maintain an up to date index of the entire web.
yeah but that invalidates the “better/cleaner search results” point, since it’s basically the same stuff, just without the tracking
we’re working on it 😉 slow and steady and all that; we also fixed a bug with recrawl recently that should be improving things