- cross-posted to:
- searchengines
Even worse, Reddit itself has been getting infected with corporate AI-generated “recommendations”
Even worse, Reddit
Fair point
It’s really strange it’s still at $47 on the stock market. That thing is extremely overvalued.
The market is irrational. Donald Social is trash garbage with no future, yet it looks like it has a $5.5 billion market cap.
$5.5B with a yearly revenue of $4 million and a loss of $58 million. Even if they had $0 in expenses, it’d still take a little under 1400 years to earn the equivalent of their market cap.
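For the curious, the back-of-envelope math checks out; a quick sketch using the figures quoted above:

```python
# Years needed to earn back the market cap at current revenue,
# assuming zero expenses (figures as quoted in the comment above).
market_cap = 5.5e9  # $5.5 billion
revenue = 4e6       # $4 million per year
print(market_cap / revenue)  # 1375.0 -> "a little under 1400 years"
```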
At least the older posts seem to be okay.
You mean the ones where all the comments say [deleted]?
It’s like 1/10 or so of comments. Though it’s fun seeing posts that have obviously been edited to advertise lemmy.
Except when it’s like… “I bet that was the answer I needed, crap.”
Not completely though. A while ago I had a wave of these comments on a 3-year-old post of mine. They got deleted after I reported them at least, though I don’t know whether that was done by a mod of the subreddit or a site-wide admin.
And it’s inaccessible to a lot of people now anyway.
That’s why I use “site:reddit.com” instead of just adding “reddit”
Don’t worry, I’m sure google will disable that soon in the same way they disable all the other search syntax that used to make searching a simple and easy task
“Search engine” is not equivalent to “Google”.
Cool, pedant. Append “on google” to my comment then if you need, since that’s clearly the context we’re talking about here. I’m aware there are other search engines, but context should have made what I was talking about pretty fucking obvious.
(Not OP) Point taken, but in that case the solution should also be obvious. Just use a different one that does provide that. If the product sucks, hit the bricks. DDG and Kagi are looking for market share, they’d love to have you.
I do use alternatives, but I mention Google because it’s what’s relevant to the conversation at hand.
Advanced search techniques should be a class in 6th grade
Or better yet, try my filter… It’s “-site:reddit.com”!
Here’s a tip:
site:reddit.com
Makes me sad to think that this will soon be about as useful as “site:facebook.com” with the way Reddit is going.
Yeah, maybe giving corpo trash exclusivity over the sum total of human knowledge wasn’t the best idea?
Or do this:
-site:reddit.com
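If it helps, a minimal sketch of the operator in use; the query terms are just an illustration, and `q` is Google’s standard query parameter:

```python
# Build a Google search URL that excludes reddit results.
from urllib.parse import urlencode

query = "lemmy federation explained -site:reddit.com"
print("https://www.google.com/search?" + urlencode({"q": query}))
```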
Do you think it will ever be possible to do that for all the Lemmy instances?
Pretty much all content gets federated to lemmy.world so if you use site:lemmy.world that’ll do it.
If you look for something related to piracy, sadly it won’t show.
Kagi.com has a lens for the fediverse. A lens is basically a scope within which the search is performed.
Nah. The best option we have imo is a service that indexes everything on one site so traditional search engines can find it. That requires someone to build it, and AFAIK that hasn’t happened.
That or the search engines themselves implement their own fediverse instances just for the purposes of indexing results. At a certain point if the platform becomes relevant enough I think we could see that happen.
I think they’d probably prefer instances that they have control over to reduce the avenues for a third party to manipulate the results. Otherwise they have to trust whoever runs the search instances.
It already works pretty well if you just add Lemmy to the search.
Lemmy’s built-in search barely works as it is, so unless some drastic changes happen it’s a resounding no.
Web search engines don’t rely on sites’ built-in search features.
This is how we found anything on reddit for most of its useful life. Its search was always garbage so we relied on Google to come up with usable results.
It’s miles better than reddit’s search has ever been.
Okay, but reddit is also becoming inaccessible; how do we migrate this data?
I look forward to Google being forced to downrank any sites with “reddit” in the H1.
Google being forced to
What an odd phrase
I’ve spent a lot of time working in SEO.
Search results like this can drive people away from Google and toward other resources. Google likes money, and this is why they usually try to combat spammers that are gaming the system.
It’s a cat and mouse game that has been happening for years. Organic search spammers find a new thing, then Google tweaks the algorithm to downrank what they’re exploiting.
then Google tweaks the algorithm
Well you don’t have to read Cory’s newest column to understand that Google hasn’t been doing that, because they don’t have to. They do not care, at least not yet, because they have arguably become too big to care.
Well google does a horrible job at combating it
No doubt. That said, they do update the algo to combat this stuff. If you work in SEO you’re likely quite aware of what tricks currently work and no longer work.
As useful as Mozilla/5.0; AppleWebKit/537.36 (KHTML, like Gecko) Safari/537.3
Mine is Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0. The joke is, this is the trimmed version (via the about:config Xorigin and trimming settings), and some pages already have problems with it. If you strip out the OS part, pages like google.com won’t work anymore. Despite that, you shouldn’t parse the UA string…
What browser agent is that?
The trick is I took out the actually useful parts, like Chrome, Firefox, Edge, etc., and the OS. All the agents these days have AppleWebKit and Mozilla in them just so old websites that look for them don’t downgrade the experience.
Yeah, make your user agent absolutely unique. Too much entropy will surely confuse the shit out of server-side HTTP header tracking. 😬
Oh gee, I wasn’t aware there was more to it than the UA. Thanks for opening my eyes.
Edit: I checked your link; most of the parameters in the test require client-side execution. Client-side tracking is completely unrelated to the server-side tracking I was talking about, and it’s something you can control (by not allowing JavaScript, for example). Please don’t confuse the two. There is literally nothing you can do against server-side tracking.
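To illustrate the point, here’s a toy sketch of what server-side header fingerprinting can look like; the headers are hypothetical, not anybody’s actual tracker:

```python
# Toy server-side fingerprint: hash whatever headers arrive with the request.
# A hand-trimmed, rare User-Agent makes this digest close to unique per visitor.
import hashlib

def header_fingerprint(headers: dict) -> str:
    material = "|".join(f"{k.lower()}:{v}" for k, v in sorted(headers.items()))
    return hashlib.sha256(material.encode()).hexdigest()[:16]

common = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
          "Accept-Language": "en-US,en;q=0.5"}
trimmed = {"User-Agent": "Mozilla/5.0 AppleWebKit/537.36 Safari/537.3",
           "Accept-Language": "en-US,en;q=0.5"}
print(header_fingerprint(common))   # shared by countless visitors
print(header_fingerprint(trimmed))  # effectively a personal ID
```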
Yeah this isn’t my UA but I’m just saying these parts are what’s considered the supported featureset rather than information about what software the device is running.
Yes, I get that point, but I also think it’s tempting for the privacy-minded novice to think “the less information I provide, the better!”, while in actuality it is better to provide “more” information: the most common UA, even if that means lying about your featureset. In this case, truly, more is less.
Firefox doesn’t pretend to use AppleWebKit. It’s actually the only one which identifies itself correctly… mostly, at least:
Mozilla/5.0 (X11; Linux x86_64; rv:122.0) Gecko/20100101 Firefox/122.0
While about:support says “Window Protocol: wayland”. But that’s OK; websites shouldn’t care anyway.
It’s other browsers that send things like “like Gecko” to sneak past old browser-detection code.
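For anyone wondering why that works: old sniffing code was usually just a substring check, something like this toy sketch (not any real site’s code):

```python
# Naive legacy browser sniffing: a substring test can't tell
# "Gecko" apart from "like Gecko", which is exactly the point.
def gets_modern_site(user_agent: str) -> bool:
    return "Gecko" in user_agent

print(gets_modern_site("Mozilla/5.0 (X11; Linux x86_64; rv:122.0) Gecko/20100101 Firefox/122.0"))          # True
print(gets_modern_site("Mozilla/5.0 ... AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120 Safari/537.36"))  # also True
```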
Probably Netscape
Firefox on iPhone, I would guess.
deleted by creator
There’s something very Darwinian, very artificial selection about this.
I fucking hate SEO abusers. I have to use a locally hosted AI for a lot of my “googling” because modern-day search results are fucking worthless now.
Is the AI open source? Curious what you’re using and what your experiences with it are.
llama.cpp. Remember to modify the launch script to use multiple cores. Go to Hugging Face and look for GGUF-compatible models.
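If you’d rather drive it from Python, the llama-cpp-python bindings expose the thread count directly; the model path below is a placeholder, any GGUF file from Hugging Face works:

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example.Q4_K_M.gguf",  # placeholder; point at your GGUF file
    n_threads=8,  # the "multiple cores" part
)
out = llm("Summarize why SEO spam ruins search results.", max_tokens=128)
print(out["choices"][0]["text"])
```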
Google also sneaks “reddit” into the “People also ask” section.
Stop using “reddit” and use “site:reddit.com”, searchers.
site:reddit.com
Am I the only one that wants to know more about this Japanese toaster you can fuck?
Surely I’m not alone here.
Mouse, meet cat