In the past two weeks, the site has been very unstable: one second a page would load, and the next it wouldn't load at all. The status of lemmy.world fluctuates a lot on lemmy-status.org, and there have been times when the site was down for hours.
There have been numerous DDoS attacks and the like on the site recently. Ruud, the operator, has been very forthcoming with information.
Where can I find this information?
c/lemmyworld is where he makes announcements
Hi there! Looks like you linked to a Lemmy community using a URL instead of its name, which doesn’t work well for people on different instances. Try fixing it like this: !lemmyworld@lemmy.world
DDOS attacks
Didn’t they also say that there were other kinds of attacks being used as well, not just DDoS? I may be remembering wrong, but I feel like I heard one of the admins say something about a database-related attack where they overloaded the database with download requests, or something like that.
Could be, I’ll look into that
Either the problems with its API responses are breaking lemmy.world, or a broken lemmy.world is causing problematic API responses.
Currently, you can ask lemmy.world for page one billion of its communities and it’ll return a populated response (for the communities it thinks it has on that page) rather than the empty one it should. For something like lemmyverse.net, this means its crawler can never get to the end of a scan, and some apps may be trying to endlessly load the list.
References:
https://github.com/tgxn/lemmy-explorer/issues/139
https://lemmy.world/post/2651283

This is the same reason I had to turn off my search engine’s crawler.
There were changes made to the API to ignore any page > 99, so if you ask for page 100 or page 1_000_000_000 you get the first page again. This would cause my crawler to never finish fetching “new” posts.
lemm.ee, on the other hand, made a similar change, but anything over 99 returns an empty response. lemm.ee also flat-out ignores `sort=Old`, always returning an empty array.

Both of these servers did it for, I assume, the same reason: using a high page number significantly increases the response time. It used to be (before they blocked pages over 99) that responses could take 8-10 seconds, while asking for a low page number would return in 300 ms or less. Because the existing queries are a lot harder to optimize, and it may not even be possible, the problematic APIs were just disabled for now.
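If it helps anyone else writing a crawler against this: here’s a rough toy sketch of the two capping behaviors described above and a paging loop that survives both. None of this is the real server or crawler code, and the cap is 3 pages instead of 99 to keep it small.

```python
# Toy dataset: three "pages" of community names.
DATA = {1: ["a", "b"], 2: ["c", "d"], 3: ["e", "f"]}

def wrap_style(page):
    # lemmy.world-style: any page over the cap serves page 1 again,
    # so a crawl-until-empty loop never sees an empty response.
    return DATA[(page - 1) % 3 + 1]

def empty_style(page):
    # lemm.ee-style: any page over the cap is an empty array,
    # so a naive crawler terminates on its own.
    return DATA.get(page, [])

def crawl(fetch_page, max_pages=1000):
    # Defensive paging: stop on an empty page, on the first duplicate
    # item (which catches the wrap-around), or at a hard page cap.
    seen, items = set(), []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page)
        if not batch or any(x in seen for x in batch):
            break
        seen.update(batch)
        items.extend(batch)
    return items

print(crawl(wrap_style))   # ['a', 'b', 'c', 'd', 'e', 'f']
print(crawl(empty_style))  # same result, via the empty-page stop
```

The duplicate check is the important part: against a server that wraps high pages back to page 1, “stop when the page is empty” alone is an infinite loop.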
Oh wow, I thought this was a bug in the Lemmy API. I was implementing comment paging in my app the other day and noticed it would just infinitely load pages with duplicate comments. I guess I should have tested with another instance before disabling the feature.
Can you check periodically so you can get the link?
Yep. Worked this time, so I’ve edited my comment.
It’s been under attack from malicious actors: some DDoS, some database-related (overloading the database with download requests). Likely because lemmy.world is one of the largest instances, and attackers figure they can cause the most damage by hitting this one.
I would guess that it’s due to the rapidly rising number of users.
And attacks…
Fucking assholes ruin everything
…which stem from the large userbase making it an attractive target.
Ok, who’s attacking?