Does anyone run their own Lemmy instance on a pi? How was the process of setting it up? Were there any pitfalls? How is performance?
[Edit] So, a lot of testing so far: compiling from scratch, etc., etc…
So far I have tried:
- installing Lemmy using rootless Docker (on 0.17.3)
- compiling the 0.18 Docker image for ARM
Rootless Docker did not work well for me: lots of systemd issues, and I gave up after running into too many of them. I had tried rootless Docker for security reasons (minimal permissions, etc.).
When trying to compile the latest Lemmy image for ARM, I ran into issues with muslrust not having an ARM version. It might be worth rewriting the 0.17.3 Dockerfile to work with 0.18.0, but I haven't investigated that fully yet! I tried compiling the latest image because I wanted to be able to use the latest features.
At the moment, I'm trying to get Lemmy running on bare metal. I'm currently attempting to compile Lemmy for ARM. If that works, I'll start setting up .service files to start up Lemmy and pict-rs.
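For reference, this is roughly the kind of unit file I have in mind for the server (completely untested; the user, binary path, and config location are placeholders for wherever things end up, and it assumes Postgres also runs locally from the distro package). pict-rs would get an analogous unit.

```ini
# /etc/systemd/system/lemmy.service -- rough sketch, not a tested config
[Unit]
Description=Lemmy server
After=network.target postgresql.service
Requires=postgresql.service

[Service]
User=lemmy
# LEMMY_CONFIG_LOCATION tells lemmy_server where to find its hjson config
Environment=LEMMY_CONFIG_LOCATION=/etc/lemmy/lemmy.hjson
ExecStart=/usr/local/bin/lemmy_server
Restart=on-failure

[Install]
WantedBy=multi-user.target
```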
I was able to get it set up. Main things to watch out for:
- Don't use the provided docker-compose file as-is. Or, more precisely, don't build from source; look up the correct image tag on Docker Hub first.
- The documentation was a bit confusing. This isn't really specific to the Pi, but since I was creating a compose file from scratch, some of the steps listed didn't quite explain all of the details.
I only used it for testing purposes, but performance was fine (on a Pi 4 with 4 GB). Note I only ever had one user.
As I only want to use it for myself as a jump-off point (and to mess around a tad), I'm fine with the performance on an RPi4 (I have the 8 GB version), but I'm struggling to get it running next to everything else in my Debian install on it.
A local install fails because I need ImageMagick 7 (Debian still has 6.9), and it refuses to compile with the IMEI method (that script wants to use /usr/local/bin/identify, which I think it needs to install itself as part of ImageMagick). And I couldn't get the compose file to work with an external (already hosted) Postgres.
Any tips? I'm totally new to Docker and Ansible.
So in the official docker-compose.yml there are lines that define where/how to get the image for each application. For example:

    build:
      context: ../
      dockerfile: docker/Dockerfile

This tells Docker to look for a file called docker/Dockerfile in the parent directory, which means that when you call `docker compose up -d` it will build an image from source using that Dockerfile. For the Pi we don't want this (at least as of 0.17.x; I haven't tested 0.18.0 yet). Instead we want to use a pre-built image. To do that we need to go to Docker Hub, specifically https://hub.docker.com/r/dessalines/lemmy/tags, and find the latest tag that matches the architecture of the system we're building on. I assume you're on a Pi 4 running a 64-bit OS, so that gives us 0.17.3-linux-arm64. Once you've got that tag, we just need to replace those three lines above with:

    image: dessalines/lemmy:0.17.3-linux-arm64

Now when we call `docker compose up -d` it will pull down that prebuilt image instead of building from source. By the way, you'll want to do the same for the lemmy-ui service.
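Putting it together, the relevant bits of the compose file end up looking roughly like this (the lemmy-ui tag is just my guess at the same naming pattern; check https://hub.docker.com/r/dessalines/lemmy-ui/tags to confirm the exact tag):

```yaml
services:
  lemmy:
    # was: build: { context: ../, dockerfile: docker/Dockerfile }
    image: dessalines/lemmy:0.17.3-linux-arm64
    # ...rest of the service definition unchanged
  lemmy-ui:
    image: dessalines/lemmy-ui:0.17.3-linux-arm64
    # ...rest unchanged
```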
P.S. I don't have much experience using Ansible, so I can't help there. I normally just SSH directly into the Pi and do everything there.
Oh wait, I forgot, compose .yaml syntax is (almost) the same as Ansible's, so no need for Ansible. Thanks for pointing to the Docker images; I'll start messing about with those. Still need to pick an FQDN for those instances. Do I want to use lemmy.my.domain or my.domain directly? (It's all mine anyway.)
I'm looking at setting up a Lemmy instance on an RPi3 with a cloudflared tunnel! I'm curious to see if anyone else has done this and how it went.
Edit: I’ll give it a whirl and hopefully post an update from my new instance later!
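The plan, roughly, is a named tunnel with an ingress rule pointing at the Pi; something like this in the cloudflared config (the tunnel ID, hostname, and local port are placeholders; the port has to match whatever your Lemmy proxy actually listens on):

```yaml
# ~/.cloudflared/config.yml -- sketch only, values are placeholders
tunnel: <TUNNEL-UUID>
credentials-file: /home/pi/.cloudflared/<TUNNEL-UUID>.json
ingress:
  - hostname: lemmy.example.com
    service: http://localhost:80   # point at the lemmy/nginx proxy port
  - service: http_status:404      # catch-all for anything else
```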
Edit 2: I appear to have lost my micro SD card reader! I can't write a new image hah… never mind, found it!
Please don't forget to give us updates on your adventure ^^
I did this exact setup with an RPi4B and it worked flawlessly. Just follow the "Docker installation" guide in the setup docs and replace the image with an arm64 one.
deleted by creator
I don't think there should be any problems. Lemmy is a fairly lightweight web application, and it's compiled, so there's no big runtime overhead like Ruby in Mastodon's case. I haven't tried it on a Raspberry Pi, but on my server the load is always just around 0.1.
The only bottleneck I could think of is Postgres, but I've run Postgres on Raspberry Pis without any problems before, too.
Hey OP, I’m on a similar journey (except I’m using an rpi kubernetes cluster)
I don’t have advice but I do want to wish you good luck
Here: my daily “simply a nice stranger” award goes to you
Hey thanks, stranger
You seem pretty nice yourself
Removed by mod
You could plug in a USB SSD or HDD and make sure the DB and other regularly written data goes there. That would pretty much remove the problem.
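With the stock docker-compose setup that mostly means pointing the Postgres and pict-rs volumes at the drive, something like this (service names and container paths follow the official compose file, and it assumes the SSD is mounted at /mnt/ssd; double-check against your own file):

```yaml
services:
  postgres:
    volumes:
      - /mnt/ssd/lemmy/postgres:/var/lib/postgresql/data   # DB files on the SSD
  pictrs:
    volumes:
      - /mnt/ssd/lemmy/pictrs:/mnt                         # uploaded images on the SSD
```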
I would wonder how well it would perform. The limited memory and CPU power would surely make database access not great under even moderate load.
Removed by mod
What user cap would a pi have running an instance?
Are you asking what I plan to set the cap to? I guess just me. I can't see anyone else wanting to run off a Pi in my house, and there are so many other instances to join.
I’m a newbie here but what would be the benefit of running an instance just for yourself?
The ability to host your own data - both for privacy, and as insurance that the instance you host your account on won't suddenly disappear.
I would also add that Lemmy is part of the fediverse, meaning it is federated. Federation means all instances “talk” to all instances (unless they defederate), so you aren’t limited only to the content on one instance (or in some cases not even Lemmy, case in point: I’m posting this from my kbin.social account).
What happens to posts/comments and any media/content that is hosted on a server that just goes away (for example, if I created one virtually and then deleted it, or if the SD card on a Pi gets corrupted)?
If you upload an image to that server, the image will be gone. Your comments will still exist on other federated instances, assuming that instance was federated in the first place. But any replies to those communities will not propagate once the hosting instance is offline.
For example assume you have 3 instances, A, B, and C. You have an account on A and create a post to a community on A. At some point A goes away, but those posts and that community will still exist on B and C. So you create a new account on B, and reply to one of those posts… users on C won’t be able to see those replies as A isn’t there to broadcast those replies out. And if someone on C creates a new post on that community from A, you wouldn’t be able to see it on B either.
P.S. The same is true if A just decides to defederate instead of shutting down (except that the images and accounts would obviously still exist).
same!
That, and somehow I think it’s nice to be able to federate with a username within your own domain when you have one. (or multiple, decisions, decisions, which one to pick ;) )
Never have to worry about what an admin decides to defederate/block.
No, I meant: what is the user limit, based on the Raspberry Pi's tech specs?
Basically the limit would be the speed of the database and the drive it runs on. If you connect a SATA SSD via usb3 it shouldn’t be too bad. Can’t tell you exact figures but a few hundred users is probably ok if you don’t expect the site to be super responsive.
Well, “ish”.
My experience with databases in general (granted, more the big ones than stuff like Postgres and MySQL) is that a lot, if not most, of the stuff that's important for performance is held in memory (certainly they'll tend to keep the most frequently fetched data in memory, along with the most-used indexes), so I suspect the bigger Pi models (with 4 GB and 8 GB) might just have enough memory to handle a good number of people doing common things (say, checking All in Active mode).
With a really big database and a usage profile with a uniformly random distribution (i.e. any piece of data is just as likely to be fetched as any other), the DB being I/O-bound on a Pi makes sense. But it's my impression (or maybe it's just me ;)) that Lemmy's data access is concentrated on just a few things (which do change over time, but the DB engine will naturally adjust the memory cache contents for that kind of change).
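If you want to lean into that, you can nudge Postgres to keep more of the hot data in RAM via postgresql.conf. Rough, purely illustrative numbers for a 4 GB Pi (not tuned figures, and they assume nothing else heavy runs on the box):

```
shared_buffers = 512MB        # Postgres' own cache for hot rows and indexes
effective_cache_size = 2GB    # hint to the planner about OS page cache size
work_mem = 16MB               # per-sort/hash memory
random_page_cost = 1.1        # assumes an SSD rather than spinning disk
```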
From the little that I know about the structure of the Lemmy software, I expect it’s the image server that’ll have problems with slow I/O rather than the database.
Of course, all this is just conjecture, as while I worked in high performance computing, it wasn’t exactly done with Raspberry Pi devices ;)
Thanks. It might be useful to have a table outlining different hardware configs and acceptable user loads as more people consider creating instances.
It's difficult because different users have different usage patterns.
For example, two users who never post and are never online at the same time take essentially no resources from each other; they are effectively "one" user. One user who posts 10 GB of content a day, and is constantly posting, would be equivalent to hundreds of "normal" users.
Yes, sure, didn’t want to complicate the question by adding that :)