Hello! I have been struggling through a few tutorials on getting a Lemmy instance to work correctly when set up with Docker. I have it mostly done, but each attempt leaves various issues that I do not have the knowledge to correct properly. I am familiar with Docker, and I already have an Oracle VPS running ARM64 Ubuntu, with Portainer and an NGINX proxy set up and working okay. I have an existing Lemmy instance “running” but not quite working. My best guess is that I need someone to help get the docker-compose file working with current updates/settings, as well as the config.hjson.

TIA, and I can't wait to have my own entry into the fediverse working right!

  • @[email protected]OP
    link
    fedilink
    English
    1
    edit-2
    1 year ago

    (1/2) Alright, thanks for helping.

    docker-compose.yml

    version: "3.3"
    
    networks:
      # communication to web and clients
      lemmyexternalproxy:
      # communication between lemmy services
      lemmyinternal:
        driver: bridge
        internal: true
    
    services:
      lemmy:
        image: dessalines/lemmy
        # this hostname is used in nginx reverse proxy and also for lemmy ui to connect to the backend, do not change
        hostname: lemmy
        networks:
          - lemmyinternal
        restart: always
        environment:
          - RUST_LOG="warn,lemmy_server=debug,lemmy_api=debug,lemmy_api_common=debug,lemmy_api_crud=debug,lemmy_apub=debug,lemmy_db_schema=debug,lemmy_db_views=debug,lemmy_db_views_actor=debug,lemmy_db_views_moderator=debug,lemmy_routes=debug,lemmy_utils=debug,lemmy_websocket=debug"
          - RUST_BACKTRACE=full
        volumes:
          - ./lemmy.hjson:/config/config.hjson:Z
        depends_on:
          - postgres
          - pictrs
    
      lemmy-ui:
        image: dessalines/lemmy-ui
        # use this to build your local lemmy ui image for development
        # run docker compose up --build
        # assuming lemmy-ui is cloned beside the lemmy directory
        # build:
        #   context: ../../lemmy-ui
        #   dockerfile: dev.dockerfile
        networks:
          - lemmyinternal
        environment:
          # this needs to match the hostname defined in the lemmy service
          - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
          # set the outside hostname here
          - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.bulwarkob.com:1236
          - LEMMY_HTTPS=false
          - LEMMY_UI_DEBUG=true
        depends_on:
          - lemmy
        restart: always
    
      pictrs:
        image: asonix/pictrs:0.4.0-beta.19
        # this needs to match the pictrs url in lemmy.hjson
        hostname: pictrs
        # we can set options to pictrs like this, here we set max. image size and forced format for conversion
        # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
        networks:
          - lemmyinternal
        environment:
          - PICTRS_OPENTELEMETRY_URL=http://otel:4137
          - PICTRS__API_KEY=API_KEY
          - RUST_LOG=debug
          - RUST_BACKTRACE=full
          - PICTRS__MEDIA__VIDEO_CODEC=vp9
          - PICTRS__MEDIA__GIF__MAX_WIDTH=256
          - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
          - PICTRS__MEDIA__GIF__MAX_AREA=65536
          - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
        user: 991:991
        volumes:
          - ./volumes/pictrs:/mnt:Z
        restart: always
    
      postgres:
        image: postgres:15-alpine
        # this needs to match the database host in lemmy.hjson
        # Tune your settings via
        # https://pgtune.leopard.in.ua/#/
        # You can use this technique to add them here
        # https://stackoverflow.com/a/30850095/1655478
        hostname: postgres
        command:
          [
            "postgres",
            "-c",
            "session_preload_libraries=auto_explain",
            "-c",
            "auto_explain.log_min_duration=5ms",
            "-c",
            "auto_explain.log_analyze=true",
            "-c",
            "track_activity_query_size=1048576",
          ]
        networks:
          - lemmyinternal
          # adding the external facing network to allow direct db access for devs
          - lemmyexternalproxy
        ports:
          # use a different port so it doesn't conflict with a potential postgres db running on the host
          - "5433:5432"
        environment:
          - POSTGRES_USER=noUsrHere
          - POSTGRES_PASSWORD=noPassHere
          - POSTGRES_DB=noDbHere
        volumes:
          - ./volumes/postgres:/var/lib/postgresql/data:Z
        restart: always
    

    The NGINX I am using is not the one that came with the stack; it is a separate, standalone container running nginx-proxy-manager. I did not customize the conf it installed with, and only used the UI to set up the proxy host and SSL, both of which are working (the front end, at least). Its config seems unrelated to this, but I can share it if the rest of the information below is not enough.
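
    For reference, in case it would be simpler to fold the proxy into this same stack: here is a rough, untested sketch (the service name and host volume paths are guesses, not part of my current setup) of nginx-proxy-manager declared alongside the lemmy services and attached to both networks, so it could reach lemmy-ui by service name rather than by container IP:

      # hypothetical service, not part of my current stack
      # joins both networks so the lemmy / lemmy-ui hostnames resolve
      npm:
        image: jc21/nginx-proxy-manager:latest
        networks:
          - lemmyexternalproxy
          - lemmyinternal
        ports:
          - "80:80"
          - "443:443"
          - "81:81" # admin web UI
        volumes:
          - ./volumes/npm/data:/data
          - ./volumes/npm/letsencrypt:/etc/letsencrypt
        restart: always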

      • @[email protected]OP
        link
        fedilink
        English
        11 year ago

        Sure thing. lemmy.hjson:

        {
          # for more info about the config, check out the documentation
          # https://join-lemmy.org/docs/en/administration/configuration.html
          # only a few config options are covered in this example config
        
          setup: {
            # username for the admin user
            admin_username: "noUsrHere"
            # password for the admin user
            admin_password: "noPassHere"
            # name of the site (can be changed later)
            site_name: "Bulwark of Boredom"
          }
        
          # the domain name of your instance (eg "lemmy.ml")
          hostname: "lemmy.bulwarkob.com"
          # address where lemmy should listen for incoming requests
          bind: "0.0.0.0"
          # port where lemmy should listen for incoming requests
          port: 8536
          # Whether the site is available over TLS. Needs to be true for federation to work.
          tls_enabled: true
        
          # pictrs host
          pictrs: {
            url: "http://pictrs:8080/"
            api_key: "API_KEY"
          }
        
          # settings related to the postgresql database
          database: {
            # name of the postgres database for lemmy
            database: "noDbHere"
            # username to connect to postgres
            user: "noUsrHere"
            # password to connect to postgres
            password: "noPassHere"
            # host where postgres is running
            host: "postgres"
            # port where postgres can be accessed
            port: 5432
            # maximum number of active sql connections
            pool_size: 5
          }
        }
        

        I am not certain whether I am somehow looking at the wrong config location inside the container. There is no volume or link mapping a conf file from host to container, so I am just grabbing it from the default location, /etc/nginx/nginx.conf:

        # run nginx in foreground
        daemon off;
        pid /run/nginx/nginx.pid;
        user npm;
        
        # Set number of worker processes automatically based on number of CPU cores.
        worker_processes auto;
        
        # Enables the use of JIT for regular expressions to speed-up their processing.
        pcre_jit on;
        
        error_log /data/logs/fallback_error.log warn;
        
        # Includes files with directives to load dynamic modules.
        include /etc/nginx/modules/*.conf;
        
        events {
        	include /data/nginx/custom/events[.]conf;
        }
        
        http {
        	include                       /etc/nginx/mime.types;
        	default_type                  application/octet-stream;
        	sendfile                      on;
        	server_tokens                 off;
        	tcp_nopush                    on;
        	tcp_nodelay                   on;
        	client_body_temp_path         /tmp/nginx/body 1 2;
        	keepalive_timeout             90s;
        	proxy_connect_timeout         90s;
        	proxy_send_timeout            90s;
        	proxy_read_timeout            90s;
        	ssl_prefer_server_ciphers     on;
        	gzip                          on;
        	proxy_ignore_client_abort     off;
        	client_max_body_size          2000m;
        	server_names_hash_bucket_size 1024;
        	proxy_http_version            1.1;
        	proxy_set_header              X-Forwarded-Scheme $scheme;
        	proxy_set_header              X-Forwarded-For $proxy_add_x_forwarded_for;
        	proxy_set_header              Accept-Encoding "";
        	proxy_cache                   off;
        	proxy_cache_path              /var/lib/nginx/cache/public  levels=1:2 keys_zone=public-cache:30m max_size=192m;
        	proxy_cache_path              /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;
        
        	log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"';
        	log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"';
        
        	access_log /data/logs/fallback_access.log proxy;
        
        	# Dynamically generated resolvers file
        	include /etc/nginx/conf.d/include/resolvers.conf;
        
        	# Default upstream scheme
        	map $host $forward_scheme {
        		default http;
        	}
        
        	# Real IP Determination
        
        	# Local subnets:
        	set_real_ip_from 10.0.0.0/8;
        	set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
        	set_real_ip_from 192.168.0.0/16;
        	# NPM generated CDN ip ranges:
        	include conf.d/include/ip_ranges.conf;
        	# always put the following 2 lines after ip subnets:
        	real_ip_header X-Real-IP;
        	real_ip_recursive on;
        
        	# Custom
        	include /data/nginx/custom/http_top[.]conf;
        
        	# Files generated by NPM
        	include /etc/nginx/conf.d/*.conf;
        	include /data/nginx/default_host/*.conf;
        	include /data/nginx/proxy_host/*.conf;
        	include /data/nginx/redirection_host/*.conf;
        	include /data/nginx/dead_host/*.conf;
        	include /data/nginx/temp/*.conf;
        
        	# Custom
        	include /data/nginx/custom/http[.]conf;
        }
        
        stream {
        	# Files generated by NPM
        	include /data/nginx/stream/*.conf;
        
        	# Custom
        	include /data/nginx/custom/stream[.]conf;
        }
        
        # Custom
        include /data/nginx/custom/root[.]conf;
        
        
        
        • I eat words · 1 year ago

          it seems there is no config for lemmy nginx here… might be in other files?

          • @[email protected]OP
            link
            fedilink
            English
            11 year ago

            I may be mistaken in how I chose to proceed, but as many are reporting, the docker-compose file and general Docker instructions provided in the install guide don't quite seem to work as expected. I have been trying to piece this together, and the included lemmy nginx service container was excluded entirely (edited out/deleted) once I had the standalone nginx-proxy-manager set up and working for regular 80/443 -> 1234 proxy requests to the lemmy-ui container.

            Does the lemmy nginx have a specific role or tie in? I am still fairly new to reverse proxying in general.

            • I eat words · 1 year ago

              yeah, nginx config for lemmy is not very straightforward. you need to mimic this:

              worker_processes  1;
              events {
                  worker_connections  1024;
              }
              http {
                  upstream lemmy {
                      server "lemmy:8536";
                  }
                  upstream lemmy-ui {
                      server "lemmy-ui:1234";
                  }
                  server {
                    listen       1236;
                    server_name  localhost;
              
                    # frontend
                    location / {
                      set $proxpass "http://lemmy-ui";
                      if ($http_accept = "application/activity+json") {
                        set $proxpass "http://lemmy";
                      }
                      if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
                        set $proxpass "http://lemmy";
                      }
                      if ($request_method = POST) {
                        set $proxpass "http://lemmy";
                      }
                      proxy_pass $proxpass;
              
                      rewrite ^(.+)/+$ $1 permanent;
              
                      # Send actual client IP upstream
                      proxy_set_header X-Real-IP $remote_addr;
                      proxy_set_header Host $host;
                      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    }
              
                    # backend
                    location ~ ^/(api|pictrs|feeds|nodeinfo|.well-known) {
                      proxy_pass "http://lemmy";
                      proxy_http_version 1.1;
                      proxy_set_header Upgrade $http_upgrade;
                      proxy_set_header Connection "upgrade";
              
                      # Add IP forwarding headers
                      proxy_set_header X-Real-IP $remote_addr;
                      proxy_set_header Host $host;
                      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    }
                  }
              }
              

              also - can you check if all containers are running? just do docker-compose ps in the lemmy dir.
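
              for example (assuming docker-compose.yml is in that directory; the logs line is just an extra sanity check):

              cd /path/to/lemmy                     # wherever your docker-compose.yml lives
              docker-compose ps                     # every service should show as Up
              docker-compose logs --tail=50 lemmy   # peek at the backend logs for errors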

              • @[email protected]OP
                link
                fedilink
                English
                11 year ago

                All containers are running. I handle them with Portainer, though I build the stack from the CLI in the lemmy dir, so Portainer can't fully manage them. Restarts, logs, networking, and so on all work fine, though.

                As for the nginx config: the nginx-proxy-manager I am using has all of its proxy-host settings configured from the web GUI, where I set up the proxy host and SSL information. I made no manual edits to any of the container's configuration during or after compose, only GUI actions. Looking at the nginx.conf I replied with here (my current conf), I do not see anything related to the proxy host I created from the GUI. I am not sure whether that is normal, or whether I have included the wrong .conf here.

                With that in mind, would you suggest I simply overwrite and/or add your snippet to my existing conf file?

                • I eat words · 1 year ago

                  try to look here for the config file:

                  include /etc/nginx/conf.d/*.conf;
                  	include /data/nginx/default_host/*.conf;
                  	include /data/nginx/proxy_host/*.conf;
                  	include /data/nginx/redirection_host/*.conf;
                  	include /data/nginx/dead_host/*.conf;
                  	include /data/nginx/temp/*.conf;
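
                  for example, from the host (the container name is a placeholder, use whatever portainer shows for nginx-proxy-manager):

                  # list the per-host configs the NPM web GUI generated
                  docker exec -it <npm-container> ls /data/nginx/proxy_host
                  # print them to see what the GUI actually wrote
                  docker exec -it <npm-container> cat /data/nginx/proxy_host/*.conf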
                  

                  btw, i think the port in lemmy.bulwarkob.com:1236 in the docker-compose is not needed for you, it should be just lemmy.bulwarkob.com
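
                  i.e., roughly this in the lemmy-ui environment (a sketch of just that change, everything else unchanged):

                    lemmy-ui:
                      environment:
                        - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
                        # external hostname only, without the port, since the reverse proxy fronts it
                        - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.bulwarkob.com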

                  • @[email protected]OP
                    link
                    fedilink
                    English
                    11 year ago

                    I appreciate your patience and clear assistance.

                    conf.d/* has two configurations that appear to be some form of default: default.conf and production.conf. production.conf is only for the admin GUI. default.conf:

                    The container has a volume set: /lemmy/docker/nginx-proxy-manager/data:/data

                    I have those folders and more, and they DO seem to have the correct custom item.

                    Specifically, in the proxy_host folder I have a configuration (1.conf) for the proxy host I set up in the GUI:

                    # ------------------------------------------------------------
                    # lemmy.bulwarkob.com
                    # ------------------------------------------------------------

                    server {
                      set $forward_scheme http;
                      set $server         "172.24.0.5";
                      set $port           1234;

                      listen 80;
                      listen [::]:80;

                      listen 443 ssl http2;
                      listen [::]:443 ssl http2;

                      server_name lemmy.bulwarkob.com;

                      # Let's Encrypt SSL
                      include conf.d/include/letsencrypt-acme-challenge.conf;
                      include conf.d/include/ssl-ciphers.conf;
                      ssl_certificate /etc/letsencrypt/live/npm-1/fullchain.pem;
                      ssl_certificate_key /etc/letsencrypt/live/npm-1/privkey.pem;

                      # Block Exploits
                      include conf.d/include/block-exploits.conf;

                      # Force SSL
                      include conf.d/include/force-ssl.conf;

                      access_log /data/logs/proxy-host-1_access.log proxy;
                      error_log /data/logs/proxy-host-1_error.log warn;

                      location / {
                        # Proxy!
                        include conf.d/include/proxy.conf;
                      }

                      # Custom
                      include /data/nginx/custom/server_proxy[.]conf;
                    }

                    The rest of the folders are empty: