conrad82

  • 5 Posts
  • 133 Comments
Joined 2 years ago
Cake day: June 10th, 2023





  • conrad82 to Selfhosted · Encrypted backups to the cloud · 5 points · 2 months ago

    I run Proxmox, and Proxmox Backup Server in a VM. The PBS backup is encrypted locally, and I upload it to Backblaze B2 using rclone in a cron job. I store the decryption key elsewhere.

    It has worked OK for me. I also upload a heartbeat file, which is just an empty file with today's date (touch heartbeat), so that I can easily check when the last upload happened.
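
    For illustration, a minimal Python sketch of what such a cron job could run; the rclone remote name and datastore path are hypothetical, and it assumes rclone is already configured:

        # upload_backup.py - hypothetical cron-driven sync of a PBS datastore to Backblaze B2
        import subprocess
        from datetime import date
        from pathlib import Path

        DATASTORE = Path("/mnt/pbs-datastore")   # PBS datastore, already encrypted by PBS (made-up path)
        REMOTE = "b2:my-bucket/pbs"              # rclone remote:bucket/prefix (made-up name)

        # Heartbeat: a file stamped with today's date, so the last
        # successful upload is easy to check in the bucket.
        (DATASTORE / "heartbeat").write_text(date.today().isoformat() + "\n")

        # rclone only transfers what changed since the last run.
        subprocess.run(["rclone", "sync", str(DATASTORE), REMOTE], check=True)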


  • conrad82 to Selfhosted · Docker in LXC vs VM · 1 point · 2 months ago

    When I have used NFS in the past, I had issues with mismatched user IDs. What is the best solution these days?

    After becoming a father last year, the time I have for tinkering is close to zero. I found it easiest to keep all the data in the same VM / LXC; it is pretty straightforward to maintain.




  • conrad82 to Selfhosted · Docker in LXC vs VM · 3 points · 2 months ago

    I used to use LXC, and switched to a VM since the internet said it was better.

    I kinda miss the LXC setup. Day to day I don't notice any difference, but increasing storage space in the VM was a small pain compared to LXC. In the VM I increased the disk size through Proxmox, but then I also had to grow the partition inside the VM.

    In LXC you can just increase the disk size and it is immediately available to the container.




  • conrad82 to Linux@lemmy.ml · How to distrohop!? · 3 points · 4 months ago

    I set up a server with all my stuff on it, use Syncthing to sync my files, and self-host my services. I mostly use vanilla configs for apps, and prefer distros with good defaults.

    Some time ago I switched to Bluefin, and stopped distro-hopping 😅





  • Yes, it is correct. TL;DR: threads run one piece of code at a time, but they can access the same data; processes are like running Python several times and can run code simultaneously, but sharing data between them is cumbersome.

    If you use multiple threads, they all run in the same Python instance, and they can share memory (i.e. objects/variables can be shared). Because of the GIL (explained in another comment), the threads cannot run at the same time. This is OK if you are I/O bound, but not if you are CPU bound.
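
    A small illustrative sketch of the threading case, with time.sleep standing in for I/O: all threads append to the same list, and the waits overlap because the GIL is released while sleeping:

        import threading
        import time

        results = []  # shared between threads: same process, same memory

        def fetch(i):
            time.sleep(1)          # simulated I/O; the GIL is released while waiting
            results.append(i * i)  # every thread writes into the same list

        threads = [threading.Thread(target=fetch, args=(i,)) for i in range(10)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print(sorted(results))  # takes about 1 second, not 10, since the waits overlap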

    If you use multiprocessing, it is like running Python (from the terminal) multiple times. There is no shared memory, and there is a large overhead since you have to start up Python many times. But if you have large, long-running calculations that can be done in parallel, it will be much faster than threads, since it can use all CPU cores.
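
    A minimal sketch of the CPU-bound multiprocessing case; the work function is made up, but the Pool pattern is the standard one:

        import multiprocessing as mp

        def heavy(n):
            # CPU-bound work; each call runs in its own Python process
            return sum(i * i for i in range(n))

        if __name__ == "__main__":   # needed on platforms that spawn fresh interpreters
            with mp.Pool() as pool:  # one worker per CPU core by default
                totals = pool.map(heavy, [10_000_000] * 8)
            print(totals)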

    If these processes need to share data, it is more complicated. You need to use special mechanisms like queues and pipes. If you need to share many MB of data, this takes a lot of time in my experience (tens of milliseconds).
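
    For example, passing a result back through a Queue looks like this; the payload gets pickled and copied between processes, which is where the cost for large objects comes from:

        import multiprocessing as mp

        def worker(q):
            # The result is pickled and sent over a pipe to the parent process,
            # so big objects take real time to transfer.
            q.put({"status": "done", "value": 42})

        if __name__ == "__main__":
            q = mp.Queue()
            p = mp.Process(target=worker, args=(q,))
            p.start()
            print(q.get())  # blocks until the worker puts something on the queue
            p.join()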

    If you need to do large calculations, using numpy functions or numba may be faster than multiple processes, thanks to their optimizations. But if you need to crunch a lot of data, multiprocessing is usually the way to go.
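
    For instance, a vectorized numpy sum of squares runs the loop in optimized C, and will often beat splitting the equivalent pure-Python loop across processes:

        import numpy as np

        data = np.arange(10_000_000, dtype=np.float64)

        fast = np.sum(data * data)       # vectorized: the loop happens inside numpy, in C
        slow = sum(x * x for x in data)  # pure-Python loop over the same data, much slower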