Hello selfhosted!

Continuing my journey of setting up my home k3s cluster.

I’ve been asking myself if Longhorn might be overkill for my home cluster. Here’s what I did:

3 machines, each running k3s. One of them has storage in RAID 5, and I don’t want to use any storage from the other two.

Thing is, I had to configure replicas to 1 in Longhorn for my PV to be green.
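
Roughly what that looks like as a StorageClass, in case it helps (just a sketch; the class name is an example, and numberOfReplicas is the Longhorn parameter I mean):

```yaml
# Sketch: Longhorn StorageClass with replication turned down to 1,
# since the RAID 5 array already provides redundancy on that node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-single-replica   # example name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "1"           # no Longhorn-level replication
  staleReplicaTimeout: "2880"
```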

Hence my question: since the data is already replicated in the array, shouldn’t I just use an NFS storage class instead?
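
For comparison, the NFS alternative I have in mind would be something like this, assuming the csi-driver-nfs provisioner is installed (the server address and export path are placeholders):

```yaml
# Sketch: StorageClass backed by an NFS export served from the RAID node.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-raid              # example name
provisioner: nfs.csi.k8s.io   # requires the csi-driver-nfs addon
parameters:
  server: 192.168.1.10        # placeholder: node exporting the RAID array
  share: /exports/k8s         # placeholder export path
reclaimPolicy: Delete
mountOptions:
  - nfsvers=4.1
```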

Thanks!

  • @Starbuck
    1 year ago

    Longhorn is basically just acting like a fancy NFS mount in this configuration. It’s a really fancy NFS mount that works well with Kubernetes, for things like PVC resizing and snapshots, but Longhorn isn’t really stretching its legs in this scenario.
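
    For example, snapshots of a Longhorn volume go through the standard CSI snapshot API. A sketch, assuming the snapshot controller is installed, a VolumeSnapshotClass for Longhorn’s driver exists, and there’s a PVC called data (those names are made up):

    ```yaml
    # Sketch: CSI snapshot of a Longhorn-backed PVC.
    # Assumes a VolumeSnapshotClass named "longhorn-snapshots"
    # exists for the driver.longhorn.io CSI driver.
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: data-before-upgrade          # example name
    spec:
      volumeSnapshotClassName: longhorn-snapshots
      source:
        persistentVolumeClaimName: data  # hypothetical PVC in the same namespace
    ```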

    I’d say leave it, because it’s already set up. And someday you might add more (non-RAID) disks to those other nodes, in which case you can set Longhorn to replicas=2 and get some better availability.

    • @Burn1ngBull3t (OP)
      1 year ago

      You are completely right.

      However, in my mind (I might be wrong here), if I used another node, I wouldn’t be using the RAID array fully.

      While setting things up, I thought it was either:

      • an NFS StorageClass attached to the RAID array, no Longhorn
      • Longhorn with no RAID, but replication set to 3

      In either case, the availability of my data would be about the same, right?

      (Then there are the options to back up my PVs to S3 with Longhorn and all of that, which I would have to set up again, though.)
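
      The S3 part I’d have to redo is roughly a RecurringJob like this, assuming the backup target and credential secret are already set in Longhorn’s settings (the name, group and schedule are just examples):

      ```yaml
      # Sketch: Longhorn RecurringJob that backs up volumes in the
      # "default" group to the configured S3 backup target every night.
      apiVersion: longhorn.io/v1beta2
      kind: RecurringJob
      metadata:
        name: nightly-s3-backup    # example name
        namespace: longhorn-system
      spec:
        task: backup
        cron: "0 3 * * *"          # 03:00 daily, example schedule
        groups:
          - default                # volumes in the default group
        retain: 7                  # keep the last 7 backups
        concurrency: 2
      ```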

      Thanks for your answer!

      • @Starbuck
        1 year ago

        You’re on the right track here. Longhorn kind of makes RAID irrelevant, but only for data stored in Longhorn. So anything that lives on the host disk and not in a PV is at risk. I tend to use MicroOS and k3s, so I’m okay with the risk, but it’s worth considering.

        For replicas, I wouldn’t jump straight to 3 and ignore 2. A lot of distributed storage systems use 3 so that they can resolve the “split brain” problem. Basically, if half the nodes can’t talk to each other, the side with quorum (2 of 3) knows that it can keep going while the side with 1 of 3 knows to stop accepting writes it can’t replicate. But Longhorn already does this in a Kubernetes native way. So it can get away with replica 2 because only one of the replicas will get the lease from the kube-api.