So I run a small Kubernetes cluster (k3s) backed by MariaDB hosted on a Synology NAS (HDDs only), rather than by etcd colocated on the control nodes. For resiliency it’s been great: nodes are basically pure compute resources I can wipe and recreate with ease without worrying about data loss. However, for over a year now I’ve lived with the constant chatter of active hard drives in my office.
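
For anyone curious how that part works: k3s can be pointed at an external MySQL/MariaDB datastore instead of embedded etcd with a single flag. A minimal sketch is below; the hostname, credentials and database name are placeholders, not my actual setup:

```
# sketch: start a k3s server against an external MariaDB datastore
# (host, user, password and database name are placeholders)
k3s server \
  --datastore-endpoint="mysql://k3s:changeme@tcp(nas.local:3306)/kubernetes"
```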

The Kube DB workload is extremely read-heavy and very active: many thousands of selects per minute with only a handful of writes. Clickclickclickclickclickclick. It seemed like a good case for caching, and luckily my NAS has two NVMe slots for an SSD cache. I bought a couple of data center drives with power loss protection (Kingston DC1000B, probably overkill, but not crazy expensive), popped them in, set up a read/write cache for the database and Kube NFS volumes and…silence, wonderful silence. It’s almost constantly at 100% cache hits. Bonus points that things are faster as well.
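
If you want to sanity-check how read-heavy your own workload is before buying cache drives, MariaDB’s global counters give a rough picture. This is just a sketch; point it at your own server and credentials:

```
# compare cumulative read vs. write statement counters since the server started
mysql -e "SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Com_select','Com_insert','Com_update','Com_delete','Uptime');"
```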

I’m very happy. Never optimized an application for noise levels before 😁.

  • @SheeEttin · 1 year ago
    Make sure your drive firmware is updated. A surprising number of SSDs have firmware bugs that cause premature wear-out.
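
    If you have shell access to the box the drives live in, smartctl will report the firmware revision a drive is currently running (the device path below is just an example):

    ```
    # print the drive's identity info, including "Firmware Version"
    smartctl -i /dev/nvme0
    ```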

    Personally, if I could fit the cache in RAM, I’d just do that.

    • @macgregorOP · 1 year ago (edited)

      I considered it, but RAM is very limited on the NAS and the cluster nodes; it’s my primary bottleneck. It would also be more volatile. The two SSDs are RAID 1 redundant, just like the underlying HDDs, in addition to the built-in power loss protection on the drives. RAM disks are great if you can spare them and have a UPS, though.