I have a backup server running Proxmox Backup Server and OMV in separate VMs. OMV provides the storage as an NFS share to the Proxmox Backup Server VM.

I have multiple remote servers that connect to the Proxmox Backup Server, but recently I keep having issues with backups. Something about a file lock and ESTALE (stale file handle).

Is there an alternative to NFS I can use in OMV to provide the storage for the proxmox backup server?

I know there are vastly different configuration options, but I have some other things set up with OMV so I’m kinda stuck on that.

  • @[email protected]OP
    17 months ago

    Looks like my reply got purged in the server update.

    Running Proxmox bare metal. Two VMs: Proxmox Backup Server and OMV. Multiple HDDs passed through directly as SCSI devices to OMV. In OMV they're combined into a mergerfs pool. Two shared folders on the pool: one dedicated to Proxmox backups and the other for data backups. The Proxmox backup shared folder is an NFS share, and the other shared folder is accessed by a remote Duplicati server via SSH (SFTP?). Within the Proxmox Backup Server VM, the aforementioned NFS share is set up as a storage location.
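
    For reference, the NFS side of this is wired up roughly like the following inside the PBS VM (the hostname, export path, mount point, and datastore name below are placeholders, not my exact config):

      # /etc/fstab entry in the PBS VM for the OMV export (illustrative names/paths)
      omv.lan:/export/pbs-backups  /mnt/pbs-backups  nfs  defaults  0  0

      # Mount point then added as a PBS datastore
      proxmox-backup-manager datastore create omv-store /mnt/pbs-backups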

    I have no problems with the Duplicati backups at all. The Proxmox Backup Server was operating fine as well initially but began throwing the ESTALE error after about a month or two.

    Is there a way to fix the ESTALE error and also prevent it from recurring?

    • Pyrosis
      17 months ago

      So you mentioned using Proxmox as the underlying system, but when I asked about the Proxmox filesystem I was referring to whether you kept the defaults during installation, which would be LVM/ext4, or changed to ZFS as the underlying Proxmox filesystem. It sounds like you have additional drives that you used the Proxmox command line to "pass through" as SCSI devices. Just be aware this is not true passthrough. The disk is slightly virtualized, even though the entire storage of the device is handed to the VM. The only true passthrough, without that slight virtualization, is PCI passthrough utilizing IOMMU.
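
      For reference, here is roughly what the two approaches look like on the Proxmox host; the VM ID, disk ID, and PCI address are placeholders, not taken from your setup:

        # Hand a whole disk to VM 100 as a (slightly virtualized) SCSI device
        qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE-SERIAL

        # True passthrough of a disk controller via IOMMU (PCI passthrough)
        qm set 100 -hostpci0 0000:01:00.0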

      I have some experience with this specifically because of a client doing something similar with a TrueNAS VM. They discovered they couldn't import their pool into another system because Proxmox had slightly virtualized the disks when they were added to the VM in this manner. In other words, ZFS wasn't directly managing the disks; it was managing virtual disks.

      Anyway, it would still help to know the underlying filesystem of the slightly virtualized disks you gave to mergerfs. Are these ext4, XFS, btrfs? mergerfs is just a union filesystem that unifies storage across multiple mount points into a single virtual filesystem, which means you have another couple of layers of complexity in your setup.
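
      For context, a mergerfs pool is normally just a FUSE mount layered over the branch filesystems, something like this in OMV's /etc/fstab (the branch paths and options here are illustrative, not your actual config):

        # Union the individual data disks into one pool (illustrative paths/options)
        /srv/dev-disk-1:/srv/dev-disk-2  /srv/mergerfs/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0  0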

      If you are worried about disk IO you may consider letting the hypervisor manage these disks and storage a bit more directly, removing some of the filesystem layers.

      I could recommend just making a single ZFS pool from these disks within Proxmox to do this. Obviously this is a pretty big transition on a production system. Another option would be creating a btrfs RAID from these disks within Proxmox and adding that mountpoint as storage to the hypervisor.
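
      Roughly the shape of each option on the Proxmox host; the pool name, storage IDs, mount point, and disk IDs below are placeholders:

        # Option 1: ZFS pool on the host, then exposed to Proxmox as storage
        zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
        pvesm add zfspool tank-storage --pool tank

        # Option 2: btrfs RAID1 mounted on the host, added as directory storage
        mkfs.btrfs -m raid1 -d raid1 /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
        mount /dev/disk/by-id/ata-DISK1 /mnt/btrfs-pool
        pvesm add dir btrfs-storage --path /mnt/btrfs-pool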

      Personally I use ZFS but btrfs works well enough. Regardless, either would allow you to just hand storage to VMs from the GUI, and the hypervisor would handle disk IO much more efficiently.

      As for the error, it's typically repaired by unmount/mount operations. As I mentioned before, the cause can vary, but it's usually a loss of network connectivity or an inability to lock something that's in use.
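
      If you want to clear it without rebooting the PBS VM, something like the following usually works (the mount point is a placeholder for wherever the NFS share is mounted in your VM):

        # Inside the PBS VM: lazily unmount the stale NFS mount, then remount it
        umount -l /mnt/pbs-backups
        mount /mnt/pbs-backups   # assumes the share is defined in /etc/fstab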

      My advice would be to investigate reducing your storage complexity. It will simplify administration and future transitions.