My Jellyfin VM has been failing its nightly backups for about a week now.
I’m currently backing up to a NAS with plenty of free space, and my other 10 VMs are backing up without issues (though they are a bit smaller than this one).
I am backing up with the ZSTD compression option and Snapshot mode.
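For reference, that job should be roughly equivalent to running vzdump by hand with the same options (storage name taken from the log below; the scheduled job may pass additional flags):

vzdump 110 --mode snapshot --compress zstd --storage Proxbox-NAS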
The error is as follows:
INFO: include disk 'scsi0' 'Proxbox-Local:vm-110-disk-0' 128G
INFO: backup mode: snapshot
INFO: ionice priority: 7
INFO: creating vzdump archive '/mnt/pve/Proxbox-NAS/dump/vzdump-qemu-110-2025_01_04-03_29_45.vma.zst'
INFO: started backup task '4be73187-d25c-49cf-aed2-1217fba27f77'
INFO: resuming VM again
INFO: 0% (866.4 MiB of 128.0 GiB) in 3s, read: 288.8 MiB/s, write: 268.0 MiB/s
INFO: 1% (1.5 GiB of 128.0 GiB) in 6s, read: 221.1 MiB/s, write: 216.0 MiB/s
INFO: 2% (2.6 GiB of 128.0 GiB) in 15s, read: 130.5 MiB/s, write: 126.4 MiB/s
INFO: 3% (3.9 GiB of 128.0 GiB) in 25s, read: 128.9 MiB/s, write: 127.5 MiB/s
ERROR: job failed with err -5 - Input/output error
INFO: aborting backup job
INFO: resuming VM again
ERROR: Backup of VM 110 failed - job failed with err -5 - Input/output error
INFO: Failed at 2025-01-04 03:30:17
Has anyone experienced this, or have any suggestions for resolving it?
Update: After rebooting the Proxmox node (not just the VM) my backups are now working again. Thanks all for the input!
Whoa, hit the brakes on the rebuild.
I’m old and my memory is going, but I know I’ve had this before …
I think I had to figure out how to:
- shut down the host
- mount the virtual disk as a ‘loopback’ device
- fsck the disk
I know the backup works at the block level, so this shouldn’t matter, and I could be remembering wrong, but I really think I went down that path and it ended up being the solution; roughly like the sketch below.
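If I had to reconstruct the steps, it was something along these lines (a rough sketch assuming the VM disk is a raw image file on file-based storage, with a made-up path for illustration; an LVM or ZFS volume wouldn’t need losetup, and a qcow2 image would need qemu-nbd instead):

qm shutdown 110
losetup --find --show --partscan /var/lib/vz/images/110/vm-110-disk-0.raw   # prints the loop device, e.g. /dev/loop0
fsck -f /dev/loop0p1    # repeat for each partition carrying a filesystem
losetup -d /dev/loop0
qm start 110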
Luckily rebooting the host solved it :) the regularly scheduled 3 AM backups went off this morning without issue.
Have you tried shutting the VM down completely and starting it back up? Or running the backup while shut down to see if it acts differently?
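If it helps, both of those can be tried from the node’s shell; a minimal sketch, assuming VMID 110 and the storage name from the log (vzdump’s stop mode powers the guest off for the backup and boots it again afterwards):

qm shutdown 110    # cleanly power the guest off and wait for it to stop
qm start 110       # fresh boot of the guest
vzdump 110 --mode stop --compress zstd --storage Proxbox-NAS    # backup with the VM powered off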
You could also create a new VM with the same specs and move the disk over. Sometimes it resolves weird issues like this. Or create a whole new virtual disk and transfer everything.
Might also check the disk health of the underlying storage.
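For the underlying disk health, smartctl on the node (plus whatever SMART reporting the NAS exposes) is the quick check; /dev/sda here is just an example device:

smartctl -H /dev/sda    # overall health verdict
smartctl -a /dev/sda    # full attribute list and error log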
Hello, IT. Have you tried turning it off and back on again?
It’s funny because it’s true.
Yes, just tried that. Failed as before :(
You tried all of those suggestions in 35 minutes? I find that hard to believe.
Sorry, I meant I tried shutting it down and running the backup after bringing it back up (and also tried while it was shut down). I replaced the drive hosting these VMs recently, so the SMART logs all report healthy. On the NAS side the volume is also healthy. Might need to recreate the VM as you suggested.
Have you tried powering the VM down and starting it fresh? It sounds like it’s in a bad state. That’d be step one for me.
Just tried shutting it down, starting it back up and trying a backup. Failed in the same spot.