cross-posted to:
- [email protected]
cross-posted from: https://lemmy.ml/post/1553943
Hi,
there are mainly core changes, refactoring and optimizations. Performance is improved in some areas; overall, there may be a cumulative improvement due to refactoring that removed lookups in the IO path or simplified IO submission tracking.
No merge conflicts. Please pull, thanks.
Core:
- submit IO synchronously for fast checksums (crc32c and xxhash), remove the high priority worker kthread
- read extent buffer in one go, simplify IO tracking, bio submission and locking
- remove additional tracking of redirtied extent buffers, originally added for zoned mode but not actually needed
- track ordered extent pointer in bio to avoid rbtree lookups during IO
- scrub: use recovered data stripes as cache to avoid unnecessary reads
- in zoned mode, optimize logical to physical mappings of extents
- remove PageError handling, not set by VFS nor writeback
- cleanups, refactoring, better structure packing
- lots of error handling improvements
- more assertions, lockdep annotations
- print assertion failures with the exact line where they happen
- tracepoint updates
- more debugging prints
Performance:
- speedup in fsync(), better tracking of inode logged status can avoid a transaction commit (a toy workload sketch follows this list)
- IO path structures track logical offsets in data structures and do not need to look them up
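To put the fsync() item in concrete terms, here is a toy fsync-heavy loop of the kind that optimization targets: write a record, fsync, repeat, as databases and journals do. This is not kernel code, and the file path, record size and iteration count are made up for illustration.

```c
/* Toy fsync-heavy workload (the pattern the fsync() speedup targets).
 * Path and sizes are arbitrary, chosen only for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/btrfs/journal.log",
                  O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    char rec[256];
    memset(rec, 'x', sizeof(rec));

    for (int i = 0; i < 10000; i++) {
        if (write(fd, rec, sizeof(rec)) != (ssize_t)sizeof(rec)) {
            perror("write");
            break;
        }
        /* Each fsync() goes through the btrfs tree log; better tracking
         * of the inode's logged status lets more of these calls avoid a
         * full transaction commit. */
        if (fsync(fd) < 0) {
            perror("fsync");
            break;
        }
    }

    close(fd);
    return 0;
}
```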
User visible changes:
- don’t commit a transaction for every created subvolume, this can reduce the time needed when many subvolumes are created in a batch
- print affected files when relocation fails
- trigger orphan file cleanup during the START_SYNC ioctl (see the sketch after this list)
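Regarding the START_SYNC item: below is a minimal userspace sketch of issuing that ioctl, roughly what the `btrfs filesystem sync` command does under the hood as far as I know (it can additionally wait on the returned transaction id with BTRFS_IOC_WAIT_SYNC). The mount point path is hypothetical and error handling is kept short.

```c
/* Minimal sketch: issue the btrfs START_SYNC ioctl on a mounted filesystem.
 * With this release, orphan file cleanup is also triggered by this call. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/btrfs.h>   /* BTRFS_IOC_START_SYNC */

int main(void)
{
    /* Any fd on the filesystem works; a directory fd on the mount point
     * is the usual choice (the path here is hypothetical). */
    int fd = open("/mnt/btrfs", O_RDONLY | O_DIRECTORY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    __u64 transid = 0;
    /* Kicks off a transaction commit and returns its id without waiting. */
    if (ioctl(fd, BTRFS_IOC_START_SYNC, &transid) < 0) {
        perror("BTRFS_IOC_START_SYNC");
        close(fd);
        return 1;
    }

    printf("started commit of transaction %llu\n",
           (unsigned long long)transid);
    close(fd);
    return 0;
}
```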
Notable fixes:
- fix crash when disabling quota and relocation
- fix crashes when removing roots from the dirty list
- fix transaction abort during relocation when converting from newer profiles not covered by fallback
- in zoned mode, stop reclaiming block groups if the filesystem becomes read-only
- fix rare race condition in tree mod log rewind that can miss some btree node slots
- with fsverity enabled, drop the up-to-date page bit in case verification fails
Is there any chance that the write hole issue with RAID5/6 will ever get fixed?
There’s work on alternative RAID5/6 modes but their performance sucks because they need to do read-modify-write.
Note that the write hole is one of the least concerning issues with btrfs RAID5/6. It’s unstable and you shouldn’t use it for anything other than testing.
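For anyone unfamiliar with why read-modify-write and the write hole go together, here is a toy sketch of the XOR parity arithmetic (standalone code, not btrfs internals): a partial stripe update has to read the old data and parity, then write new data and new parity, and if a crash lands between those two writes the parity no longer matches the data, so a later disk failure reconstructs garbage.

```c
/* Toy illustration of RAID5 parity read-modify-write and the write hole.
 * Not btrfs code; just the XOR arithmetic behind the discussion above. */
#include <stdio.h>
#include <stdint.h>

#define STRIP_LEN 4   /* bytes per strip, kept tiny for the example */

int main(void)
{
    /* Two data strips and their parity strip, as they sit on three disks. */
    uint8_t d0[STRIP_LEN] = { 0x11, 0x22, 0x33, 0x44 };
    uint8_t d1[STRIP_LEN] = { 0xaa, 0xbb, 0xcc, 0xdd };
    uint8_t p[STRIP_LEN];

    for (int i = 0; i < STRIP_LEN; i++)
        p[i] = d0[i] ^ d1[i];                  /* full-stripe parity */

    /* Partial update of d0 only: read-modify-write of data and parity.
     * new_parity = old_parity ^ old_data ^ new_data */
    uint8_t new_d0[STRIP_LEN] = { 0x10, 0x20, 0x30, 0x40 };
    for (int i = 0; i < STRIP_LEN; i++) {
        p[i] ^= d0[i] ^ new_d0[i];             /* must reach disk...      */
        d0[i] = new_d0[i];                     /* ...together with this.  */
    }

    /* If the machine crashes after only one of those two writes, parity no
     * longer matches the data: that inconsistent window is the write hole,
     * and losing a disk afterwards reconstructs garbage for d1. */
    for (int i = 0; i < STRIP_LEN; i++)
        printf("d0=%02x d1=%02x parity=%02x reconstructed d1=%02x\n",
               d0[i], d1[i], p[i], d0[i] ^ p[i]);
    return 0;
}
```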
There’s a good write-up from Zygo:
https://lore.kernel.org/linux-btrfs/[email protected]/
The post is dated June 26, 2020, so I wonder if it’s even still up to date?
Sure, they are still working through the list:
https://lore.kernel.org/linux-btrfs/[email protected]/