A year ago I set up an Ubuntu server with 3 ZFS pools. Normally I don’t copy very large files, but today I was copying a ~30GB directory and rsync showed the transfer never going above 3 MB/s (cp is also very slow).

What is the best file system that “just works”? I’m thinking of migrating everything to ext4

EDIT: I really like the automatic pool recovery feature in ZFS; it has saved me from one hard drive failure so far
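
For anyone hitting the same kind of slowdown, a rough first check with standard OpenZFS tooling (these are stock zpool commands, nothing specific to my setup):

  zpool status -v      # pool health; a scrub or resilver running in the background slows copies
  zpool list           # CAP and FRAG columns; a pool that is nearly full or heavily fragmented gets slow
  zpool iostat -v 5    # per-vdev throughput while the copy runs, to spot a single struggling disk

A pool that is close to full, heavily fragmented, or in the middle of a scrub is a common reason for copies this slow.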

  • Eideen@lemmy.world · 5 months ago

    Yes, both BTRFS and Ext4 are vulnerable to unplanned power loss while writes are in flight, commonly known as a write hole.

    Because BTRFS uses copy-on-write, it is more vulnerable, as metadata needs to be updated as well. Ext4 does not have CoW.

    • Atemu@lemmy.ml · 5 months ago

      Ext4 does not have CoW.

      That’s the only true part of this comment.

      As for everything else:

      Ext4 uses journaling to ensure consistency: changes are recorded in the journal before they are applied, so after a power loss the journal is replayed or discarded and the metadata never ends up half-written.

      btrfs’ CoW makes it resistant to that issue by its nature; writes go elsewhere anyways, so you can delay the “commit” until everything is truly written and only then update the metadata (using a similar scheme again).
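
      As a rough userspace analogy of that ordering (not how btrfs implements it internally, just the same “write elsewhere, then flip the pointer” idea; data.txt is a made-up file name):

        # 1. write the new version to a different location first
        echo "new contents" > data.txt.new
        # 2. make sure those blocks have actually reached the disk
        sync
        # 3. only then "commit": rename() atomically swaps in the new name
        mv data.txt.new data.txt
        # a power cut before step 3 leaves the old data.txt intact;
        # after step 3 you see the new version, never a half-written mix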

      Please read https://en.wikipedia.org/wiki/Journaling_file_system.

    • TCB13@lemmy.world · 5 months ago

      Because BTRFS uses copy-on-write, it is more vulnerable, as metadata needs to be updated as well. Ext4 does not have CoW.

      This is where theory and practice diverge, and I bet a lot of people here have had essentially the same experience I have. I will never run an Ext filesystem again, not ever: I got burned multiple times by Ext shenanigans, both at home/in the homelab and in the datacenter. BTRFS, ZFS and XFS are all far superior and more reliable.