I’m looking at different options for a NAS/RAID array system that is tolerant not just of hard drive failures but also of hardware/firmware and board failures. I’ve used a RAID array built into the motherboard in the past; the motherboard failed and I had to eBay another one to get the array back up and running. Then I bought a 2-bay NAS that was only compatible with drives up to 1.5TB. I’ve also used external drives for backup, since I’ve been burned by hardware/firmware/software issues related to RAID arrays. Are there any PCI RAID cards, NAS boxes, software RAID, or other options where the hard drives would still be readable by other RAID cards if the board failed? Maybe a software RAID solution? Any thoughts would be appreciated.

  • mholiv@lemmy.world · 1 year ago

    The crux of the matter is that the article’s criticisms of btrfs are largely based on its differences from ZFS, rather than any inherent flaws in btrfs itself. Notably, SUSE Linux Enterprise, Fedora, and Meta’s Linux engineers all advocate for btrfs, using it extensively in production.

    The article’s main grievances are:

    Btrfs RAID Arrays:

    The author is upset that btrfs RAID arrays don’t function as he anticipated. However, btrfs isn’t ZFS or mdadm; it’s its own system and should be understood as such. The author criticizes btrfs for allowing drives of mismatched sizes. This flexibility, however, isn’t inherently negative.
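
    For illustration, here’s a minimal sketch of what such a mixed-size array looks like (device names and the mount point are placeholders, not taken from the article):

    ```
    # btrfs raid1 accepts drives of different sizes; usable capacity depends
    # on how chunks can be paired across the devices
    mkfs.btrfs -L data -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/data
    btrfs filesystem usage /mnt/data   # shows how much space is actually allocatable
    ```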

    Btrfs RAID Array Management:

    The author laments that btrfs can’t be mounted by a human-readable name like a ZFS pool can, and instead requires a UUID. Using UUIDs is standard practice for native Linux file systems. A side note: mounting by device path (e.g. /dev/sdb1) is outdated, because those names can change between boots; UUID is the recommended method.
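
    For example, a typical fstab entry (the UUID below is a placeholder you’d get from blkid):

    ```
    # find the filesystem UUID
    blkid /dev/sdb
    # /etc/fstab entry referencing it
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/data  btrfs  defaults  0  0
    ```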

    Btrfs-RAID’s Redundancy:

    The author points out that btrfs won’t auto-mount an array if a drive fails, while ZFS will. This is actually a protective measure. By not auto-mounting, it minimizes the risk of further drive failures, prioritizing data preservation.
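
    If you do want to bring the remaining drives up anyway, btrfs makes you opt in explicitly; a sketch (paths are placeholders):

    ```
    # mount in degraded mode only when you intend to repair or evacuate the data
    mount -o degraded /dev/sdb /mnt/data
    ```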

    Btrfs-RAID Maintenance:

    The author’s complaint here boils down to “btrfs isn’t ZFS.” He attempts ZFS recovery methods on btrfs and is surprised when they don’t work. The processes are different, but that doesn’t mean btrfs is more labor-intensive.
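
    As a rough sketch, replacing a failed drive the btrfs way looks like this (device IDs and paths are placeholders, not taken from the article):

    ```
    btrfs filesystem show /mnt/data            # identify devices; a failed one shows as missing
    btrfs replace start 2 /dev/sdd /mnt/data   # replace device id 2 with the new disk
    btrfs replace status /mnt/data
    # alternatively: add a new device and remove the missing one (which relocates its data)
    # btrfs device add /dev/sdd /mnt/data && btrfs device remove missing /mnt/data
    ```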

    He also critiques the use of crc32c for corruption detection. If this is a concern, other checksum algorithms can be used; the default, crc32c, is chosen for its speed. In fact, some argue that btrfs’s integrity checks are faster than the alternatives.
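
    For instance, the checksum algorithm can be picked when the filesystem is created (a sketch; devices are placeholders):

    ```
    # crc32c is the default; xxhash, sha256 and blake2 are also supported
    mkfs.btrfs --csum xxhash -d raid1 -m raid1 /dev/sdb /dev/sdc
    ```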

    In summary, the article’s author seems primarily upset that btrfs isn’t a ZFS clone. He overlooks the areas where btrfs has advantages over ZFS: for example, ZFS pools can occasionally fail to mount after a kernel update (ZFS lives outside the kernel tree), a problem btrfs simply doesn’t have. Meanwhile, major entities like SUSE, Fedora, and Meta rely on btrfs in large-scale production environments.

    When revisiting the article, keep the perspective of “an individual frustrated that btrfs isn’t ZFS” in mind. The bias becomes evident.

    • mea_rah@lemmy.world · 1 year ago

      > The author is upset that btrfs RAID arrays don’t function as he anticipated. However, btrfs isn’t ZFS or mdadm; it’s its own system and should be understood as such.

      I’d say it’s quite a reasonable critique, because RAID1 is pretty much an industry standard. I can’t think of any other RAID (HW or SW) that would do RAID1 in this way. If btrfs decided to call their implementation raid1 while it really isn’t raid1 in some major way, that was a very bad idea. I don’t agree it’s a documentation issue; it’s really a bad naming choice. ZFS has raidz, which also departs from the traditional RAID levels, and that name does not lead to confusion. A RAID1 system should never become less reliable as the number of drives increases.
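
      (As an aside: btrfs raid1 always keeps exactly two copies no matter how many drives are in the array; the newer raid1c3/raid1c4 profiles keep three or four. A sketch of converting an existing filesystem, with the mount point as a placeholder:)

      ```
      # requires kernel 5.5+; stores three copies of data and metadata
      btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/data
      ```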

      > The author points out that btrfs won’t auto-mount an array if a drive fails, while ZFS will. This is actually a protective measure. By not auto-mounting, it minimizes the risk of further drive failures, prioritizing data preservation.

      RAID is an uptime-preserving mechanism. If anyone uses RAID for data preservation purposes, they are setting themselves up for a nasty surprise. A RAID system that does not mount in a reduced-redundancy situation is very bad design. It effectively sacrifices the usability of RAID to serve a purpose that RAID doesn’t really address and shouldn’t be used for.

      > He attempts ZFS recovery methods on btrfs and is surprised when they don’t work.

      I felt that way as well, but I think they raised one important point - there was no indication that the array was still in a reduced-redundancy state after their “attempt at recovery”. ZFS is very clear about the state of the array at every step. The same goes for other RAID systems, including some HW-based ones. Every single one I’ve used was very clear about the fact that the array wasn’t fully redundant.
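
      For comparison, the state checks on the btrfs side have to be run by hand (mount point is a placeholder), whereas `zpool status` surfaces this on its own:

      ```
      btrfs filesystem show /mnt/data   # lists devices; a dead one shows up as missing
      btrfs device stats /mnt/data      # per-device error counters
      btrfs scrub status /mnt/data      # result of the last scrub
      ```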

      > In summary, the article’s author seems primarily upset that btrfs isn’t a ZFS clone.

      FWIW I didn’t have that impression. I have experience with multiple RAID controllers and multiple SW RAID systems and his points would be valid with any of those.

      Anyways, thank you for your reply. It’s not the answer I was hoping for, and I don’t agree with your views on some of these issues, but it gives me a pretty good idea of the current state of the filesystem.

      • mholiv@lemmy.world · 1 year ago

        Hey. No problem. Something to keep an eye out for in the future might be bcachefs. I think it’s a step above ZFS and btrfs. The author missed the last merge window by days, but it should make it into the next kernel merge window. It’s exciting stuff. Other options might be a local GlusterFS or CephFS setup.

        • mea_rah@lemmy.world · 1 year ago

          Oh wow, thanks. I read about bcachefs a long time ago. I didn’t realize it had gotten that far since then. That’s definitely something I’m very curious to try.

          • mholiv@lemmy.world · 1 year ago

            Me too. I am really looking forward to the tiered storage system: NVMe backed by HDDs backed by SMR HDDs. You write to the NVMe drives and in the background bcachefs slowly moves the data down to the slower media.
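
            Roughly, a setup like that could look as follows (just a sketch based on my reading of the bcachefs docs; device names, labels, and the mount point are placeholders):

            ```
            # NVMe takes foreground writes; bulk data migrates to the HDDs in the background
            bcachefs format \
              --label=nvme.nvme0 /dev/nvme0n1 \
              --label=hdd.hdd0 /dev/sda \
              --foreground_target=nvme \
              --promote_target=nvme \
              --background_target=hdd
            mount -t bcachefs /dev/nvme0n1:/dev/sda /mnt/pool
            ```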