• @Gradually_Adjusting
    link
    English
    12
    2 months ago

    That’s honestly intense. I would be terrified of having that much data in one place

    • @adavis
      link
      English
      15
      edit-2
      2 months ago

      While not hard drives, at $dayjob we bought a new server with 16 x 64TB NVMe drives. We don’t even need the speed of NVMe for this machine’s role; it was the density that was most appealing.

      It feels crazy having a petabyte of storage (albeit with some lost to RAID redundancy). Is this what it was like working in tech up until the mid-’00s, with significant jumps just turning up?
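
      Back of the envelope, treating the RAID layout purely as an assumption since it isn’t stated here:

      ```python
      # Rough usable-capacity numbers for 16 x 64TB drives. The RAID layouts
      # below are assumptions for illustration; the actual config isn't stated.
      drives = 16
      drive_tb = 64

      raw_tb = drives * drive_tb              # 1024 TB, i.e. ~1 PB raw
      raid6_tb = (drives - 2) * drive_tb      # two drives' worth lost to parity
      raid10_tb = raw_tb // 2                 # half lost to mirroring

      print(f"raw:     {raw_tb} TB")
      print(f"RAID 6:  {raid6_tb} TB usable")
      print(f"RAID 10: {raid10_tb} TB usable")
      ```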

      • @InverseParallax
        link
        English
        6
        2 months ago

        This is exactly what it was like, except you didn’t need it as much.

        Storage used to cover how much a person needed and maybe 2-8x more; then datasets shot upwards with audio/MP3, then video, then again with AI.

        • Justin
          link
          fedilink
          English
          11
          2 months ago

          A petabyte of SSDs is probably cheaper than a petabyte of HDDs when you account for rack costs, electricity costs, and maintenance.
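
          A sketch of how that comparison pencils out; every number in it is a placeholder rather than a real quote, so it only shows which terms enter the total:

          ```python
          # Rough 5-year $/PB model; all figures are placeholders, not quotes.
          def cost_per_pb(drive_tb, drive_price, watts, rack_u, years=5,
                          usd_per_kwh=0.15, usd_per_u_month=25, maint_per_drive_yr=0):
              n = 1000 / drive_tb                               # drives for ~1 PB raw
              capex = n * drive_price
              power = n * watts * 24 * 365 * years / 1000 * usd_per_kwh
              rack = rack_u * usd_per_u_month * 12 * years
              maint = n * maint_per_drive_yr * years
              return capex + power + rack + maint

          # Made-up figures -- swap in real pricing before drawing any conclusion.
          print(cost_per_pb(drive_tb=64, drive_price=4000, watts=12, rack_u=2))  # dense NVMe box
          print(cost_per_pb(drive_tb=20, drive_price=350, watts=8, rack_u=10))   # HDD shelf
          ```
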

      • @toddestan
        link
        English
        1
        edit-2
        2 months ago

        The size increase in hard drives around that time was insane. Compared to the mid-90’s, just a decade earlier, hard drive capacities increased around 100 times, which works out to capacities roughly doubling every year and a half on average.

        Then things slowed down. In the past 20 years, hard drive capacities have maybe increased 30-40 times.

        Flash memory, on the other hand, is a different story. Sometime around 2002-3 or so I paid something like $45 for my first USB flash drive - a whole 128MB of storage. Today I can buy one that’s literally 1000 times larger, for around a third of that price. (I still have that drive, and it still works too!)
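
        Taking those figures at face value (and reading “around a third of that price” as roughly $15), the implied rates work out like this:

        ```python
        from math import log2

        # Implied doubling times from the numbers above, taken at face value.
        def doubling_time(growth_factor, years):
            return years / log2(growth_factor)

        print(f"HDDs, mid-90s to mid-00s (~100x in 10 yr): a doubling every {doubling_time(100, 10):.1f} yr")
        print(f"HDDs, the last 20 years   (~35x in 20 yr): a doubling every {doubling_time(35, 20):.1f} yr")

        # Flash: $45 for 128MB then, roughly $15 for 128GB now.
        then_usd_per_gb = 45 / 0.128
        now_usd_per_gb = 15 / 128
        print(f"flash $/GB improvement: ~{then_usd_per_gb / now_usd_per_gb:,.0f}x")
        ```
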

    • @9point6
      link
      English
      7
      2 months ago

      I guess you’re expected to put those in a RAID 5 or 6 (or similar) array to have redundancy in case of failure.

      Rebuilding after a failure would be a few days of squeaky bum time though.
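
      Rough numbers for resilvering a single 64TB drive, assuming a fixed sustained rebuild rate (a big assumption, since rebuilds are usually throttled so the array can keep serving I/O):

      ```python
      # Rebuild time is roughly capacity / sustained rebuild rate.
      def rebuild_hours(capacity_tb, rate_mb_s):
          return capacity_tb * 1e6 / rate_mb_s / 3600

      for label, rate_mb_s in [("throttled, 200 MB/s", 200),
                               ("moderate, 500 MB/s", 500),
                               ("flat-out NVMe, 2 GB/s", 2000)]:
          print(f"64 TB at {label}: {rebuild_hours(64, rate_mb_s):.0f} h")
      ```
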

      • @[email protected]
        link
        fedilink
        English
        4
        edit-2
        2 months ago

        Absolutely not. At those densities, the write speed isn’t high enough to trust RAID 5 or 6, particularly on a new system with drives from the same manufacturing batch (which may fail around the same time). You’d be looking at RAID 10, or even a variant with more than two drives per mirror. Regardless of RAID level, at least a couple of drives should be reserved as hot spares as well.

        EDIT: RAID 10 doesn’t necessarily rebuild any faster than RAID 5/6, but the write speed is relevant because it determines the total time to rebuild, and that in turn determines the likelihood that another drive in the array fails (more likely during a rebuild due to the added drive stress). With RAID 10, it’s less likely that a second failure lands in the same mirror span. Regardless, it’s always worth restating that RAID is no substitute for your 3-2-1 backups.
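
        As a toy model of the mirror-span point, assuming independent failures, a made-up 2% AFR, and a three-day rebuild window:

        ```python
        # Chance of a second failure during the rebuild window, assuming
        # independent failures and a constant annualized failure rate (AFR).
        # Both assumptions are generous -- same-batch drives plus rebuild
        # stress push the real risk higher, which is the point above.
        def p_failure_in_window(afr, hours, drives_exposed):
            per_drive = 1 - (1 - afr) ** (hours / (24 * 365))
            return 1 - (1 - per_drive) ** drives_exposed

        afr, rebuild_h = 0.02, 72   # placeholder figures
        # RAID 5/6: every remaining drive is read, and a further failure eats
        # into (or exhausts) whatever parity is left.
        print(f"any of 15 remaining drives: {p_failure_in_window(afr, rebuild_h, 15):.2%}")
        # RAID 10: only the failed drive's mirror partner is critical.
        print(f"the one mirror partner:     {p_failure_in_window(afr, rebuild_h, 1):.2%}")
        ```
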

        • femtech
          link
          fedilink
          English
          1
          2 months ago

          Yeah, I have 6 x 14TB drives in RAID 10; I’ll get 2 more if I need them.

      • @InverseParallax
        link
        English
        2
        2 months ago

        With RAID 6, rebuilds are 4.2 roentgens: not great, but not horrible. Keep old backups, but the data isn’t irreplaceable.

        RAID 5 is suicide if you care about your data.
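
        For context, the usual back-of-the-envelope behind the “suicide” line goes like this; spec-sheet unrecoverable-read-error (URE) rates are widely considered pessimistic in practice, and the array below is hypothetical, but it shows the shape of the risk:

        ```python
        from math import exp, log1p

        # During a rebuild every surviving drive is read end to end; with
        # RAID 5's single parity already spent, one unrecoverable read error
        # (URE) can sink it. 1e-14 errors/bit is a typical consumer-HDD spec;
        # enterprise drives and SSDs are usually quoted at 1e-15 or better.
        def p_clean_rebuild(bytes_read, ure_per_bit=1e-14):
            # P(no URE) = (1 - p)^bits; log1p keeps the tiny value stable
            return exp(bytes_read * 8 * log1p(-ure_per_bit))

        surviving_bytes = 5 * 14e12   # a hypothetical 6 x 14TB RAID 5 rebuild
        print(f"chance of a URE-free RAID 5 rebuild: {p_clean_rebuild(surviving_bytes):.1%}")
        # RAID 6 keeps a second parity in reserve, so a URE mid-rebuild is
        # survivable there -- hence "not great, but not horrible" rather than suicide.
        ```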