There are quite a few brands to choose from when it comes to buying hard disks or SSDs, but which one do you find the most reliable? Personally I've had great experiences with Seagate, but I've heard Chris Titus had the opposite experience with them.

So I'm curious: which manufacturers do people here swear by, and why? Which ones have you had the worst experience with?

  • @[email protected]
    link
    fedilink
    English
    2
    9 months ago

    Afaik, the wear and tear on SSDs these days is handled under the hood by the firmware.

    Concepts like files, FATs, and copy-on-write are filesystem-specific. I believe that even if a filesystem were to deliberately write to the same location repeatedly to intentionally degrade an SSD, the firmware would intelligently shift its block mapping around under the hood so as to spread out the wear. If the SSD detects that a block is producing errors (bad parity bits), it marks it as bad and maps in a new block. To the filesystem, there’s still perfectly good storage at that address, albeit with a potential one-off read error.

    Larger SSDs just give the firmware more spare blocks to pull from.
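To make the wear-leveling idea above concrete, here's a toy simulation (purely hypothetical; the class and policy are made up for illustration, not any vendor's actual firmware algorithm). The "firmware" redirects every write of a logical block to the least-worn physical block, so hammering one logical address still spreads wear across the whole drive:

```python
# Toy wear-leveling sketch: logical addresses are remapped on each write
# so that no single physical block absorbs all the wear.

class ToySSD:
    def __init__(self, num_blocks):
        self.wear = [0] * num_blocks   # per-physical-block write count
        self.mapping = {}              # logical block -> physical block

    def write(self, logical_block):
        # Simplified policy: always pick the least-worn physical block.
        physical = min(range(len(self.wear)), key=lambda b: self.wear[b])
        self.mapping[logical_block] = physical
        self.wear[physical] += 1

ssd = ToySSD(num_blocks=8)
for _ in range(80):
    ssd.write(logical_block=0)   # repeatedly write the same logical address

# Wear ends up spread evenly: every physical block took ~10 writes.
print(ssd.wear)  # [10, 10, 10, 10, 10, 10, 10, 10]
```

Real controllers also have to juggle erase-block granularity and garbage collection, but the remapping principle is the same.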

    • @skittlebrau
      link
      English
      1
      edit-2
      9 months ago

      Does that mean that manually over-provisioning an SSD isn’t necessary for maximising endurance? E.g. partitioning a 1TB SSD as 500GB.

      • @BigMikeInAustin
        link
        English
        2
        9 months ago

        That would be called under-provisioning.

        I haven’t read anything about how an SSD deals with partitions, so I don’t know for sure.

        Since the controller intercepts the calls for specific locations, I’m inclined to believe that it doesn’t care about the concept of partitions and doesn’t segregate any chips, so it would spread all writes across all of the chips.
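A minimal illustration of that point (a hypothetical striping policy, far simpler than real controllers): the controller only ever sees logical block addresses, so LBAs from two different partitions land on the same set of flash chips either way.

```python
NUM_CHIPS = 4

def chip_for_lba(lba):
    # Naive striping: consecutive LBAs rotate across chips.
    # The partition table is just more data to the controller.
    return lba % NUM_CHIPS

# LBAs 0-999 ("partition 1") and 1000-1999 ("partition 2")
# hit exactly the same chips.
part1_chips = {chip_for_lba(lba) for lba in range(0, 1000)}
part2_chips = {chip_for_lba(lba) for lba in range(1000, 2000)}
print(part1_chips == part2_chips)  # True
```

Under that assumption, shrinking a partition doesn't confine writes to a subset of the flash; it just leaves more logical space the host never writes to.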

      • @[email protected]
        link
        fedilink
        English
        1
        9 months ago

        As the other person said, I don’t think the SSD knows about partitions or makes any assumptions based on partitioning; it just knows whether you’ve written data to a certain location, and it could be smart enough to track how often you write there. So if you keep writing data to a single location, it could remap that logical address to different physical memory so that you don’t wear it out.

        I say “could” because it really depends on the vendor. This is where one brand could be smart and spend the time writing smart software to extend the life of their drive, while another could cheap out and skip straight to selling you a drive that will die sooner.

        It’s also worth noting that drives have an unreported pool of “spare sectors” they can use if they detect that one has gone bad. I don’t know if you can see the total remaining spare sectors, but the pool typically scales with the size of the drive. You can at least see how many bad sectors have been reallocated using S.M.A.R.T.
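A toy model of that spare-sector mechanism (hypothetical class and numbers; real drives do this in firmware and typically surface only the reallocation counter, e.g. the S.M.A.R.T. "Reallocated Sector Count" attribute):

```python
class ToyDrive:
    def __init__(self, spare_sectors):
        # The spare pool is invisible to the host; only the
        # reallocation counter is reported via S.M.A.R.T.
        self.spares_left = spare_sectors
        self.reallocated_count = 0
        self.remap = {}   # bad logical sector -> spare slot

    def mark_bad(self, sector):
        if sector in self.remap or self.spares_left == 0:
            return  # already remapped, or spare pool exhausted
        self.spares_left -= 1
        self.reallocated_count += 1
        self.remap[sector] = self.reallocated_count

drive = ToyDrive(spare_sectors=2)
drive.mark_bad(42)
drive.mark_bad(99)
drive.mark_bad(7)   # no spares left; this sector stays bad
print(drive.reallocated_count, drive.spares_left)  # 2 0
```

Once the spare pool is exhausted, new bad sectors become visible to the host, which is one reason a climbing reallocated-sector count is treated as a warning sign.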

      • @[email protected]
        link
        fedilink
        English
        1
        9 months ago

        Seriously? Why be like this? It feels like a Lemmy thing for people to have a chip on their shoulder all the time.

        You shared your understanding, and then I shared mine (in fewer words). I also summarized in one sentence at the bottom. Was just trying to have a conversation, sorry.