I have 4 old hard drives that I pulled from an old Drobo that needs to be trashed. I bought a Mediasonic 4-bay RAID enclosure that I thought would be a good upgrade, but knew going in that the drives might not work because the manual for the new enclosure specifically says to use new drives to avoid problems. The exact product is this Mediasonic one.

I don’t care about the existing data on these drives, and that was originally what I thought was meant by “avoiding problems”. So I tried just putting the drives in to no avail. They don’t show up as drives in the file explorer. They don’t show up in “Disks”.

I also have an external hard drive dock - the 15-year-old version of this one - which does let me mount the drives and see them in Disks.

I have tried running “wipefs -a” and I’ve tried formatting them in Disks with no filesystem specified. I’ve also run parted on them, but anything I try in parted gives me the error “unrecognised disk label”.

If I can’t reuse these old drives then the enclosure is of less use to me than I had hoped as I don’t intend to buy new drives any time soon.

Is there anything else I can try to reset these drives to a state where they’ll act like new drives that have never been used before?

Update: The “tricks” haven’t worked so I’m doing the full disk write using dd. It’ll be a while before I have 2 disks prepped to try.

  • @BluescreenOfDeath
    63 days ago

    If you want to fully wipe the disks of any data to start with, you can use a tool like dd to zero them. First you need to figure out what your drive is enumerated as, then you wipe it like so:

    sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress

    From there, you need to decide if you’re going to use them individually or as a pool.
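
    A couple of extras that might save some pain (device names and sizes below are made up; the dry run uses a scratch file, not a real drive):

    ```shell
    # See how the kernel enumerated the drives; the NAME column gives /dev/sdX:
    lsblk -o NAME,SIZE,MODEL

    # Dry-run the wipe on a scratch file full of junk, not a real drive:
    head -c 4194304 /dev/urandom > disk.img
    dd if=/dev/zero of=disk.img bs=1M count=4 conv=notrunc status=none

    # Count the non-zero bytes left; 0 means the wipe covered everything:
    tr -d '\0' < disk.img | wc -c
    ```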


    • @[email protected]
      22 days ago


      While this would work, isn't it a bit time-consuming compared to:

      wipefs --all /dev/sdX
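
      To see what wipefs actually removes, here's the same thing run against a scratch file carrying a swap signature (file name made up; on real hardware the target would be /dev/sdX):

      ```shell
      # Scratch file standing in for /dev/sdX, given a swap signature to erase:
      truncate -s 1M disk.img
      mkswap disk.img > /dev/null

      wipefs disk.img          # lists the swap signature it found
      wipefs --all disk.img    # erases every signature wipefs knows about
      wipefs disk.img          # prints nothing: no signatures left
      ```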
      
  • @[email protected]
    3 days ago

    I had a somewhat similar issue: the kernel kept seeing the old RAID flags of a formatted drive, so it would not mount, and Clonezilla wouldn't touch it either. I had to run some special command that removes a specific bit of info. I can't recall what command it was, but once I ran it everything was fine.

    Could have been wipefs followed by this, maybe… https://www.slac.stanford.edu/grp/cd/soft/unix/RAIDmetadata.html

    Could have been combined with the lvremove command also. I really should have saved the notes.
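
    If it was leftover md RAID metadata, the usual one-liner is mdadm --zero-superblock /dev/sdX. I can't run that against real hardware here, but roughly what it does can be sketched on a scratch file; the 4096-byte offset and the 0xa92b4efc magic are for md v1.2 metadata, and this is only my reconstruction, not the exact command from the link:

    ```shell
    # Scratch file standing in for /dev/sdX (hypothetical device name).
    truncate -s 8M disk.img

    # Plant a fake md v1.2 superblock: magic 0xa92b4efc, little-endian,
    # at offset 4096 bytes from the start of the device (octal escapes below).
    printf '\374\116\053\251' | dd of=disk.img bs=1 seek=4096 conv=notrunc status=none

    # The magic check is the first thing mdadm --examine does:
    od -An -tx1 -j 4096 -N 4 disk.img       # shows "fc 4e 2b a9"

    # mdadm --zero-superblock overwrites that region with zeros, like this:
    dd if=/dev/zero of=disk.img bs=1 seek=4096 count=4 conv=notrunc status=none
    od -An -tx1 -j 4096 -N 4 disk.img       # shows "00 00 00 00"
    ```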

    • @[email protected]OP
      32 days ago

      Thanks, this looked really promising but didn't work for me. lvremove said it couldn't find the volume group, and dmraid said that I have an unsupported sector size and that it didn't see any RAID disks at my drive location.

      I’m currently using dd to write zeros to the drives. I’m not sure how long that will take me on this old USB 2.0 dock.

      • @[email protected]
        12 days ago

        Hopefully that works. It was hours of trying different formatting, zeroing, etc.; the error it gave me led me to search and finally find a one-liner that fixed it all. But why I didn't add it to my notes is a mystery LOL

  • @[email protected]
    23 days ago

    This is why I keep my old-as-hell Shuttle PC in the closet… I boot it off a live CD so I don’t accidentally dd my actual desktop’s OS into oblivion, again.

  • Yardy Sardley
    13 days ago

    I’m gonna join in with everyone and recommend completely zeroing all the drives (make sure you unmount them before doing it). It will take a while but at least you will have drives in a known state and can eliminate that as a possible issue.

  • @Reliant1087
    13 days ago

    Total shot in the dark but what does testdisk say?

    • @[email protected]OP
      13 days ago

      I’ve never used this before so I’m not sure what to make of it. I am currently letting it analyze one of the disks and it’s seeing a lot of HFS+ blocks (I assume that’s what it’s reporting) and a handful of ext4. That makes sense I guess, since I’m not wiping the drive, just trying to delete any partition info and/or formatting.

      The only thing that seems like it might affect how the disk looks when inserted is cylinder geometry but I don’t know enough about that to even guess at what to do with it. Is there something I should be looking for in testdisk?

      • @Reliant1087
        12 days ago

        I was hoping that testdisk would show you something funky going on with the partition table based on the parted error. No luck I guess.

        My next two ideas are:

        1. As far as I know, wipefs just wipes the signatures that blkid sees, and it might have screwed something up. What if we just dd the whole drive with zeros and let your enclosure take it from there?
        2. What does smartctl say? Might be worth it to run the long and short tests to make sure that the drives themselves are okay.