I recently decided to replace the SD card in my Raspberry Pi and reinstall the system. Without any special backups in place, I turned to rsync to duplicate /var/lib/docker with all my containers, including Nextcloud.

Step #1: I mounted an external hard drive to /mnt/temp.

Step #2: I used rsync to copy the data to /mnt/tmp. See the difference?

Step #3: I reformatted the SD card.

Step #4: I realized my mistake.

Moral: no one is immune to their own stupidity 😂
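The failure mode is worth spelling out: rsync will happily create a destination directory that doesn’t exist, so a typo’d path produces no error at all. A minimal reconstruction in throwaway directories (every path below is a stand-in; only /mnt/temp and /mnt/tmp come from the story above):

```shell
# "temp" plays the mounted external drive; "tmp" is the typo.
work=$(mktemp -d)
mkdir "$work/temp" "$work/src"
echo "precious container data" > "$work/src/volume.img"

# The typo: destination "tmp" instead of "temp". rsync silently
# creates it on the local filesystem and exits successfully.
rsync -a "$work/src/" "$work/tmp/"

ls "$work/temp"   # empty -- nothing reached the "external drive"
ls "$work/tmp"    # volume.img, sitting on the disk about to be wiped
```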

  • @Nibodhika · 84 points · 10 months ago

    If you have one backup, you have no backup. That’s a hard lesson to learn, but if you care about those photos it’s possible to recover them if you haven’t written stuff on that sdcard yet.

    • TWeaK · 35 points · 10 months ago

      At least 3 backups, 2 different media, 1 offsite location.

      • @[email protected] · 8 points · 10 months ago (edited)

        I like 3-2-1-1-0 better. Like yours, but:

        • the additional 1 is for “offline” (so you have one offsite and offline backup copy).
        • 0 for zero errors. Backups must be tested and verified.
  • @[email protected] · 43 points · 10 months ago (edited)

    Fuck up #1: no backups

    Fuck up #2: using SD cards for data storage. SD cards and USB drives are ephemeral storage devices, not to be relied on. Most of the time they use file systems like FAT32 which are far less safe than NTFS or ext4. Use reliable storage media, like hard drives.

    Fuck up #3: no backups.

    • @AtariDump · 1 point · 10 months ago

      Would an SSD be any better than a pen drive or should it be stored on spinning rust?

      • @[email protected] · 9 points · 10 months ago (edited)

        In my experience, flash drives are way more reliable than SD cards and I’d put SSD and HDD above both of those.

        I wish they’d just ditch the SD card on the Pi already, as it’s always the most likely reason your stuff stops working. For my Pi running Home Assistant, I’ve swapped to an SSD as the boot drive. For the others, I still use SD cards, but they’re just doing basic stuff like running Klipper on my 3D printer or a (WIP) live photo frame, and can easily be swapped with a replacement SD later.

        • @[email protected] · 1 point · 10 months ago

          It really depends on how you define reliability. SD cards are physically nigh indestructible, but they start to fail when the same data is written and overwritten often. For one-off backups they’re actually a good alternative; problems appear when a card is used as a medium that constantly rewrites the same data.

          I would recommend backups on SD cards in an A/B fashion when you want to give a backup to someone else to store safely.

          • @[email protected] · 1 point · 10 months ago

            Reliability in that I’ve used flash drives and SD cards for years but have only ever had issues with corrupt SD cards (probably at least half a dozen times) while I’ve never had any with flash drives.

            Constant writes is an issue with them, which is why I think it’s stupid that the Raspberry Pi Foundation continues to use them as the default storage/OS drive. Then again, they continue to make insane choices with power supplies as well, so it shouldn’t be a big surprise.

            • The Overlord · 1 point · 10 months ago

              I really wish they’d just give us an M.2 slot on the back.

      • @[email protected] · 6 points · 10 months ago

        Much better. SSDs and HDDs monitor their own health (and you can see many parameters through SMART), while pen drives and SD cards don’t.

        Of course, they have their limits which is why raid exists. File systems like ZFS are built on the premise that drives are unreliable. It’s up to you if you want that redundancy. The most important thing to not lose data is to have backups. Ideally at least 3 copies, 1 off site (e.g. on a cloud, or on a disk at some place other than your home).

        • @AtariDump · 2 points · 10 months ago

          RAID is not a backup.

        • @Starayo · 1 point · 10 months ago

          Though not every fail state is going to show up. If you start seeing weird intermittent behaviour from a drive, for goodness sake find a way to back it up immediately.

          My mum’s new NUC started having some issues, while SMART showed perfect drive health. After trying a few things to diagnose it, I rebooted to run memtest and check for bad RAM, and that was the last time it ever booted into Windows. The controller or something on the NVMe SSD died, and it was far too expensive to attempt repair for data recovery. Thankfully we had a… somewhat recent backup. Not as recent as we would have liked.

      • @bbuez · 5 points · 10 months ago

        The best way to ensure your data lasts a long time is to use a laser to beam it to the darkest part of the sky. Read speed is abysmal though

      • The Overlord · 2 points · 10 months ago

        SD cards and pen drives are (usually) made from lower-quality, cheaper NAND (the little memory chips that store the data) and also lack health monitoring. That being said, SSDs can and do die, so it’s important to have backups.

  • @[email protected] · 29 points · 10 months ago

    Unless you’ve used something secure for formatting or wrote data to the SD after, consider attempting data recovery.

    • @[email protected] (OP) · 11 points · 10 months ago

      No luck with extundelete (segfault) or testdisk (it sees some deleted files, but not /var/lib/docker). At least I can always throw it away and not worry about the safety of my data! :)

      • @[email protected] · 16 points · 10 months ago

        You can always try professional data recovery services. It just depends on how much the data is worth to you.

        • Atemu · 1 point · 10 months ago

          And how much time you want to put into not getting scammed.

  • @[email protected] · 28 points · 10 months ago (edited)

    I’m just impressed an SD card in a Pi lasted since 2017 without losing all your data on its own.

    For the future the general guideline is 3 copies of your data at minimum, so definitely set up some backups.

    • @AceBonobo · 2 points · 10 months ago

      How do you actually do that?

      • @[email protected] · 2 points · 10 months ago

        Lots of backup software out there, I use a mix of Veeam Endpoint, Restic w/ Backblaze B2, and iDrive.

  • @[email protected] · 23 points · 10 months ago (edited)

    If you haven’t done much writing to the SD card, you may be able to recover the data. Data isn’t really “deleted”; it is just labeled as deleted. There is software that can comb through the raw data and try to make sense of what files were there. I don’t know of any specific software, so if anyone knows, please reply.

    Edit: Another commenter mentioned some success with DMDE

    Edit 2: Worth mentioning that this is true of formats as well. As long as the format doesn’t zero out the entire medium, it just edits the file system metadata to say there are no files.

  • Outcide · 21 points · 10 months ago

    There’s an old saying: “Unix is user friendly, it’s just fussy about its friends.”

    • @lando55 · 7 points · 10 months ago

      Unix is the kind of friend who won’t bat an eye about holding your beer while you go and do something incredibly stupid

  • Bobby Turkalino · 20 points · 10 months ago

    Everyone else is gonna be like “if you don’t have at least 3 backups of something blahblah” but you know, not everyone has the finances for that, so advice from a cheapskate computer nerd: when going through critical transfers/reformats/deletions like you were doing, ALWAYS try actually recovering stuff from the backup before you cross the point of no return. E.g. if the backup is a .zip, extract a few individual files from it and open them in their respective programs.

  • bruhduh · 19 points · 10 months ago (edited)

    Testdisk and photorec: use them. They even saved my data from a bricked Chinese USB flash drive, so they’ll save yours unless you wrote dd if=/dev/zero of=/*/microsd. Also, here’s the tip: don’t attempt to rebuild the partition first. The first step is to copy all files from the microSD to another device with these programs; only after that should you try other repairs. Edit: I’ve seen from your other comments that your data has already been overwritten, my condolences.

  • @[email protected] · 13 points · 10 months ago

    I know I’m going to get downvoted for this, but this would be almost impossible to fuck up with a GUI. Yet people insist that writing commands manually is superior. I’m sorry for your loss.

    • @[email protected] · 6 points · 10 months ago

      Guardrails are absolutely not a reason why people prefer the CLI. We want the guardrails off so we can go faster.

      • @[email protected] · 3 points · 10 months ago

        This is on me for sure, but I’ve never seen anyone be faster using a CLI than a GUI, especially for the basic operations that most of us do 95% of the time. I know there are specific cases where a command just does it better/easier, but for everyday stuff that’s not the case for me.

        • SayCyberOnceMore · 3 points · 10 months ago (edited)

          But what about the movies where the actors are typing commands and a visual GUI is moving around and updating on the screen (and making sound effects too).

          Isn’t that the best of all worlds? /s

    • @[email protected] · 5 points · 10 months ago

      There is something to be said about CLI applications being risky by default (“rm” doesn’t prompt to ask, rsync --delete will do just that). But I’ve definitely slipped on the mouse button while “drag & dropping” files in a GUI before. And it can be a right mess if you move a bunch of individual files rather than a sub-folder…
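One way to take some of that risk back out: rsync’s -n/--dry-run flag previews what --delete would remove without touching anything. A sketch in scratch directories (paths are throwaway stand-ins):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/keep.txt"
touch "$dst/stale.txt"   # exists only in dst; --delete would remove it

# Dry run: lists stale.txt as a would-be deletion, but changes nothing
rsync -a -n --delete --itemize-changes "$src/" "$dst/"

ls "$dst"   # stale.txt is still there
```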

      • Midnight Wolf · 1 point · 10 months ago

        At least on Windows, you can Ctrl+Z that away and it’ll handle your mouse fumble. Explorer also highlights the files after a copy, so if that doesn’t work (and it was a copy action), just delete them immediately.

        I haven’t used *nix for daily stuff in years but I’m sure the same abilities are there, surely.

    • @[email protected] · 2 points · 10 months ago

      To play devil’s advocate, tab completion would have also likely caught this. OP could have typed /mnt/t and it would autofill temp, or would show the matching options if it’s ambiguous.

  • @[email protected] · 11 points · 10 months ago (edited)

    Sorry to hear, I feel you:

    I wanted to delete all .m3u-files in my music collection when I learned:

    find ./ -name "*.m3u" -delete -> this would have been the right way, all .m3u in the current folder would have been deleted.

    find ./ -delete -name "*.m3u" -> WRONG, this just deletes the current folder and everything in it.

    Who would have known, that the position of -delete actually matters.
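The ordering rule is safe to verify in a scratch directory: find evaluates its expression left to right for each file, so a -delete placed first fires before -name ever filters. All paths below are throwaway:

```shell
dir=$(mktemp -d)
touch "$dir/list.m3u" "$dir/song.flac"

# Correct order: -name filters first, so -delete only sees matches
find "$dir" -name '*.m3u' -delete

ls "$dir"   # song.flac survives; list.m3u is gone

# The dangerous form -- do NOT run outside a sandbox:
#   find "$dir" -delete -name '*.m3u'
# -delete comes first and matches everything, so the whole tree goes.
```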

    • @[email protected] · 6 points · 10 months ago

      I didn’t know there was a -delete option to find! I’ve been piping to xargs -0 for decades!

      • @[email protected] · 5 points · 10 months ago

        Probably because it’s easier to fuck up. With piping to xargs, you are forced to put the delete command last.
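That structural safety is easy to see: in the pipe form the delete command can only ever sit at the end. A sketch using NUL separators so odd filenames survive (scratch directory again, names made up):

```shell
mdir=$(mktemp -d)
touch "$mdir/a.m3u" "$mdir/track with spaces.flac"

# -print0 and -0 pass NUL-delimited names (spaces are safe),
# and rm necessarily sits at the end of the pipeline
find "$mdir" -name '*.m3u' -print0 | xargs -0 rm --
```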

    • Synapse · 5 points · 10 months ago

      I’ve made this sort of mistake too; luckily BTRFS snapshots are always there to save the day!

    • @[email protected] · 2 points · 10 months ago

      The first one would have deleted nothing, as it needs to match the whole name. I recommend running find with an explicit -print before replacing it in place with -delete or -exec. It’s good to remember that find has a complex, order-dependent language with -or and -and, but it’s maybe not the best idea to try to use those features.

    • @[email protected] · 2 points · 10 months ago

      I can recommend fd to everyone frustrated with find, it has a much more intuitive interface imo, and it’s also significantly faster.

    • @blackbirdbiryani · 2 points · 10 months ago

      I use GNU find every day and still have to google the details. Only learnt about -delete the other day; good to know the position matters.

    • @evrial · 2 points · 10 months ago (edited)

      fd -tf -e m3u -x rm loads all cores, and nothing works faster.

  • @[email protected] · 7 points · 10 months ago

    Unless you did a full zeroing format, the info might still be available. There was an application that attempts to rebuild the partition/filesystem from leftover metadata or inode info; I forget the name, unfortunately. Normally the strings command will get your photos, but probably not if they were in a Docker image database.

    • @pHr34kY · 9 points · 10 months ago

      Testdisk and photorec? It’s saved me heaps of times.

      • @[email protected] · 3 points · 10 months ago

        Those are good, for sure, and maybe it was testdisk. There was one that just undid a partition-table deletion; as long as no new data had been written, everything would be intact.

  • shadowbert · 6 points · 10 months ago

    My condolences :'(

    I once lost a bunch of data because I accidentally left a / at the end of a path… rsync can be dangerous lol

    • @blackbirdbiryani · 2 points · 10 months ago

      Rclone is superior IMHO; you have to explicitly name the output folder. I used to think that was a hassle, but in hindsight being explicit about the destination reduces mistakes.

      • shadowbert · 2 points · 10 months ago

        Sometimes your hands are tied by the tools already on the server, but I’ll try to remember to check whether that’s available next time.

  • @[email protected] · 6 points · 10 months ago

    The bells of the Gion monastery in India echo with the warning that all things are impermanent.

  • @[email protected] · 5 points · 10 months ago

    Sorry to read that.

    I’ve dd’ed an external drive instead of an SD card once by mistake. I’ve never felt more stupid than that day.

    • @KrapKake · 3 points · 10 months ago

      Classic Disk Destroyer moment.