I saw this post and I was curious what was out there.

https://neuromatch.social/@jonny/113444325077647843

I'd like to put my lab servers to work archiving US federal data that's likely to get pulled; climate and biomed data seem the most likely targets. The most obvious strategy to me seems like setting up mirror torrents on academictorrents. Anyone compiling a list of at-risk data yet?
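If the plan is mirror torrents, publishing a checksum manifest alongside the data lets every downstream mirror verify its copy independently. A minimal sketch in Python (the `noaa-mirror` directory and output filename are hypothetical):

```python
import hashlib
from pathlib import Path

def build_manifest(data_dir: str) -> str:
    """Return a sha256sum-compatible manifest for every file under data_dir."""
    root = Path(data_dir)
    lines = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            lines.append(f"{digest}  {path.relative_to(root)}")
    return "\n".join(lines)

if __name__ == "__main__":
    target = Path("noaa-mirror")  # hypothetical dataset directory
    if target.is_dir():
        # Ship the manifest inside the torrent so mirrors can self-verify
        Path("noaa-mirror.sha256").write_text(build_manifest(str(target)) + "\n")
```

Mirrors can then check their copy with `sha256sum -c noaa-mirror.sha256`.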

  • OtterOP
    link
    fedilink
    English
    73
    17 days ago

    One option that I’ve heard of in the past

    https://archivebox.io/

    ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.
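For bulk jobs, ArchiveBox's `add` command can be driven from a URL list. A rough sketch, assuming ArchiveBox is installed and initialized in the working directory (`urls.txt` is a hypothetical one-URL-per-line file):

```python
import shutil
import subprocess
from pathlib import Path

def load_urls(path: str) -> list[str]:
    """Read a one-URL-per-line file; skip blanks, comments, and duplicates."""
    seen, urls = set(), []
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or line in seen:
            continue
        if line.startswith(("http://", "https://")):
            seen.add(line)
            urls.append(line)
    return urls

if __name__ == "__main__":
    # Assumes `archivebox init` has already been run in this directory
    if shutil.which("archivebox") and Path("urls.txt").exists():
        for url in load_urls("urls.txt"):  # hypothetical input file
            subprocess.run(["archivebox", "add", url], check=True)
```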

    • Admiral Patrick
      link
      fedilink
      English
      14
      edit-2
      17 days ago

      Going to check that out because…yeah. Just gotta figure out what and where to archive.

    • @CrazyLikeGollum
      link
      English
      10
      16 days ago

      That looks useful, I might host that. Does anyone have an RSS feed of at-risk data?

    • @tomtomtom
      link
      English
      9
      16 days ago

      I am using ArchiveBox; it is pretty straightforward to self-host and use.

      However, it is very difficult to archive most news sites (and many other sites) with it. Cookie-consent and similar pop-ups will usually render the archived page unusable, and often archiving won't work at all because some bot protection (Cloudflare etc.) kicks in when ArchiveBox tries to access a site.

      If anyone else has more success using it, please let me know if I am doing something wrong…

      • Daniel Quinn
        link
        fedilink
        English
        1
        16 days ago

        Monolith has the same problem here. I think the best resolution might be some sort of browser-plugin based solution where you could say “archive this” and have it push the result somewhere.

        I wonder if I could combine a dumb plugin with Monolith to do that… A weekend project perhaps.
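The weekend-project version could be as small as building one Monolith command per submitted URL (`monolith <url> -o <file>` is Monolith's standard invocation; the output naming scheme here is an arbitrary choice):

```python
import re
from urllib.parse import urlparse

def build_monolith_cmd(url: str, out_dir: str = "archive") -> list[str]:
    """Build a `monolith` invocation saving `url` as one self-contained HTML file."""
    host = urlparse(url).netloc or "page"
    safe = re.sub(r"[^A-Za-z0-9.-]", "_", host)  # filesystem-safe name from the host
    return ["monolith", url, "-o", f"{out_dir}/{safe}.html"]

if __name__ == "__main__":
    # Hand this list to subprocess.run() to actually archive the page
    print(" ".join(build_monolith_cmd("https://example.com/post/1")))
```

A browser plugin would then only need to POST the current tab's URL to whatever endpoint runs this.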

    • @M600
      link
      English
      7
      17 days ago

      This seems pretty cool. I might actually host this.

    • @PunnyName
      link
      English
      26
      16 days ago

      Everything is at risk.

  • @yasser_kaddoura
    link
    English
    31
    edit-2
    16 days ago

    I have a script that archives to:

    I used to solely depend on archive.org, but after the recent attacks, I expanded my options.

    Script: https://gist.github.com/YasserKa/9a02bc50e75e7239f6f0c8f04fe4cfb1

    EDIT: Added script. Note that the script doesn’t include archiving to ArchiveBox, since its API isn’t available in a stable version yet. You can add a function depending on your setup. Personally, I depend on Caddy and Docker, so I use the caddy-exec module [1] to execute commands, with this in my Caddyfile:

    route /add {
    	@params query url=*
    	exec docker exec --user=archivebox archivebox archivebox add {http.request.uri.query.url} {
    		timeout 0
    	}
    }
    

    [1] https://github.com/abiosoft/caddy-exec

    • @[email protected]
      link
      fedilink
      English
      4
      16 days ago

      isn’t this prone to a

       || rm -rf /
      

      or something similar at the end of the URL?

      if you can docker exec, you already have a lot of privileges, so make sure this is not a danger

      • @yasser_kaddoura
        link
        English
        2
        edit-2
        15 days ago

        Thank you for the warning. You are correct, it’s prone to command injection. I will validate the URL before executing the command. This should suffice until ArchiveBox’s REST API is available in a stable release.
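One way to validate before the `exec` is a strict check on the URL. A minimal sketch (the forbidden-character set is an assumption; tighten it to taste):

```python
from urllib.parse import urlparse

# Characters that never appear in a normal URL but do appear in shell injection
_FORBIDDEN = set(" \t\n'\"`$;|&<>(){}")

def is_safe_url(url: str) -> bool:
    """Accept only plain http(s) URLs containing no shell metacharacters."""
    if any(c in _FORBIDDEN for c in url):
        return False
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)
```

Even with validation, the safer fix is passing the URL to the process as a single argv element rather than interpolating it into a shell string.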

    • @[email protected]
      link
      fedilink
      English
      1
      edit-2
      16 days ago

      I hope you are also donating to these projects if you’re uploading multiple copies to their different services.

    • @AnUnusualRelic
      link
      English
      15
      edit-2
      16 days ago

      archive.org is hosted in the US and could end up being a valid target. It doesn’t strike me as being a very good place to securely store anything nowadays. I’d consider anything hosted in the US to be out.

      • @[email protected]
        link
        fedilink
        English
        3
        16 days ago

        Depends on the threat model.

        NOAA and others get underfunded or see a change of management and have to close down open access to their data.

        or

        Data becomes illegal to possess and the feds start knocking on the Internet Archive’s doors.

        or

        The Internet Archive does something stupid and gets sued/DDoSed.

        In only one of these (very unlikely) scenarios would it be unavailable due to recent events. Still, redundancy would be good regardless of recent stuff.

  • @scientific_railroads
    link
    English
    12
    16 days ago

    For myself: Wayback. It saves links to multiple different web archives and gives me PDF and WARC files.
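For scripting the Wayback Machine part of this, the public Save Page Now endpoint is simply `https://web.archive.org/save/<url>`. A minimal sketch (other archives would need their own submission endpoints):

```python
import urllib.request
from urllib.parse import quote

SPN = "https://web.archive.org/save/"

def spn_request_url(url: str) -> str:
    """Build the Save Page Now URL for a page (reserved characters escaped)."""
    return SPN + quote(url, safe=":/?&=")

def save_to_wayback(url: str) -> int:
    """Submit `url` to the Wayback Machine; returns the HTTP status code."""
    with urllib.request.urlopen(spn_request_url(url), timeout=60) as resp:
        return resp.status

if __name__ == "__main__":
    # Prints the request URL; call save_to_wayback() to actually submit
    print(spn_request_url("https://example.com/"))
```

Expect rate limits if you push many URLs through this in a loop.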

    For others: Archive Team has a few active projects to save at-risk data, and there is an IRC channel in which people can suggest other websites for saving. They also have a wiki explaining how people can help.

    • @Zachariah
      link
      English
      35
      17 days ago

      It’s a single point of failure though.

      • @[email protected]
        link
        fedilink
        English
        3
        16 days ago

        In that they’re a single organization, yes, but I’m a single person with significantly fewer resources. Non-availability is a significantly higher risk for things I host personally.

    • OtterOP
      link
      fedilink
      English
      33
      17 days ago

      There was the attack on the Internet Archive recently; are there any good options out there to help mirror some of the data or otherwise provide redundancy?

    • @just_another_person
      link
      English
      0
      17 days ago

      Yes. This isn’t something you want your own machines to be doing if something else is already doing it.

        • abff08f4813c
          link
          fedilink
          2
          17 days ago

          I guess they back each other up. For example, archive.is can take archives from archive.org, and the saved page reflects the original URL and the original archiving time from the Wayback Machine (though it also notes the Wayback URL it used, plus the time it archived the page from Wayback).

        • @just_another_person
          link
          English
          -1
          17 days ago

          Realize how much they are supporting and storing.

          Come back to the comments after.

      • Deebster
        link
        fedilink
        English
        9
        17 days ago

        Your argument is that a single backup is sufficient? I disagree, and I think most in the selfhosted and datahoarder communities would too.

    • @[email protected]
      link
      fedilink
      English
      3
      16 days ago

      I heard news recently that some companies started shipping non-M-DISC discs labelled as M-DISCs. You may want to have a look.

  • Krafting
    link
    English
    6
    16 days ago

    I archive YouTube videos that I like with TubeArchivist. I have a playlist for random videos I’d like to keep, and I also subscribe to some of my favourite creators so I can keep their videos, even when I’m offline.
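Without a full TubeArchivist install, yt-dlp can do the same incremental mirroring; `--download-archive` makes repeat runs skip videos already saved. A sketch (the output layout and channel URL are hypothetical):

```python
def build_ytdlp_cmd(url: str, dest: str = "videos") -> list[str]:
    """yt-dlp invocation that mirrors a playlist or channel incrementally."""
    return [
        "yt-dlp",
        "--download-archive", f"{dest}/archive.txt",  # skip videos already saved
        "--write-info-json",                          # keep metadata alongside files
        "-o", f"{dest}/%(uploader)s/%(title)s.%(ext)s",
        url,
    ]

if __name__ == "__main__":
    # Hand this list to subprocess.run() to start mirroring
    print(" ".join(build_ytdlp_cmd("https://www.youtube.com/@SomeCreator")))
```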

      • Krafting
        link
        English
        4
        16 days ago

        Seems nice, but you need an external player to watch the content, which can be good for some people, but I like the web UI of TubeArchivist (even though it could certainly be improved).

        • @[email protected]
          link
          fedilink
          English
          1
          16 days ago

          You can actually play from the UI too, but it’s not particularly nice to use (or intended to be used that way).

  • mesamune
    link
    English
    3
    edit-2
    17 days ago

    Flash drives and periodic transfers.
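rsync is the usual tool for that; for illustration, the same one-way "copy if missing or newer" logic is only a few lines (`/mnt/usb` and `data` are hypothetical paths):

```python
import shutil
from pathlib import Path

def sync_new_files(src: str, dst: str) -> list[str]:
    """One-way sync: copy files that are missing or newer; return what was copied."""
    copied = []
    src_root, dst_root = Path(src), Path(dst)
    for path in src_root.rglob("*"):
        if not path.is_file():
            continue
        target = dst_root / path.relative_to(src_root)
        if not target.exists() or path.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # copy2 preserves mtime, keeping reruns cheap
            copied.append(str(target))
    return copied

if __name__ == "__main__":
    if Path("/mnt/usb").is_dir():  # hypothetical flash-drive mount point
        sync_new_files("data", "/mnt/usb/data")
```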

  • @vegetaaaaaaa
    link
    English
    3
    16 days ago

    shaarli bookmarks + hecat (shaarli_api importer + download_media/archive_webpages processors + html_table exporter for the HTML index)

  • Daniel Quinn
    link
    fedilink
    English
    1
    16 days ago

    Monolith can be particularly handy for this. I used it in a recent project to archive the outgoing links from my own site. Coincidentally, if anyone is interested in that, it’s called django-cool-urls.