Epstein Files Jan 30, 2026

Data hoarders on reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete). Contains only 49 GB of the 180 GB file; multiple reports of the DOJ server cutting off downloads at offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
De-duplicated merger of (45.63 GB + 86.74 GB) versions

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

Epstein Files Data Set 10 (78.64GB)

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)
  • INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
  • INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)

Epstein Files Data Set 11 (25.55GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2


This list will be edited as more data becomes available, particularly with regard to Data Set 9 (EDIT: NOT ANYMORE)


EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.

Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.

I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.

Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files

  • WorldlyBasis9838 · 2 days ago

    I was also getting the same error; going to the link downloads successfully.

    Updating the cookies fixed the issue.

    • WorldlyBasis9838 · 2 days ago

      Can also confirm: receiving more chunks again.

      EDIT: Someone should play around with the retry and backoff settings to see if a certain configuration can avoid being blocked for a longer period of time. IP rotating is too much trouble.
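The retry/backoff idea could look something like this (a standalone sketch, not part of the Pastebin script; the schedule values and the `fetch_with_backoff` helper are guesses to tune against the server's rate limiting):

```python
import random
import time
import urllib.request

def backoff_delays(retries=5, base=2.0, cap=120.0):
    # Capped exponential schedule: 2, 4, 8, 16, 32 ... seconds, never above cap.
    return [min(cap, base * 2 ** i) for i in range(retries)]

def fetch_with_backoff(url, retries=5, timeout=90):
    # Sleep a random ("full jitter") slice of the scheduled delay between
    # attempts so many downloaders don't retry in lockstep.
    for attempt, delay in enumerate(backoff_delays(retries)):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(random.uniform(0, delay))
```

Jitter matters here: if everyone retries on the same fixed schedule, the retries themselves arrive in bursts and are more likely to get blocked.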

      • WhatCD · 2 days ago

        Updated the script to display information better: https://pastebin.com/S4gvw9q1

        It has one library dependency, so you’ll have to do:

        pip install rich
        

        I haven’t been getting blocked with this:

        python script.py 'https://www.justice.gov/epstein/files/DataSet%209.zip' -o 'DataSet 9.zip' --cookies cookie.txt --retries 2 --referer 'https://www.justice.gov/age-verify?destination=%2Fepstein%2Ffiles%2FDataSet+9.zip' --ua '<set-this>' --timeout 90 -t 16 -c auto
        

        The new script can auto-set threads and chunks; I updated the main comment with more info about those.

        I’m setting the --ua option, which lets you override the User-Agent header, and making sure it matches the browser I used to request the cookie.

        • WorldlyBasis9838 · 2 days ago (edited)

          Gonna grab some tea, then get back at it. Will update when I have something.

          Thanks for this!

          EDIT: This works quite well. Getting chunks right off the bat. About 1 per second, just guessing.

            • WhatCD · 1 day ago

              I don’t know exactly, but the block seems to last about an hour or two if you get a 401 Unauthorized.

              Would you be interested in joining our effort here? I’m hoping to crowdsource these chunks and then combine our efforts.

                • WhatCD · 1 day ago (edited)

                  Ok, updated the script. Added --startByte, --endByte, and --totalFileBytes.

                  https://pastebin.com/sjMBCnzm

                  Using --totalFileBytes 192613274080 avoids an HTTP HEAD request at the beginning of the script, making it slightly less brittle.

                  To grab the last 5 GB of the file you would add the following to your command:

                  --startByte 187244564960 --endByte 192613274079 --totalFileBytes 192613274080
                  
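For reference, those flag values are just "last 5 GiB" arithmetic against an inclusive byte range (a quick sanity check, assuming the 192613274080-byte total from the command above):

```python
TOTAL = 192613274080       # --totalFileBytes for the DataSet 9 zip
LAST = 5 * 2**30           # 5 GiB

start_byte = TOTAL - LAST  # start of the final 5 GiB
end_byte = TOTAL - 1       # ranges are inclusive, so end is total minus one
```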
                • WorldlyBasis9838 · 1 day ago

                  If we could target different byte ranges, having 10-20 different people spaced through the expected range could cover a lot of ground!
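A split like that could be computed with a small helper (hypothetical, not part of the Pastebin script): divide the file into N contiguous inclusive ranges and hand each volunteer a --startByte/--endByte pair:

```python
def split_ranges(total_bytes, n):
    # Divide [0, total_bytes) into n contiguous (start, end) pairs,
    # end inclusive, with sizes differing by at most one byte.
    base, rem = divmod(total_bytes, n)
    ranges, offset = [], 0
    for i in range(n):
        size = base + (1 if i < rem else 0)
        ranges.append((offset, offset + size - 1))
        offset += size
    return ranges

# e.g. split the DataSet 9 file across 10 people
chunks = split_ranges(192613274080, 10)
```

Each pair then maps directly onto the script's --startByte/--endByte flags, with no gaps or overlaps between volunteers.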

              • WorldlyBasis9838 · 1 day ago

                My IP appears to have been completely blocked by the domain; multiple browsers and devices confirm it.

                If anyone has any suggestions for other options, I’m listening.

        • WorldlyBasis9838 · 1 day ago (edited)

          I had the script crash at line 324: BadStatusLine: HTTP/1.1 0 Init

          EDIT: It’s worth noting that almost every time I restart it after seemingly being blocked for a bit, I get about 1 GB more before it slows WAY down (no server response).

          EDIT: It looks to me like, when I’m getting only FAILED: No server response, stopping the script for a minute or two and then restarting garners a lot more results. I think a longer pause after many failures might be worth looking at. I’ll play around a bit.
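That stop-and-wait trick could be automated with a simple consecutive-failure counter (a hypothetical sketch, separate from the Pastebin script; the threshold and cooldown values are guesses):

```python
import time

class CooldownGate:
    # After `threshold` consecutive failures, pause for `cooldown` seconds
    # instead of hammering the server with requests that will all fail.
    def __init__(self, threshold=10, cooldown=120.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0

    def record(self, ok):
        # Reset on success, count consecutive failures otherwise.
        self.failures = 0 if ok else self.failures + 1

    def maybe_pause(self):
        # Sleep and reset if the threshold is hit; return seconds slept.
        if self.failures >= self.threshold:
            time.sleep(self.cooldown)
            self.failures = 0
            return self.cooldown
        return 0.0
```

The download loop would call record() after each chunk and maybe_pause() before the next request, which mimics the manual "stop for a minute or two, then restart" behavior.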