Epstein Files Jan 30, 2026

Data hoarders on Reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work, with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete: only 49 GB of the expected 180 GB). Multiple users report the DOJ server cutting downloads off at byte offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 6ae129b76fddbba0776d4a5430e71494245b04c4

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
A de-duplicated merge of the 45.63 GB and 86.74 GB versions

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

Epstein Files Data Set 10 (78.64 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA256: 7D6935B1C63FF2F6BCABDD024EBC2A770F90C43B0D57B646FA7CBD4C0ABCF846
MD5: B8A72424AE812FD21D225195812B2502


Epstein Files Data Set 11 (25.55 GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
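
Whichever set you grab, hash the download and compare it against the values listed here before seeding. A minimal Python sketch, assuming Data Set 12 was saved as dataset12.zip (the filename is a placeholder; swap in the matching digest for other sets):

    import hashlib

    # SHA-256 listed above for Data Set 12
    EXPECTED = "b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2"

    h = hashlib.sha256()
    with open("dataset12.zip", "rb") as f:  # placeholder filename
        for block in iter(lambda: f.read(1 << 20), b""):  # hash 1 MiB at a time
            h.update(block)

    print("OK" if h.hexdigest() == EXPECTED else "MISMATCH")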


This list will be edited as more data becomes available, particularly with regard to Data Set 9.

  • xodoh74984 (OP) · 18 hours ago

    Absolutely! By the way, I hadn’t thanked you yet for your massive effort here. Thank you very much for putting this all together. Also, love your username.

    Do you think we could modify the script to use HTTP Range headers and download from the end of the file to the beginning? Or, perhaps we could work together and target different byte ranges?

    You seem much better versed in this than I am to know what’s possible.
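
    For context, a Range header asks the server for one specific byte window, so a client can start anywhere in the file, including near the end. A minimal Python sketch of a single ranged fetch, assuming the DOJ server honors Range requests; the URL and output filename are placeholders:

      import urllib.request

      URL = "https://example.gov/dataset9.bin"   # placeholder, not the real link
      start, end = 187244564960, 192613274079    # inclusive byte window

      req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
      with urllib.request.urlopen(req) as resp:
          # 206 Partial Content means the server honored the Range header
          assert resp.status == 206, "server ignored the Range request"
          with open("dataset9.part", "wb") as out:
              while chunk := resp.read(1 << 20):  # stream 1 MiB at a time
                  out.write(chunk)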

    • WhatCD · 17 hours ago

      OK, updated the script. Added --startByte, --endByte, and --totalFileBytes.

      https://pastebin.com/sjMBCnzm

      Using --totalFileBytes 192613274080 avoids an HTTP HEAD request at the beginning of the script, making it slightly less brittle.

      To grab the last 5 GB of the file, you would add the following to your command:

      --startByte 187244564960 --endByte 192613274079 --totalFileBytes 192613274080
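
      For reference, 5 GB here means 5 GiB, i.e. 5368709120 bytes: 192613274080 - 5368709120 = 187244564960, and the inclusive end byte is the total minus one. The script itself is at the pastebin link; below is only a rough sketch of how independently fetched chunks could be spliced into one file at the correct offsets (the filename and helper are assumed, not taken from the script):

        TOTAL = 192613274080  # --totalFileBytes from above

        # One-time setup: create a sparse file of the full expected size.
        with open("dataset9.bin", "xb") as f:
            f.truncate(TOTAL)

        def write_chunk(path: str, start: int, data: bytes) -> None:
            # "r+b" preserves existing bytes; seek to the chunk's absolute offset
            with open(path, "r+b") as f:
                f.seek(start)
                f.write(data)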
      
      • WhatCD · 17 hours ago

        The next question is who goes after what part.

        • WhatCD · 17 hours ago

          This would be the largest three gaps from what I have (see the sketch after this list):

          • --startByte 49981423616 --endByte 60299411455 (9.61 GB)
          • --startByte 110131937280 --endByte 120424759295 (9.59 GB)
          • --startByte 134211436544 --endByte 144472801279 (9.56 GB)
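
          Gaps like these can be computed from whatever byte ranges are already on disk. A minimal Python sketch using the inclusive --startByte/--endByte convention from above; the example ranges are made up:

            TOTAL = 192613274080

            def find_gaps(have, total):
                """Return missing (start, end) byte ranges, largest first."""
                gaps, cursor = [], 0
                for start, end in sorted(have):
                    if start > cursor:
                        gaps.append((cursor, start - 1))  # hole before this chunk
                    cursor = max(cursor, end + 1)
                if cursor < total:
                    gaps.append((cursor, total - 1))      # missing tail of the file
                return sorted(gaps, key=lambda g: g[0] - g[1])

            # Made-up example: two chunks on disk, everything else missing.
            have = [(0, 48995762175), (60299411456, 110131937279)]
            for s, e in find_gaps(have, TOTAL):
                print(f"--startByte {s} --endByte {e} ({(e - s + 1) / 2**30:.2f} GB)")
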
          • WorldlyBasis9838 · 16 hours ago

            I’ll work on the second: --startByte 110131937280 --endByte 120424759295 (9.59 GB)

            EDIT: I’m probably at 20-30 passes by now. Got squat.

            Do you think this is a bug, or is it possible the chunk is not there?

          • xodoh74984 (OP) · 16 hours ago

            I will grab the first segment: --startByte 49981423616 --endByte 60299411455 (9.61 GB)

            EDIT: I, too, remain chunkless after 8 passes. Haven’t been able to grab anything yet, but trying.

            EDIT2: Been IP hopping and refreshing cookies to try to work around the issue. On my 3rd IP address, but still at 0%.

          • WhatCD · 17 hours ago

            Perfect, I’m on --startByte 134211436544 --endByte 144472801279 (9.56 GB)

          • kongstrong · 14 hours ago

            I can also take up some of these. Do you happen to have more of those gaps?

            Also, are you guys using some chat channel for this? It might be a little more accessible.

            EDIT: Other users who run into this thread, DM me and I can add you to an Element group where we coordinate this. We’re looking for more people.

            • WhatCD · 16 hours ago

              I would really like a chat of some kind. Matrix, maybe?

    • WhatCD · 18 hours ago

      Great idea, let me see what I can do!

    • WorldlyBasis9838 · 18 hours ago

      If we could target different byte ranges, having 10-20 different people spaced through the expected range could cover a lot of ground!
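
      A sketch of that idea in Python: carve the full size into equal slices and hand each volunteer one --startByte/--endByte pair. The byte total comes from this thread; the volunteer count is arbitrary:

        TOTAL = 192613274080  # --totalFileBytes for Data Set 9
        VOLUNTEERS = 20       # arbitrary; one slice per person

        def slices(total, n):
            step = -(-total // n)  # ceiling division so no tail byte is dropped
            return [(i, min(i + step, total) - 1) for i in range(0, total, step)]

        for s, e in slices(TOTAL, VOLUNTEERS):
            print(f"--startByte {s} --endByte {e}")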