I am seeking advice about my ebook collection on a Linux system. It is stored on an external drive and sorted into category folders, but many ebooks are still unsorted. I have tried Calibre for organization, but on import it copies every file into its library on my main drive, where I don’t want to keep any media. I would like to:

  • Use Calibre’s automatic organization (tags, etc.) without duplicating files
  • Maintain my existing folder structure while using Calibre
  • Automatically sort the remaining ebooks into my existing categories/folder structure

I am considering symlinks to preserve the existing folder structure, but given the size of my collection I would only go that route if the process can be automated simply.
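To make that concrete, here is roughly what I have in mind: a small script that mirrors the sorted tree as symlinks so the real files stay in one place. This is an untested sketch; the paths and extension list are placeholders.

```python
#!/usr/bin/env python3
"""Mirror a sorted ebook tree as symlinks, preserving the folder structure."""
from pathlib import Path

SRC = Path("/mnt/external/ebooks")   # placeholder: where the real files live
DST = Path.home() / "ebook-links"    # placeholder: where the symlink mirror goes
EXTS = {".epub", ".mobi", ".azw3", ".pdf"}

for f in SRC.rglob("*"):
    if f.is_file() and f.suffix.lower() in EXTS:
        link = DST / f.relative_to(SRC)      # keep the same relative path
        link.parent.mkdir(parents=True, exist_ok=True)
        if not (link.exists() or link.is_symlink()):
            link.symlink_to(f)               # link, don't copy
```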

For the automatic sorting by category, I am looking for a solution that doesn’t require manual organization or a significant time investment. Is there a way to look up metadata by file hash, or any other method that avoids manual work? Most of the files should have embedded title and author metadata, but some won’t.
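For the files that do carry embedded metadata, I assume something like the following would pull out title and author for EPUBs using only the standard library (untested sketch; Calibre’s ebook-meta command-line tool covers more formats):

```python
import zipfile
import xml.etree.ElementTree as ET

CONTAINER_NS = {"c": "urn:oasis:names:tc:opendocument:xmlns:container"}
DC_NS = {"dc": "http://purl.org/dc/elements/1.1/"}

def epub_title_author(path):
    """Read (title, author) from an EPUB's OPF metadata; None if absent."""
    with zipfile.ZipFile(path) as z:
        # META-INF/container.xml points at the OPF file holding the metadata
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opf_path = container.find(".//c:rootfile", CONTAINER_NS).get("full-path")
        opf = ET.fromstring(z.read(opf_path))
    title = opf.find(".//dc:title", DC_NS)
    creator = opf.find(".//dc:creator", DC_NS)
    return (
        title.text if title is not None else None,
        creator.text if creator is not None else None,
    )

print(epub_title_author("some-book.epub"))  # placeholder file name
```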

Has anyone encountered a similar problem and found a solution? I would appreciate any suggestions for tools, scripts, or workflows that might help. Thank you in advance for any advice!

  • @j4k3 · 219 days ago

    DistilRoBERTa (https://huggingface.co/distilbert/distilroberta-base)

    …was set up for something like this here, though note that the repo running it carries an “unsafe” warning that I have not looked into: https://huggingface.co/spaces/nasrin2023ripa/multilabel-book-genre-classifier

    https://huggingface.co/spaces/nasrin2023ripa/multilabel-book-genre-classifier/tree/main

    It might be fine or whatnot; I’m on mobile and can’t see the file in question. The associated Python code might be a helpful starting point.
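    If you don’t want to vet that space, a related approach is zero-shot classification against your own category names with an off-the-shelf NLI model via the transformers library. Untested sketch; the model choice, labels, and input text are just examples:

    ```python
    from transformers import pipeline  # pip install transformers torch

    # Zero-shot classification scores arbitrary labels without any training.
    clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    # Use your existing folder names as the candidate labels.
    labels = ["science fiction", "fantasy", "history", "programming", "biography"]

    # Feed it whatever text is available: title, blurb, or the first pages.
    text = "A starship crew navigates a collapsing wormhole network..."

    result = clf(text, candidate_labels=labels, multi_label=True)
    for label, score in zip(result["labels"], result["scores"]):
        print(f"{label}: {score:.2f}")
    ```

    The highest-scoring label can then decide which category folder a file lands in.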

    In my experience, most models intentionally obfuscate copyrighted sources. They all know the material to varying degrees, but they are not intended to reproduce it, and they have strong interference in place to obscure both their recognition and reproduction potential. If, for instance, you can identify where errors have been inserted and make a few corrections, they often start adding details that really are from the original source; done a few times in a row, they tend to gain more freedom before reverting to obfuscation. That is the behavior I look out for, and it is a strong tool if you get creative in applying it.

    Perhaps someone has published an API to look up the Library of Congress classification of a work from a few lines of text or something.
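    In the meantime, Open Library’s public search endpoint can at least map a title/author pair to subject headings, which you could then map onto your folders. Untested sketch; the query and field names follow their documented JSON search API:

    ```python
    import json
    import urllib.parse
    import urllib.request

    def openlibrary_subjects(title, author):
        """Look up a book's subject headings via Open Library's search API."""
        params = urllib.parse.urlencode({
            "title": title,
            "author": author,
            "fields": "title,author_name,subject",  # trim the response
            "limit": 1,
        })
        url = f"https://openlibrary.org/search.json?{params}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        docs = data.get("docs", [])
        return docs[0].get("subject", []) if docs else []

    print(openlibrary_subjects("Dune", "Frank Herbert")[:10])
    ```

    GL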