AFAIK every NAS just uses unauthenticated connections to pull containers; I’m not sure how many even let you log in (which would raise the limit to a whopping 40 pulls per hour).

So hopefully systems like /r/unRAID handle the throttling gracefully when clicking “update all”.

Anyone have ideas on how to set up a local Docker Hub proxy that keeps the most common containers on-site instead of hitting Docker Hub every time?
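For reference, the stock `registry:2` image can run as a pull-through cache for Docker Hub. A minimal sketch (the container name, port, and cache directory are my own choices, not anything standard):

```shell
# Run the official registry image as a pull-through cache for Docker Hub.
# (Container name, host port, and cache directory are arbitrary choices.)
docker run -d --restart=always --name hub-cache \
  -p 5000:5000 \
  -v /srv/registry-cache:/var/lib/registry \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Then point the Docker daemon at the cache as a mirror and restart it.
# /etc/docker/daemon.json should contain:
#   { "registry-mirrors": ["http://localhost:5000"] }
sudo systemctl restart docker
```

After that, ordinary `docker pull` commands check the local cache first, and only cache misses count against Docker Hub’s limit. Worth noting that `registry-mirrors` only applies to Docker Hub images, but that’s exactly the case here.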

  • @[email protected]
    link
    fedilink
    English
    262 days ago

    Huh. I was just considering establishing a caching registry for other reasons. Ferb, I know what we’re going to do today!

    • @Daughter3546
      42 days ago

      Do you have a good resource for how one can go about this?

    • @[email protected]
      link
      fedilink
      English
      12 days ago

      Same here. I’ve been building a bootstrap script, and each time I test it, it tears down the whole cluster and starts from scratch, pulling all of the images again. Every time I hit the Docker pull limit after 10-12 hours of work, I treat it as my “that’s enough work for today” signal. I’m going to need to set up a caching system ASAP, or my work days on this project are about to get a lot shorter.
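      In the meantime, one stopgap that avoids a registry entirely (a sketch; the image names and tarball path here are placeholders, not from my actual script) is to save the images to disk once and reload them after each teardown instead of re-pulling:

      ```shell
      # Save already-pulled images to a local tarball (image names are examples).
      docker save -o /srv/images.tar nginx:1.27 redis:7

      # After the cluster is torn down and rebuilt, restore from disk
      # instead of pulling from Docker Hub again.
      docker load -i /srv/images.tar
      ```

      It only works when the image list is known up front, but for a bootstrap script that pulls the same set every run, that’s usually the situation anyway.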