List of icons/services suggested:

  • Calibre
  • Jitsi
  • Kiwix
  • Monero (Node)
  • Nextcloud
  • Pi-hole
  • Ollama (should at least be able to run TinyLlama 1.1B)
  • OpenMediaVault
  • Syncthing
  • VLC Media Player (media server)
  • @SmokeydopeOP

    Thank you, that’s useful to know. In your opinion, what context size is the sweet spot for Llama 3.1 8B and similar models?

    • @brucethemoose

      Oh I got you mixed up with the other commenter, apologies.

      I’m not sure when Llama 8B starts to degrade at long context, but I want to say it’s well before 128K, and that’s where other “long context” models start to look much more attractive, depending on the task. Right now I’m testing Amazon’s Mistral finetune, and it seems to be much better than Nemo or Llama 3.1.

    • @brucethemoose

      “4-core i7, 16GB RAM and no GPU yet”

      Honestly, as small as you can manage.
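
      To put rough numbers on that: the KV cache grows linearly with context. A back-of-the-envelope sketch in Python, assuming Llama 3.1 8B’s published config (32 layers, 8 KV heads via GQA, head dim 128) and an fp16 cache:

      LAYERS, KV_HEADS, HEAD_DIM, BYTES = 32, 8, 128, 2  # assumed Llama 3.1 8B config, fp16 cache

      def kv_cache_gib(context_tokens):
          # 2x for keys and values, per layer, per KV head
          return 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES * context_tokens / 2**30

      for ctx in (2048, 8192, 32768, 131072):
          print(f"{ctx:>6} tokens -> {kv_cache_gib(ctx):5.2f} GiB KV cache")

      The full 128K costs ~16 GiB for the cache alone, before you even load the weights, which is why small context is the move on a 16GB machine.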

      Again, you will get much better speeds out of “extreme” MoE models like DeepSeek V2 Lite Chat: https://huggingface.co/YorkieOH10/DeepSeek-V2-Lite-Chat-Q4_K_M-GGUF/tree/main
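
      If you want to try that quant outside a full UI, here’s a minimal sketch with llama-cpp-python (the filename is illustrative; use whichever GGUF you grab from that link):

      from llama_cpp import Llama

      llm = Llama(
          model_path="deepseek-v2-lite-chat-q4_k_m.gguf",  # illustrative filename
          n_ctx=4096,    # keep context modest on 16GB RAM
          n_threads=4,   # match your physical core count
      )
      out = llm("Explain mixture-of-experts in one sentence.", max_tokens=64)
      print(out["choices"][0]["text"])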

      Another thing I’d recommend is running kobold.cpp instead of Ollama if you want to get into the nitty-gritty of LLMs. It’s more customizable and (ultimately) faster on more hardware.

      • @SmokeydopeOP

        That’s good info for low-spec laptops. Thanks for the software recommendation. I need to do some more research on the model you suggested. I think you confused me with the other guy, though. I’m currently working with a six-core Ryzen 2600 CPU and an RX 580 GPU. Edit: no worries, we’re good. It was still great info for the ThinkPad users!

        • @brucethemoose

          8GB or 4GB?

          Yeah, you should get kobold.cpp’s ROCm fork working if you can manage it; otherwise use their Vulkan build.

          Llama 8B at shorter context is probably good for your machine: it can fit entirely on the GPU if it’s the 8GB card, or at least be partially offloaded if it’s the 4GB one.
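
          Partial offload is just a layer count in llama.cpp-based runners (kobold.cpp included). A minimal llama-cpp-python sketch, where the filename and layer split are illustrative guesses you’d tune to your VRAM:

          from llama_cpp import Llama

          llm = Llama(
              model_path="llama-3.1-8b-instruct-q4_k_m.gguf",  # illustrative filename
              n_ctx=4096,
              n_gpu_layers=20,  # ~20 of 32 layers is a plausible start for 4GB; -1 offloads everything on 8GB
          )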

          I wouldn’t recommend DeepSeek for your machine. It’s a better fit for older, CPU-only setups: it’s not as smart as Llama 8B, and it’s bigger, but it runs super fast because it’s an MoE.