SmokeyDope

  • 77 Posts
  • 1.31K Comments
Joined 2 years ago
Cake day: July 1st, 2023







  • SmokeyDope to memes · How screwed are you? · +2 · 1 day ago

    FromSoft fans who have been around since King's Field in the late '90s: "always has been" (because From struck gold with the old, decaying fantasy world and its inhabitants' dwindling hope as an atmospheric vibe, and keeps reusing it)



  • Hi Hawke, I understand your frustration with needing to troubleshoot things. Steam lets you import any .exe as a 'non-Steam game' in your library and run it with the Proton compatibility layer. I sometimes have success getting a GOG game installed by running the installer .exe through Proton or Wine. Also make sure you are using the most up-to-date version of Lutris; many package managers ship an outdated one, while the Flatpak will guarantee it's current. Hope it all works out for you.
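
    If it helps, here's roughly what the Wine route looks like when scripted from Python. This is a hedged sketch: the installer path and the prefix name are just examples, and you'd normally run the same wine command straight from a terminal.

    ```python
    import os
    import subprocess
    from pathlib import Path

    # Example paths only -- point these at your own GOG offline installer
    # and a dedicated Wine prefix so it can't clutter your default one.
    installer = Path.home() / "Downloads" / "setup_some_gog_game.exe"
    prefix = Path.home() / ".wine-gog"

    # Run the Windows installer through Wine. The Steam route is the same idea:
    # add the .exe as a non-Steam game and force a Proton compatibility tool on it.
    subprocess.run(
        ["wine", str(installer)],
        env={**os.environ, "WINEPREFIX": str(prefix)},
        check=True,
    )
    ```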






  • I've tried the official DeepSeek R1 distill of Qwen 2.5 14B and a few unofficial Mistrals trained on R1 CoT. They are indeed pretty amazing, and before this released I found myself switching between a general-purpose model and a thinking model regularly.

    DeepHermes is a thinking-model family with R1-distilled CoT that you can toggle between standard short output and spending a few thousand tokens thinking through a solution.

    I found that pure thinking models are fantastic at certain kinds of problem-solving questions, but awful at following system-prompt changes for roleplay scenarios or adopting complex personality archetypes.

    This lets you have your cake and eat it too by making CoT optional while keeping regular system-prompt capabilities.

    The thousands of tokens spent thinking get time-consuming when you're only getting 3 t/s on the larger 24B models, so it's important to choose between a direct answer and spending five minutes letting it really think. Its abilities are impressive even if it takes 300 seconds to fully think out a problem at 2.5 t/s.

    That's why I'm so happy the 8B model is pretty intelligent with CoT enabled: I can fit a thinking model entirely in VRAM, and its knowledge base isn't dumb as rocks either. I'm getting 15-20 t/s with the 8B instead of 2.5-3 t/s partially offloading a larger model. A roughly 6.4x speed increase on the CoT is a huge W for the real-life human time I spend waiting for a complete output.
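
    For anyone curious what toggling the CoT looks like in practice, here's a minimal sketch against a local OpenAI-compatible endpoint (kobold.cpp exposes one). The URL, port, and the "thinking" system prompt below are placeholders; the actual prompt DeepHermes expects to enable reasoning is documented on its model card.

    ```python
    import requests

    # Local OpenAI-compatible endpoint; adjust the URL/port for your setup.
    URL = "http://localhost:5001/v1/chat/completions"

    # Placeholder only: DeepHermes enables CoT via a specific system prompt
    # documented on its model card. Swap the real one in here.
    THINKING_SYSTEM_PROMPT = "You are a deep thinking AI..."  # placeholder
    PLAIN_SYSTEM_PROMPT = "You are a helpful assistant."

    def ask(question: str, think: bool) -> str:
        """Send one question, toggling CoT by swapping the system prompt."""
        payload = {
            "messages": [
                {"role": "system",
                 "content": THINKING_SYSTEM_PROMPT if think else PLAIN_SYSTEM_PROMPT},
                {"role": "user", "content": question},
            ],
            "max_tokens": 4096 if think else 512,  # thinking runs need headroom
        }
        resp = requests.post(URL, json=payload, timeout=600)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    # Direct answer for quick stuff, full CoT when it's worth the 5-minute wait.
    print(ask("What's 17 * 23?", think=False))
    print(ask("Plan a three-zone irrigation schedule for sandy soil.", think=True))
    ```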








  • SmokeyDope to Selfhosted · What's up, selfhosters? It's selfhosting Sunday! · +8/−2 · 3 days ago

    I just spent a good few hours optimizing my LLM rig: disabling the graphical interface to squeeze 150 MB of VRAM back from Xorg, setting the program's CPU niceness to the highest priority, and tweaking settings to find memory limits.

    I was able to increase the token speed by about half a token per second while doubling context size. I don't have the budget for any big VRAM upgrade, so I'm trying to make the most of what I've got.
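
    For the niceness part, here's a rough sketch of doing it from Python with psutil rather than renice. The process name is just an example, and negative nice values need root.

    ```python
    import psutil

    # Find the inference server by name (example name, adjust to yours) and
    # raise its CPU priority. Negative nice values require root on Linux.
    for proc in psutil.process_iter(["pid", "name"]):
        if proc.info["name"] == "koboldcpp":   # hypothetical process name
            proc.nice(-10)                     # higher priority than the default 0
            print(f"reniced pid {proc.info['pid']} to nice {proc.nice()}")
    ```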

    I have two desktop computers. One has better RAM and CPU and can overclock but has the worse GPU; the other has the better GPU but worse RAM and CPU and no overclocking. I'm contemplating whether it's worth swapping GPUs to really make the most of the available hardware. It's been years since I took apart a PC and I'm scared of doing something wrong and damaging everything. I dunno if it's worth the time, effort, and risk for the squeeze.

    Otherwise I'm loving my self-hosted LLM hobby. I've been very into learning computers and ML for the past year. Crazy advancements, exciting stuff.


  • Cool, Page Assist looks neat, I'll have to check it out sometime. My LLM engine is kobold.cpp, and I just use OpenWebUI in the browser to connect.

    Sorry, I don't really have good suggestions for you beyond trying some of the more popular 1-4B models at a very high quant, if not full 8-bit, and seeing which works best for your use case.

    Llama 4B, Mistral 4B, Phi-3-mini, TinyLlama 1.5B, Qwen2 1.5B, etc. I assume you want a model with a large context size and good comprehension skills to summarize YouTube transcripts and webpage articles? At least I think that's what the add-on you mentioned said its purpose was.

    So look for models with those things over ones that try to specialize in a little bit of domain knowledge.
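
    One quick thing that helps when picking: a back-of-the-envelope check of whether a transcript will even fit a model's context window. The ~4 characters per token rule of thumb and the context sizes below are ballpark assumptions; check each model card for the real numbers.

    ```python
    # Rough sanity check: will this transcript fit in a model's context window?
    # Uses the common ~4 chars/token heuristic, which varies by tokenizer.
    def fits_in_context(text: str, context_tokens: int, reserve_for_output: int = 512) -> bool:
        estimated_tokens = len(text) / 4
        return estimated_tokens + reserve_for_output <= context_tokens

    transcript = open("video_transcript.txt").read()   # example file name
    for name, ctx in [("tinyllama-1.1b", 2048), ("phi-3-mini", 4096), ("qwen2-1.5b", 32768)]:
        print(name, "fits" if fits_in_context(transcript, ctx) else "too long, chunk it")
    ```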



  • The average across all the different benchmarks can be thought of as a kind of 'average intelligence', though in reality it's more of a gradient and vibe type of thing.

    Many models are "benchmaxxed": trained to answer the exact kinds of questions the tests ask, which often makes the benchmark results unrelated to real-world use. Use them as general indicators, but don't take them too seriously.

    All model families are different in ways that you only really understand by spending time with them. Don't forget to set the right chat template and correct sampler value ranges as needed per model. The Open LLM Leaderboard is a good place to start.
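
    To make "per model" concrete, I keep something like a little preset table and apply it before loading each model. The template names and sampler values here are illustrative placeholders, not recommendations; pull the real ones from each model's card.

    ```python
    # Hypothetical per-model presets: chat template plus sampler settings.
    # Values are illustrative -- pull the real ones from each model's card.
    MODEL_PRESETS = {
        "mistral-7b-instruct": {"template": "mistral", "temperature": 0.7, "top_p": 0.9},
        "qwen2-1.5b-instruct": {"template": "chatml",  "temperature": 0.6, "top_p": 0.95},
    }

    DEFAULT_PRESET = {"template": "alpaca", "temperature": 0.8, "top_p": 0.9}

    def preset_for(model_name: str) -> dict:
        """Look up the chat template and sampler settings to use for a model."""
        return MODEL_PRESETS.get(model_name, DEFAULT_PRESET)

    print(preset_for("qwen2-1.5b-instruct"))
    ```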