I respect your honesty.
Thank you! But man… working with AI stuff is expensive.
If you mean cloud computing or GPUs are expensive: you can run models locally on your CPU if they aren’t too big. I like KoboldCPP. It’s not as fast, but I only have to pay electricity for my AI waifu.
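For reference, once KoboldCPP is up it exposes a local HTTP API you can script against. A minimal sketch, assuming KoboldCPP was started with a GGUF model and is listening on its default port (5001); the prompt text is just a placeholder:

```python
# Sketch: query a locally running KoboldCpp instance through its
# KoboldAI-compatible HTTP API (default port 5001 is an assumption
# based on KoboldCpp's defaults).
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",
    json={
        "prompt": "User: Hello there!\nAssistant:",  # placeholder prompt
        "max_length": 80,    # tokens to generate
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
# The API returns {"results": [{"text": "..."}]}
print(resp.json()["results"][0]["text"])
```

Since everything stays on localhost, this is the same pattern you'd use to wire an LLM backend into a larger pipeline later.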
Thanks for the info. I know about llama.cpp and such, but the problem is that I’m looking to run speech-to-text, an LLM, and text-to-speech all at the same time. I only have 8 GB, so even the CPU won’t cut it. I’m planning to upgrade once I get a job or something.
8 GB of regular RAM? That’s not much. No, that won’t cut it if you also want all the bells and whistles. Maybe try something like Mistral-7B-OpenOrca with llama.cpp, quantized to 4-bit, and skip the STT and TTS. It’s small and quite decent. Otherwise, you could rent a cloud GPU by the hour on something like runpod.io, use free services like Google Colab, or you really need to upgrade.
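If it helps, here's a rough sketch of that setup using the llama-cpp-python bindings (the model path and prompt format below are just placeholders, assuming you've downloaded a 4-bit GGUF quant of the model; a Q4 7B file is roughly 4–5 GB, so it should squeeze into 8 GB of RAM without the STT/TTS):

```python
# Sketch: run a 4-bit quantized Mistral-7B-OpenOrca GGUF on CPU via
# llama-cpp-python. File name and prompt template are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-openorca.Q4_K_M.gguf",  # assumed path
    n_ctx=2048,    # context window; smaller saves memory
    n_threads=4,   # match this to your physical core count
)

out = llm(
    "### Instruction:\nSay hello in one sentence.\n### Response:\n",
    max_tokens=64,
    stop=["###"],  # stop before the model starts a new turn
)
print(out["choices"][0]["text"])
```

Keeping n_ctx low is the main knob for memory here; the KV cache grows with context size, and on 8 GB every few hundred megabytes counts.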