Here is the link: https://gpt4all.io/
it works surprisingly well on Apple ARM; responses are typically near-instant
Where did this come from? Did OpenAI finally release the source code? And where does the training data come from? Is the training data public domain/appropriately licensed?
that’s all on the site - it’s not from OpenAI. it runs open models locally, though the app can also use OpenAI’s API if you have a key
Is running using a GPU a bad thing? I’m new to this…
No, a GPU would be ideal, but not everyone has one, especially one with enough VRAM. I have an AMD card with 12GB of VRAM and I can run 7B-13B models. Even the 7B models (which seem to be the smallest that are still good) use a little more than 8GB of VRAM, and most people probably have an Nvidia card with 8GB or less. 13B models get very close to using the full 12GB.
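The VRAM figures above can be ballparked with simple arithmetic: quantized weights take roughly (parameter count × bits per weight ÷ 8) bytes, plus runtime overhead for the KV cache and buffers. A minimal sketch, assuming a flat ~20% overhead (a made-up illustrative number; real usage varies with quantization format and context length):

```python
# Rough VRAM estimate for a quantized LLM.
# weights = n_params * bits/8; the 20% overhead factor is an assumption
# standing in for KV cache and runtime buffers.
def approx_vram_gb(n_params_billion: float, bits_per_weight: int,
                   overhead: float = 0.2) -> float:
    weights_gb = n_params_billion * bits_per_weight / 8
    return round(weights_gb * (1 + overhead), 1)

print(approx_vram_gb(7, 4))   # 7B model, 4-bit quantization -> ~4.2 GB
print(approx_vram_gb(13, 4))  # 13B model, 4-bit quantization -> ~7.8 GB
print(approx_vram_gb(7, 8))   # 7B model, 8-bit quantization -> ~8.4 GB
```

This lines up roughly with the numbers quoted above: a 7B model at 8-bit lands a little over 8GB, and a 13B model at mid-range quantization pushes toward 12GB.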
Not everyone has a dedicated GPU, I would guess. GPUs are good at doing tensor calculations, but they’re not the only way.
It’s better if you have a good GPU from the last few years, but it will still run without one. It can run on CPU, it’s just much slower.
There’s a long listing of models to choose from. How to pick one? What are differences, benefits/drawbacks?
it’s gonna take some experimentation; there’s a list of all models in the app and on the site, plus maybe a little googling. you can still use OpenAI too. Mistral is solid overall though; good for programming
Uh…how are they going to pay for the server load?
locally running
Oh cool!
What server load?
you just need to pay your energy bill on time, that’s all
I just read it’s local