@[email protected] @[email protected] Local LLM inference can be quite fast on recent consumer hardware. There's no need to send anything off-device; just like their translation feature, it can all run on-device.
As an example, with no optimization or GPU support, my @[email protected] (AMD) generates around 5 characters/sec from a 4 GB pre-quantized model.
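For anyone curious how a chars/sec figure like that is measured, here's a minimal sketch. The `fake_generate` function is a hypothetical stand-in for whatever local runtime you'd actually call (e.g. llama.cpp bindings loading a quantized GGUF model); the timing logic is the same either way.

```python
import time

def chars_per_sec(generate, prompt):
    """Time one local generation call and report characters per second."""
    start = time.perf_counter()
    text = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(text) / elapsed

# Hypothetical stand-in for an on-device model call; a real backend
# would load a quantized model file and run inference instead.
def fake_generate(prompt):
    time.sleep(0.1)        # simulate inference latency
    return "x" * 50        # simulate 50 generated characters

rate = chars_per_sec(fake_generate, "Hello")
print(f"{rate:.0f} chars/sec")  # roughly 500 on this stub
```

Swap in your own backend for `fake_generate` and you can compare throughput across quantization levels or hardware.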