- cross-posted to:
- [email protected]
Apple engineers have shared new details on a collaboration with NVIDIA to speed up text generation with large language models.
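For context, the collaboration in question is Apple's ReDrafter, a speculative decoding technique that Apple and NVIDIA integrated into TensorRT-LLM: a small, cheap draft model proposes several tokens at once, and the big target model verifies them in a single pass instead of generating one token at a time. Below is a minimal, illustrative sketch of that general draft-and-verify loop, not ReDrafter itself (which uses an RNN draft head and beam-style verification); the toy models, vocabulary, and function names are all hypothetical.

```python
# Minimal sketch of speculative decoding: a cheap draft model proposes
# k tokens, an expensive target model verifies them. Toy greedy models
# only; everything here is illustrative, not the TensorRT-LLM code.

from typing import Callable, List

Token = int
Model = Callable[[List[Token]], Token]  # greedy next-token predictor

def speculative_decode(target: Model, draft: Model,
                       prompt: List[Token], k: int, max_new: int) -> List[Token]:
    """Generate up to max_new tokens, letting the draft model propose
    k tokens at a time and the target model verify them."""
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # 1. Draft model proposes k tokens autoregressively (cheap).
        proposal, ctx = [], list(out)
        for _ in range(k):
            t = draft(ctx)
            proposal.append(t)
            ctx.append(t)
        # 2. Target model checks each position; in a real system this
        #    verification happens in one batched forward pass.
        accepted = 0
        for i, t in enumerate(proposal):
            if target(out + proposal[:i]) == t:
                accepted += 1
            else:
                break
        out.extend(proposal[:accepted])
        # 3. Whether the draft was rejected early or fully accepted,
        #    the target model still yields one guaranteed-correct
        #    token, so every iteration makes progress.
        out.append(target(out))
    return out[:len(prompt) + max_new]

# Toy demo: both "models" cycle a tiny vocabulary; the draft model is
# occasionally wrong, so some of its proposals get rejected.
if __name__ == "__main__":
    target = lambda ctx: (ctx[-1] + 1) % 10
    draft = lambda ctx: (ctx[-1] + 1) % 10 if len(ctx) % 3 else 0
    print(speculative_decode(target, draft, prompt=[0], k=4, max_new=12))
```

The speedup comes from step 2: verifying k drafted tokens costs roughly one target-model forward pass, so when the draft model guesses well, the expensive model effectively emits several tokens per pass instead of one.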
Super ironic that Apple are working with NVIDIA, and have launched their own AI offering, but you can’t connect any kind of external GPU to any Mac that they currently sell.
Yes, because it would make the Mac worse. Nvidia GPUs are comically inefficient.
EDIT: tech-illiterate shit-for-brains downvoted this comment.
Because of the diminishing returns from using larger and larger models, I’m hopeful that this could lead to more efficient LLM implementations that aren’t so harmful to the environment. Just maybe.
“We made Siri faster but each request now drains a lake.”
“Hey, Siri, give me a real-time count of all existing lakes and update it every second.”
Siri: “Let me search it on the web.”
Two [spoiler: dickheads] are better than one