LughM to [email protected] · English · 2 months ago
Multiple LLMs voting together on content validation catch each other’s mistakes to achieve 95.6% accuracy. (arxiv.org)
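The headline technique is just ensemble majority voting over independent judgments. A minimal sketch of that idea (not the paper’s actual setup; the model names and `ask_model()` helper are placeholders):

```python
from collections import Counter

def ask_model(model: str, item: str) -> str:
    # Placeholder: replace with a real API call to each model.
    # Faked deterministic answers here so the sketch actually runs.
    fake = {"model-a": "valid", "model-b": "valid", "model-c": "invalid"}
    return fake[model]

def validate(item: str, models: list[str]) -> str:
    # Each model votes independently; the majority label wins,
    # so one model's mistake gets outvoted by the others.
    votes = Counter(ask_model(m, item) for m in models)
    return votes.most_common(1)[0][0]

print(validate("some user-generated content", ["model-a", "model-b", "model-c"]))
# -> "valid" (2 votes to 1)
```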
@[email protected] · 2 months ago:
For the small ones, with GPUs, a couple hundred watts when generating. For the large ones, somewhere between 10 and 100 times that. With specialty hardware, maybe 10x less.
Pennomi · 2 months ago:
A lot of the smaller LLMs don’t require a GPU at all - they run just fine on a normal consumer CPU.
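For example, a small quantized model runs CPU-only with llama-cpp-python; the model path below is a placeholder for any small GGUF file:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/some-small-model.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=0,  # 0 = keep every layer on the CPU
)

out = llm("Q: Is the sky blue? A:", max_tokens=16)
print(out["choices"][0]["text"])
```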
copygirl · 2 months ago:
Wouldn’t running on a CPU (while possible) make it less energy efficient, though?
Pennomi · 2 months ago:
It depends. A lot of LLMs are memory-constrained. If you’re constantly thrashing the GPU memory, it can be both slower and less efficient.
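To make “memory-constrained” concrete: during token-by-token generation, roughly the whole set of weights streams through memory for each token, so memory bandwidth, not raw compute, usually sets the ceiling. A rough sketch with illustrative (not benchmarked) numbers:

```python
# Rough upper bound on decode speed for a dense model:
# each generated token streams (roughly) all weights through memory.
# All numbers below are illustrative assumptions, not measurements.

def max_tokens_per_second(model_bytes: float, bandwidth_bytes_per_s: float) -> float:
    return bandwidth_bytes_per_s / model_bytes

weights = 4e9  # ~7B parameters at 4-bit quantization -> ~4 GB

print(max_tokens_per_second(weights, 60e9))    # dual-channel DDR5 CPU: ~15 tok/s
print(max_tokens_per_second(weights, 1000e9))  # high-end GPU VRAM:    ~250 tok/s
```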
@[email protected] · 1 month ago:
Yeah, but it’s 10x slower, at speeds that just don’t work for many use cases. And when you compare energy consumption per token, there isn’t much difference.
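The per-token comparison is just power draw divided by generation speed. A sketch with made-up but plausible numbers, assuming the CPU decodes ~10x slower:

```python
# Energy per token = power draw (W) / generation speed (tok/s).
# All numbers are illustrative assumptions, not measurements.

def joules_per_token(watts: float, tokens_per_s: float) -> float:
    return watts / tokens_per_s

print(joules_per_token(65, 8))    # CPU: ~65 W at ~8 tok/s   -> ~8.1 J/token
print(joules_per_token(350, 80))  # GPU: ~350 W at ~80 tok/s -> ~4.4 J/token
# Same order of magnitude per token, even though the GPU is ~10x faster.
```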
Good god. Thanks for the info.