LughM to [email protected]English • 11 days agoMultiple LLMs voting together on content validation catch each other’s mistakes to achieve 95.6% accuracy.arxiv.orgexternal-linkmessage-square27fedilinkarrow-up153arrow-down17
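To make the headline concrete, here is a minimal sketch of majority voting across independent LLM validators. The `Validator` stubs below are placeholders for real model calls, not anything taken from the paper:

```python
from collections import Counter
from typing import Callable

# Each validator wraps one LLM and returns a verdict for a piece of content.
Validator = Callable[[str], str]

def majority_vote(content: str, validators: list[Validator]) -> tuple[str, float]:
    """Collect a verdict from every model and return the most common one,
    plus the fraction of models that agreed."""
    verdicts = [validate(content) for validate in validators]
    winner, count = Counter(verdicts).most_common(1)[0]
    return winner, count / len(verdicts)

# Toy stand-ins: in practice each lambda would prompt a different model.
validators: list[Validator] = [
    lambda text: "valid",    # model A's verdict
    lambda text: "valid",    # model B's verdict
    lambda text: "invalid",  # model C disagrees and gets outvoted
]

verdict, agreement = majority_vote("some content to check", validators)
print(verdict, agreement)  # valid 0.666... (2 of 3 models agree)
```

The point of the ensemble is visible even in the toy: one model’s mistake is absorbed as long as the majority gets it right.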
Pennomi • 11 days ago
A lot of the smaller LLMs don’t require a GPU at all - they run just fine on a normal consumer CPU.
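For example, a CPU-only run via the llama-cpp-python bindings looks like this (the model file path is a placeholder; you’d point it at whatever quantized GGUF model you have downloaded):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=0,  # 0 = keep every layer on the CPU
    n_threads=8,     # roughly match your physical core count
)

out = llm("Q: Is the sky blue? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```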
copygirl • 11 days ago
Wouldn’t running on a CPU (while possible) make it less energy efficient, though?
Pennomi • 11 days ago
It depends. A lot of LLMs are memory-constrained. If you’re constantly thrashing the GPU memory it can be both slower and less efficient.
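The “memory-constrained” point is easy to see with back-of-envelope numbers (the model sizes and VRAM figures below are illustrative assumptions, not measurements):

```python
# Approximate weight footprint: params * bytes-per-param.
# Real runtimes also need room for activations and the KV cache.
def weight_gb(params_billion: float, bits_per_param: float) -> float:
    return params_billion * (bits_per_param / 8)

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    print(f"{name}: ~{weight_gb(params, 16):.0f} GB at fp16, "
          f"~{weight_gb(params, 4):.1f} GB at 4-bit")

# A 7B model at 4-bit (~3.5 GB) fits comfortably in 8 GB of VRAM.
# A 70B model (~35 GB) does not, and spilling it between GPU and system
# memory is exactly the thrashing that makes the GPU slower than a CPU.
```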
minus-square@[email protected]linkfedilinkEnglish1•10 days agoyeah but 10x slower, at speeds that just don’t work for many use cases. When you compare energy consumption per token, there isn’t much difference.