@[email protected] to [email protected]English • 3 months agoThe Rulelemmy.mlimagemessage-square66fedilinkarrow-up1647arrow-down16
arrow-up1641arrow-down1imageThe Rulelemmy.ml@[email protected] to [email protected]English • 3 months agomessage-square66fedilink
@AdrianTheFrog · English · 3 months ago: Yes, but 200 GB is probably already with 4-bit quantization; the weights in FP16 would be more like 800 GB. IDK if it's even possible to quantize further, but if it is, you're probably better off going with a smaller model anyway.
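For scale, the size ratio follows directly from bits per weight: FP16 stores 16 bits per parameter versus 4 bits for a 4-bit quant, so the same weights take 4x the space. A minimal sketch of that arithmetic (the ~400B parameter count is an assumption, picked only to match the 200 GB figure):

```python
def model_size_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate raw weight storage: params * bits / 8 bytes, in GB."""
    return num_params * bits_per_weight / 8 / 1e9

# Hypothetical ~400B-parameter model, chosen so 4-bit lands near 200 GB:
params = 400e9
print(model_size_gb(params, 4))   # ~200 GB at 4-bit quantization
print(model_size_gb(params, 16))  # ~800 GB at FP16 (4x larger)
```

This ignores quantization overhead (scales, zero points, unquantized layers), so real files run slightly larger, but the 4x ratio between FP16 and 4-bit holds.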