- cross-posted to:
- [email protected]
Meta made its Llama 2 AI model open-source because ‘Zuck has balls,’ a former top Facebook engineer says::Meta CEO Mark Zuckerberg took a big risk by making its powerful AI model Llama 2 mostly open source, according to Replit CEO Amjad Masad.
The whole point of open-source is to be able to recreate it yourself so you can make changes. This is freeware. Free-as-in-beer, not free-as-in-speech. Hell, with freeware I can at least use it for commercial purposes; this isn't even as free as that.
In the AI world it's a bit different. You can do whatever you want with the model and weights data, which nets you the functional part of the resulting product. Train, retrain, dissect, segment, etc. They're just not giving out the source for the actual engine. The people working with such things really only care about the data, and in most cases would probably convert it to a different engine anyway.
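To make "do whatever you want with the model and weights" concrete, here's a minimal sketch using the Hugging Face `transformers` library; the checkpoint id `meta-llama/Llama-2-7b-hf` and the gated-access setup are assumptions on my part, not something the release mandates.

```python
# Minimal sketch, assuming `transformers` + `torch` are installed and you have
# access to a Llama 2 checkpoint on the Hugging Face hub (the repo id below is
# an assumption; substitute whatever copy of the weights you actually have).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# "Dissect": the released artifact is just named tensors you can inspect,
# prune, quantize, or port to a different engine.
for name, param in list(model.named_parameters())[:5]:
    print(name, tuple(param.shape))

# And you can run it directly, without any of Meta's internal tooling.
inputs = tokenizer("Open weights mean you can", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```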
Can I remake the model only including Creative Commons sourced training material?
You can reuse the data however you want, yes. You just can't do it with their proprietary model. So, again, the ENGINE (the thing that drives their released version) is not open source, but you can do whatever you want with the model and data as released.
I thought it was only licensed for non-commercial use
Nope. Free for educational, research, or commercial use. I'm sure their license has some restrictions on what that actually means once you get to be competitive with the original as a product, but otherwise it's free unless you start a massive enterprise based on it, at which point you probably wouldn't use it anyway. It's just an LLM, it's not doing anything super special like folding proteins for drug development or curing cancer.
Calling ML models "open source" is already confusing. They aren't programs but data formats, so they don't come 1:1 with their source.
You can obtain a model and train it further. It's similar to how you can get a JPEG file under a permissive license, edit it, and share it. Having the GIMP/Photoshop project the image was created from is helpful, but not necessary.
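To stretch the analogy, further training is the equivalent of opening that JPEG and editing it. A rough sketch, assuming the same (hypothetical) checkpoint id as above, a toy placeholder corpus, and enough memory to fine-tune at all:

```python
# Rough fine-tuning sketch: take the released weights and keep training them on
# your own text. Checkpoint id, corpus, and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

corpus = ["your own training text goes here"]  # placeholder data
for text in corpus:
    batch = tokenizer(text, return_tensors="pt")
    # Causal LM objective: the model learns to predict each next token.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.save_pretrained("./llama2-further-trained")  # your edited copy of the "file"
```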
Here's the core difference: AI models are generative by nature, but all the layers in a .PSD file are inherently static.
A better analogy would be a rendering of a fractal: a limited subset of infinite possibilities, but to explore the rest of them you need both the rules and the data.