I understand the sentiments against AI, tech oligarchs investing in data centers, etc.
But could local LLMs be used to empower people and ignite more startup projects?
I use an LLM to draft all sorts of writing. It’s not perfect, but it’s an easy way to flesh out my ideas into an outline or rough draft. Other open-source projects like “openclaw” are a great way to create a personal assistant.
Neural networks are here, and they aren’t going away anytime soon. They’ll probably get better over time.
Should people be thinking “how can I use local AI to help me” rather than “anti-AI”?
People are, but talking about it makes you a target of dogmatic tribalism, so they don’t talk about it much, if at all.
dogmatic tribalism
I sense those sentiments as well. This isn’t the first time we’ve seen this happen either; it’s happened a lot throughout technology, and whenever an art medium changes (example: analog purists against digital film and music).
There are some places where we should draw the line, and it’s usually when we lack ownership (i.e. streaming and corporate-owned clouds). But with a local LLM, both the model and the data live on your computer. You still keep your data.
But with a local LLM, both the model and the data live on your computer. You still keep your data.
What was it trained on? If all of the training data was created by you, or you have a licence to use it for training this sort of algorithm, then fine; otherwise you need to decide if you’re happy to use something that will plagiarise others’ work without attribution.
You also need to decide if you’re willing to abdicate thought to that degree whilst not being able to trust the result it produces. If you aren’t checking and validating every fact your LLM produces, you’re just adding to the amount of dross in the world; if you are checking the facts, then it would probably have been quicker to find the facts and write it yourself.
I have not gone through the software stack to be sure there is no mechanism for logging or communication. It works offline, sure, but I’m only mostly confident it does not communicate at all. PyTorch and transformers are huge libraries, and while the Nvidia driver is open source, the nvcc compiler required to create the binary is proprietary. Every machine also runs a background OS native to the hardware and inaccessible to the user, like Intel ME; everything has one of these systems. I also think GitHub would be the most practical route for such a connection, or Discord or YouTube. Not saying it happens, just that it’s technically possible.

