Just because something is open source doesn’t mean the people behind it have the best intentions in mind.
I think one of the biggest issues with FOSS-minded people is that they automatically consider open source software private, safe, and well-intentioned, but they never actually go beyond the surface to check whether it is.
Most people who use FOSS (myself included) are not qualified to audit source code for malicious intent, and rely on people more knowledgeable than themselves to review the code and find any problems. FOSS isn’t automatically private, safe, or well-intentioned, but even when it isn’t, at least the code is transparent and the review process is open to all. Commercial software has no review and zero transparency.
True, but libre software can also be commercial. You should instead say that proprietary (non-libre) software has no transparency.
Good point.
The problem is that quite often everything rests on the belief that someone else is there to check. Most of the time, even if some of the users are qualified to do it, they don’t have the time to go through all of the code and then keep up with it through every update.
Good point, and worth considering. For the more popular projects, though, it’s likely that someone somewhere is looking at the code, and even the threat of discovery is enough to discourage malfeasance. Either way, it’s better to have that observability than a black-box system with no way to check it.
Free software is not automatically private or secure, but it can be. Proprietary software can’t.
Why can’t proprietary software be private or secure? You cannot verify it for yourself, but nothing about the licensing model precludes it. In highly regulated industries (such as health care or banking), I would expect a very large investment by software vendors into security.
Since it’s very difficult to verify what the program does, the user has to trust that it’s secure. But there is no place for trust in security: if you have to trust something, it is not secure. The encryption protocols we all use are not proprietary. It would be ridiculous if we had to simply believe they are secure. Fortunately we don’t have to, because any expert in the world can verify them.
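To make that concrete, here’s a minimal sketch of what “anyone can verify it” means in practice: checking an AES implementation against the public test vector from FIPS-197 Appendix C.1. It assumes the third-party Python cryptography package is installed; any implementation of the open standard can be checked against the same published numbers.

```python
# Verify an AES implementation against the openly published
# FIPS-197 Appendix C.1 test vector. No trust in any vendor is
# required: the expected output comes straight from the public spec.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
plaintext = bytes.fromhex("00112233445566778899aabbccddeeff")
expected = bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")

# ECB on a single block is used here only because the test vector
# specifies the raw block transformation; don't use ECB for real data.
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

assert ciphertext == expected, "implementation disagrees with the public spec"
print("AES output matches the FIPS-197 test vector")
```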
As for regulated industries: sure, the developer probably doesn’t want all the user data to leak, but they might still want to spy on their users. So what security does a user have? It takes just one proprietary program to ruin your security.
Free Software gives you the right to modify how a program works and to share that modified version with others. With proprietary software, that would usually be illegal. So if someone does find and fix a vulnerability, they can’t share the patched version with others. The same goes for a program that contains spyware or other malicious functionality: users can’t easily remove it. With proprietary software, users are at the mercy of developers who have complete power over them and their systems.
You can disassemble any kind of closed-source software and fully analyze it.
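For illustration, a tiny sketch of what that analysis looks like, using the Capstone disassembly engine (the third-party capstone Python package, assumed installed). The bytes below are a made-up x86-64 snippet standing in for code extracted from a real binary:

```python
# Disassemble a few raw machine-code bytes into readable assembly.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Hypothetical x86-64 bytes: push rbp; mov rbp, rsp; mov eax, 42
code = bytes.fromhex("554889e5b82a000000")

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x1000):  # 0x1000 is an arbitrary load address
    print(f"0x{insn.address:x}:\t{insn.mnemonic}\t{insn.op_str}")
```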
But you can’t legally modify it and distribute your modified version. You can’t fix a vulnerability and share the patched version with others. Only the developer can, so you are at their mercy. If they add spyware to the program, users can’t do anything about it.
I think it’s a gray area: if you merely instruct people on how to do it, or distribute the fix as a patcher that contains none of the original code or assets, few would take issue with it, and if they did, their legal position would be much shakier than when fighting piracy.
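Something like the sketch below: the patcher ships only offsets and replacement bytes, never any of the original program, and the user points it at their own copy. Every file name, hash, offset, and byte value here is a hypothetical placeholder.

```python
# Hypothetical byte-patcher: distributes only the locations and the new
# bytes, none of the original code or assets.
import hashlib
from pathlib import Path

TARGET = Path("program.bin")               # placeholder: the user's own binary
EXPECTED_SHA256 = "0" * 64                 # placeholder: hash of the known-vulnerable build
PATCHES = [(0x1A2B, bytes([0x90, 0x90]))]  # placeholder: offset -> replacement bytes

data = bytearray(TARGET.read_bytes())
if hashlib.sha256(data).hexdigest() != EXPECTED_SHA256:
    raise SystemExit("unexpected binary version; refusing to patch")

for offset, replacement in PATCHES:
    data[offset:offset + len(replacement)] = replacement

out = TARGET.with_suffix(".patched")
out.write_bytes(data)
print(f"patched copy written to {out}")
```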
That’s true. Still, it’s more difficult for everyone.