Some people won’t touch anything to do with open source projects because they feel it might have security issues. What does open source actually do for security, and how does it change things?
In my opinion it makes a project even more secure. Many eyes can inspect the code and review it for known and unknown vulnerabilities. It’s a cat-and-mouse game anyway; you might as well broadcast all the flaws in the hope that people catch them and help fix them.
@PrecisePangolin Thanks, this explains it really well.
I think the argument is usually
If bad people see the code, they can spot vulnerabilities and exploit them
But I think that’s not really how it works, because it costs an attacker nothing to try an exploit. People generally aren’t going to read through the code to spot a weakness when they can just run an automated tool that attempts common vulnerabilities. Open source or closed source, bad code will fail the same.
I see it as a lock. With open source, you know how the internal mechanism is supposed to work and you can judge how secure it is. With closed source, someone says “trust me” and doesn’t show you how the inside works. It could just be an “if something metal is inserted, unlock the system”.
Ultimately the best thing is to look for open source software that’s been audited. If no one has checked the FOSS code, then you don’t actually know it’s safe. Once it has been audited, you get the best of both worlds.
One other concern might be “if it’s open source, then everyone can see my password!”
Which is just… wrong
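To spell out why it’s wrong: open source ships the logic, not the secrets. A minimal sketch (the variable and function names here are hypothetical) of the usual pattern, where the credential comes from the runtime environment and never appears in the repository:

```python
import os

# The repository contains only this logic; the actual secret is supplied
# at runtime via an environment variable and never appears in the source.
def get_db_password() -> str:
    password = os.environ.get("DB_PASSWORD")  # hypothetical variable name
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set")
    return password
```

So anyone reading the code sees *how* the password is looked up, but never the password itself.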
Oh and in practice, companies might pick a closed source paid product over a free and open source one.
But it’s not the product, it’s the legal/financial agreements. Companies like to externalize the risk instead of taking it on themselves. They like being able to sue someone if things go wrong.
The other company might be running the FOSS software under the hood too; the point is that they’re taking on the responsibility.
Oh and finally, a lot of open source products and protocols are used by closed source companies.
ex. Signal protocol is used by Facebook for some things
The only potential security issue would be a lack of maintenance on a particular project. If an open source tool has not been updated for a year or more, it may have unpatched security vulnerabilities. I usually won’t use something that has not been updated in over a year.
However, people who make that claim seem to subscribe to security by obscurity. They may think that the source code being public makes it more likely to be exploited. But I would say that is a strength, since many people can verify the security of a project and patches can be applied quickly. In typical proprietary software, a security vulnerability could exist for years without being patched, because no one knows it’s there. It may or may not be exploited within that time.
It is our responsibility to choose the digital tools we use wisely, and to be mindful of a lack of, or drop in, maintenance on a particular project.
It doesn’t really. In theory more eyes on the code means more chance for a security bug to be found, either by white hat researchers or black hat exploiters. In practice this doesn’t really pan out; not only are most free software projects small hobbyist endeavors, but even large free software projects with many eyes on them, such as OpenSSL and curl, have had critical security vulnerabilities over the years. When it comes to security issues, having the right eyes on the code matters more than having many eyes.
The original promise of free software, the four freedoms, is all it guarantees. In my opinion this is enough to prefer free software over proprietary.
Aren’t your OpenSSL and curl points proving the opposite? Every program will have vulnerabilities, and theirs were critical security vulnerabilities that got found and fixed.
But yes, I agree that 95% of open source projects have absolutely 0 security testing. Might not matter for some embedded applications, but it matters a great deal for public facing container plugins for example. Then again, most closed source software also hasn’t been pen tested.
Good point, finding a security vulnerability is a success not a failure.
People who don’t touch open source are mouth breathers. So next time someone says they won’t use it because it’s FOSS, you know who the weakest link in the building is.
As others have mentioned, it’s more secure code because it’s freely available. With closed source, you have no idea what’s going on.
Look at it this way. FOSS is like a real safe. You can see it. Touch it. Kick it. Punch it. Closed source is like a blanket and I tell you there’s a safe under there. No you can’t touch it or see it. Trust me tho, there’s a safe.
Which would you store your money in?
In fairness, this is only the case when people are actually inspecting the code. That safe could be a cake that looks like a safe, but if nobody tastes it there is no real benefit (in terms of security at least)
It’s a harder con to build a real-looking fake safe, hoping no one will actually test it out, than to just lie about what’s behind a curtain no one is allowed to look behind.
In theory it helps to have multiple people verify the code. In reality, unless it has wide use and a fairly clean core it won’t likely get reviewed by anyone, but even without that it helps provide a level of trust just by the author laying their cards out for people to look at if they like.
The reason many companies avoid open source software is support: they usually can’t throw money at the creator to fix their problems or build custom solutions for them. Which is not really accurate anymore today.
To be honest I’m a FOSS advocate, but when I recommend software I absolutely mention that getting devs (capable of fixing that software) under an SLA for critical bugs is what they absolutely should do, or else accept the security or operational risk of insecure software.
This risk extends even more to non-FOSS software, though, since organic fixes can’t happen and the company that owns it HAS to fix it for you. Not all purchase agreements say they have to do this, and again it is our organizations that bear the risk then.
In terms of actual vulnerabilities? Probably comes out comparable? You have more eyes which means more opportunities for code review. But that is going to boil down to how rigorous the code review is and whether it is just people rubber stamping “trusted” developers.
It’s controversial for a lot of reasons, but a couple of years back there was the university professor and his grad student who intentionally introduced vulnerabilities into one of the big projects. I forget at what point that was caught or what project it was, but it happens every few years. And it likely happens a lot more than we know about.
But mostly? When I am assessing software for a production situation, the security of an open source library versus a proprietary one isn’t even on the list. Depending on the company I am investigating the contributors, but that happens whether it is a company or a github page.
What really matters to me is how critical it is and what the support model is. Because if a vulnerability takes a week to get properly fixed or results in significant development slowdowns in the aftermath: It is worthless to me. Whereas a company that is on the hook to go all hands on deck and crunch their developers (because that always helps and doesn’t cause problems down the line…) to fix an issue within N hours? That shit means I don’t lose any sleep when the poo hits the fan.
In college I was told over and over that SECURITY BY OBSCURITY IS NOT ACTUALLY SECURITY. So using thoroughly tested and examined security techniques from open source software is the gold standard.
There are known secure algorithms that cannot be cracked by modern computers. So there’s no reason to reinvent the wheel and just hope your way is better than decades of refinement and research into modern open algorithms.
@ZenFriedRice @SamXavia I remember back in '98 getting in an argument over this with the company’s chief architect. Finally had him back down when after he claimed “no professional security company just publishes their source code” I then replied “well, actually RSA BSAFE is distributed with all source.”
@ZenFriedRice security by obscurity may not be security, but it is one of the tools to reduce hacking attacks (changing the SSH port, for example)
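For example, a sketch of an sshd_config fragment (the port number is arbitrary): moving SSH off port 22 dodges the noisiest automated scans, but the real security still comes from the authentication setting next to it.

```
# /etc/ssh/sshd_config
Port 2222                    # obscurity: avoids the most naive automated scans
PasswordAuthentication no    # actual security: key-based authentication only
```

A port scanner will still find the service eventually, which is exactly why obscurity is a noise-reduction tool rather than a security boundary.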
Being FOSS doesn’t make it secure, but it does make it more possible for people to actually test and secure it (people with less interest in it being seen as secure, and more in it actually being secure).
The reality is that virtually all widely used modern codebases contain at least some open source code.
Nothing, really. Anything you get from the Play Store is just as capable of being malicious as a FOSS app is. The only differences are that open source runs the risk of malicious clones being created, with the benefit that anyone can review the code to check for viruses.
I might be wrong though idk