Why do people assume Signal messenger isn’t spying on you? Yes, its code is open source, and yes, it uses end-to-end encryption. But we can’t check which code actually runs in the version from Google Play or the App Store. Their APK (IPA) build process is also essentially a black box: it doesn’t use GitHub Actions or any other transparent build system. I’ve also heard from Techlore that they add a proprietary component to the APK to filter bots. The only verification I can imagine is that people have scanned the traffic coming from the app (Android) or the phone (iOS) and checked whether encryption keys were being sent to Signal or not. But it seems to me that even this could be circumvented. What do you think?
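For context, the kind of check I have in mind would go something like this: build the APK yourself from the published source, pull the installed one off the device with adb, and compare the two files. A minimal sketch in Python, with made-up file names, assuming both APKs are already on disk:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so a large APK doesn't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Made-up file names: the APK pulled from the device vs. one built from source.
device_apk = sha256_of("signal-from-device.apk")
local_apk = sha256_of("signal-local-build.apk")

# Caveat: release signatures differ from a local build's, so raw hashes only
# match if the build is reproducible and signing data is compared separately.
print("identical" if device_apk == local_apk else "differs")
```

And even a match only verifies one snapshot on one device, which is why a transparent build pipeline would matter more than any one-off check.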
P.S. I myself use Signal to communicate with relatives and friends. Definitely not a hater.
@133arc585
“Rival” sounds more like fighting against each other; we rather designed a complementary solution that secures your data and metadata, including while in use.
With Confidential Computing, messages are not stored and deleted in the traditional way: they live inside a memory enclave, so they cannot be retrieved with forensic technology… of course this comes with a capacity limit, so the focus is on a (small) set of highly confidential comms.
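To give a rough idea of what “not traditionally stored” means, here is a toy illustration (just the concept, not our actual engine): messages live only in a bounded in-memory buffer and are wiped when read or evicted, so nothing lands on disk for forensic tools to recover.

```python
from collections import deque

class EphemeralChannel:
    """Toy model of in-memory-only messaging with a capacity limit.
    A real enclave (SGX/SEV-style) would also encrypt this memory and
    attest the code that touches it; none of that is modeled here."""

    def __init__(self, capacity: int = 16):
        # deque(maxlen=...) drops the oldest message once capacity is hit
        self._buf = deque(maxlen=capacity)

    def push(self, msg: bytes) -> None:
        self._buf.append(bytearray(msg))  # mutable copy so it can be wiped

    def read_and_wipe(self) -> bytes | None:
        if not self._buf:
            return None
        raw = self._buf.popleft()
        out = bytes(raw)
        for i in range(len(raw)):  # best-effort overwrite before releasing
            raw[i] = 0
        return out
```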
We’ll take the feedback about the hashtags into consideration. Thanks
That’s fair, “rival” does have a different connotation from “competitor”, which is the more accurate term here, I think.
Is the source code fully available for your product?
@133arc585
The client code is naturally open, while the core engine is currently kept highly encrypted and we do not publish it (yet) as open source.
There’s a bit of a debate about the pros & cons of opening it, as far as confidential comms are concerned.
In any case, we are independently pen-tested by volunteers.
Thanks for asking 👍
Why not? If you’re 100% confident it’s secure, you should have no issue making it public. If you aren’t 100% confident it’s secure, not making it public is just dishonest, and it ends up hurting trust when something inevitably does happen. Also, what do you mean by the code being “highly encrypted”? First off, phrases like “highly encrypted” and “military grade” are already massively suspicious, because they’re marketing terms that don’t really mean anything. Second, keeping the code encrypted (at rest, perhaps?) doesn’t mean anything either: in order to run the code, it has to be un-encrypted anyway.
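To make that last point concrete, here’s a toy demonstration (obviously not your loader, just the principle): even if code ships encrypted, whatever executes it has to decrypt it first, so the plaintext and the key end up in memory on every machine that runs it.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

key = Fernet.generate_key()
blob = Fernet(key).encrypt(b"print('hello from the encrypted engine')")

# To run the "encrypted" code, the loader must decrypt it, so the plaintext
# (and the key) necessarily exist in memory wherever the code executes.
source = Fernet(key).decrypt(blob)
exec(compile(source, "<engine>", "exec"))
```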
How so? Here are the possibilities:
1. The code is 100% secure. Then publishing it costs you nothing, and anyone can verify your claims.
2. The code is not 100% secure. Then publishing it lets independent reviewers find and report the flaws; keeping it closed just means attackers find them first.
Either way, there’s no situation in which not releasing code helps security or trust. Security by obscurity is not security.
Which is fine as one facet of being verifiably secure, but it’s not sufficient. Code can have flaws that pen-testers will not (or are very unlikely to) stumble upon, even with fuzzing environments. The proper approach is to have the code audited and openly available, and to have independent pen-testing of the running implementation.
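For a concrete example of the kind of flaw a remote pen-test rarely stumbles on but a source audit catches in seconds, take a non-constant-time comparison of a secret (a hypothetical snippet, not from any particular codebase):

```python
import hmac

# Flawed: == short-circuits at the first differing byte, so response timing
# leaks how many leading bytes of an attacker's guess are correct.
def check_token_leaky(supplied: bytes, secret: bytes) -> bool:
    return supplied == secret

# Fixed: hmac.compare_digest takes time independent of where the bytes differ.
def check_token_safe(supplied: bytes, secret: bytes) -> bool:
    return hmac.compare_digest(supplied, secret)
```

Spotting that timing leak from the outside takes careful statistics over many requests; in the source, it’s one obvious line.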
Not that I was a potential user of your software to begin with, but the way you’re describing your product and operations would really put me off trusting it.
@133arc585
Thanks, I will share your feedback internally and get back to you with more details 👍
@133arc585
I wish I could write more, but I’m limited to 500 chars… we’re happy to take your constructive feedback on board. We’re enthusiastic about what we’re doing, but improving takes time and a lot of work. Feel free to contact us at [email protected] to continue the conversation. Regards
@133arc585
A brief feedback summary 🙂
100% secure code is the ideal but never the reality: bugs, vulnerabilities, and patches always exist. Hence, option one (100% secure) cannot really be considered in a real-world scenario.
Option two (not 100% secure) is not a binary choice: open source is great, but it has wider implications beyond peer/security review. Rights, alteration, distribution, etc. have to be considered too. We started with mixed open & closed source code, aiming to improve. Read next