Why do people assume Signal messenger isn’t spying on you? Yes, it has open source code; yes, it uses end-to-end encryption. But we can’t check which code runs in the version from Google Play or the App Store. Their APK (IPA) build process is also essentially a black box: it doesn’t use GitHub Actions or any other transparent build system. I also heard from Techlore that they add a proprietary part to the APK to filter bots. The only thing I can assume is that people scanned the traffic coming from the app (Android) or phone (iOS) and checked whether encryption keys were being sent to Signal or not. But it seems to me that this could also be circumvented. What do you think?
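
(To be concrete about the kind of check I mean: if the build were reproducible, anyone could rebuild the app from the published source and diff it against the store APK. A minimal sketch of that comparison, with hypothetical file names, skipping the signing metadata that legitimately differs:)

```python
# Minimal sketch of a reproducible-build check: hash every entry of a locally
# built APK against the APK pulled from the store. File names are hypothetical.
import hashlib
import zipfile

def apk_hashes(path: str) -> dict[str, str]:
    """Map each entry of the APK (a ZIP archive) to its SHA-256 digest."""
    with zipfile.ZipFile(path) as apk:
        return {
            name: hashlib.sha256(apk.read(name)).hexdigest()
            for name in apk.namelist()
            if not name.startswith("META-INF/")  # signing data differs by design
        }

local = apk_hashes("app-built-from-source.apk")
store = apk_hashes("app-from-google-play.apk")

for name in sorted(set(local) | set(store)):
    if local.get(name) != store.get(name):
        print(f"MISMATCH: {name}")
```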

P.S. I myself use Signal to communicate with relatives and friends. Definitely not a hater.

  • 133arc585
    3 points · 1 year ago

    while currently the core-engine is kept highly encrypted and we do not publish it

    Why not? If you’re 100% confident it’s secure, you should have no issue making it public. If you aren’t 100% confident it’s secure, not making it public is just dishonest and ends up hurting trust when something inevitably does happen. Also, what do you mean that the code is “highly encrypted”? First off, phrases like “highly encrypted” and “military grade” are already massively suspicious, because they’re marketing terms that don’t really mean anything. Second, keeping the code encrypted (at rest, perhaps?) doesn’t mean anything; in order to run the code, it has to be decrypted anyway.
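
    To illustrate that last point, here’s a toy sketch (purely hypothetical, unrelated to any real product) of why code that is “encrypted” on disk offers no protection once it runs:

    ```python
    # Toy example: a vendor ships "encrypted" code, but the loader must carry
    # the key and decrypt it in memory before the interpreter can run it.
    KEY = 0x5A  # the key necessarily ships alongside the ciphertext

    def xor_cipher(data: bytes, key: int) -> bytes:
        """Toy cipher; encryption and decryption are the same operation."""
        return bytes(b ^ key for b in data)

    source = b'print("secret business logic")'
    shipped_blob = xor_cipher(source, KEY)  # what gets distributed

    # At run time the blob is decrypted in memory...
    decrypted = xor_cipher(shipped_blob, KEY)
    # ...and from here a debugger or memory dump sees the plaintext,
    # no matter how "highly encrypted" the file on disk was.
    exec(decrypted)
    ```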

    There’s a bit of a debate about pros & cons of opening it, regarding confidential comms.

    How so? Here are the possibilities:

    • Your code is 100% secure:
      • You don’t release it: nobody trusts your claim of security (and fairly so).
      • You do release it: people can verify for themselves that your claim is valid.
    • Your code is not 100% secure:
      • You don’t release it: nobody trusts your claim of security (and fairly so).
      • You do release it: you can potentially have bugs discovered for you; or, people will fairly decide not to use an insecure product.

    There’s no situation in which not releasing code helps security or trust. Security by obscurity is not security.

    Anyway we are independently pen-tested by volunteers.

    Which is fine as one facet of being verifiably secure, but it’s not sufficient. Code can have flaws that pen-testers will not (or are very unlikely to) stumble upon, even with fuzzing environments. The proper approach is to have the code audited and openly available, and to have independent pen-testing of the running implementation.
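
    As a toy illustration of that gap (the MAGIC constant and parse function are invented for the example): the flaw below fires only on one exact 8-byte input, so random fuzzing is astronomically unlikely to trip it, while anyone reading the source spots it immediately.

    ```python
    # A flaw that black-box fuzzing is effectively guaranteed to miss.
    import secrets

    MAGIC = b"LETMEIN!"  # hypothetical hard-coded backdoor, obvious in an audit

    def parse(message: bytes) -> str:
        if message == MAGIC:
            raise RuntimeError("authentication bypassed")  # the hidden flaw
        return "ok"

    # Naive fuzzing: hitting MAGIC by chance is a 2**-64 event per attempt.
    for _ in range(1_000_000):
        assert parse(secrets.token_bytes(8)) == "ok"
    print("fuzzer found nothing; the flaw is still there")
    ```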

    Not that I was a potential user of your software to begin with, but the way you’re describing your product and operations really would turn me off trusting it.

    • TopSecret Chat
      1 point · 1 year ago

      @133arc585
      Wishing to write more but limited to 500 chars… we are happy to take your constructive feedback on board. We are enthusiastic about what we are doing, but it takes time and a lot of work to improve. Feel free to contact us at [email protected] to expand the conversation. Regards

    • TopSecret Chat
      0 points · 1 year ago

      @133arc585
      A brief feedback summary 🙂
      100% secure code is the ideal but never the reality: there are always bugs, vulnerabilities, and patches. Hence, option one (100% secure) cannot really be considered in a real-world scenario.
      Option two (not 100% secure) is not a binary choice: open source is great, but it has wider implications beyond peer/security review. Rights, alteration, distribution, etc. have to be considered too. We started with mixed open & closed source code, aiming to improve. Read next