(Please don’t downvote just because I need some help.)

I was once a privacy nut. But it’s getting so hard nowadays, and there are so many more important problems – global warming, AI, the inevitable collapse of the current world order… how does privacy improve the world? Please help remind me.

I do approve of privacy, of course. All this protect-the-children flak is bullshit. I just can’t remember why I thought it was something worth fighting for and preaching about.

  • azuth
    10 · 3 months ago

    To respond effectively to those issues, people must be able to communicate privately, with no fear of retribution.

    It also requires private and secure communications to be a normal thing and not an indicator that the parties involved are criminals, terrorists or pedos.

    AI, by the way, is going to be just a little fart compared to the other issues you described, as well as the lack of privacy.

    Another problem related to privacy is our growing dependence on private corps to be a ‘normal’ person (“why don’t you have a Facebook/Google/Amazon/etc. account?”)

    • @[email protected]OP
      0 · 3 months ago

      AI could kill everyone, though it most likely won’t, IMO — maybe a 10% chance. That’s still very bad. Despite the fact that Ilya Sutskever, Geoff Hinton, MIRI, heck even Elon Musk have expressed varying degrees of concern about this, the risk is largely dismissed because it sounds too much like science fiction. If only science fiction writers had avoided the topic!

      • azuth
        3 · 3 months ago

        This is bullshit. AI will be hunting down survivors? Thus more lethal than nuclear war? ChatGPT-4 will be better at it?

        Most of these concerns seem to be about AGI, which we are nowhere close to having and have no clear path to. Our "AI"s not only don’t understand causality, they can’t even perform arithmetic. Nor do they run anything that could kill humans. Except if you consider Tesla’s FSD an AI system — but Musk assured us back in 2017 it would be safe…

        • @[email protected]OP
          1 · edit-2 · 3 months ago

          Where did you get the idea that GPT-4 is capable of this? These are concerns for 10+ years from now, assuming AI makes the same strides it has in the past 10 years — which is not guaranteed at all.

          I think there are probably 3-5 big leaps still required, on the order of the invention of transformer models, deep learning, etc., before we have superintelligence.

          Btw, humans are also bad at arithmetic — that’s why we have calculators. If you don’t understand that LLMs use RAG, langchain (or similar), and so on, you clearly don’t understand the scope of the problem. A superintelligence doesn’t need access to anything in particular — just, say, email or chat — to destroy the world.