I found that idea interesting. Will we consider it the norm in the future to have a “firewall” layer between news and ourselves?

I once wrote a short story where the protagonist receives news of a friend's death, but it is intercepted by their AI assistant, which says: “when you have time, there is emotional news that does not require urgent action but that you will need to digest”. I feel it could become the norm.

EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I don't think he is talking about politics there, but it applies quite a bit.

EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding oneself from information one may be uncomfortable with, but about tweets specifically designed to elicit reactions, which are becoming a plague on Twitter due to its new incentives. It is about the difference between presenting news in a neutral way and as “incredibly atrocious crime done to CHILDREN and you are a monster for not caring!”. The second one feels a lot like an exploit of emotional backdoors, in my opinion.

  • @GrymEdm
    9 months ago

    There are enormous issues with who decides what makes it through the filter, how to handle things that are of unknown truth (say ongoing research), and the hazards of training consumers of information to assume everything that makes it to them is completely factual (the whole point of said fake news filter). If you’d argue that people on the far side of the filter can still be skeptical, then just train that and avoid censorship via filter.

    • xxd
9 months ago

Yeah, I agree. It’s not easy to determine truth, and whoever decides truth might introduce bias that then gets rolled out to everyone. With ongoing research or unknown information, you could just have a “currently being researched” or “not confirmed yet” label attached to the information. I’m just saying that in an ideal world where this does work, it could be safer than relying on people being skeptical, because everyone fails to be skeptical about something eventually.