Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms.

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
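The article describes the system only at a high level: ten social-emotion categories, used as signals of social-norm violations. As a purely hypothetical sketch of that idea, the toy code below maps invented emotion categories onto a violation flag. The category names, keyword lists, and matching logic are all assumptions for illustration; the actual DARPA-funded system presumably uses trained language models, not keyword matching.

```python
# Hypothetical sketch: mapping social-emotion categories onto norm-violation
# signals. The article does not name the ten categories or describe the model;
# everything below is invented for illustration only.

# Invented stand-ins for the ten social-emotion categories.
SOCIAL_EMOTIONS = [
    "guilt", "shame", "embarrassment", "pride", "gratitude",
    "admiration", "contempt", "anger", "envy", "disgust",
]

# In this toy model, self-conscious or condemning emotions are treated as
# signals that a social norm may have been violated.
VIOLATION_SIGNALS = {"guilt", "shame", "embarrassment", "contempt", "anger", "disgust"}

# Tiny invented keyword lexicon (only a few categories shown).
KEYWORDS = {
    "guilt": ["sorry", "my fault", "regret"],
    "shame": ["ashamed", "humiliated"],
    "anger": ["furious", "outraged"],
    "gratitude": ["thank", "grateful"],
    "pride": ["proud"],
}

def detect_emotions(text: str) -> set[str]:
    """Return the social-emotion categories whose keywords appear in text."""
    lowered = text.lower()
    return {emotion for emotion, words in KEYWORDS.items()
            if any(word in lowered for word in words)}

def flags_violation(text: str) -> bool:
    """True if any detected emotion suggests a norm violation."""
    return bool(detect_emotions(text) & VIOLATION_SIGNALS)

print(flags_violation("I'm so sorry, that was my fault."))  # True ('guilt')
print(flags_violation("Thank you, I'm really grateful."))   # False ('gratitude')
```

A real classifier would replace the keyword lookup with a model trained on the two short-text datasets the article mentions; the surrounding mapping from emotion categories to a violation decision would stay the same shape.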

  • @MasterBlaster
23 • 1 year ago

    As long as this doesn’t get repurposed to regulate “social credit”, this is fine.

    • @[email protected]
12 • 1 year ago

      That’s the scary part of it. Idc how good it is, but if it starts to be used to censor information and rate humans, that’s the line.

      • @kroy
9 • 1 year ago

        The line will come far far FAR before that

      • @graphite
5 • 1 year ago

> but if it starts to be used to censor information and rate humans, that’s the line.

        That line has already been crossed. Since it’s already been crossed, it’s inevitable that this will be used in that way.