• @eguidarelli
    1 year ago

    The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

    That testing will also examine the potential for societal harms, such as bias and discrimination, and more theoretical dangers about advanced AI systems that could gain control of physical systems or “self-replicate” by making copies of themselves.

    The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images and audio from AI-generated ones, known as deepfakes.

    These commitments are faster to secure, while slower steps like creating regulations through legislation can follow.

    • @[email protected]
      1 year ago

      I don’t disagree, but voluntary agreements really mean nothing. If we were talking about companies with a good track record, then sure, but we are talking about companies with a track record of deception and unsavory activities.

      • @eguidarelli
        1 year ago

        Is the alternative that we do nothing in the short term while we wait for new laws? Sure, voluntary agreements may not be fully enforced, but if even one of these companies follows some of those restrictions, I’d say it worked.