• @332

    These guys are literally the last people in the world I would trust to regulate themselves responsibly.

    • @Xanthobilly

      This is true for corporations in general. They’re organized to maximize profits unless regulated; even if you’re pro-business, you’ll recognize that statement as true. Without realizing it, we’re currently in an Oppenheimer-like arms race among various state-sponsored interests, including tech corporations.

  • @FantasticFox

    The biggest risks I see with AI are that it makes misinformation, scams, etc. a lot easier.

    I remember as a kid you knew that people could just make stuff up, but a photograph was fairly reliable. Then along came Photoshop and it was trivial to make convincing fake photographs.

    AI can now do this with audio, and soon with full video (perhaps already), so it becomes much harder to trust anything.

  • @Capricorny90210OP

    Four of the preeminent AI players are coming together to form a new industry body designed to ensure “safe and responsible development” of so-called “frontier AI” models.

    In response to growing calls for regulatory oversight, ChatGPT developer OpenAI, Microsoft, Google, and Anthropic have announced the Frontier Model Forum, a coalition that draws on the expertise of member companies to develop technical evaluations and benchmarks, and promote best practices and standards.

    New frontier

    At the core of the forum’s mission is what OpenAI has previously termed “frontier AI”: advanced AI and machine learning models deemed dangerous enough to pose “severe risks to public safety.” They argue that such models present a unique regulatory challenge, as “dangerous capabilities can arise unexpectedly,” making it difficult to prevent the models from being misappropriated.

    The self-stated goals for the new forum include:

    i) Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.

    ii) Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.

    iii) Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.

    iv) Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

    Although the Frontier Model Forum counts just four members at present, the collective says it’s open to new members. Qualifying organizations should be developing and deploying frontier AI models, and demonstrate a “strong commitment to frontier model safety.”

    In the immediate term, the founding members say the first steps will be to establish an advisory board to steer the forum’s strategy, alongside a charter, governance, and funding structure.

    “We plan to consult with civil society and governments in the coming weeks on the design of the Forum and on meaningful ways to collaborate,” the companies wrote in a joint statement today.

    Regulation

    While the Frontier Model Forum is designed to demonstrate that the AI industry is taking safety concerns seriously, it also highlights Big Tech’s desire to stave off incoming regulation through voluntary initiatives, and perhaps to go some way toward writing its own rules.

    Indeed, today’s news comes as Europe pushes ahead with what is set to be the first concrete AI rulebook, designed to enshrine safety, privacy, transparency, and anti-discrimination at the heart of companies’ AI development ethos.

    And last week, President Biden met with seven AI companies at the White House — including the four founding members of the Frontier Model Forum — to agree on voluntary safeguards for the burgeoning AI revolution, though critics argue that the commitments were somewhat vague.

    However, Biden also indicated that regulatory oversight would be on the cards in the future.

    “Realizing the promise of AI by managing the risk is going to require some new laws, regulations, and oversight,” Biden said at the time. “In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation. And we’re going to work with both parties to develop appropriate legislation and regulation.”