Sam Altman, the recently fired (and rehired) chief executive of OpenAI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’

  • Hildegarde
    link
    English
    -8
    7 months ago

    Is there any evidence he was fired for these beliefs? Using the deliberately vague phrase ‘earlier this year’ in November gives the impression of an author trying to manufacture causation where none exists.

    The only part of this article about Altman is the beginning. This author seems incredibly passionate about virology and is using recent news to draw attention to the author’s points on the subject.

    • @[email protected]OP
      link
      fedilink
      English
      12
      7 months ago

      So far the explanation I like best is that he was probably fired for not being enough of an AI doomer, i.e. deprioritizing AI safety and diverting too many resources from research to product service, all the while only paying lip service to the ea/rationalist orthodoxy about heading off the impending birth of AI Cthulhu.

      Any remaining weirdness can be explained away by the OpenAI board being sheltered oddballs who honestly thought they could boot the CEO and face of the company on a dime and without repercussions, in order to bring in an ideologically purer replacement.

      • @[email protected]OP
        link
        fedilink
        English
        17
        edit-2
        7 months ago

        ‘We are the sole custodians of this godlike technology that we can barely control but that we will let you access for a fee’ has been a mainstay of OpenAI marketing as long as Altman has been CEO, it’s really no surprise this was ‘leaked’ as soon as he was back in charge.

        It works, too! Anthropic just announced they are giving chat access to a 200k-token context model (GPT-4 is <10k I think) where they supposedly cut the rate of hallucinations in half, and it barely made headlines.