• @NounsAndWords
    link
    English
    338
    1 year ago

    So a Board member wrote a paper about focusing on safety above profit in AI development. Sam Altman did not take kindly to this concept and started pushing to fire her (to which end he may or may not have lied to other Board members to split them up). Sam gets fired for trying to fire someone for putting safety over profit. Everything exploded and now profit is firmly at the head of the table.

    I like nothing about this version of events either.

    • @[email protected]
      link
      fedilink
      English
      106
      edit-2
      1 year ago

      Wasn’t that evident from the very first few days, when we learned the board stood for the non-profit, safety-first mother org while the booted CEO stands for reckless monetization?

      Now he’s back, the safety concerns got silenced, money can be made, people can get fucked. A good day for capitalists

      • Jeena
        link
        fedilink
        English
        71
        1 year ago

        That’s why I was so confused that all the workers stood behind the CEO and threatened to go to Microsoft.

        • @ours
          link
          English
          62
          1 year ago

          My guess is that they want the company to grow fast so that their salaries and stock options grow as well.

        • @dustyData
          link
          English
          45
          edit-2
          1 year ago

          That’s what a personality cult gets you. The amount of idiots willing to die for another man’s ego is why we have some of the shittiest things in society. “Daddy told me so” is a powerful force when the people who believe it cannot see that their vision has absolutely no rational support. Jobs, Musk, Gates, Trump, they all thrive by telling people that their irrational beliefs are true and that if they follow them they will make their dreams realities. The talk and narrative around Altman has always struck me as similar to Musk’s cult of personality in the late 2010s.

          • @APassenger
            link
            English
            15
            edit-2
            1 year ago

            Stock options help. If they make enough off of OpenAI, they won’t need to find a job after this.

            • @dustyData
              link
              English
              9
              1 year ago

              This is tech, they have no protections. I bet there’s some clause with a time lock that they can only sell the stock in 10 years time and they lose them if they leave OpenAI before that time window for any reason. In 5 years or before they’ll get hit by some mass layoffs and lose everything. This has happened so many times before with so many companies that it is laughable. Stock options in tech are a fairy tale.

          • @FrostyTrichs
            link
            English
            11
            edit-2
            1 year ago

            The amount of idiots willing to die for another man’s ego

            U.S. Military has entered the chat

          • TimeSquirrel
            link
            fedilink
            6
            edit-2
            1 year ago

            I’m not sure Gates ever had a “personality cult”. In the 90s during his heyday he was pretty much reviled even by Windows users. He built his empire by swallowing everyone else around him that was doing anything even a little bit innovative. He wasn’t really the “visionary artist/engineer” type like those others. Just a random rich nerd who won the technology monopoly game.

            • @raspberriesareyummy
              link
              English
              15
              1 year ago

              Like @Zak, I would like to point out that - as much as I despised Bill Gates back then - he was actually competent. And - despite me never liking Microsoft - they have a legitimate business model built on selling products, not user data (unlike all social media and Google). So of all the evil dipshits out there, Microsoft and Apple are the lesser ones. (I have been a Linux user since 2004 or so.)

            • @Zak
              link
              English
              10
              1 year ago

              Early accounts are that Bill Gates was absolutely a talented coder, at least in the 1970s. Of course that wasn’t what made him rich - a series of business decisions, some combination of lucky and prescient, did.

        • @trafalgar225
          link
          English
          15
          1 year ago

          The company gave the employees a large amount of equity. That was the work of Sam Altman. The employees are voting their wallets by sticking up for him.

        • @NounsAndWords
          link
          English
          10
          edit-2
          1 year ago

          That was some classic business pressure tactics. The sort of thing a massive multinational corporation would have a lot of experience in. The sort of thing a massive multinational corporation suddenly blindsided by this with a lot of financial interest in the situation would be interested in doing…while at the same time mitigating risk by trying to pull those same employees into the parent company if things don’t go their way.

          Edit: Now that I think about it, they also managed to get the vast majority of employees to ‘join together’ on the issue making it (psychologically) easier for them to ‘join together’ in choosing where to jump ship to. Maybe I’m just paranoid, but it’s just a really clever move on Microsoft’s part.

    • @[email protected]
      link
      fedilink
      English
      59
      1 year ago

      I feel like this isn’t surprising knowing about all the other stuff altman has done. Seems like yet another loss for the greater good in the name of profit.

      • @NounsAndWords
        link
        English
        13
        1 year ago

        Now what would the company do if the AI model started putting safety above profit (i.e. refusing to lie to profit the user (aka reducing market value))? How fucked are we if they create an AGI that puts profit above safety?

    • @[email protected]
      link
      fedilink
      English
      1
      1 year ago

      So basically it’s exactly what I expected and I’m not surprised in the slightest. Amazing how that works.

      It’s not too surprising considering they don’t even have basic essential security features in 2023 like two-factor authentication. Absolutely pitiful.

  • @seiryth
    link
    English
    119
    1 year ago

    The thing that shits me about this is google appear to the public to be late to the party but the reality is they DID put safety before profit when it came to AI. The sheer amount of research and papers put out by them on AI should have proven to people they know what they’re doing.

    And then OpenAI threw caution to the wind and essentially made Google and others panic and knee-jerk because there’s real money to be made, and now everyone seems to be throwing caution to the wind and pushing it into the mainstream before society is ready.

    All in the name of shareholders.

    • @blazeknave
      link
      English
      11
      1 year ago

      10k%! A friend works in brand marketing at Google. They’d been using it internally for months before market pressure forced them to start onboarding public end users. I’ve been in the earliest of the external betas (because I’ve given a lot of product feedback over the years?) and from the beginning the user experiences have been the most locked down of all the consumer LLMs.

    • Toes♀
      link
      fedilink
      English
      -68
      1 year ago

      I think it’s not enough. Disable all the safeguards and let people decide if the output is what they want. I hate being treated like a child trying to buy an M-rated game.

      • xor
        link
        fedilink
        English
        67
        1 year ago

        But this isn’t an M rated game, it’s a transformative new technology with potentially horrifying consequences to misuse

        • @[email protected]
          link
          fedilink
          English
          -23
          1 year ago

          By answering questions? We are general intelligences that can answer questions. Oh shit oh fuck what am I doing talking.

                • @[email protected]
                  link
                  fedilink
                  English
                  -1
                  edit-2
                  1 year ago

                  I’m not sure if you’re aware, but that’s literally what makes AI so useful: it’s just responding to external inputs and doesn’t have to be programmed value by value, because it gets trained with datasets. ChatGPT isn’t gonna hurt a fly. The reason it’s M-rated is that the idiots who made it didn’t filter the input content while web scraping. It’s literally too stupid to function as a weapon, except for misinformation, which it outputs regardless of its age rating. OpenAI is just a bunch of cucks who switched to a closed-source system and can’t actually make any good company decisions.

                  TL;DR: Fuck openai

                • @[email protected]
                  link
                  fedilink
                  English
                  -1
                  1 year ago

                  If you can get ChatGPT to drive your murder drone I’d be very impressed. Tesla can’t figure it out in 2D.

              • @[email protected]
                link
                fedilink
                English
                0
                edit-2
                1 year ago

                FWIW I work in the field and agree with this. LLMs in the current state are not so dangerous they can’t be released to public. Generative image and video models are a much bigger threat, but that was largely something which came from open source.

                If we really want to pearl-clutch, it is NVIDIA that is really propping open this Pandora’s box in terms of putting the capability in irresponsible hands.

          • xor
            link
            fedilink
            English
            15
            1 year ago

            Okay, so let’s do a thought experiment, and take off all the safeguards.

            Oops, you made:

            • a bomb design generator
            • an involuntary pornography generator
            • a CP generator

            Saying “don’t misuse it” isn’t enough to stop people misusing it

            And that’s just with chatgpt - AI isn’t just a question and answer machine - I suggest you read about “the paperclip maximiser” as a very good example of how misalignment of general purpose AI can go horribly wrong

            • El Barto
              link
              English
              6
              1 year ago

              I was going to say that a well-determined individual would find this information regardless. But the difference here is that it being so easily accessible would increase the risks of someone doing something reaaaally stupid by a factor of 100. Yikes.

                • El Barto
                  link
                  English
                  2
                  1 year ago

                  For you or many others, for sure it won’t be complicated. The world is vast, and the environment you are in is very specific to you. Many other kids may have phones, sure, but they are not in the same environment as you or me.

                  Some non-sciency kid will have a hard time getting to do what their edgy mind wants them to do, unless an AI guides them mini-step by mini-step.

            • @[email protected]
              link
              fedilink
              English
              3
              1 year ago

              I mean half that is deviant art and you can look up how to make explosives on youtube chem channels or in books. It’s not hard to rig up a custom detonator if you can get the energetics.

              • @[email protected]
                link
                fedilink
                English
                2
                1 year ago

                What are we going to do with these Universities, just teaching anyone how to be a chemical engineer!!?

              • @guacupado
                link
                English
                1
                1 year ago

                Jesus some of you guys completely miss the point.

            • @[email protected]
              link
              fedilink
              English
              1
              1 year ago

              ChatGPT was very far from the first publicly available generative AI. It didn’t even do images at first.

              Also, there are plenty of YouTube channels which show you how to make all sorts of extremely dangerous explosives already.

              • xor
                link
                fedilink
                English
                1
                1 year ago

                But the concern isn’t which was the first generative ai - their “idea” was that AIs - of all types, including generalised - should just be released as-is, with no further safeguards.

                That doesn’t consider that OpenAI doesn’t only develop text generation AIs. Generalised AI can do horrifying things, even just by accidental misconfiguration (see the paperclip optimiser example).

                But even an LLM like ChatGPT can be coerced to generate non-text data with the right prompting.

                Even in that example, one can’t just dig up those sorts of videos without, at minimum, leaving a trail. But an unrestricted pretrained model can be distributed and run locally, and used without trace to generate any content whatsoever that it’s capable of generating.

                And with a generalised AI, the only constraint to the prompt “kill everybody except me” becomes available compute.

      • @hansl
        link
        English
        19
        1 year ago

        And while you’re at it, remove safety on guns. And seatbelts. And might as well get rid of those pesky boom gates. I can hear the trains just fine, I don’t like being treated like a child. /s

        • konalt
          link
          English
          -21
          1 year ago

          Guns and car crashes may break my bones, but words will never hurt me

          • @hansl
            link
            English
            23
            1 year ago

            That makes a great jingle, but it’s been proven that you are more a product of the words around you than your comment wants to admit.

          • @[email protected]
            link
            fedilink
            English
            2
            edit-2
            1 year ago

            Have you ever heard of propaganda? Or manufacturing consent? Or what about the whole field of psychology?

  • @danielfgom
    link
    English
    38
    1 year ago

    Yet another jackass CEO. Anybody surprised?

  • AutoTL;DR
    link
    fedilink
    English
    17
    1 year ago

    This is the best summary I could come up with:


    Toner, who serves as director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, allegedly drew Altman’s negative attention by co-writing a paper on different ways AI companies can “signal” their commitment to safety through “costly” words and actions.

    In the paper, Toner contrasts OpenAI’s public launch of ChatGPT last year with Anthropic’s “deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype.”

    She also wrote that, “by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur.”

    At the same time, Duhigg’s piece also gives some credence to the idea that the OpenAI board felt it needed to be able to hold Altman “accountable” in order to fulfill its mission to “make sure AI benefits all of humanity,” as one unnamed source put it.

    “It’s hard to say if the board members were more terrified of sentient computers or of Altman going rogue,” Duhigg writes.

    The piece also offers a behind-the-scenes view into Microsoft’s three-pronged response to the OpenAI drama and the ways the Redmond-based tech giant reportedly found the board’s moves “mind-bogglingly stupid.”


    The original article contains 414 words, the summary contains 215 words. Saved 48%. I’m a bot and I’m open source!