Questions remain about what spurred the board’s decision to oust Altman, but growing tensions became impossible to ignore as Altman rushed to make OpenAI the next big technology company.

  • @just_another_person · 24 points · 1 year ago

    Because 90% of it is all “Hopes N Dreams” and PR bullshit at this point.

    • @SupraMario · -8 points · 1 year ago

      Yup, been saying this wave of “AI” is a fad; the cracks are starting to show…

      • @Candelestine · 17 points · 1 year ago

        How do you figure that something people are already using to type out dull, routine emails is some kind of fad? It saves time: if you don’t have to spend an extra minute typing out a routine confirmation email, why would you?

        • key · 13 points · 1 year ago

          Being a fad is relative. Something that’s hugely popular and being brought up in every industry as a world changing technology for a while before normalizing as something useful in a few areas but not a good fit in many others qualifies as a fad in my book. Most tech fads never go totally away because they’re not useless, they were just overblown.

          Generative ML models certainly are useful and extend what we think of ML traditionally being capable of doing. But the current worldwide AI madness is not due to the merits of Gen ML, it’s because it’s marketed as AI (which to 90 percent of the population means what AGI means to nerds) and makes people think “we’re about to be in terminator any second now” when in reality it means “hey my autocomplete isn’t totally crap anymore”

          • @Candelestine · 0 points · 1 year ago

            Gotcha. I think you’re right about fads, and I agree.

          • @SupraMario · 0 points · 1 year ago

            I love how everyone agrees with your reply that this is another tech fad…just like the .com boom…yet my reply calling it a fad has everyone disagreeing lol.

            You nailed it though. I don’t know how many people I’ve talked to who think this is legit Terminator-level AI…or nearly there.

        • @[email protected]
          link
          fedilink
          English
          71 year ago

          I feel like there will be a backlash to this. As a recipient, what do I get out of reading your routine confirmation email that I wouldn’t get out of reading whatever (presumably more concise) prompt you used to generate it?

          Maybe people will find better ways to use these systems, but so far most of what I’ve seen is text that is bloated without any useful information being added. I think we’ll get to a point pretty quickly where that is considered less polite and less professional.

        • @[email protected]
          link
          fedilink
          English
          61 year ago

          Generating emails is the worst possible use case for AI. Just send me whatever you fed the AI to generate the word salad. I don’t need the word salad; there is nothing wrong with being brief and to the point.

        • Franzia · 2 points · 1 year ago

          Because sending and receiving dull emails isn’t real work. AI has only replaced bullshit, so far. It just isn’t worth the billions these companies claim it is until they find a product and sell it.

        • ripcord · 0 points · 1 year ago

          It’s a shame no one has invented copy and paste yet.

        • Norgur · -3 points · 1 year ago

          Hello,
          Thanks for your message. I’ll check the sales figures out as soon as I can.

          Best regards
          Norgur

          I couldn’t have opened any AI thingy in the time I typed that.

          • 520 · 0 points · 1 year ago

            Okay, now imagine you have to type out a formal email but are typically shit at that kind of thing.

              • 520 · 0 points · edited · 1 year ago

                But imagine you were shit at writing formal emails. Sure, right now you can knock something like that out sooner, but that’s because you know how to do it.

      • @[email protected]
        link
        fedilink
        English
        71 year ago

        Because it’s not AI and it never was. Calling a T9 artificial intelligence was great marketing but zero substance.

        • @diviledabit · 1 point · 1 year ago

          But what if we’re just a very fancy T9 too?

        • @SupraMario · 1 point · 1 year ago

          You way overestimate this tech. I work in this field. We do not have fast enough anything (storage/RAM/CPU) to do what you think it can. We need UBI, sure, but this isn’t the leap everyone thinks it is. It’s still dumb tech that just follows what humans feed it. Remember that less than 40 years ago everyone was still using paper for everything; now one person with a computer can do what 20 did in the past.

          Until you see robots walking around, this tech doesn’t carry the apocalyptic implications you think it does.

          When quantum computing actually takes off, then you can start worrying.

            • @SupraMario · 0 points · 1 year ago

              > Guess what? I work in this field too.

              Doesn’t sound like it from the shit you posted below, which literally just says you used it lol

              > How can you say this but completely miss the fucking point?

              I’m not at all. You’re the dude who started screaming “what about the farriers” when cars started taking over…that’s literally what you’re doing right now.

              > Oh, that’s how you can say it. You clearly don’t work in this field if that’s what you think the bottleneck is right now.

              Lol ok Mr. 200k lines of code lol

              > I’ve personally used AI to generate more than 200K lines of production code this year alone. One senior engineer today can do the work of dozens with this. We are going to go from labor shortages to labor surpluses.

              Lol sounds like you’re a pretty shit dev if you are putting 200k lines of code from AI into prod…that, or you’re full of shit…either way it’s a really dumb argument to use. I’d fire you if you were doing that shit in my company. No way you’re checking over 200k lines for cohesion or security issues. Sounds like outsourced devs all over again, where the code is a fucking nightmare of shit.

              You can scream and shout all you want, but it’s not replacing humans any time soon. Sure, it’s great for augmenting us, but it’s not replacing us.

                • @SupraMario · 1 point · 1 year ago

                  > Wait… Do you think that people can’t work in a field if they use their own products to make sure they’re working?

                  So wait…you’re using AI-generated code in prod…to make sure the AI-generated code you didn’t write works?

                  No dev is out there using AI to write prod code; they’re using it to help, but a ton of the code that AI spits out is trash and they know it. If you put in over 200k lines of code yourself, you’re not checking it, and whatever you are rolling out is probably swiss fucking cheese in the security department.

                  > I’m not going to dox myself by linking to what I’ve done. It’s just sad that you’re yet another person (probably a greybeard) who thinks they know everything but is completely unprepared for the future.

                  Naa, I’m not a greybeard at all. I’m just not a panicked fad follower who thinks the advantages of LLMs are going to replace everyone down to a single guy named Roger who has AI write him 200k lines of code for prod.

                  > And if you can’t figure out how to check over 200K lines of properly formatted and documented code, then you’re a shit engineer.

                  Shit, what AI are you using that’s spitting out documentation as well as proper formatting? Sounds like you’ve got a gold mine on your hands…

                  > What the fuck do you think “augmenting” means to bloodsucking MBAs? If one mid-level engineer can do the work of five, then they can scale a team of ten down to two. There are many reasons not to do that no matter how good AI is (i.e. bus factor). But since when has that stopped management from making dumbass decisions?

                  Oh, I’m not saying it won’t, just like they did in ’08 when they outsourced everything to code farms in India and shit-canned a bunch of senior devs, only to find out the code was trash and not documented at all…the same thing is going to happen here…it’s a fad…hell, even the cloud was a fad and a ton of it is starting to come back in-house.

                  > Not going to waste any more time on you. Hope you have a good retirement plan in place. You’re going to need it.

                  Sounds like you’ve got it all figured out. Have a good day.

    • @jacksilver · 0 points · 1 year ago

      I haven’t watched a lot of Two Minute Papers, but this video is very misleading. Simulated environments have been used for years to speed up deep RL. The only ChatGPT/LLM portion was about defining a scoring mechanism, and the video gives no indication of whether it did a better job or not, not to mention the problem the LLM was solving is one that’s been studied for decades, which undercuts the “it generalizes better” claim.

      I’m not saying LLMs have a lot of potential, but that video isn’t really supportive of that stance.

  • AutoTL;DR (bot) · 7 points · 1 year ago

    This is the best summary I could come up with:


    The power struggle revolved around Altman’s push toward commercializing the company’s rapidly advancing technology versus Sutskever’s concerns about OpenAI’s commitments to safety, according to people familiar with the matter.

    Senior OpenAI executives said they were “completely surprised” and had been speaking with the board to try to understand the decision, according to a memo sent to employees on Saturday by chief operating officer Brad Lightcap that was obtained by The Washington Post.

    During its first-ever developer conference, Altman announced an app-store-like “GPT store” and a plan to share revenue with users who created the best chatbots using OpenAI’s technology, a business model similar to how YouTube gives a cut of ad and subscription money to video creators.

    Quora CEO Adam D’Angelo, one of OpenAI’s independent board members, told Forbes in January that there was “no outcome where this organization is one of the big five technology companies.”

    Two of the board members who voted Altman out worked for think tanks backed by Open Philanthropy, a tech billionaire-backed foundation that supports projects preventing AI from causing catastrophic risk to humanity: Helen Toner, the director of strategy and foundational research grants for the Center for Security and Emerging Technology at Georgetown, and Tasha McCauley, whose LinkedIn profile says she began work as an adjunct senior management scientist at Rand Corporation earlier this year.

    Within five to 10 years, there could be “data centers that are much smarter than people,” Sutskever said on a recent episode of the AI podcast “No Priors.” Not just in terms of memory or knowledge, but with a deeper insight and ability to learn faster than humans.


    The original article contains 1,563 words, the summary contains 268 words. Saved 83%. I’m a bot and I’m open source!

    • 520 · 0 points · 1 year ago

      Who’s the rapist in this scenario?

      I’m out of the loop

        • 👁️👄👁️ · 2 points · edited · 1 year ago

          She definitely seems a little out there, and a lot of those claims seem a bit insane tbh.

            • 👁️👄👁️ · 2 points · 1 year ago

              I question the legitimacy or accuracy of a lot of that, given all the things she’s written. She’s clearly not mentally stable. I feel that, at the very least, there’s a lot of context missing. She claims even her therapist is against her. These are still accusations, nothing more.

          • 520 · 1 point · 1 year ago

            If half of the claims were even remotely true, that’s gonna be one hell of a toll on one’s mental health. I get that false claims can be a thing, but some people really are that sick. That includes talented people as well.

            • @5BC2E7 · 0 points · 1 year ago

              She only claims her brother slept in the same bed as her when she was 13. She doesn’t have other claims, just insinuations. So if it’s all true and she was abused, she doesn’t seem to remember exactly what happened.

              • 520 · 1 point · 1 year ago

                Trauma can do that to someone unfortunately.

                • @5BC2E7 · -1 point · 1 year ago

                  I believe so. My point is that, by your comment’s logic, half of her claims would be no claims, since she only made one concrete claim.