• @hokage
    235 points · 10 months ago

    What a silly article. $700,000 per day is ~$256 million a year. That’s peanuts compared to the $10 billion they got from MS. With no new funding they could run for about a decade, and this is one of the most promising new technologies in years. MS would never let the company fail for lack of funding; it’s basically MS’s LLM play at this point.
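
As a sanity check on those figures (a quick back-of-the-envelope using only the numbers in the comment above):

```python
# Figures from the comment: ~$700k/day running cost, $10B from Microsoft.
daily_cost = 700_000
annual_cost = daily_cost * 365
print(f"annual: ${annual_cost:,}")   # annual: $255,500,000 (~$256M/yr)

funding = 10_000_000_000
years = funding / annual_cost
print(f"runway: {years:.1f} years")  # ~39 years on serving cost alone
```

Serving cost alone gives a runway far longer than a decade; the "about a decade" figure only holds if total spend (training, staff, etc.) runs several times the serving cost.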

    • P03 Locke
      110 points · 10 months ago

      When you get articles like this, the first thing you should ask is “Who the fuck is Firstpost?”

      • Altima NEO
        34 points · 10 months ago

        Yeah, where the hell do these posters find these articles anyway? It’s always blogs reposting stuff from somewhere else.

    • @[email protected]
      40 points · 10 months ago

      OpenAI’s biggest expense is infrastructure, which is rented from… Microsoft. Even if the company folds, it will have given back to Microsoft most of the money invested.

      • @[email protected]
        25 points · 10 months ago

        MS is basically getting a ton of equity in exchange for cloud credits. That’s a ridiculously good deal for MS.

    • @[email protected]
      15 points · 10 months ago

      While the title is clickbait, they do say right at the beginning:

      *Right now, it is pulling through only because of Microsoft’s $10 billion funding*

      Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. The $700k is only one model; there are others, plus making new ones and running the whole company. That’s easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.

    • lemmyvore
      10 points · 10 months ago

      I mean, you’re correct in the sense Microsoft basically owns their ass at this point, and that Microsoft doesn’t care if they make a loss because it’s sitting on a mountain of cash. So one way or another Microsoft is getting something cool out of it. But at the same time it’s still true that OpenAI’s business plan was unsustainable hyped hogwash.

      • @chiliedogg
        20 points · 10 months ago

        Their business plan got Microsoft to drop 10 billion dollars on them.

        None of my shitty plans have pulled that off.

        • lemmyvore
          1 point · 10 months ago

          If they got any of that into their own pockets kudos to them.

          Mainly they used it to pay for the tech and research and it’s all reverting back to Microsoft eventually. Going bankrupt is not quite the same as being acquired.

      • @[email protected]
        3 points · 10 months ago

        Also, their biggest expenses are cloud expenses, and they use the MS cloud, so that basically means that Microsoft is getting a ton of equity in a hot startup in exchange for cloud credits which is a ridiculously good deal for MS. Zero chance MS would let them fail.

    • R0cket_M00se
      5 points · 10 months ago

      Almost every company uses either Google or Microsoft Office products, and we already know Microsoft is working on an AI offering for O365 integration. They can see the writing on the wall here and are going to profit massively as they include it in their E5 license structure, or invent a new one that includes AI. Then they’ll recoup that investment in months.

    • @[email protected]
      66 points · 10 months ago

      If there’s no path to make it profitable, they will buy all the useful assets and let the rest go bankrupt.

      • @[email protected]
        13 points · 10 months ago

        Microsoft reported profitability in their AI products last quarter, with a substantial gain in revenue from it.

        It won’t take long for them to recoup their investment in OpenAI.

        If OpenAI had been more responsible in how they released ChatGPT, they wouldn’t be facing this problem. Just completely opening Pandora’s box because they were racing to beat everyone else was extremely irresponsible, and if they go bankrupt because of it, then whatever.

        There’s plenty of money to be made in AI without everyone just fighting over how to do it in the most dangerous way possible.

        I’m also not sure nVidia is making the right decision tying their company to AI hardware. Sure, they’re making mad money right now, but just like the crypto space that can dry up instantly.

        • @[email protected]
          13 points · 10 months ago

          I don’t think you’re right about nvidia. Their hardware is used for SO much more than AI. They’re fine.

          Plus their own AI products are popping off rn. DLSS and their frame generation one (I forget the name) are really popular in the gaming space.

          I think they also have a new DL-based process for creating stencils for silicon photolithography which, in my limited knowledge, seems like a huge deal.

    • Tigbitties
      25 points · 10 months ago

      That’s $260 million. There are 360 million paid seats of MS360. So they’d have to raise their prices $0.72 per year to cover the cost.
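
The per-seat arithmetic checks out (spreading the cost figure over the seat count cited above):

```python
annual_cost = 260_000_000  # ~$260M/yr, the comment's cost figure
paid_seats = 360_000_000   # paid seats, per the comment
per_seat = annual_cost / paid_seats
print(round(per_seat, 2))  # 0.72 -- about 72 cents per seat per year
```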

      • PorkSoda
        1 point · 10 months ago

        You mean 365 right? Or is there another MS product I’m unaware of?

        Not trying to be a dick, I really don’t know given how expansive they are.

  • @[email protected]
    95 points · 10 months ago

    That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.

    • @glockenspiel
      61 points · 10 months ago

      I don’t think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.

      What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn’t supposed to do.

      I’ve noticed it has become worse at rubber ducking non-trivial coding prompts. I’ve noticed that my juniors have a hell of a time functioning without access to it, and they’d rather ask questions of seniors than try to find information or solutions themselves, essentially replacing chatbots with Sr. devs.

      A good tool for getting people on-ramped if they’ve never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.

      • @Windex007
        63 points · 10 months ago

        As a Sr. Dev, I’m always floored by stories of people trying to integrate chatGPT into their development workflow.

        It’s not a truth machine. It has no conception of correctness. It’s designed to make responses that look correct.

        Would you hire a dev with no comprehension of the task, who can not reliably communicate what their code does, can not be tasked with finding and fixing their own bugs, is incapable of having accountability, can not be reliably coached, is often wrong and refuses to accept or admit it, can not comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

        ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.

        • JackbyDev
          32 points · 10 months ago

          Search engines aren’t truth machines either. StackOverflow reputation is not a truth machine either. These are all tools to use. Blind trust in any of them is incorrect. I get your point, I really do, but it’s just as foolish as believing everyone using StackOverflow just copies and pastes the top rated answer into their code and commits it without testing then calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate things is what you should be doing, not just giving them a solution.

          • @Windex007
            6 points · 10 months ago

            I agree with everything you just said, but I think that without greater context it’s maybe still unclear to some why I still place chatGPT in a league of its own.

            I guess I’m maybe some kind of relic from a bygone era, because tbh I just can’t relate to the “I copied and pasted this from stack overflow and it just worked” memes. Maybe I underestimate how many people in the industry are that fundamentally different from how we work.

            Google is not for obtaining code snippets. It’s for finding docs, for troubleshooting error messages, etc.

            If you have like… Design or patterning questions, bring that to the team. We’ll run through it together with the benefits of having the contextual knowledge of our problem domain, internal code references, and our deployment architecture. We’ll all come out of the conversation smarter, and we’re less likely to end up needing to make avoidable pivots later on.

            The additional time required to validate a ChatGPT-generated piece of code could instead have been invested in the dev just doing it right, and properly fitting within our context, the first time. The dev will be smarter for it, and that investment in the dev will pay out every moment forward.

            • JackbyDev
              1 point · 10 months ago

              I guess I see your point. I haven’t asked ChatGPT to generate code and tried to use it except for once ages ago but even then I didn’t really check it and it was a niche piece of software without many examples online.

        • @SupraMario
          11 points · 10 months ago

          Don’t underestimate C levels who read a Bloomberg article about AI to try and run their entire company off of it…then wonder why everything is on fire.

        • @ewe
          5 points · 10 months ago

          Would you hire a dev with no comprehension of the task, who can not reliably communicate what their code does, can not be tasked with finding and fixing their own bugs, is incapable of having accountability, can not be reliably coached, is often wrong and refuses to accept or admit it, can not comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

          Not me, but my boss would… wait a minute…

        • @[email protected]
          5 points · 10 months ago

          Honestly once ChatGPT started giving answers that consistently don’t work I just started googling stuff again because it was quicker and easier than getting the AI to regurgitate stack overflow answers.

      • @bmovement
        12 points · edited · 10 months ago

        Copilot is pretty amazing for day to day coding, although I wonder if a junior dev might get led astray with some of its bad ideas, or too dependent on it in general.

        Edit: shit, maybe I’m too dependent on it.

        • @JimmyMcGill
          4 points · 10 months ago

          I’m also having a good time with copilot

          Considering asking my company to pay for the subscription as I can justify that it’s worth it.

          Yes, many times it is wrong, but even if it’s only 80% correct, at least I get a suggestion on how to solve an issue. Many times it suggests a function and the code snippet has something missing, but I can easily fix or improve it. Without it I would probably not know about that function at all.

          I also want to start using it for documentation and unit tests. I think that’s where it will really be useful.

          Btw if you aren’t in the chat beta I really recommend it

          • @Jerkface
            3 points · 10 months ago

            Just started using it for documentation, really impressed so far. Produced better docstrings for my functions than I ever do in a fraction of the time. So far all valid, thorough and on point. I’m looking forward to asking it to help write unit tests.

            • @JimmyMcGill
              3 points · 10 months ago

              It honestly seems better suited for those tasks, because it really doesn’t need to know anything that you’d have to tell it otherwise.

              The code is already there, so it can get literally all the info it needs, and it is quite good at grasping what a function does, even if it sometimes lacks the context of the why. But that’s not relevant for unit tests, and for documentation that’s where the user comes in. It’s also why it’s called Copilot: you still make the decisions.

    • @[email protected]
      17 points · edited · 10 months ago

      But what did they expect would happen, that more people would subscribe to pro? In the beginning I thought they just wanted to survey-farm usage to figure out what the most popular use cases were and then sell that information or repackage use-cases as an individual added-value service.

    • @[email protected]
      4 points · 10 months ago

      I am unsure about the free version, but I really am very surprised by how good the paid version with the code interpreter has gotten in the last 4-6 weeks. Feels like I have a C# syntax guru on 24/7 access. It used to make lots of mistakes a couple months ago, but rarely does now, and if it does, it almost always fixes it in the next code edit. It has saved me untold hours.

  • @merthyr1831
    81 points · 10 months ago

    I mean apart from the fact it’s not sourced or whatever, it’s standard practice for these tech companies to run a massive loss for years while basically giving their product away for free (which is why you can use openAI with minimal if any costs, even at scale).

    Once everyone’s using your product over competitors who couldn’t afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you’re the biggest player in the market.

    It’s just Uber’s business model.

    • @[email protected]
      26 points · 10 months ago

      The difference is that the VC bubble has mostly ended. There isn’t “free money” to keep throwing at a problem post-pandemic. That’s why there’s an increased focus on Uber (and others) making a profit.

      • FlumPHP
      22 points · 10 months ago

        In this case, Microsoft owns 49% of OpenAI, so they’re the ones subsidizing it. They can also offer at-cost hosting and in-roads into enterprise sales. Probably a better deal at this point than VC cash.

      • @[email protected]
      16 points · 10 months ago

        This is what caused spez at Reddit and Musk at Twitter to go into desperation mode and start flipping tables over. Their investors are starting to want results now, not sometime in the distant future.

      • @voluble
      4 points · 10 months ago

        I don’t know anything about anything, but part of me suspects that lots of good funding is still out there, it’s just being used more quietly and more scrupulously, & not being thrown at the first microdosing tech wanker with a great elevator pitch on how they’re going to make “the Tesla of dental floss”.

    • @nodimetotie
      12 points · 10 months ago

      Speaking of Uber, I believe it turned a profit for the first time this year. That is, it had never made any profit since its creation, whenever that was.

      • ineedaunion
        12 points · 10 months ago

        All it’s ever done is rob from its employees so it can give money to stockholders. Just like every corporation.

  • @Billy_Gnosis
    53 points · 10 months ago

    If AI was so great, it would find a solution to operate at fraction of the cost it does now

    • @Death_Equity
      70 points · 10 months ago

      Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?

      • @[email protected]
        13 points · 10 months ago

        Really says something that none of your responses yet seem to have caught that this was a joke.

        • @Death_Equity
          3 points · 10 months ago

          The upvotes vs comments shows the vocal minority is just doing vocal minority things.

        • FaceDeer
          2 points · 10 months ago

          OP might have been intending it as a joke, but self-improvement is a very real subject of AI research so if that’s the case he accidentally said something about a serious topic.

          • @Buddahriffic
            2 points · 10 months ago

            It’s an essential part of the idea of the technological singularity. An AI iterates itself and the systems it runs on, becoming more efficient, powerful, and effective at a rate that makes all of human progress up to that point look like nothing.

            • @MajorHavoc
              3 points · 10 months ago

              While I’m inclined to believe the singularity is achievable, it’s important to remember that there’s no evidence today that it will ever be reached.

              Our hope for it, and the good that can come with it, can’t pull it into the realm of things we will see in our lifetimes. It could emerge soon, but it’s at least as likely to stay science fiction for another millennium.

              • @Buddahriffic
                2 points · 10 months ago

                Yeah, when ChatGPT 4 first came out, I thought we might be close. But as its capabilities and limitations became more clear, it doesn’t look like we’re close at all. I mean, it’s hard to say for sure, since an LLM would only make up one part of such an AI, and maybe the other pieces are farther along but just not getting as much attention, because there’s value in not making those things public.

                But as someone who works in one of the fields that would be involved in the technological singularity, no one really knows good ways to apply AI to the work we do, and the best initiatives I’ve seen come out of the corporate drive to leverage AI aren’t actually AI, but just smarter automation tools.

      • @[email protected]
        6 points · 10 months ago

        If we don’t know, it doesn’t know.

        If we know, but there’s no public text about it, it doesn’t know either.

        It is trained on stuff that has already been written, and trained to emulate the statistical properties of those words. It cannot and will not tell us anything new.

        • FaceDeer
          14 points · 10 months ago

          That’s not true. These models aren’t just regurgitating text that they were trained on. They learn the patterns and concepts in that text, and they’re able to use those to infer things that weren’t explicitly present in the training data.

          I read recently about some researchers who were experimenting with ChatGPT’s ability to do basic arithmetic. It’s not great at it, but it’s definitely figured out some techniques that allow it to answer math problems that were not in its training set. It gets them wrong sometimes, but it’s like a human doing math in its head rather than a calculator using rigorous algorithms so that’s to be expected.

          • @[email protected]
            4 points · 10 months ago

            They learn statistical correlations between words: given the last 5000 (or however large the context is) words, and absolutely no other information besides that, what is the most likely word to appear next? It’s a glorified order-5000 Markov chain.

            The reason it can “do” some math is that there are tons of examples in the training set using small numbers, usually used as examples. It can do basic arithmetic because it has seen “2+2=4” and other examples with simple numbers like that. The studies test basic arithmetic, the same things it had millions of pre-worked examples of, and it still gets those wrong with astonishing frequency. Those studies aren’t talking about asking it “what is the square root of pi” or stuff like that, but stuff such as “is 7 greater than 4?”, “what is 10 + 3?”, “is 97 prime?”, stuff it has most definitely seen the answers to. Ask it about some large prime and it’ll say no, and probably be right, because most numbers are composite.
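
To make the "glorified Markov chain" framing concrete, here is a deliberately crude order-1 sketch (a toy, not how a transformer is actually implemented): it "answers" arithmetic only when the exact pattern dominated its training text.

```python
from collections import Counter, defaultdict

# Toy order-1 Markov "language model": predict the next word purely from
# co-occurrence counts in the training text -- no arithmetic, just lookup.
def train(corpus: str):
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word: str) -> str:
    # Most frequent follower of `word` in the training data.
    return counts[word].most_common(1)[0][0] if word in counts else "<unk>"

model = train("2 + 2 = 4 . 2 + 2 = 4 . 10 + 3 = 13 .")
print(predict_next(model, "="))  # "4": seen twice after "=", vs "13" once
```

The model "knows" that "4" follows "=" only because that pair dominated its training text; shown a token it has never seen, it has nothing to fall back on.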

            • FaceDeer
              3 points · edited · 10 months ago

              Those studies aren’t talking about asking it “what is the square root of pi” or stuff like that, but stuff such as “is 7 greater than 4?”, “what is 10 + 3?”, “is 97 prime?”, stuff it has most definitely seen the answers to.

              No, they very explicitly checked to see whether the training set contains the literal math problem that they asked it for the answer to. ChatGPT is able to answer math questions that it has never seen before. I believe this is the article (though I had to go searching, it’s been a while).

              When people dismiss LLMs as “just prediction engines” they’re really missing the point. Of course they’re prediction engines, that’s not in dispute. The question is about how they go about making those predictions. When I show you the string “18 + 10 =” you can predict what comes next, yes? Well, how did you predict it? Did you memorize that particular specific string, or have you developed heuristics for how to do simple addition problems when you see them?

              • @MajorHavoc
                0 points · edited · 10 months ago

                These things are currently infamously bad at math, though.

                I won’t argue that it’ll never get there. I’m confident it will, though with a lot more perl hacks than elegant emergence.

                But today, these things have an astonishingly high ‘appearance of intelligence’ to ‘incredible stupidity’ ratio.

                • FaceDeer
                  3 points · 10 months ago

                  Humans are also not particularly well known for their math skills. Ask a random stranger to do simple arithmetic in their head, with only a few seconds to think and no outside help, and I wouldn’t expect particularly reliable results.

            • @MajorHavoc
              0 points · edited · 10 months ago

              It gives me the giggles that folks speculating are getting more upvotes than your post that demonstrates actual understanding of the implementation details.

              If I were the type to sell sizzle hype and snake oil, now would be the time to do it. The venture capitalists may have learned their lesson, but the general public haven’t.

      • discodoubloon
        6 points · 10 months ago

        It doesn’t “know” anything. It can’t solve that problem. It’s trained on humans so it’s limited to what we have written down.

        I love ChatGPT but if it’s creative it’s because you asked it the right questions and found an oblique answer yourself.

      • @Buffaloaf
        5 points · 10 months ago

        I asked Bard how ChatGPT can fix itself, here’s what it said:

        ChatGPT can fix itself in a number of ways, including:

        Retraining on a larger dataset. ChatGPT is trained on a massive dataset of text and code, but it can always be retrained on a larger dataset. This would help ChatGPT to learn new things and improve its performance.

        Using a different algorithm. ChatGPT is currently trained using the GPT-3 algorithm, but there are other algorithms that could be used. For example, ChatGPT could be trained using the Megatron-Turing NLG algorithm, which is one of the most powerful language models in the world.

        Using a combination of algorithms. ChatGPT could also be trained using a combination of algorithms. This would allow ChatGPT to benefit from the strengths of different algorithms.

        Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.

        Using human feedback. ChatGPT could also be used to generate text, and then this text could be reviewed by humans. The human feedback could then be used to improve ChatGPT’s performance.

        • @[email protected]
          10 points · 10 months ago

          “Using a feedback loop. ChatGPT could be used to generate text, and then this text could be used to train ChatGPT. This would allow ChatGPT to learn from its own mistakes and improve its performance over time.”

          So basically create its own Fox News and see how that goes.

          • FaceDeer
            3 points · 10 months ago

            The full suggestion includes “This would allow ChatGPT to learn from its own mistakes”, which implies that the text it generated would be evaluated and curated before being sent back into it for training. That, as well as including non-AI-generated text along with the AI generated stuff, should stop model collapse.

            Model collapse is basically inbreeding, with similar causes and similar solutions. A little inbreeding is not inherently bad, indeed it’s used frequently when you’re trying to breed an organism to have specific desirable characteristics.

        • FaceDeer
          4 points · 10 months ago

          If having an AI tell researchers that they should base its next iteration off of Megatron isn’t the plot of a Michael Bay Transformers movie already, it should have been.

    • @Zeth0s
      9 points · edited · 10 months ago

      DeepMind is actually working on an AI that improves the performance of low-level programs. It started with improving sorting algorithms.

      It’s an RL algorithm.

      Main issue is that everything takes time, and expectations on current AI are artificially inflated.

      It will reach the point most are discussing now, it’ll simply take a bit longer than people expect

      Source: https://www.nature.com/articles/d41586-023-01883-4

    • Michaelsoft SirFaceFone
      7 points · 10 months ago

      It’s Firstpost, their Kremlin-bootlicking YouTube videos are even worse. Just below Forbes Breaking News trash.

    • @Cyberflunk
      3 points · 10 months ago

      Glad I’m not the only one to think that.

  • @whispering_depths
    46 points · 10 months ago

    huh, so with the 10bn from Microsoft they should be good for… just over 30 years!

    • pachrist
      29 points · 10 months ago

      ChatGPT has the potential to make Bing relevant and unseat Google. No way Microsoft pulls funding. Sure, they might screw it up, but they’ll absolutely keep throwing cash at it.

      • @[email protected]
        3 points · edited · 10 months ago

        They seem to be killing Cortana… so I expect a new assistant at least partially based on this, tbh.

  • danielbln
    42 points · edited · 10 months ago

    This article has been flagged on HN for being clickbait garbage.

    • @Zeth0s
      11 points · edited · 10 months ago

      It is clearly nonsense. But it satisfies the irrational need of the masses to hate on AI.

      Tbf I have no idea why. Why do people hate an extremely clever family of mathematical methods, one which highlights the brilliance of human minds? But here we are, casually shitting on one of the highest peaks humanity has ever reached.

      • FaceDeer
        5 points · 10 months ago

        It seems to be a common thing. I gave up on /r/futurology and /r/technology over on Reddit long ago because it was filled with an endless stream of links to cool new things with comment sections filled with nothing but negativity about those cool new things. Even /r/singularity is drifting that way. And so it is here on the Fediverse too, the various “technology” communities are attracting a similar userbase.

        Sure, not everything pans out. But that’s no excuse for making all of these communities into reflections of /r/nothingeverhappens. Technology does change, sometimes in revolutionary ways. It’d be nice if there was a community that was more upbeat about that.

      • @MajorHavoc
        1 point · 10 months ago

        I probably sound like I hate it, but I’m just giving my annual “this new tech isn’t the miracle it’s being sold as” warning, before I go back to charging folks good money to clean up the mess they made going “all in” on the last one.

      • @BetaDoggo_
        1 point · 10 months ago

        People are scared because it will make consolidation of power much easier, and make many of the comfyer jobs irrelevant. You can’t strike for better wages when your employer is already trying to get rid of you.

        The idealist solution is UBI but that will never work in a country where corporations have a stranglehold on the means of production.

        Hunger shouldn’t be a problem in a world where we produce more food with less labor than anytime in history, but it still is, because everything must have a monetary value, and not everyone can pay enough to be worth feeding.

        • @Zeth0s
          1 point · 10 months ago

          I agree with this. People should fight to democratize AI: public models, public data, public fair research. And they should fight misuse of it by the business-school types.

    • @sfgifz
      2 points · edited · 10 months ago

      Company go bankrupt, biggest investors take assets and IP at discount. Win.

  • @[email protected]
    35 points · 10 months ago

    Pretty sure Microsoft will be happy to come save the day and just buy out the company.

    • @pexavc
      link
      English
      1510 months ago

      it feels like, that was the plan all along

  • @SocialMediaRefugee
    28 points · 10 months ago

    A couple of my coworkers will have to write their own code again and start reading documentation

      • @NuanceDemon
        link
        English
        2110 months ago

        It works if you ask it for small, specific components; the bigger the scope of the request, the less likely it is to give you anything worthwhile.

        So basically you still need to know what you’re doing and how to design a script/program anyway, and you’re just using chatgpt to figure out the syntax.

        It’s a bit of a time-saver at times, but it’s not replacing anyone in the immediate future.

      • @SocialMediaRefugee
        link
        English
        2
        edit-2
        10 months ago

        I’ve tried using it myself and the responses I get, no matter how I phrase them, are too vague in most places to be useful. I have yet to get anything better than what I’ve found in documentation.

        • @sfgifz
          link
          English
          8
          edit-2
          10 months ago

          My experience is different: the response I get is not perfect, but it’s good enough as a starting point for any decent dev to refactor and build upon, with less effort than starting from scratch. Maybe it depends on what language or framework you’re asking about.

        • Tony Bark
          link
          fedilink
          English
          310 months ago

          I have problems with it repeating certain words over and over again no matter how much I adjust the style and tone.

  • @Zuberi
    link
    English
    2810 months ago

    This article is dumb as shit

    • @BetaDoggo_
      link
      English
      1810 months ago

      No sources, and even given their own numbers they could continue running ChatGPT for another 30 years. I doubt they’re anywhere near a net profit, but they’re far from bankruptcy.

    • @[email protected]
      link
      fedilink
      English
      1110 months ago

      Right!? I believe it has the hallmark repetitive blandness indicating AI wrote it (because ouroboros).

    • @pexavc
      link
      English
      210 months ago

      The flow of the writing style felt kinda off, like someone was speaking really fast, spewing random trivia, and then leaving.

  • @[email protected]
    link
    fedilink
    English
    2710 months ago

    I don’t understand Lemmy’s hate boner over AI.

    Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.

    Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…

    • @Zeth0s
      link
      English
      11
      edit-2
      10 months ago

      AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It’s genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.

      No idea why such hate…

      One can hate the Disney CEO for misusing AI, but why shit on AI itself?

      • @[email protected]
        link
        fedilink
        English
        1410 months ago

        It’s shit on because it is not actually AI as the general public tends to use the term. This isn’t Data from Star Trek, or anything even approaching Asimov’s three laws.

        The immediate defense against this statement is people going into mental gymnastics and hand waving about “well we don’t have a formal definition for intelligence so you can’t say they aren’t” which is just… nonsense rhetorically because the inverse would be true as well. Can’t label something as intelligent if we have no formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how LLMs are suited to those types of problem domains.

        Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this “AI haters” group, as if belief in AI were some sort of black-and-white religion or required some sort of ideological purity. Like having honest conversations about these systems’ problems intrinsically means you want them to fail. That’s BS.


        Machine Learning and Large Language Models are amazing, they’re game changing, but they aren’t magical panaceas and they aren’t even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely non-fitting use cases as a race to be the first to make the appearance of success and suck down those juicy VC bux.

        Anyone trying to say different isn’t familiar with the field or is trying to sell you something. It’s the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.

        The frustrating part is that people caught up in the hype train of AI will say the same thing: “You just don’t understand!” But then they’ll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.


        At least in my opinion that’s where the negativity comes from.

        • @Aceticon
          link
          English
          4
          edit-2
          10 months ago

          Personally, having been in tech for almost three decades, I am massively skeptical whenever the usual suspects put out yet another incredible claim backed only by overly positive, one-sided evaluations of something they own (worse, in an area I actually know quite a lot about, so I can see through much of the bullshit), and it then gets picked up by mindless fanboys who lack the expertise to understand what they’re parroting, and by greedy fuckers using sales-speak because they stand to gain personally if enough useful idiots jump on the hype train.

          You don’t even need to be old enough to remember that the Segway was announced as a “revolution in human transportation”: all it takes is to look at the claims about Bitcoin and the blockchain and remember the fraud-ridden shitshow the whole area became.

          As I see it, anybody who is not skeptical towards “yet another ‘world changing’ claim from the usual types” is either dumb as a doorknob, young and naive or a greedy fucker invested in it trying to make money out of any “suckers” that jump into that hype train.

          It’s not even negativity (except towards the greedy fuckers trying to take advantage of others, who can Burn In Hell); it’s skepticism informed both by history and by domain knowledge.

          • @[email protected]
            link
            fedilink
            English
            310 months ago

            As I see it, anybody who is not skeptical towards “yet another ‘world changing’ claim from the usual types” is either dumb as a doorknob, young and naive or a greedy fucker invested in it trying to make money out of any “suckers” that jump into that hype train.

            I’ve been working on AI projects on and off for about 30 years now. Honestly, for most of that time I didn’t think neural nets were the way to go, so when LLMs and transformers got popular, I was super skeptical. After learning the architecture and using them myself, I’m convinced they’re part of but not the whole solution to AGI. As they are now, yes, they are world changing. They’re capable of improving productivity in a wide range of industries. That seems pretty world changing to me. There are already products out there proving this (GitHub Copilot, jasper, even ChatGPT). You’re welcome to downplay it and be skeptical, but I’d highly recommend giving it an honest try. If you’re right then you’ll have more to back up your opinion, and if you’re wrong, you’ll have learned to use the tech and won’t be left behind.

            • @Aceticon
              link
              English
              3
              edit-2
              10 months ago

              In my experience they’re a great tool for wrapping and unwrapping knowledge in and out of language envelopes with different characteristics, and I wouldn’t at all be surprised if they replace certain jobs that deal mostly with communicating with people (for example, I suspect the kind of reporting news agencies do doesn’t really need human writers to compose the articles, just data in bullet-point format and an LLM to turn it into a “story”).

              What LLMs are not is AGI, and using them as knowledge engines or even just knowledge sources is a recipe for frustration: you end up either going down the wrong route because you believed the AI, or spending more time validating the AI’s output than it would take to find the answer yourself from reliable sources.

              Whilst I’ve gone back and forth on whether they might be the starting point from which AGI comes (which really comes down to the question “what is intelligence?”), what I am certain of is that nobody who is truly knowledgeable about it can honestly and assuredly state that “they are the seed from which AGI will come”, and that kind of crap (or worse, people just stating that LLMs already are intelligent) is almost all of the hype we get about AI at the moment.

              At the moment, judging by the developments we are seeing, I’m inclined to think that at least the reasoning part of intelligence won’t be solved by this path, though the intuition part might, as that is mainly pattern recognition.

              • @[email protected]
                link
                fedilink
                English
                4
                edit-2
                10 months ago

                Yeah, I generally agree there. And you’re right. Nobody knows if they’ll really be the starting point for AGI because nobody knows how to make AGI.

                In terms of usefulness, I do use it for knowledge retrieval and have a very good success rate with that. Yes, I have to double check certain things to make sure it didn’t make them up, but on the whole, GPT4 is right a large percentage of the times. Just yesterday I’d been Googling to find a specific law or regulation on whether airlines were required to refund passengers. I spent half an hour with no luck. ChatGPT with GPT4 pointed me to the exact document down to the right subsection on the first try. If you try that with GPT3.5 or really anything else out there, there’s a much higher rate of failure, and I suspect a lot of people who use the “it gets stuff wrong” argument probably haven’t spent much time with GPT4. Not saying it’s perfect-- it still confidently says incorrect things and will even double down if you press it, but 4 is really impressive.

                Edit: Also agree, anyone saying LLMs are AGI or sentient or whatever doesn’t understand how they work.

                • @Aceticon
                  link
                  English
                  1
                  edit-2
                  10 months ago

                  That’s a good point.

                  I’ve been thinking about the possibility of LLMs revolutionizing search (basically search engines), which are not authoritative sources of information (far from it) but get you to those sources much faster.

                  LLMs hold most of the same information search engines do, with the whole extra level of being able to query it in natural language. And thanks to their massive training sets, even if one’s question is slightly incorrect, the nearest cluster of textual tokens in the token space (an oversimplified description of how LLMs work, I know) to that incorrect question may well be where the correct questions and answers live, so you still get the correct answer (and, funnily enough, the more naturally one poses the question the better).
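                  A minimal sketch of that “nearest cluster” idea, with invented 3-dimensional vectors standing in for real learned embeddings (which have hundreds or thousands of dimensions): a slightly-off query still lands nearest the right stored entry.

```python
# Toy sketch of the "nearest cluster in token space" idea. The vectors
# here are invented for illustration; real embeddings are learned.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

# Hypothetical embeddings of stored questions.
stored = {
    "how do I reverse a list in python": [0.9, 0.1, 0.0],
    "what is the capital of france": [0.0, 0.8, 0.3],
}

# A slightly malformed query still embeds close to the first entry.
query = [0.85, 0.15, 0.05]
best = max(stored, key=lambda q: cosine(query, stored[q]))
print(best)  # -> how do I reverse a list in python
```

                  The query only has to embed closer to the right cluster than to any other, which is why naturally phrased but slightly wrong questions still work.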

                  However as a direct provider of answers, certainly in a professional setting, it quickly becomes something that produces more work than it saves, because you always have to check the answers since there are no cues about how certain or uncertain that result was.

                  I suspect many if not most of us also had human colleagues who were just like that: delivering even the most “this is a wild guess” answer to somebody’s question as an assured “this is the way things are”. And I suspect most of those who had such colleagues quickly learned to not go to them for answers, and to always double-check the answer when they did.

                  This is why I doubt it will do things like revolutionize programming, or replace humans in producing output in hard-knowledge domains that run mainly on logic, though it might very well replace humans whose work is to wrap things up in the appropriate language for a target audience (I suspect it’s going to revolutionize the production of highly segmented, even individually targeted, propaganda on social networks).

      • @Aceticon
        link
        English
        -1
        edit-2
        10 months ago

        Ah, yes.

        Remind me again how that “revolution of human mobility”, the Segway, is doing now…

        Or how wonderful every single one of the announcements of breakthroughs in fusion generation has turned out to be…

        Or how the safest Operating System ever, Windows 7, turned out in terms of security…

        Or how Bitcoin has revolutionized how people pay each other for stuff…

        Some of us have seen lots of hype trains go by over the years, always with the same format and almost all of them originating from exactly the same subset of people as the AI one, and we recognize the sales-speak from greedy fuckers, designed to excite ignorant, naive fanboys of such bullshit choo-choo trains when they pull into the station.

        Rational people who are not driven by “personal profit maximization on the backs of suckers” will not use sales-speak, refer to anything brand new as “the most incredible creation of humanity” (it’s way too early to tell), or deem any and all criticism of it to be “shitting on it”.

        • FaceDeer
          link
          fedilink
          110 months ago

          “Completely unrelated thing X didn’t live up to its hype, therefore thing Y must also suck” is not particularly sound logic for shitting on something.

          • @Aceticon
            link
            English
            110 months ago

            Funny how, of all the elements where it resonates with historical events (“the people promoting it”, “bleeding-edge tech”, “style of messaging”, “extraordinary claims without extraordinary proof” and more), you ended up making the kind of simplistic conclusion that a young child might make.

            • @[email protected]
              link
              fedilink
              English
              210 months ago

              extraordinary claims without extraordinary proof

              What are you looking for here? Do you want it to be self-aware, and anything less than that is hot garbage? The latest advances in AI have many uses. Sure, Bitcoin was overhyped and so is AI, but Bitcoin was always a solution with no problem, while AI (as in AGI) literally offers a solution to all problems (or maybe the end of humans, but hopefully not, hah). The current tech is widely useful. With GPT-4 and GitHub Copilot, I can write good working code at multiple times my normal speed. It’s not going to replace me as an engineer yet, but it can enhance my productivity by a huge amount. I’ve heard similar from many others in different jobs.

        • @Zeth0s
          link
          English
          0
          edit-2
          10 months ago

          AI, even in its current state, is one of the most incredible creations of humanity.

          If there were a Nobel Prize for math and computer science, the whole field would deserve one next year. It would probably go to a number of different people who contributed to the current methodologies.

          You cannot compare NFTs to AI. Open Nature or Science (the scientific publications) right now and you’ll see how big the impact of AI is.

          You can start your research here: https://www.deepmind.com/research/highlighted-research/alphafold . More Nobel Prize material.

          • @Aceticon
            link
            English
            1
            edit-2
            10 months ago

            I actually have some domain expertise, so excuse me if I don’t just eat up that overexcited, ignorant fanboy pap and pamphlet from one of the very companies trying to profit from such things.

            AGI (Artificial General Intelligence, i.e. a “thinking machine”) would indeed be that “incredible creation of humanity”, but that’s not this shit. This shit is a pattern-matching and pattern-reassembly engine: a technologically evolved parrot capable of producing outputs that mimic what was present in its training sets so well that it even parrots the associations present there (i.e. certain questions get certain answers, only the LLM doesn’t even understand them as “questions” and “answers”, just as textual combinations).

            Insufficiently intelligent people with no training in the hard sciences often confuse such perfect parroting of what intelligent beings previously produced with actually having intelligence, which is half hilarious and half sad.

            Edit: that was actually unfair, so let me put it better: some reactions to the hype around this AI remind me of how my grandmother, an illiterate old lady from the countryside who had been very poor most of her life, used to get very confused when she saw the same actor in multiple soap operas. The whole concept of actors and acting was beyond her life experience, so when I was a kid and she had moved to live with us in the “big city”, she took what she saw on TV at face value. I suspect a lot of people with no prior understanding of the domain are going down the same route of reasoning on AI as my nana did on soap operas, and end up confusing the LLM’s impeccable imitation of human language use with there actually being a human-like intelligence behind it, just as my nana confused good actors’ “living truthfully in imaginary circumstances” with the real living it imitated.

            • @Zeth0s
              link
              English
              3
              edit-2
              10 months ago

              As you have domain expertise, you will agree with us that, despite not being AGI, deep learning, reinforcement learning and generative AI as they exist now are an incredible creation of humanity that, among other things, is already capable of:

              1. solving long-standing scientific challenges such as protein folding;
              2. taking independent decisions and developing strategies that, on specific tasks, surpass human experts;
              3. mapping human languages and artistic creations into high-dimensional vector spaces where concepts and relationships are retained as properties of the space, allowing math and statistical inference to be performed on them and original images and text to be generated (a thing for which, a few decades ago, few would have guessed such a manageable mathematical representation could even exist).
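              Point 3 can be sketched with a toy example, using invented 3-dimensional vectors (real embeddings are high-dimensional and learned from data): in such a space, relationships between concepts become directions, so the classic king - man + woman ≈ queen analogy reduces to plain arithmetic.

```python
# Toy illustration of concept arithmetic in a vector space. These 3-d
# vectors are invented for the sketch; real embeddings have hundreds or
# thousands of learned dimensions.

king = [0.9, 0.8, 0.1]
man = [0.7, 0.1, 0.1]
woman = [0.7, 0.1, 0.9]
queen = [0.9, 0.8, 0.9]

# king - man + woman should land (approximately) on queen.
result = [k - m + w for k, m, w in zip(king, man, woman)]
print(result)  # close to queen, up to floating-point error
```

              The same trick is what lets statistical inference operate on meaning: once concepts are points and relationships are directions, ordinary linear algebra applies.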

              On top of this we take for granted all the already-existing applications, such as image recognition, translation, text classification…

              You would also agree with us that the potential of current AI methodologies in all fields of science and technology is already enormous, as demonstrated by AlphaFold for instance. We just need a few more years to see even more groundbreaking applications of the existing methodologies, while we wait for even more powerful techniques or, why stop dreaming, AGI in a few decades.

              • @Aceticon
                link
                English
                -1
                edit-2
                10 months ago

                What it’s doing is just a natural extension of what was done with basic neural networks back in the 90s, when they started being used to recognize handwritten postal codes on mail envelopes.

                This is why I disagree that this specific moment in the development of AI is “an incredible creation of humanity”. Maybe the domain as a whole will turn out to be as groundbreaking as computers, but the idea that what’s being done right now is that, by itself, is ignorant, premature or both.

                As for the rest, I actually studied physics at degree level, and with it complex mathematics, and your point #3 is absolute total bollocks.

                • @Zeth0s
                  link
                  English
                  3
                  edit-2
                  10 months ago

                  I was actually taking the time to share with you some very basic resources for you to learn about things like latent spaces, embeddings, attention mechanisms and Markov decision processes, but your attitude really made me change my mind.

                  It’s fine that you clearly don’t have the domain knowledge you claim, but your rudeness is really annoying. Enjoy your life with your achievement of complex math at degree level, and learn how to speak to people.

                  BTW, neural networks, even if a few decades old, are an incredible achievement of humanity. Even knowing how to roughly simulate a biological neural network involves understanding of the brain, non-linear math and the existence of computers, and each of those is an astonishing achievement of humanity.

    • Not A Bird
      link
      English
      210 months ago

      Lemmy, and Mastodon to a larger extent, hate anything owned by a corporation. That voice is getting louder and louder by the day.

    • deadcream
      link
      fedilink
      -9
      edit-2
      10 months ago

      It’s just projection of the hate for techbros (especially celebrities like Musk). Everything that techbros love (crypto, AI, space, etc.) is hated automatically.
      I.e. they don’t really hate AI. You can’t hate something when you have zero understanding of what it is. It’s just an expression of hate for the people who promote it.

      • chaogomu
        link
        fedilink
        610 months ago

        AI is not good. I want it to be good, but it’s not.

        I’ll clarify: it’s basically full of nonsense. Half of the shit it spits out is nonsense, and the rest is questionable. Even so, it’s already being used to put people out of their jobs.

        Techbros think AI will run rampant and kill all humans, when they’re the ones killing people by replacing them with shitty AI. And the worst part is that it isn’t even good at the jobs it’s being used for. It makes shit up, it plagiarizes, it spits out nonsense. And a disturbing amount of the internet is starting to become AI generated. Which is also a problem. See, AI is trained on the wider internet, and now AI is being trained on the shitty output of AI. Which will lead to fun problems and the collapse of the AI. Sadly, the jobs taken by AI will not come back.

        • @Aceticon
          link
          English
          4
          edit-2
          10 months ago

          It’s a tool which can be used to great effect in the right setting, for example to wrap cold, summarily stated knowledge into formats with much broader appeal, and to reverse the process.

          However, it’s being sold by greedy fuckers, who stand to gain from people jumping on the hype train, as something else altogether: a shortcut to knowledge and to the output of those who have it, because there’s a lot more money to be made from that than from something which can “write an article from a set of bullet points”.

          For me the most infuriating aspect is that this is hardly the first such hype train out of “TechBrosCity” bound for “FleeceTheSuckersTown” that we’ve seen in the last two decades, not even the second or the third. There have been a lot of them, always following the same formula, to the point that the “great men” of this age in tech (such as Musk), unlike the ones of the first tech boom (which ended in 2000), are people who repeatedly used this kind of thing to make themselves rich by fleecing suckers, not makers.

        • @_danny
          link
          English
          310 months ago

          It’s definitely gone downhill recently, but at the launch of GPT-4 it was pretty incredible. It would make several logical jumps that a lot of actual people probably wouldn’t make. I remember my “wow moment” was asking how many M&M’s would fit in a typical glass milk jug; I then measured it myself (by weight) and its answer was about 8% off. It gave measurements and cited actual equations. I couldn’t find anything through Google that solved the same problem or had the same answer it could have just copied. It was supposed to be bad at math, but GPT-4 got those types of problems pretty much spot on for me.

          I think that most people who have tried the latest AI models have had a bad experience because their capacity is spread across more users.

          • chaogomu
            link
            fedilink
            310 months ago

            There’s also the issue of model collapse: when an AI is trained on data generated by AI, the errors and hallucinations compound until all you have left is gibberish. We’re about halfway there.
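            A crude one-dimensional sketch of that compounding (a Gaussian stands in for the model; this is not how LLMs actually train): each generation fits itself to samples drawn from the previous generation’s output, and the learned diversity steadily collapses.

```python
# Toy model-collapse simulation: repeatedly refit a Gaussian "model" to
# samples drawn from the previous fit. Sampling error compounds across
# generations, and the learned spread shrinks toward nothing.
import random

random.seed(0)
mu, sigma = 0.0, 1.0              # the original "real" data distribution
n_samples, generations = 10, 200  # small samples, many generations

for _ in range(generations):
    data = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu = sum(data) / n_samples
    sigma = (sum((x - mu) ** 2 for x in data) / n_samples) ** 0.5

print(sigma)  # far below the starting 1.0: the lost diversity never comes back
```

            Each refit can only reproduce what it sampled, never the tails it missed, which is why the degradation is one-way.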

            • FaceDeer
              link
              fedilink
              310 months ago

              ChatGPT is trained on data with a cutoff in September 2021. It’s not training on AI-generated data.

              Even if some AI-generated data is included, as long as it’s reasonably curated and it’s mixed with non-AI data model collapse can be avoided.

              “Model collapse” is starting to feel like just a keyword for “this AI isn’t as good as I wanted.”

            • @_danny
              link
              English
              210 months ago

              I feel like you’re under-educated about how and when AI models are trained. The GPT models especially are not “constantly learning” like some other models; they’re being tweaked in discrete increments by developers trying to cover their asses and get them to less frequently say things they can be sued for.

              Also, AIs are already training other AIs; that’s kind of how these models are made. There’s a model that scores how well a given phrase follows another, and that’s used to train the part of the AI you interact with (arguably they’re part of the same whole, depending on how you view the architecture).

              CGP Grey has a good intro video on how bots learn; it’s pretty outdated and not really applicable to how LLMs learn, but the general idea is still there.

      • @aesthelete
        link
        English
        210 months ago

        Not everyone who dislikes a thing, or the promoters of that thing, “has no idea what it is”… but sure, go off I guess. 🤷

  • LemmyLefty
    link
    2710 months ago

    Does it feel like these “game-changing” techs have lifespans that are accelerating? There was the dot-com bubble of a decade or so, the NFT craze that lasted a few years, and now AI, which hasn’t even lasted a year.

    The Internet is concentrating and getting worse because of it, inundated with ads and bots and bots who make ads and ads for bots, and being existentially threatened by Google’s DRM scheme. NFTs have become a joke, and the vast majority of crypto is not far behind. How long can we play with this new toy? Its lead paint is already peeling.