• TWeaK
    link
    fedilink
    English
    102
    1 year ago

    Sounds like the internet in the 90s.

    • 1bluepixel
      link
      English
      67
      edit-2
      1 year ago

      It also reminds me of crypto. Lots of people made money from it, but the technology persists more because of its perceived potential than because of its actual usefulness today.

      There are a lot of challenges with AI (or, more accurately, LLMs) that may or may not be inherent to the technology. And if issues cannot be solved, we may end up with a flawed technology that, we are told, is just about to finally mature enough for mainstream use. Just like crypto.

      To be fair, though, AI already has some very clear use cases, while crypto is still mostly looking for a problem to fix.

      • @[email protected]
        link
        fedilink
        English
        19
        1 year ago

        Let’s combine AI and crypto, and migrate it to the cloud. Imagine the PowerPoints middle managers will make about that!

      • P03 Locke
        link
        fedilink
        English
        17
        edit-2
        1 year ago

        No, this isn’t crypto. Crypto and NFTs offered worse solutions to problems that already had solutions, and hidden in the messaging was that rich people wanted to get poor people to freely gamble away their money in an unregulated market.

        AI has real, tangible benefits that are already being realized by people who aren’t part of the emotion-driven ragebait engine. Stock images are going to become extinct within several years. Anyone can now make at least a baseline image of what they want, whatever their artistic ability. Musicians are starting to use AI tools. ChatGPT makes it easy to generate low-effort but time-consuming letters and responses, like item descriptions, HR responses, or other common drafts. Code AI engines let programmers produce reviewable solutions in real time, or at least something to generate and tweak. None of this is perfect, but it’s good enough for 80% of the work, which can be modified after the initial pass.

        Things like chess AI have existed for decades, and LLMs are just extensions of existing generative AI technology. I dare you to tell Chess.com that “AI is a money pit that isn’t paying off”; they would laugh their fucking asses off, as they are actively pouring even more money and resources into Torch.

        The author here is a fucking idiot. And he didn’t even bother to change the HTML title (“Microsoft’s Github Copilot is Losing Huge Amounts of Money”) from its original focus of just Github Copilot. Clickbait bullshit.

        • @Revonult
          link
          English
          21
          1 year ago

          I totally agree. However, I do feel like the market around AI is inflated, like it was with NFTs and crypto. AI isn’t a bust; there will be steady progress at universities, research labs, and companies. There is just too much hype right now, with vendors slapping AI on random products and overpromising the current state of the technology.

          • P03 Locke
            link
            fedilink
            English
            6
            edit-2
            1 year ago

            slapping [Technology X] on random products and over promising the current state of technology

            A tale as old as time…

            Still waiting on those “self-driving” cars.

            • @instamat
              link
              English
              3
              1 year ago

              Self driving will be available next year.*

              *since 2014

          • @[email protected]
            link
            fedilink
            English
            3
            1 year ago

            I love how suddenly companies started advertising things as AI that would have been called a chatbot a year ago. I saw a news article headline the other day that said judges were going to significantly improve the time they took to render judgments by using AI.

            Reading the content of the article, it explained that they would use it to draft the documents. It’s like they’ve never heard of templates.

      • @iopq
        link
        English
        15
        1 year ago

        I’m still trying to transfer $100 from Kazakhstan to me here. By far the lowest fee option is actually crypto since the biggest difference is the currency conversion. If you have to convert anyway, might as well only pay 0.30% on both ends
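The fee arithmetic above is simple enough to sketch. The 0.30% rate and the two conversion legs are the commenter's figures, used purely for illustration, not verified exchange rates:

```python
# Toy fee arithmetic for the transfer described above: a percentage fee
# charged once on each end of the conversion. The 0.30% rate is the
# commenter's number, used here purely for illustration.
def net_after_fees(amount: float, fee_rate: float = 0.003, legs: int = 2) -> float:
    """Apply the fee once per leg of the transfer."""
    for _ in range(legs):
        amount *= 1 - fee_rate
    return amount

print(f"${net_after_fees(100.0):.2f} of the original $100 arrives")
```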

        • @[email protected]
          link
          fedilink
          English
          4
          edit-2
          1 year ago

          Look into DJED on Cardano. It’s WAY cheaper than ETH (but perhaps not cheaper than some others). A friend of mine sent $10,000 to Thailand for less than a dollar in transaction fees. To 1bluepixel: Sounds like a use-case to me!

            • @[email protected]
              link
              fedilink
              English
              5
              edit-2
              1 year ago

              Hmm.

              You still have to deal with ETH fees just to get the funds into the rollup. I admit that ETH was revolutionary when it was invented, but the insane fee market makes it a non-starter, and the accounts model is a preposterously bad (and arguably irreparably broken) design decision for a decentralized network: it makes Ethereum near impossible to parallelize, since the main chain is required for state and the contracts that run on it are non-deterministic.

              • FaceDeer
                link
                fedilink
                -1
                1 year ago

                There are exchanges where you can buy Ether and other tokens directly on a layer 2, once it’s on layer 2 there are no further fees to get it there.

                Layer 2 rollups are a way to parallelize things, the activity on one layer 2 can proceed independently of activity on a different layer 2.

                I have no idea why you think contracts on Ethereum are non-deterministic; the blockchain wouldn’t work at all if they were.

                • @[email protected]
                  link
                  fedilink
                  English
                  4
                  edit-2
                  1 year ago

                  I think that because it’s true. Smart contracts on Ethereum can fail and still charge the wallet. Because of the open-ended nature of Ethereum’s design, a wallet can be empty by the time the contract finally executes, causing a failure. This doesn’t happen in Bitcoin and other UTXO chains like Ergo and Cardano, where all transactions must have both inputs and outputs accounted for FULLY to execute. UTXO boasts determinism, while the accounts model can fail due to an empty wallet. Determinism makes concurrency harder, for sure, but at least your entire chain isn’t one gigantic unsafe state machine. Ethereum literally is, by definition, non-deterministic.

      • @[email protected]
        link
        fedilink
        English
        -7
        edit-2
        1 year ago

        Crypto found a problem to fix. The reason the problem remains: everything is run by the institutions that are that problem, so crypto was astroturfed to death by the parties that run the current financial system, and by the enemy of their enemy (who’s a friend): opportunistic scammers like SBF and Do Kwon.

    • @[email protected]
      link
      fedilink
      English
      10
      1 year ago

      Or computers decades before that.

      Many of these advances are incredibly recent.

      And many of the things we use in our day-to-day are AI-powered without people even realising it.

      • El Barto
        link
        English
        3
        1 year ago

        AI powered? Like what?

          • El Barto
            link
            English
            2
            1 year ago

            Got it. Thanks.

        • @Aceticon
          link
          English
          7
          1 year ago

          Automated mail sorting has been using AI to read post codes from envelopes for decades, only back then, pre-hype, it was just called neural networks.

          That tech is almost 3 decades old.

          • El Barto
            link
            English
            2
            1 year ago

            But was it using neural networks or was it using OCR algorithms?

            • @Aceticon
              link
              English
              2
              edit-2
              1 year ago

              At the time I learned this at uni (back in the early 90s) it was already neural networks, not hand-written algorithms.

              (This was maybe a decade before OCR became widespread.)

              In fact, a coursework project I did there was recognition of handwritten numbers with a neural network. The thing was amazingly good: our implementation actually had a bug, and it still managed to be almost 90% correct on a test data set, so it somehow mostly worked its way around the bug. And it was a small NN with no need for massive training sets (the main difference between Large Language Models and more run-of-the-mill neural networks), at a time when algorithmic number and character recognition were considered a very difficult problem.

              Back then Neural Networks (and other stuff like Genetic Algorithms) were all pretty new and using it in automated mail sorting was recent and not yet widespread.

              Nowadays you have it doing stuff like face recognition, built into phones for unlocking…
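The handwritten-digit coursework described above is easy to reproduce today. A minimal sketch, assuming scikit-learn is available; the hidden-layer size and other settings are arbitrary choices for illustration, not the original project's:

```python
# Small neural network for handwritten digit recognition, in the spirit of
# the early-90s coursework described above: one modest hidden layer and a
# tiny dataset (scikit-learn's bundled 8x8 digit images), no massive
# training set required.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)                # 1797 labelled digits
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.25, random_state=0)   # pixels scaled to [0, 1]

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

A network this small typically lands well above the ~90% mark the comment mentions, which is the point: digit recognition stopped being a hard problem long before LLMs.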

              • El Barto
                link
                English
                2
                1 year ago

                Very interesting. Thanks for sharing!

            • @[email protected]
              link
              fedilink
              English
              2
              1 year ago

              I love people who talk about AI that don’t know the difference between an LLM and a bunch of if statements

        • TWeaK
          link
          fedilink
          English
          6
          1 year ago

          The key fact here is that it’s not “AI” as conventionally thought of in all the sci-fi media we’ve consumed over our lifetimes, but AI in the form of a product that the tech companies of the day are marketing. It’s really just a complicated algorithm based on an expansive dataset, rather than something that “thinks”. It can’t come up with new solutions, only re-use previous ones; it wouldn’t be able to take a solution for one thing and apply it to a different problem. It still needs people to steer it in the right direction, and to verify that its results are even accurate. However, AI is now probably better than people at identifying previous problems and remembering the solution.

          So, while you could say that lots of things are “powered by AI”, you can just as easily say that we don’t have any real form of AI just yet.
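The "identifying previous problems and remembering the solution" behaviour described above can be caricatured as nearest-neighbour retrieval. This sketch is purely illustrative; every problem/fix pair in it is invented:

```python
# Crude caricature of "find the closest previously seen problem and reuse
# its solution": retrieval and recall, not reasoning. All data is made up.
from difflib import SequenceMatcher

SOLVED = {
    "printer won't connect to wifi": "restart the print spooler",
    "disk is full": "clear the temp directory",
    "page loads slowly": "enable response caching",
}

def recall_solution(problem: str) -> str:
    """Return the stored fix for the most similar known problem."""
    best = max(SOLVED, key=lambda seen: SequenceMatcher(None, problem, seen).ratio())
    return SOLVED[best]

print(recall_solution("the disk is almost full"))  # reuses the "disk is full" fix
```

A system like this looks clever on problems near its dataset and fails completely on anything outside it, which is the distinction the comment is drawing.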

          • El Barto
            link
            English
            2
            1 year ago

            Oh but those pattern recognition examples are about machine learning, right? Which I guess it’s a form of AI.

            • TWeaK
              link
              fedilink
              English
              1
              1 year ago

              Perhaps, but at best it’s still a very basic form of AI, and maybe shouldn’t even be called AI. Before things like ChatGPT, the term “AI” meant a full-blown intelligence that could pass a Turing test, and a Turing test is meant to prove actual artificial thought akin to human thought, something beyond following mere pre-programmed instructions. Machine learning doesn’t really learn anything; it’s just an algorithm that repeatedly measures and then iterates to achieve an ideal set of values for desired variables. It’s very clever, but it doesn’t really think.
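The "repeatedly measures and then iterates" loop is, concretely, loss minimisation. A minimal sketch of plain gradient descent; the toy data and learning rate are illustrative choices:

```python
# "Measure, then iterate": gradient descent fitting y = w*x + b to toy data
# generated with w = 2, b = 1. Each pass measures the mean-squared-error
# gradient over the data and nudges the parameters toward the ideal values.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 * x + 1.0 for x in xs]

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges to roughly w = 2, b = 1
```

No understanding of lines or apples anywhere: just error measurement and parameter nudging, which is the commenter's point.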

              • El Barto
                link
                English
                1
                1 year ago

                I have to disagree with you on the machine learning definition. Sure, the machine doesn’t think in those circumstances, but it’s definitely learning, if we go by your description of what it does.

                Learning is a broad concept, sure. But say a kid is learning to draw apples, and is then able to draw apples without help in the future: we could say that the kid achieved “that ideal set of values.”

                • TWeaK
                  link
                  fedilink
                  English
                  1
                  1 year ago

                  Machine learning is a simpler type of AI than an LLM like ChatGPT or an AI image generator. LLMs incorporate machine learning.

                  In terms of learning to draw something, after a child learns to draw an apple they will reliably draw an apple every time. If AI “learns” to draw an apple, it tends to come up with something subtly unrealistic, e.g. the apple might have multiple stalks. It fits the parameters it has learned about apples, parameters which were prescribed by its programming, but it hasn’t truly understood what an apple is. Furthermore, if you applied the parameters it learned about apples to something else, it might fail to understand it altogether.

                  A human being can think and interconnect their thoughts much more intricately. We go beyond our basic programming and often apply knowledge learned in one area to something completely different. Our understanding of things is much more expansive than AI’s. AI currently has the basic building blocks of understanding, in that it can record and recall knowledge, but it lacks the rich interconnections between different pieces and types of knowledge that human beings develop.

  • Bappity
    link
    English
    77
    1 year ago

    if A.I. dies out because capitalism I will wheeze

      • @[email protected]
        link
        fedilink
        English
        32
        1 year ago

        This is why I, as a user, am far more interested in open-source projects that can be run locally on pro/consumer hardware. All of these cloud services are headed down the crapper.

        My prediction is that in the next couple years we’ll see a move away from monolithic LLMs like ChatGPT and toward programs that integrate smaller, more specialized models. Apple and even Google are pushing for more locally-run AI, and designing their own silicon to run it. It’s faster, cheaper, and private. We will not be able to run something as big as ChatGPT on consumer hardware for decades (it takes hundreds of gigabytes of memory at minimum), but we can get a lot of the functionality with smaller, faster, cheaper models.

        • FaceDeer
          link
          fedilink
          9
          1 year ago

          Hundreds of gigabytes of memory in consumer PCs is not decades away. There are already motherboards that accept 128 GB.

          • @[email protected]
            link
            fedilink
            English
            6
            1 year ago

            You’re right, I shouldn’t say decades. It will be decades before that’s standard or common in the consumer space, but it could be possible to run on desktops within the next generation (~5 years). It’d just be very expensive.

            High-end consumer PCs can currently support 192GB, and that might increase to 256 within this generation when we get 64GB DDR5 modules. But we’d need 384 to run BLOOM, for instance. That requires a platform that supports more than 4 DIMMs, e.g. Intel Xeon or AMD Threadripper, or 96GB DIMMs (not yet available in the consumer space). Not sure when we’ll get consumer mobos that support that much.
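The memory figures in this thread are back-of-envelope: parameter count times bytes per parameter. For BLOOM's 176 billion parameters in fp16 that gives 352 GB of raw weights, consistent with the ~384 GB quoted above once overhead is included:

```python
# Back-of-envelope model memory: parameters x bytes per parameter.
# BLOOM has 176e9 parameters; activation and overhead memory are ignored,
# which is why real requirements come out somewhat higher.
def weights_gb(n_params: float, bytes_per_param: float) -> float:
    return n_params * bytes_per_param / 1e9

for name, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1), ("4-bit", 0.5)]:
    print(f"BLOOM-176B weights in {name}: {weights_gb(176e9, nbytes):.0f} GB")
```

The same arithmetic shows why quantization matters for local inference: int8 or 4-bit weights cut the requirement by 2-4x relative to fp16.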

        • @nodsocket
          link
          English
          3
          edit-2
          10 months ago

          deleted by creator

          • @[email protected]
            link
            fedilink
            English
            7
            1 year ago

            Technically I could upgrade my desktop to 192GB of memory (4x48). That’s still only about half the amount required for the largest BLOOM model, for instance.

            To go beyond that today, you’d need to move beyond the Intel Core or AMD Ryzen platforms and get something like a Xeon. At that point you’re spending 5 figures on hardware.

            I know you’re just joking, but figured I’d add context for anyone wondering.

          • P03 Locke
            link
            fedilink
            English
            2
            1 year ago

            Don’t worry about the RAM. Worry about the VRAM.

            • @nodsocket
              link
              English
              4
              edit-2
              10 months ago

              deleted by creator

      • @_number8_
        link
        English
        6
        1 year ago

        GPT already got way shittier from the version we all saw when it first came out to the heavily curated, walled garden version now in use

        • @Sanctus
          link
          English
          11
          1 year ago

          Flash games did not die out because people stopped playing them. The smart phone was created and this changed the entire landscape of small game development.

          • @o0joshua0o
            link
            English
            4
            1 year ago

            Steve Jobs killed Flash. It was premeditated.

              • @Sanctus
                link
                English
                3
                1 year ago

                It was atrocious compared to what we have now. But god fucking dammit I love those games. They mean more to me than a lot of AAA studios.

            • FaceDeer
              link
              fedilink
              2
              edit-2
              1 year ago

              If it had been killed without an adequate replacement (e.g. mobile gaming) then people wouldn’t have let Flash die. There are open-source Flash players.

        • @[email protected]
          link
          fedilink
          English
          5
          1 year ago

          Flash games didn’t die on their own; the technology was purposefully killed off via similar corporate moves to maximize profits.

          • kirklennon
            link
            fedilink
            8
            1 year ago

            It died because Safari for iPhone supported only open web standards. Flash was also the leading cause of crashes on the Mac because it was so poorly-written. It was also a huge security vulnerability and a leading vector for malware, and Adobe just straight up wasn’t able to get it running well on phones. Flash games were also designed with the assumption of a keyboard and mouse so many could never work right on touchscreen devices.

  • Margot Robbie
    link
    English
    65
    edit-2
    1 year ago

    Oh surprise surprise, looks like generative AI isn’t going to fulfill Silicon Valley and Hollywood studios’ dream of replacing artists, writers, and programmers with computers to maximize value for the poor, poor shareholders. Oh no!

    As I said here before, generative AIs are not the universal solution to everything that has ever existed that they are hyped up to be, but neither are they useless. At the end of the day, they are ultimately tools: complex, powerful, useful tools, but tools nonetheless. A good artist can create better work faster with the help of a diffusion model, the same way LLM code generation can help a good programmer finish their project faster and better. (I think.)

    All of these AI models are trained on data from everyone on the Internet, which is why I think it’s reasonable that everyone should have access to these generative AI models for the benefit of humanity, not profit, and not just those who took other people’s work for free to train the models. In other words, these generative AI models should belong to everyone.

    And here lies my distaste for Sam Altman: OpenAI was founded as a nonprofit for the benefit of humanity, but at the first chance of money he immediately started venture-capitalisting and put everything from GPT-2 onwards under lock and key. Now it looks like they are being crushed under the weight of their own operating costs while groups like Facebook and Stability catch up with actual open models. I will not be sad if “Open”AI fails.

    (For as much crap as I give Zuck for the other awful things they do, I do admire their commitment to open source.)

    I have to admit, playing with these generative models is pretty fun.

    • @[email protected]
      link
      fedilink
      English
      15
      1 year ago

      Hm. I think you should zoom out a bit and try to recognize that AI isn’t stagnant.

      Voice recognition and translation programs took years before they were appropriate for real-world applications. AI is likewise going to require years before it’s ready. But that time is coming. We haven’t reached a ‘ceiling’ for AI’s capabilities.

      • Margot Robbie
        link
        English
        10
        1 year ago

        Breakthrough technological development can usually be described as a sigmoid function (an S-shaped curve): while there is exponential progress in the beginning, it usually hits a climax, then slows down and plateaus until the next breakthrough.

        There are certain problems that are not possible to resolve with the current level of technology, for which development progress has slowed to a crawl, such as level 5 autonomous driving (by the way, better public transport is a far less complex solution). And I think we are hitting the limit of what transformer-based generative AI can do, since training has become more and more expensive for smaller and smaller gains, whereas hallucination seems to be an inherent problem that is ultimately unfixable with the current level of technology.
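The S-curve claim can be made concrete with the logistic function: growth is fastest at the midpoint and decays on both sides. The constants below are arbitrary:

```python
import math

# The sigmoid described above: f(t) = cap / (1 + e^(-k*(t - t0))).
# Its growth rate (the derivative) peaks at the midpoint t0 and then decays:
# near-exponential progress at first, then a plateau.
cap, k, t0 = 1.0, 1.0, 0.0

def logistic(t: float) -> float:
    return cap / (1.0 + math.exp(-k * (t - t0)))

def growth(t: float, h: float = 1e-5) -> float:
    """Central-difference estimate of the derivative."""
    return (logistic(t + h) - logistic(t - h)) / (2.0 * h)

ts = [i / 10.0 for i in range(-100, 101)]
peak = max(ts, key=growth)
print(peak)  # growth is fastest at the midpoint, t0 = 0.0
```

From inside the fast-growth phase, exponential and sigmoid progress look identical; the difference only shows once the plateau starts, which is the argument being made here.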

        • @[email protected]
          link
          fedilink
          English
          3
          1 year ago

          One thing that I think makes AI a possibility to deviate from that S model is that it can be honed against itself to magnify improvements. The better it gets the better the next gen can get.

          • @[email protected]
            link
            fedilink
            English
            9
            1 year ago

            That is a studied, documented, surefire way to very quickly destroy your model. It just does not work that way: if you train an LLM on the output of another LLM (or itself), it will implode.
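A toy caricature of why this happens: each generation fits only the previous generation's samples, so estimation error compounds. (Real LLM collapse is far more complex; this Gaussian resampling game is just the shape of the argument.)

```python
import random

# Toy "training on your own output": fit a Gaussian to data, sample from the
# fit, refit, repeat. Each generation sees only the previous generation's
# samples, so estimation error compounds and the learned distribution drifts
# away from the original data, typically losing variance ("diversity").
random.seed(0)

def fit(samples):
    mu = sum(samples) / len(samples)
    var = sum((s - mu) ** 2 for s in samples) / len(samples)
    return mu, var

mu, var = 0.0, 1.0                 # the "real" data distribution
variances = []
for generation in range(300):
    samples = [random.gauss(mu, var ** 0.5) for _ in range(20)]
    mu, var = fit(samples)         # the next generation trains on its own output
    variances.append(var)

print(f"variance after {len(variances)} generations: {variances[-1]:.4f}")
```

With small per-generation sample sizes the fitted variance tends to drift toward zero over many generations: the model ends up confidently reproducing a narrow sliver of the original distribution.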

            • @[email protected]
              link
              fedilink
              English
              3
              1 year ago

              Also, at best it’s a refinement, not a new sigmoid. So are new hardware/software designs for even faster dot products, or advancements in network topology within the current framework. T3 networks would be a new sigmoid, but so far all we know is why our stuff fundamentally doesn’t scale to the realm of AGI, and the wider industry (and even much of AI research in practice) absolutely doesn’t care, as there are still refinements to be had on the current sigmoid.

    • @batmangrundies
      link
      English
      10
      edit-2
      1 year ago

      There was a smallish VFX group here that was attached to a volume screen company. They employed something like 20 people I think? So pretty small.

      But the volume screen employed a guy who could do an adequate enough job with generative tools instead and the company folded. The larger VFX company they partner with had 200 employees, they recently cut to 50.

      In my field, a team leader in 2018 could earn about 180,000 AUD P/A. Now those jobs are advertised for 130,000 AUD, because new models can do ~80% of the analysis with human accuracy.

      AI is already folding companies and cutting jobs. It’s not in the news maybe, but as industries shift to compete with smaller firms leveraging AI it will cascade.

      I had/have my own company; we were attached to Metropolis, which unfortunately folded. I think that had a role to play in the job cuts as well. Luckily for me I wasn’t overleveraged, but I am packing up and changing careers for sure.

      • Margot Robbie
        link
        English
        10
        1 year ago

        Generative AI can make each individual artist/writer/programmer much more efficient at their job, but if the shareholders and executives get their way and only big companies have access to this technology, the increased productivity will instead be used to reduce headcount and make the remaining people do more work on tighter deadlines, instead of helping everyone work less, do better work, and be happier.

        This is the reason I think democratizing generative AI via local models is important, because as your example shows, it levels the playing field between small and big players, and helps people work less while making more cool stuff.

        • @batmangrundies
          link
          English
          8
          1 year ago

          A big problem in Aus is the industry culture. They don’t care about using technology to improve results. They only care about cutting costs, even if the final product doesn’t meet the previous standard.

          And we’ve seen that with VFX across the globe, the overall quality dropped drastically. Because studios play silly buggers to weasel out of paying VFX companies what they are due.

          From what I hear, even DNEG is in trouble, and were even before the strike.

          It’s a race to the bottom it seems.

          My honest hope for the film industry is likely the same as yours. That we have smaller productions with access to better post due to improvements in AI-driven compositing software and so on.

          But it’s likely that a role that was earning $$$ before is devalued significantly. And while I’m an unabashed anti-capitalist, I think a lot of folks misunderstand what this sudden downward pressure on income can do. Cost of living increasing while wages shrink is an awful combination.

          I’m 35, left a six figure job, folding my company and starting an electrician’s apprenticeship. To give you an idea around what my views about AI are. And of course this is as an Australian. We have a garbage white collar work culture anyway.

          I think there will be a net improvement. But I worry that others will fail to adapt quickly. Too many are writing off AI as this thing that already came and went, but the tools have only just landed, and we don’t yet have workflows that correctly implement and leverage them.

          • nickwitha_k (he/him)
            link
            fedilink
            English
            7
            1 year ago

            This is exactly why the SAG-AFTRA and WGA strikes have been vitally important, I think. Without pressure on industry, as we’ve seen across the board in the US for the last near half-century, fewer and fewer things that should improve lives are allowed to do so.

          • @AdrianTheFrog
            link
            English
            3
            1 year ago

            It’s crazy that with current economic systems, tools that make people work more efficiently have such a negative impact on society.

    • nickwitha_k (he/him)
      link
      fedilink
      English
      9
      1 year ago

      Oh surprise surprise, looks like generative AI isn’t going to fulfill Silicon Valley and Hollywood studios’ dream of replacing artist, writers, and programmers with computer to maximize value for the poor, poor shareholders. Oh no!

      It really is incredible how much this rhymes with the crypto hype. To be fair, the technology does actually have uses, but as someone in the latter category, after I saw it in action I quickly felt less worried about my job prospects.

      Fortunately, enough people in charge of staffing seem to have listened to people with technical knowledge to not make my earlier prediction (mass layoffs directly due to LLMs, followed by mass, panicked re-hirings when said LLMs ruined the business) come true. But the worry itself, along with the RTO pushes (not to mention the exploitation of contractors and H1B holders), really underscores how desperately the industry needs to get organized. Hopefully, what’s going on in the games industry with IATSE gets more traction and gets more of my colleagues on the same page, but that’s one area where I’m not as optimistic as I’d like to be. I’ll just have to cheer on SAG, WGA, and UAW for the time being.

      (For as much crap as I give Zuck for the other awful things they do, I do admire their commitment to open source.)

      Absolutely agreed. There’s a surprising amount of good in the open source world that has come from otherwise ethically devoid companies. Even Intuit donated the Argo project, which has evolved from a cool workflow tool to a toolkit with far more. There is always the danger of EEE, however, so, we’ve got to stay vigilant.

    • FLeX
      link
      English
      -14
      1 year ago

      A powerful tool maybe, but useless

      If your drill needs a nuclear plant and a monthly subscription to drill a hole, it’s a shitty tool.

      • @warbond
        link
        English
        8
        1 year ago

        Going to have to disagree with you there. I’ve gotten plenty of use out of ChatGPT in multiple scenarios. I find it difficult to imagine what exactly you think is useless about it, because it seems so indispensable to me at this point.

        • FLeX
          link
          English
          0
          1 year ago

          Indispensable, nothing less. lmao

          Have fun when they decide to multiply the price by 10 and you are too dependent to have an alternative, or when it becomes stupid or malevolent 👍

          • @warbond
            link
            English
            5
            1 year ago

            Sorry, I’m not sure I understand how that makes it useless. I get the feeling that you just want to feel smug, so if it makes you feel better go ahead, I guess.

            • FLeX
              link
              English
              0
              1 year ago

              Because it’s too fragile and not ready to be used at scale without causing massive damage.

              Not useless for now (even if I’d like to know more about the domains where it’s really “indispensable”), but as useless as a drill with a dead battery the day they decide to cut it off.

              I don’t find it future-proof, as impressive as some results are.

              • @[email protected]
                link
                fedilink
                English
                4
                1 year ago

                Nowadays LLMs can be run on consumer hardware, so the “dead battery” analogy falls short here too.

                • FLeX
                  link
                  English
                  2
                  1 year ago

                  With the same efficiency? I’m interested in an example.

                  Why is everyone using these crappy SaaS products, then?

          • @guacupado
            link
            English
            0
            1 year ago

            You sound like the people who thought credit cards would never replace cash.

            • FLeX
              link
              English
              3
              1 year ago

              And you sound like the people who thought cryptos would replace credit cards ;)

    • @Potatos_are_not_friends
      link
      English
      24
      edit-2
      1 year ago

      So much fucking this.

      Every cash grab right now around AI is just a frontend for a chatGPT API. And every investor who throws money at them is the mark. And now they’re crying a river.

    • Lemmington Bunnie
      link
      fedilink
      English
      6
      1 year ago

      It’s also helped me find the names of several books and films that have been rattling around in my mind, some for decades, which actually made me very happy because not remembering that sort of thing drives me a little mad.

      I’m stuck on two books that it can’t work out - both absolute trash pulp fiction, one that I stopped reading because it was so terrible and the other that was so bad but I actually wouldn’t mind reading again.

      Oh well, can’t have it all.

    • @trashgirlfriend
      link
      English
      2
      1 year ago

      IRL, people are doing some amazing things with generative AI, esp in 2D graphic art.

      Woah, shiny bland images that are a regurgitation of stolen artwork!!!

    • FLeX
      link
      English
      11 year ago

      people are doing

      No they ain’t doing shit, they just prompt

  • macallik
    link
    fedilink
    531 year ago

    What I don’t like about the article is that the phrasing ‘paying off’ can apply to making investors money OR having worthwhile use cases. AI has created plenty of use cases from language learning to code correction to companionship to brainstorming, etc.

    It seems ironic that a consumer-facing website is framing things from a skeptical “But is it making rich people richer?” perspective.

    • @xantoxis
      link
      English
      81 year ago

      In my case, I still want to know if it’s not making rich people richer, because a) fuck rich people, and b) I don’t want to buy into things that will disappear in a year when the hype dies down. As a “consumer” my purchasing decisions impact my life, and the actions of the wealthy affect that more than you’d like.

  • 👁️👄👁️
    link
    fedilink
    English
    531 year ago

    Silicon Valley, as usual, thinks these things are as big an invention as the internet, and is trying to get its money in first. AI was and still is a massive game changer, but nothing can live up to the hype that justifies the stupid amount of money they throw at these things. They didn’t learn their lesson after crypto or the “metaverse” either lol. I see AI being a tool, an incredibly useful one. That also means there are a lot of jobs it simply can’t do. It can’t replace artists, but artists can use it as a tool to work off of.

    • ???
      link
      English
      171 year ago

      So far I’ve only seen AI being used to fire employees that a company totally absolutely still needs but just doesn’t want to pay wages to. Companies are dumb as fuck, that’s my conclusion, but what else can you expect from organizations run by ladder-climbing CEO figures?

      • @[email protected]
        link
        fedilink
        English
        71 year ago

        There’s utility in keeping workers desperate, it depresses wages.

        Think about the coordinated tech layoffs that happened and now the tech industry has a labor surplus.

        Saves them money.

        • ???
          link
          English
          51 year ago

          The company that laid me off is paying me 4 months of wages without any work. This is the severance package I get in exchange for them not being forced to find me another suitable position in the company or “prove” that there are no tasks at the company that I could do with my existing skill set (that’s how the law works where I live).

          Was it really worth it?

          • @Chocrates
            link
            English
            21 year ago

            Are you in the states? I’m surprised you have those protections.

            • ???
              link
              English
              31 year ago

              No, Sweden

    • @guacupado
      link
      English
      91 year ago

      What I’m curious is what’s going to happen to all these companies that went all-in on building data centers when they weren’t doing it previously. Places like Meta and Amazon are huge enough that it’s always been a sound investment but with this hype there are other companies trying to set up server farms with no real prize in sight.

      • @[email protected]
        link
        fedilink
        English
        5
        edit-2
        1 year ago

        I mean A100s don’t exactly break that quickly and they’re specialised enough hardware so that they will continue to be able to rent them out. They’re also overpriced AF though which might cut into the bottom line but they’re probably not going to end up with a giant loss, I don’t really doubt they will break even. Opportunity costs are stellar, but OTOH there’s so much billionaire capital floating around screaming for opportunities to park itself in that macro-economically it’s negligible. Also I’m not exactly in the habit of crying about billionaires having a low ROI.

    • @Aceticon
      link
      English
      2
      edit-2
      1 year ago

      Ever since the Internet Bubble crashed around 2000, the business community in the Valley has been repeatedly trying to pump up a new bubble, starting with what they called Web 2.0, which was being hyped maybe even before the dust settled on that first Tech bubble’s crash.

      And if you think about it, it makes sense: the biggest fortunes ever made in Tech are still from companies which had their initial growth back then, such as Google, Amazon and even Paypal (Microsoft and Apple being maybe the most notable exceptions, both predating it).

  • @Smacks
    link
    English
    371 year ago

    AI is a tool to assist creators, not a full on replacement. Won’t be long until they start shoving ads into Bard and ChatGPT.

  • @RanchOnPancakes
    link
    English
    301 year ago

    That’s how this works: blow through VC money trying to “strike gold,” fail, change the model to “become profitable,” move on to the next scam.

  • @jimbo
    link
    English
    28
    edit-2
    11 months ago

    deleted by creator

  • @aesthelete
    link
    English
    27
    edit-2
    1 year ago

    You’d think at this point that investors would wait for a thing to fill out the question mark second step in their business plan before investing in it, but you’d be way, way wrong.

    Every new tech company comes to the investor panel with:

    1. build expensive-to-run new tool and give it away to end users for free

    2. ???

    3. profit!

    And somehow they keep falling for it.

    • Punkie
      link
      English
      201 year ago

      Because people assume all these investors know what they are doing. They don’t. Now, some investors are good, but they usually don’t go for shit like this. A lot of investors are VCs, rich upper-class twits who can afford to lose money, pure and simple. It’s like a bunch of lotto winners telling people they know how to pick numbers: making outside bets once in a while, getting lucky, and having selective bias.

      Plus, they have enough money to hedge their bets. For example, say you invest $1mil each in companies A, B, C, D, E, and F. All lose everything except A and B, which earn you $3mil each. You put in $6mil, got back $6mil. You broke even, tell people you knew what you were doing because you picked A and B, and conveniently never mention the rest. Then rich twits invest in what YOU invest in. So you invest in H, others invest in H because you did, which drives up the value. Now magnify this by a lot of investors and hundreds of letters, and it’s all like some weird game of luck and timing.

      But a snapshot in time leads to your “2. ???” point. Many know this is a confidence game based on luck, charm, and timing. Some just stumble through it, and others are fleeced, but who cares? Daddy’s got money.

      Money works different for rich people. It’s truly puzzling.

    • @[email protected]
      link
      fedilink
      English
      21 year ago

      They sure as hell are doing a good job of making me reliant on AI, though. Soon I’ll probably be paying $200 a month because I can’t remember how to do things without AI. I think that’s the plan, anyway.

      • @aesthelete
        link
        English
        21 year ago

        Soon I’ll probably be paying $200 a month because I can’t remember how to do things without AI.

        Sounds like a problem TBH, I’d get that checked out by a professional.

  • @alienanimals
    link
    English
    241 year ago

    AI isn’t paying off if you’re too dumb to figure out how to use the many amazing tools that have come about.

    • BolexForSoup
      link
      fedilink
      24
      edit-2
      1 year ago

      I was going to say… I use AI transcription tools for video editing, AI upscaling, and Resolve dropped an incredible AI green screen tool that makes keying effortless. I also started using AI to repair audio as of 6 months ago lol. I don’t think I’ve gone more than 48 hours without using an AI tool professionally.

      • NegativeLookBehind
        link
        fedilink
        321 year ago

        I wonder if “AI not paying off” in the context of this article actually means “Companies haven’t been able to lay off a bunch of their staff yet, like they’re hoping to do”

        • ripcord
          link
          fedilink
          31 year ago

          If anyone had read the article, they’d know what it meant, and it wasn’t either of the things you two mentioned.

      • Semi-Hemi-Demigod
        link
        fedilink
        81 year ago

        AI is a lot more like the Internet than it is like Facebook. It’s a set of techniques you can use to create tools. These are incredibly useful tools, but you’re not going to make Facebook money off of them because the techniques are pretty easy to replicate and the genie is out of the bottle.

        What the tech bros are looking for is a way to control access to AI so they can be a chokepoint. Like if Craftsman could charge for every single time you used their tool to make something. For one very recent example, see what happened to Unity. Creating chokepoints and then collecting rent is the modern corporate feudal strategy, but that won’t work if everybody with an AWS account and enough money can spin up an LLM and start training it.

      • @_number8_
        link
        English
        21 year ago

        AI stem splitting for songs is magical as well

        • BolexForSoup
          link
          fedilink
          3
          edit-2
          1 year ago

          Yes, I use it all the time. Adobe Audio Enhance. It’s the flagship feature of their upcoming podcast app, but you can use it in the browser currently. If you have an Adobe subscription, it doesn’t charge extra or anything. It’s only for spoken word, though, not music. If you throw music at it, you get some pretty wild stuff as it tries to create words out of the sounds.

          To further answer your question: yes, it is actually very good with highly compressed audio. I regularly feed it Zoom audio to make it more intelligible. Obviously there are always limits, but I assure you it can do more than you can manually 85% of the time, and by a large margin. My only frustration is that it’s a simple slider; you can’t really fine-tune it. But it’s still incredibly effective, and I often use it as a first pass on the original audio file before I even start editing.

        • BolexForSoup
          link
          fedilink
          1
          edit-2
          1 year ago

          Topaz Labs makes a decent one. You’ll need to do a lot of trial and error to kind of find your own favorite settings for baking, but as far as cost and efficacy go, there aren’t a lot better out right now.

          They do a watermark-free version you can test with. I think it also only lets you do a couple of minutes of video at a time. But frankly, it’s incredibly processor-intensive, so you will only want to test a 15-20s clip at a time anyway.

    • @[email protected]
      link
      fedilink
      English
      61 year ago

      The problem here is that AI in the media has become synonymous with generalized LLMs, while other “AI” applications have been in place for many years doing more specific things that have more obvious use cases that can be more easily commercialised.

    • FlumPHP
      link
      fedilink
      English
      121 year ago

      If you think businesses have sunk this much money and effort into AI and didn’t do a cost-benefit analysis that stretched out decades, you are being naive or disingenuous.

      Are you kidding? We literally just watched the same bubble inflate and burst with companies that rushed to get their piece of the Metaverse and NFT cash grab. I worked at a SaaS company that decided to add AI features because it was in the news and Azure offered it as a service. There was zero financial analysis done, just like for every other feature they added.

      I’m sure Microsoft has a plan since they invested heavily. But even Google is playing catch-up like they did with GCP.

      • @[email protected]
        link
        fedilink
        English
        31 year ago

        AI is actually useful.

        The metaverse and NFTs aren’t.

        Your analogy is not a 1:1 representation of the situation and only serves to distract from the topic at hand.

        • @jj4211
          link
          English
          61 year ago

          But there is a similarity, the hype pulls in all sorts of companies to blindly add buzzwords without even knowing how it might possibly apply to their product, even if it were the perfect realization of the ideal.

          Yes AI techniques obviously have utility. 90% of the spend is by companies that don’t even know what that utility might be. With that much noise, it’s hard to keep track of the value.

          • @[email protected]
            link
            fedilink
            English
            21 year ago

            But there is a similarity, the hype pulls in all sorts of companies to blindly add buzzwords without even knowing how it might possibly apply to their product

            Yes, I see what you are saying. I guess we can add ‘blockchain’ to that list, then.

    • @[email protected]
      link
      fedilink
      English
      71 year ago

      Do people really not understand that we are in the early stages of ai development?

      Yes. Top post in this thread is someone cheering that AI won’t replace people in hollywood.

      Just give it time. Remember how poor voice recognition and translation software was at first?

      • Margot Robbie
        link
        English
        51 year ago

        Top post in this thread is someone cheering that AI won’t replace people in hollywood.

        I really like how I’m just “someone” here now.

    • @[email protected]
      link
      fedilink
      English
      41 year ago

      Pretty much all improvements aren’t “better tech,” just “bigger tech.” Reducing their footprint is an unsolved problem (just as it has always been with neural networks, for decades).

      • @WhiteHawk
        link
        English
        21 year ago

        Optimization is a problem that cannot be “solved” by definition, but a lot of work is being done on it with some degree of success

  • @Potatos_are_not_friends
    link
    English
    211 year ago

    These are the same kind of people who go, “We spent money on Timmy’s clothes for over two years and it’s not paying off.”

    Bro, AI is an investment.

    • bluGill
      link
      fedilink
      51 year ago

      It is a risky investment. Taking care of your kid is something we have done enough times that we understand the risks and payoff, and most parents can make a reasonable prediction. (A few kids will “turn 21 in prison doing life without parole” - but most turn out okay and return love to their parents and attempt to improve society, though you may not agree with their definition of improving society.)

      I have no idea if the current faults with AI will be solved or not. That is a risk you are taking. It is useful for some things, but we don’t know how useful.

      • @iopq
        link
        English
        31 year ago

        There’s also the “not in prison, but mostly just lives at home and smokes weed” money pit of children

        My childhood friend ended up this way and I’ve given up on him

    • @btaf45
      link
      English
      -1
      edit-2
      1 year ago

      So far what I’ve seen from AI is that it lies and lies and lies. It lies about history. It lies about science. It lies about politics. It lies about case law. It lies about programming libraries. Maybe this will all be fixed some day, or maybe it will just get worse. Until then, the only thing I would trust it with is something for which there is no wrong answer.

      • @RagingRobot
        link
        English
        11 year ago

        I never ask it things I don’t know. I don’t think that’s really what it’s useful for. It’s really good at combining words, though, so it can write a better sentence than I could: better in the sense that it’s easier for others to understand my thoughts if I feed them in as input. Since they were my thoughts originally, I can spot the bullshit pretty fast.