• Margot Robbie
    link
    English
    65
    edit-2
    1 year ago

    Oh surprise surprise, looks like generative AI isn’t going to fulfill Silicon Valley and Hollywood studios’ dream of replacing artists, writers, and programmers with computers to maximize value for the poor, poor shareholders. Oh no!

    As I said here before, generative AIs are not the universal solution to everything that has ever existed, like they are hyped up to be, but neither are they useless. At the end of the day, they are ultimately tools. Complex, powerful, useful tools, but tools nonetheless. A good artist can create better work faster with the help of a diffusion model, the same way LLM code generation can help a good programmer finish their project faster and better. (I think.) All of these AI models are trained on data from everyone on the Internet, which is why I think it’s reasonable that everyone should have access to these generative AI models for the benefit of humanity and not for profit, and not just those who took other people’s work for free to train the models. In other words, these generative AI models should belong to everyone.

    And here lies my distaste for Sam Altman: OpenAI was founded as a nonprofit for the benefit of humanity, but at the first chance of money he immediately started venture capitalisting and put everything from GPT-2 onwards under lock and key for money. Now it looks like they are being crushed under the weight of their own operating costs while groups like Facebook and Stability catch up with actual open models. I will not be sad if "Open"AI fails.

    (For as much crap as I give Zuck for the other awful things they do, I do admire their commitment to open source.)

    I have to admit, playing with these generative models is pretty fun.

    • @[email protected]
      link
      fedilink
      English
      15
      1 year ago

      Hm. I think you should zoom out a bit and try to recognize that AI isn’t stagnant.

      Voice recognition and translation programs took years before they were appropriate for real-world applications. AI is also going to require years before it’s ready. But that time is coming. We haven’t reached a ‘ceiling’ for AI’s capabilities.

      • Margot Robbie
        link
        English
        10
        1 year ago

        Breakthrough technological development can usually be described as a sigmoid function (S-shaped curve): while there is exponential progress in the beginning, it usually hits a peak, then slows down and plateaus until the next breakthrough.
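
        As a loose illustration (toy numbers only, not a model of any real technology), the sigmoid shape can be sketched with a logistic function:

```python
import math

def sigmoid(t, midpoint=0.0, rate=1.0):
    """Logistic S-curve: slow start, steep middle, plateau at 1."""
    return 1.0 / (1.0 + math.exp(-rate * (t - midpoint)))

# Early on, each step brings a bigger gain than the last...
early = [sigmoid(t, midpoint=5) for t in range(0, 4)]
gains_early = [b - a for a, b in zip(early, early[1:])]

# ...but near the plateau, per-step gains shrink toward zero.
late = [sigmoid(t, midpoint=5) for t in range(8, 12)]
gains_late = [b - a for a, b in zip(late, late[1:])]

print(gains_early)  # increasing
print(gains_late)   # decreasing
```

        The "exponential beginning" and the "plateau" are two ends of the same curve; which end you are standing on is only obvious in hindsight.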

        There are certain problems that are not possible to solve with the current level of technology, for which development progress has slowed to a crawl, such as level 5 autonomous driving (by the way, better public transport is a way less complex solution), and I think we are hitting the limit of what transformer-based generative AI can do, since training has become more and more expensive for smaller and smaller gains, whereas hallucination seems to be an inherent problem that is ultimately unfixable at the current level of technology.

        • @[email protected]
          link
          fedilink
          English
          3
          1 year ago

          One thing that I think gives AI a chance to deviate from that S-curve is that it can be honed against itself to magnify improvements. The better it gets, the better the next generation can get.

          • @[email protected]
            link
            fedilink
            English
            9
            1 year ago

            That is a studied, documented, surefire way to very quickly destroy your model. It just does not work that way. If you train an LLM on the output of another LLM (or itself), it will implode.
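
            The collapse has a simple statistical analogue: repeatedly fit a distribution to samples drawn from the previous generation's fit, and the spread withers away. A toy Gaussian chain (a hand-rolled sketch of the effect, not an actual LLM experiment):

```python
import random
import statistics

def collapse_chain(generations=150, sample_size=25, seed=0):
    """Each generation is 'trained' (fitted) only on data sampled from
    the previous generation's model; the fitted spread drifts toward zero."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation zero: the real data distribution
    history = []
    for _ in range(generations):
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.mean(data)   # refit on purely synthetic data
        sigma = statistics.stdev(data)
        history.append(sigma)
    return history

# Across independent runs, the typical final spread is a small fraction
# of the original 1.0 -- the tails of the distribution vanish first.
finals = [collapse_chain(seed=s)[-1] for s in range(30)]
print(sorted(finals)[len(finals) // 2])
```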

            • @[email protected]
              link
              fedilink
              English
              3
              1 year ago

              Also, at best it’s a refinement, not a new sigmoid. So are new hardware/software designs for even faster dot products, or advancements in network topology within the current framework. T3 networks would be a new sigmoid, but so far all we know is why our stuff fundamentally doesn’t scale to the realm of AGI, and the wider industry (and even much of the AI research going on in practice) absolutely doesn’t care, as there are still refinements to be had on the current sigmoid.

    • @batmangrundies
      link
      English
      10
      edit-2
      1 year ago

      There was a smallish VFX group here that was attached to a volume screen company. They employed something like 20 people I think? So pretty small.

      But the volume screen company employed a guy who could do an adequate enough job with generative tools instead, and the VFX group folded. The larger VFX company they partnered with had 200 employees; they recently cut to 50.

      In my field, a team leader in 2018 could earn about 180,000 AUD p.a. Now those jobs are advertised at 130,000 AUD, because new models can do ~80% of the analysis with human-level accuracy.

      AI is already folding companies and cutting jobs. It’s not in the news maybe, but as industries shift to compete with smaller firms leveraging AI it will cascade.

      I had/have my own company, we were attached to Metropolis which unfortunately folded. I think that had a role to play in the job cuts as well. Luckily for me I wasn’t overleveraged, but I am packing up and changing careers for sure.

      • Margot Robbie
        link
        English
        10
        1 year ago

        Generative AI can make each individual artist/writer/programmer much more efficient at their job, but if the shareholders and executives get their way and only big companies have access to this technology, this increased productivity will instead be used to reduce headcount and make the remaining people do more work on a tighter deadline, instead of helping everyone work less, do better work, and be happier.

        This is the reason I think democratizing generative AI via local models is important, because as your example shows, it levels the playing field between small and big players, and helps people work less while making more cool stuff.

        • @batmangrundies
          link
          English
          8
          1 year ago

          A big problem in Aus is the industry culture. They don’t care about using technology to improve results. They only care about cutting costs, even if the final product doesn’t meet the previous standard.

          And we’ve seen that with VFX across the globe, the overall quality dropped drastically. Because studios play silly buggers to weasel out of paying VFX companies what they are due.

          From what I hear, even DNEG is in trouble, and were even before the strike.

          It’s a race to the bottom it seems.

          My honest hope for the film industry is likely the same as yours. That we have smaller productions with access to better post due to improvements in AI-driven compositing software and so on.

          But it’s likely that a role that was earning $$$ before is devalued significantly. And while I’m an unabashed anti-capitalist, I think a lot of folks misunderstand what this sudden downward pressure on income can do. Cost of living increasing while wages shrink is an awful combination.

          I’m 35, left a six figure job, folding my company and starting an electrician’s apprenticeship. To give you an idea around what my views about AI are. And of course this is as an Australian. We have a garbage white collar work culture anyway.

          I think there will be a net improvement. But I worry that others will fail to adapt quickly. Too many are writing off AI as this thing that already came and went, but the tools have only just landed, and we don’t yet have workflows that correctly implement and leverage them.

          • nickwitha_k (he/him)
            link
            fedilink
            English
            7
            1 year ago

            This is exactly why the SAG-AFTRA and WGA strikes have been vitally important, I think. Without pressure on industry, as we’ve seen across the board in the US for the last near half-century, fewer and fewer things that should improve lives are allowed to do so.

          • @AdrianTheFrog
            link
            English
            3
            1 year ago

            It’s crazy that with current economic systems, tools that make people work more efficiently have such a negative impact on society.

    • nickwitha_k (he/him)
      link
      fedilink
      English
      9
      1 year ago

      Oh surprise surprise, looks like generative AI isn’t going to fulfill Silicon Valley and Hollywood studios’ dream of replacing artists, writers, and programmers with computers to maximize value for the poor, poor shareholders. Oh no!

      It really is incredible how much this rhymes with the crypto hype. To be fair, the technology does actually have uses but, as someone in the latter category, after I saw it in action, I quickly felt less worried about my job prospects.

      Fortunately, enough people in charge of staffing seem to have listened to people with technical knowledge, so my earlier prediction (mass layoffs directly due to LLMs, followed by mass, panicked re-hirings when said LLMs ruined the business) hasn’t come true. But the worry itself, along with the RTO pushes (not to mention exploitation of contractors and H1B holders), really underscores how desperately the industry needs to get organized. Hopefully, what’s going on in the games industry with IATSE gets more traction and gets more of my colleagues on the same page, but that’s one area where I’m not as optimistic as I’d like to be - I’ll just have to cheer on SAG, WGA, and UAW for the time being.

      (For as much crap as I give Zuck for the other awful things they do, I do admire their commitment to open source.)

      Absolutely agreed. There’s a surprising amount of good in the open source world that has come from otherwise ethically devoid companies. Even Intuit donated the Argo project, which has evolved from a cool workflow tool to a toolkit with far more. There is always the danger of EEE, however, so, we’ve got to stay vigilant.

    • FLeX
      link
      English
      -14
      1 year ago

      A powerful tool maybe, but useless

      If your drill needs a nuclear plant and a monthly subscription to drill a hole, it’s a shitty tool.

      • @warbond
        link
        English
        8
        1 year ago

        Going to have to disagree with you there. I’ve gotten plenty of use out of ChatGPT in multiple scenarios. I find it difficult to imagine what exactly you think is useless about it, because it seems so indispensable to me at this point.

        • FLeX
          link
          English
          0
          1 year ago

          Indispensable, nothing less. lmao

          Have fun when they decide to multiply the price by 10 and you are too dependent to have an alternative, or when it becomes stupid or malevolent 👍

          • @warbond
            link
            English
            5
            1 year ago

            Sorry, I’m not sure I understand how that makes it useless. I get the feeling that you just want to feel smug, so if it makes you feel better go ahead, I guess.

            • FLeX
              link
              English
              0
              1 year ago

              Because it’s too fragile and not ready to be used at scale without causing massive damage.

              Not useless for now (even if I’d like to know more about the domains where it’s really “indispensable”), but as useless as a drill with a dead battery the day they decide to cut it.

              I don’t find it future-proof, as impressive as some results are

              • @[email protected]
                link
                fedilink
                English
                4
                1 year ago

                Nowadays LLMs can be run on consumer hardware, so the “dead battery” analogy falls short here too.

                • FLeX
                  link
                  English
                  2
                  1 year ago

                  With the same efficiency? I’m interested in an example.

                  Why is everyone using these crappy SaaS offerings then?

                  • @AdrianTheFrog
                    link
                    English
                    3
                    edit-2
                    1 year ago

                    Llama 2 and its derivatives, mostly. A simple local UI is available here.

                    Not as good as ChatGPT 3.5 in my experience. It just kinda falls apart on anything too complex, and is a lot more likely to get things wrong.

                    I tried it out using the ‘Open-Orca/OpenOrcaxOpenChat-Preview2-13B’ 4-bit 32g model. It’s surprisingly fast to generate; it seems significantly faster than ChatGPT on my 3060 (with ExLlama).

                    There are also some models tuned specifically to actually answer your requests instead of the ‘As an AI language model’ kind of stuff.

                    Edit: just tried a newer model and it’s a lot better. (dolphin-2.1-mistral-7b)

                  • @[email protected]
                    link
                    fedilink
                    English
                    1
                    1 year ago

                    For the same reason SaaS is popular in general: yes, you could get a VPS, install all the needed software on it, and keep it up to date, or you could pay a company to do all that for you.

          • @guacupado
            link
            English
            0
            1 year ago

            You sound like the people who thought credit cards would never replace cash.

            • FLeX
              link
              English
              3
              1 year ago

              And you sound like the people who thought cryptos would replace credit cards ;)