The new global study, conducted in partnership with The Upwork Research Institute, surveyed 2,500 C-suite executives, full-time employees, and freelancers worldwide. The results show that optimistic expectations about AI’s impact are not aligning with the reality many employees face: the study identifies a disconnect between managers’ high expectations and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it is also hampering productivity and contributing to employee burnout.

    • @Hackworth

      I dunno, mishandling of AI can be worse than avoiding it entirely. There’s a middle manager here that runs everything her direct-report copywriter sends through ChatGPT, then sends the response back as a revision. She doesn’t add any context to the prompt, say who the audience is, or use the custom GPT that I made and shared. That copywriter is definitely hampered, but it’s not by AI, really, just run-of-the-mill manager PEBKAC.

    • @cikano

      What have you actually replaced/automated with AI?

      • @Hackworth

        Voiceover recording, noise reduction, rotoscoping, motion tracking, matte painting, transcription - and there’s a clear path forward to automate rough cuts and integrate all that with digital asset management. I used to do all of those things manually/practically.
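
        Transcription is the easiest of those to show in a few lines; a minimal sketch using OpenAI’s open-source whisper package (the model size and file names are placeholders, not necessarily what I run):

        ```python
        import whisper

        # Assumes openai-whisper is installed and ffmpeg is on PATH.
        model = whisper.load_model("medium")            # placeholder model size
        result = model.transcribe("interview_a01.wav")  # placeholder file

        # Full transcript, plus per-segment timestamps that could feed
        # rough-cut markers or a digital asset management ingest later.
        print(result["text"])
        for seg in result["segments"]:
            print(f"{seg['start']:7.1f}s  {seg['text'].strip()}")
        ```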

        e: I imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.

        • @aesthelete

          “imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.”

          They’re right IMO. Practical effects still look and age better than (IMO very obvious) digital effects. Oh, and digital de-aging IMO looks like crap.

          But, this will always remain an opinion battle anyway, because quantifying “artistry” is in and of itself a fool’s errand.

          • @Hackworth

            Digital video, not digital effects - I mean the guys I went to film school with that refused to touch digital videography.

        • @[email protected]

          All the models I’ve used that do TTS/RVC and rotoscoping have definitely not produced professional results.

          • @Hackworth

            What are you using? Cause if you’re a professional, and this is your experience, I’d think you’d want to ask me what I’m using.

            • @[email protected]

              Coqui for TTS, RVC UI for matching the TTS to the actor’s intonation, and DWPose -> controlnet applied to SDXL for rotoscoping
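
              For the SDXL + controlnet half of that, a minimal sketch with diffusers, just to show the shape of it; it swaps DWPose for controlnet_aux’s OpenposeDetector to keep the example self-contained, and the checkpoint IDs are common public ones rather than whatever that setup actually runs:

              ```python
              import torch
              from PIL import Image
              from controlnet_aux import OpenposeDetector
              from diffusers import (ControlNetModel,
                                     StableDiffusionXLControlNetImg2ImgPipeline)

              # Pose extraction (stand-in for the DWPose step above).
              pose_detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

              controlnet = ControlNetModel.from_pretrained(
                  "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16)
              pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
                  "stabilityai/stable-diffusion-xl-base-1.0",
                  controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

              frame = Image.open("frame_0001.png").convert("RGB")
              pose = pose_detector(frame)

              # Low strength keeps the original frame's composition; the pose map
              # keeps limbs from smearing between frames.
              out = pipe(prompt="rotoscoped animation style, flat colors",
                         image=frame, control_image=pose,
                         strength=0.45, controlnet_conditioning_scale=0.8,
                         num_inference_steps=30).images[0]
              out.save("frame_0001_roto.png")
              ```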

              • @Hackworth

                Full open source, nice! I respect the effort that went into that implementation. I pretty much exclusively use 11 Labs for TTS/RVC, turn up the style, turn down the stability, generate a few, and pick the best. I do find that longer generations tend to lose the thread, so it’s better to batch smaller script segments.
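
                That “turn up the style, turn down the stability, generate a few” loop over short segments is roughly this, hitting the ElevenLabs REST API directly with requests; the key, voice ID, model ID, and setting values below are placeholders, not a recommendation:

                ```python
                import requests

                API_KEY = "YOUR_ELEVENLABS_KEY"   # placeholder
                VOICE_ID = "your-voice-id"        # placeholder
                URL = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

                segments = ["First short chunk of the script.",
                            "Second short chunk of the script."]

                # A few takes per segment; pick the best by ear afterwards.
                for i, text in enumerate(segments):
                    for take in range(3):
                        resp = requests.post(
                            URL,
                            headers={"xi-api-key": API_KEY},
                            json={"text": text,
                                  "model_id": "eleven_multilingual_v2",  # placeholder
                                  "voice_settings": {"stability": 0.3,
                                                     "similarity_boost": 0.75,
                                                     "style": 0.7}})
                        resp.raise_for_status()
                        with open(f"seg{i:02d}_take{take}.mp3", "wb") as f:
                            f.write(resp.content)
                ```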

                Unless I misunderstand ya, your controlnet setup is for what would be rigging and animation rather than roto. I do agree that while I enjoy the outputs of pretty much all the automated animators, they’re not ready for prime time yet. Although I’m about to dive into KREA’s new key framing feature and see if that’s any better for that use case.

                • @[email protected]

                  I was never able to get appreciably better results from 11 Labs than from some (minorly) trained RVC model :/ The long-scripts problem is something pretty much any text-to-something model suffers from. The longer the context, the lower the cohesion ends up.

                  I do rotoscoping with SDXL i2i and controlnet posing together. Without the controlnet, I found it tends to smear. Do you just do image2image?

                  • @Hackworth

                    The voice library 11labs added includes some really reliable and expressive models. I’ve only trained a few voice clones, but I find them totally usable for swapping out short lines to avoid having to bring a subject back in to record. I’ll fabricate a sentence or two, but for longer form stuff, I only use AI for the rough cuts. Then I’ll practically record as a last step, once everything’s gone through revision cycles. The “generate a few and chop em together” method is fine for short clips, but becomes tedious for longer stuff.

                    Funnily enough, when I say roto, I really just mean tracing the subject to remove it from the background. Background removal’s so baked into things now, I dunno if people even think of it as roto. But I mostly still prefer the Adobe solutions for this - Roto Brush in After Effects, for the AI/manual collaboration. As for roto in the A Scanner Darkly sense, I’ve played with a few of the video-to-video models, but mostly as a lark for fluff B-roll.
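
                    For that tracing-the-subject-out-of-the-background sense of roto, a minimal sketch assuming the open-source rembg package (paths are placeholders); Roto Brush is still the better call wherever manual cleanup matters:

                    ```python
                    from pathlib import Path
                    from rembg import remove, new_session
                    from PIL import Image

                    # Reuse one model session instead of reloading it per frame.
                    session = new_session("u2net")
                    Path("mattes").mkdir(exist_ok=True)

                    for frame_path in sorted(Path("frames").glob("*.png")):
                        frame = Image.open(frame_path)
                        cutout = remove(frame, session=session)  # RGBA, background cleared
                        cutout.save(Path("mattes") / frame_path.name)
                    ```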

    • FaceDeer

      A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.

    • @Cryophilia

      This may come as a shock to you, but the vast majority of the world does not work in tech.

      • @[email protected]

        I’m not working in tech either. Everyone who relies on a computer can use this.

        Also, medicine and radiology are two areas that will benefit from this - especially the patients.