cross-posted from: https://lemmy.ca/post/37011397

[email protected]

The popular open-source VLC video player was demonstrated on the floor of CES 2025 with automatic AI subtitling and translation, generated locally and offline in real time. Parent organization VideoLAN shared a video on Tuesday in which president Jean-Baptiste Kempf shows off the new feature, which uses open-source AI models to generate subtitles for videos in several languages.

  • @renzev

    This sounds like a great thing for deaf people and just in general, but I don’t think AI will ever replace anime fansub makers who have no problem throwing a wall of text on screen for a split second just to explain an obscure untranslatable pun.

    • @FordBeeblebrox

      They are like the * in any Terry Pratchett (GNU) novel; sometimes a funny joke can have a little more spice added to make it even funnier.

    • @FMT99

      Translator’s note: keikaku means plan

  • @m8052

    What’s important is that this is running on your machine locally, offline, without any cloud services. It runs directly inside the executable.

    YES, thank you JB
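
A rough sketch of what “runs locally” means in practice: Whisper-family speech-to-text models are small enough to run on consumer hardware with no network access at inference time. VLC’s actual integration is native code (the demo reportedly builds on the open-source whisper.cpp); the Python sketch below uses the openai-whisper package purely for illustration, and the file names are placeholders.

```python
# Illustrative sketch: generate an .srt locally with an open-source Whisper
# model. Requires ffmpeg on PATH for audio decoding; the model weights are
# downloaded once and cached, after which inference is fully offline.
import whisper


def srt_time(seconds: float) -> str:
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"


model = whisper.load_model("base")       # small model, runs on CPU
result = model.transcribe("video.mkv")   # placeholder path; no cloud calls

with open("video.srt", "w", encoding="utf-8") as out:
    for i, seg in enumerate(result["segments"], start=1):
        out.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        out.write(seg["text"].strip() + "\n\n")
```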

  • TheRealKuni

    And yet they turned down having thumbnails for seeking because it would be too resource intensive. 😐

    • @DreamlandLividity

      I mean, it would. For example, Jellyfin implements it, but it does so by extracting the pictures ahead of time and saving them. It takes days to do this for my library.
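
For context, pre-extracting seek thumbnails means decoding every video in full, which is why it can take days across a large library. A hedged sketch of that kind of job, shelling out to ffmpeg from Python; the paths and the 10-second interval are arbitrary choices for illustration, not Jellyfin’s actual settings:

```python
# Sketch of trickplay-style thumbnail pre-extraction: decode the video and
# save one scaled-down frame every 10 seconds. Decoding the whole file is
# what makes this slow for a big library. Requires ffmpeg on PATH.
import os
import subprocess

os.makedirs("thumbs", exist_ok=True)

subprocess.run(
    [
        "ffmpeg",
        "-i", "movie.mkv",               # placeholder input file
        "-vf", "fps=1/10,scale=320:-1",  # 1 frame per 10 s, 320 px wide
        "thumbs/thumb_%05d.jpg",         # numbered output thumbnails
    ],
    check=True,
)
```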

    • @serenissi

      It is useful for internet streams, though, not really for local or LAN video.

  • Phoenixz

    As VLC is open source, can we expect this technology to also be available for, say, Jellyfin, so that I can once and for all have subtitles done right?

    Edit: I think it’s great that VLC has this, but it sounds like something many other apps could benefit from.

      • @Eagle0110

        Have there been any estimated minimum system requirements for this yet, given that it runs locally?

    • @GreenKnight23

      Crunchyroll is currently using AI subtitles. It’s obvious, because when someone says “mothra. Funky…” it captions “mother fucker”.

      • @Alexstarfire

        That explains why their subtitles have seemed worse to me lately. Every now and then I see something obviously wrong and wonder how it got by anyone who looked at it. Now I know why. No one looked at it.

        • @GreenKnight23

          My wife and I love laughing at the dumbass mistakes it makes.

          Some character’s name is Asura Halls?

          Instead of “That’s Asura Halls!” you get “That asshole!”

          But if I were actually hearing impaired, I’d be really pissed that I’m being treated as second class even though Sony still took my money like everyone else.

      • @dance_ninja

        Malevolent Kitchen Intensifies

      • @NOT_RICK

        ( ͡° ͜ʖ ͡°)

    • @asbestos

      Ooooh I like this

  • @asbestos

    Finally, some good fucking AI

    • @shyguyblue

      I was just thinking, this is exactly what AI should be used for. Pattern recognition, full stop.

      • snooggums

        Yup, and if it isn’t perfect that is ok as long as it is close enough.

        Like getting name spellings wrong or mixing homophones is fine because it isn’t trying to be factually accurate.

        • @[email protected]

          I’d like to see this fix the most annoying part about subtitles: timing. Find a transcript or any existing subs on the Internet and have the AI align them with the audio properly.
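
One plausible way to do that locally, sketched below under stated assumptions: transcribe with word-level timestamps (openai-whisper supports a word_timestamps flag), then fuzzy-match each existing subtitle line against the recognized word stream. This is a hypothetical approach, not anything VLC has announced; align_line and the file names are made up for illustration, and the no-argument find_longest_match() needs Python 3.9+.

```python
# Hypothetical re-timing sketch: find where each existing subtitle line
# starts in the audio by matching it against locally transcribed words.
import difflib
import whisper

model = whisper.load_model("base")
result = model.transcribe("episode.mkv", word_timestamps=True)

# Flatten the transcription into (word, start_time) pairs.
words = [
    (w["word"].strip().lower(), w["start"])
    for seg in result["segments"]
    for w in seg["words"]
]
stream = [w for w, _ in words]


def align_line(line: str):
    """Return the timestamp where this subtitle line most likely starts."""
    tokens = [t.lower() for t in line.split()]
    m = difflib.SequenceMatcher(None, stream, tokens).find_longest_match()
    return words[m.a][1] if m.size else None


print(align_line("Translator's note: keikaku means plan"))  # placeholder line
```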

        • TJA!

          Problem is that now people will say that they don’t have to create accurate subtitles because VLC is doing the job for them.

          Accessibility might suffer from that, because all subtitles will now just be “good enough”.

          • snooggums

            Regular old live-broadcast closed captioning is pretty much ‘good enough’, and that is the standard I’m comparing against.

            Actual subtitles created ahead of time should be perfect, because there is time to double-check them.

          • @[email protected]

            Honestly, though? If your audio is even half decent you’ll get like 95% accuracy. Considering a lot of media just wouldn’t have anything otherwise, that is a pretty fair trade-off to me.

            • @[email protected]

              From experience, AI translation is still garbage, especially for languages like Chinese, Japanese, and Korean, but if it only subtitles in the original language, such as creating English subtitles for English audio, then it is probably fine.

              • @[email protected]

                That’s probably more due to lack of training than anything else. Existing models are mostly made by American companies and trained on English-language material. Naturally, the further you get from that training data, the worse the results.

                • @[email protected]

                  It is not the lack of training material that is the issue; it doesn’t understand context and cultural references. Someone commented here that Crunchyroll’s AI subtitles translated “Asura Halls”, a character’s name, to “asshole”.

          • @[email protected]

            I have a feeling that if you care enough about subtitles you’re going to look for good ones, instead of using “OK” AI subs.

          • @shyguyblue

            I imagine it would be not-exactly-simple-but-not-complicated to add a “threshold” feature: if the AI is less than X% certain, it can request human clarification (see the sketch after this exchange).

            Edit: Derp. I forgot about the “real time” part. Still, as others have said, even a single botched word would usually work well enough with context.

            • snooggums

              That defeats the purpose of doing it in real time as it would introduce a delay.

              • @shyguyblue

                Derp. You’re right, I’ve added an edit to my comment.
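
As it happens, Whisper-style models already expose rough per-segment confidence signals (openai-whisper reports avg_logprob and no_speech_prob), so a threshold feature could flag shaky segments for later review without blocking real-time display. A sketch under those assumptions; the cutoff values are arbitrary:

```python
# Sketch of a confidence threshold on local transcription output: print
# every segment, flagging ones the model itself seems unsure about.
import whisper

model = whisper.load_model("base")
result = model.transcribe("video.mkv")   # placeholder path

for seg in result["segments"]:
    shaky = seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.5
    flag = "  [REVIEW]" if shaky else ""
    print(f"{seg['start']:7.2f}s  {seg['text'].strip()}{flag}")
```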

    • @[email protected]

      Yeah, it’s pretty wonderful to see how far auto-generated transcription/captioning has come over the last couple of years. A wonderful victory for many communities with various disabilities.

  • m-p{3}

    Now I want some AR glasses that display subtitles above someone’s head when they talk, à la Cyberpunk, and also auto-translate. Of course, it has to be done entirely locally.

    • @[email protected]

      I guess we have most of the ingredients to make this happen. Software-wise we’re there; hardware-wise I’m still waiting for AR glasses I can replace my normal glasses with (I wear mine 24/7 except for sleep). I’d accept having to carry a spare in a charging case and swapping them out once a day, but other than that I want them close enough in weight and comfort to my regular glasses, just giving me AR: overlaid GPS, notifications, etc. Instant translation with subtitles would indeed be a function that I could see having a massive impact on civilization, tbh.

      • @[email protected]

        It’d be incredible for deaf people to be able to read captions for spoken conversations, and to have the other person’s glasses translate from ASL to English.

        Honestly, I’d be a bit shocked if AI ASL -> English doesn’t exist already; there’s so much training data available, since the Deaf community loves video for obvious reasons.

      • @[email protected]

        I think we’re closer with hardware than software. The Xreal/Rokid category of HMDs is comfortable enough to wear all day, and I don’t mind a cable running from behind my ear under a clothes layer to a phone or mini PC in my pocket. Unfortunately you still need to BYO cameras to get the overlays appearing in the correct points in space, but cameras are cheap; I suspect these glasses will grow some cameras in the next couple of iterations.

      • m-p{3}

        I believe you can put prescription lenses in most AR glasses out there, but I suppose the battery is a concern…

        I’m in the same boat, I gotta wear my glasses 24/7.

  • @[email protected]

    I hope Mozilla can benefit from a good local translation engine that could come out of this as well.

        • @[email protected]

          And it takes forever. I’m using the TWP plugin for Firefox (which uses external resources, configurable to Google, Bing, and Yandex Translate respectively), and it’s near-instantaneous. The local one from Mozilla often takes 30 seconds, and sometimes hangs until I refresh the page.

  • @[email protected]

    Haven’t watched the video yet, but it makes a lot of sense that you could train an AI on already-subtitled movies and their audio. There are times when official subtitles paraphrase the speech to make it easier to read quickly, so I wonder how that would work. There’s also just a lot of voice recognition everywhere nowadays, so maybe that’s all they need?