• NemeskiOP
    58 points · 19 hours ago

    Hopefully better than YouTube’s; those are often pretty bad, especially for non-English videos.

    • moosetwin
      9 points · 12 hours ago

      YouTube’s removal of community captions was the first time I really started to hate YouTube’s management. They removed an accessibility feature for no good reason, making my experience significantly worse. I still haven’t found a replacement for it (at least, one that actually works).

      • @[email protected]
        1 point · 6 hours ago

        Same here. It kick-started my hatred of YouTube, and they continued to make poor decision after poor decision.

      • moosetwin
        8 points · 12 hours ago

        And if you are forced to use the auto-generated ones, remember: no [__] swearing either! As we all know, disabled people are small children who need to be coddled!

    • wazzupdog (they/them)
      19 points · 18 hours ago

      They’re awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone except those with accents similar to the team that developed the auto-captioning), it makes egregious errors. It’s exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and North Eastern US accents. In my experience “using” it, I find it nigh unusable.

    • @[email protected]
      8 points · 18 hours ago

      I’ve been working on something similar-ish on and off.

      There are three (good) solutions involving open-source models that I came across:

      • KenLM/STT
      • DeepSpeech
      • Vosk

      Vosk has the best models, but they are large. You can’t use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices because of the memory requirements. So my guess is that whatever VLC provides will probably suck to an extent, because it will have to be fast/lightweight enough to run on the user’s device.

      What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is usually used).
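
      For reference, asking vosk-api for alternatives looks roughly like this with the Python bindings (a minimal sketch; the model path, the 16 kHz mono WAV input, and the exact result layout here are assumptions on my part, check the vosk examples shipped with your version):

      ```python
      import json
      import sys
      import wave

      from vosk import Model, KaldiRecognizer

      model = Model("model")               # path to an unpacked Vosk model directory
      rec = KaldiRecognizer(model, 16000)  # expects 16 kHz mono PCM audio
      rec.SetMaxAlternatives(10)           # ask for up to 10 alternative transcriptions

      with wave.open(sys.argv[1], "rb") as wf:
          while True:
              data = wf.readframes(4000)
              if len(data) == 0:
                  break
              if rec.AcceptWaveform(data):
                  # With SetMaxAlternatives, the result carries an "alternatives"
                  # list instead of a single "text" field.
                  for alt in json.loads(rec.Result()).get("alternatives", []):
                      print(alt.get("confidence"), alt["text"])

      for alt in json.loads(rec.FinalResult()).get("alternatives", []):
          print(alt.get("confidence"), alt["text"])
      ```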

      One core idea in my tool is to combine all alternatives into one text. So suppose the model predicts text to be either “… still he …” or “… silly …”. My tool can give you “… (still he|silly) …” instead of 50/50 chancing it.
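
      Something in this spirit (just an illustration using a word-level diff of two alternatives; the function name and the difflib approach are my own sketch, not the tool’s actual code):

      ```python
      from difflib import SequenceMatcher

      def merge_alternatives(a: str, b: str) -> str:
          """Keep words both alternatives agree on; mark disagreements as "(a|b)"."""
          a_words, b_words = a.split(), b.split()
          out = []
          for op, a0, a1, b0, b1 in SequenceMatcher(None, a_words, b_words).get_opcodes():
              if op == "equal":
                  out.extend(a_words[a0:a1])
              else:
                  left = " ".join(a_words[a0:a1])
                  right = " ".join(b_words[b0:b1])
                  out.append(f"({left}|{right})")
          return " ".join(out)

      print(merge_alternatives("i think still he went home", "i think silly went home"))
      # -> i think (still he|silly) went home
      ```

      The same idea extends to folding in all 10 alternatives, pairwise or all at once.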

      • @[email protected]
        6 points · 18 hours ago

        I love the approach you’re taking! So many times, even in shows with official subs, they’re wrong because of homonyms, and I’d really appreciate a hedged transcript.

    • themeatbridge
      2 points · 18 hours ago

      That would depend on the LLM and the data used to train it.

        • themeatbridge
          1 point · 17 hours ago

          I didn’t read the article, but I would have assumed that the AI was using predictive text to guess at the next word. Speech recognition is already pretty good, but it often misses contextual cues that an LLM would be good at spotting. Like, “The famous French impressionist painter mayonnaise…”