This rootless Python script extracts Windows Recall’s screenshots and its SQLite database of OCR’d text, and lets you search both.
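A minimal sketch of the kind of query such a script might run, assuming a hypothetical `captures` table with `timestamp`, `window_title`, and `ocr_text` columns — the real database’s table and column names may differ:

```python
import sqlite3

# Hypothetical schema: table and column names here are illustrative,
# not Recall's actual ones.
def search_captures(conn, term):
    """Return (timestamp, window_title, ocr_text) rows whose OCR'd text matches."""
    cur = conn.execute(
        "SELECT timestamp, window_title, ocr_text FROM captures "
        "WHERE ocr_text LIKE ? ORDER BY timestamp",
        (f"%{term}%",),
    )
    return cur.fetchall()

# Demo with an in-memory stand-in for the extracted database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE captures (timestamp TEXT, window_title TEXT, ocr_text TEXT)"
)
conn.executemany(
    "INSERT INTO captures VALUES (?, ?, ?)",
    [
        ("2024-06-01T10:00", "Notepad", "meeting notes: budget review"),
        ("2024-06-01T10:05", "Browser", "cat pictures"),
    ],
)
print(search_captures(conn, "budget"))
```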

  • @[email protected] · 70 · 7 months ago

    Hilarious to me that it OCRs the text. The text is generated by the computer. It’s almost like when Lt. Cmdr. Data wants to get information from the computer database, so he tells the computer to display it and just keeps increasing the speed — there are way more efficient means of getting information from A to B than displaying it, imaging it, and running it through image processing!

    I totally get that this is what makes sense, and it’s independent of the method/library used for generating text, but still…the computer “knows” what it’s displaying (except for images of text), and yet it has to screenshot and read it back.

    • @Wispy2891 · 28 · 7 months ago

      The same thing happens on Android, for some reason.

      Like 5–8 years ago, the Google Assistant app could select and copy text from any app when invoked — I think it was called “Now on Tap”. Then, because they’re Google and contractually obligated to remove features after some time, they removed it from the Google app and integrated it into the Pixel app switcher (and who cares if 99% of Android users aren’t using a Pixel, they say). The new implementation sucks, as it does OCR instead of just accessing the raw text…

      It only works well with US English and not with other languages. But maybe it’s OK, as it seems Google’s development style is US-centric.

      • @nawa · 13 · 7 months ago

        Now on Tap also used OCR. Both Google Lens and Now on Tap get the same bullshit results in any language that doesn’t use the Latin alphabet. Literally, Ж gets read as >|< by both, exactly the same.

        • @Wispy2891 · 9 · 7 months ago

          They changed it; in the beginning it used the actual text, not OCR.

          For example, this app could be set as the assistant and get the raw text: https://play.google.com/store/apps/details?id=com.weberdo.apps.copy

          But only the app set as the system assistant can do it.

          I was very disappointed when they changed it around 2018; it started producing garbage in my language when it had been working so well…

    • @[email protected] · 25 · edited · 7 months ago

      Hey, yeah… why aren’t they just tapping the font rendering DLL?

      Are they tapping the font rendering DLL??

      • @[email protected]
        link
        fedilink
        English
        27 months ago

        My guess is that they looked at their screen reader API, saw that it didn’t cover 100% of the text on screen, and said “fuck it! We’re using OCR!”

    • @[email protected]
      link
      fedilink
      English
      247 months ago

      Having worked on a product that actually did this, it’s not as easy as it seems. There are many ways of drawing text on the screen.

      GDI, part of the Windows API, is the most common, but some applications (including browsers) do their own rendering.

      Another difficulty: even if you could tap into every draw call, you would also need a way to determine what is visible on the screen and what is covered by something else.
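The occlusion problem described above can be sketched with a toy rectangle model — each captured text rectangle is tested against the windows above it in z-order. This is a deliberate simplification: real compositing must also handle combined coverage by multiple windows, transparency, clipping regions, and so on.

```python
# Rectangles are (left, top, right, bottom) in screen coordinates.

def intersects(a, b):
    """True if rectangles a and b overlap at all."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def contains(outer, inner):
    """True if rectangle `outer` fully covers rectangle `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def visibility(text_rect, rects_above):
    """Crude classification: 'hidden' if a single window above fully covers
    the text, 'partial' if anything overlaps it, else 'visible'."""
    if any(contains(r, text_rect) for r in rects_above):
        return "hidden"
    if any(intersects(r, text_rect) for r in rects_above):
        return "partial"
    return "visible"

print(visibility((10, 10, 100, 30), [(0, 0, 200, 200)]))    # hidden
print(visibility((10, 10, 100, 30), [(50, 0, 200, 200)]))   # partial
print(visibility((10, 10, 100, 30), [(150, 0, 200, 200)]))  # visible
```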

    • @[email protected]
      link
      fedilink
      English
      207 months ago

      That’s the thing: it doesn’t really know what it’s displaying. I can send a bunch of text boxes, but if they’re hidden, drawn off-screen, or underneath another element, then they’re not actually displayed.

    • Eager Eagle · 9 · 7 months ago

      Text from OCR is one kind of match. Recall also runs visual comparisons against the stored image tokens.
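Assuming the stored “image tokens” behave like embedding vectors — the actual format isn’t specified here — the visual comparison could work as a nearest-neighbor search by cosine similarity, roughly like this sketch:

```python
import math

# Illustration only: the embedding ids and vectors below are made up.
# The idea is that each screenshot is reduced to a vector, and a query
# is matched to the stored vector with the highest cosine similarity.

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(query_vec, stored):
    """Return the id of the stored embedding most similar to the query."""
    return max(stored, key=lambda item_id: cosine(query_vec, stored[item_id]))

stored = {
    "shot_001": [0.9, 0.1, 0.0],
    "shot_002": [0.1, 0.8, 0.3],
}
print(best_match([0.2, 0.9, 0.2], stored))  # shot_002
```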

    • @TheGrandNagus · 3 · 7 months ago

      To be fair, Data was designed to be like a human and was made in the image of his creator. Many of his design decisions come down to his creator wanting to build something human-like, including the one you describe.

      Data was never intended to work like a PC; it’s very normal that he can’t just wirelessly interface with things.