Microsoft quietly added a new AI feature, called Cocreator, to Microsoft Paint, the raster graphics editor included in every version of Windows since 1985. You need a Copilot+ PC with an NPU that can deliver 40 TOPS or better to use it. So, if you want your copy of Microsoft Paint to come with Cocreator enabled, you need to shell out at least $1,099 for one of the new Snapdragon X Windows Copilot+ PCs that launched recently.

However, Microsoft still requires you to sign in with your Microsoft account and be connected to the internet “to ensure safe use of AI.” According to Microsoft’s Privacy Statement, “Cocreator uses Azure online services to help ensure the safe and ethical use of AI. These services do content filtering to prevent the generation of harmful, offensive, or inappropriate content. Microsoft collects attributes such as device and user identifiers, along with the user prompts, to facilitate abuse prevention and monitoring. Microsoft does not store your input images or generated images.”

This is a nightmare for security- and privacy-conscious users, especially as Microsoft recently blocked the last easy workaround for setting up Windows 11 without a Microsoft account. Microsoft is likely doing this to stop unscrupulous users from generating illegal images, such as child sexual abuse material and non-consensual deepfake pornography. However, the data collection is also a source of concern: prompts a user typed in and stored with their account could be stolen, and, no matter how innocent, could then be weaponized and used against them.

  • Björn Tantau · 126 points · 5 months ago

    Or just use Krita with the AI plugin, offline and local, completely free, even without a GPU (if you have time to wait).

    • QuantumSoul · 52 points · 5 months ago

      Exactly! Open source, better quality, better privacy, better editing tools, … Why am I even comparing?

        • @General_Effort · 2 points · 5 months ago
          That was a reference to the thread next door that revealed - horror of horrors - that photos of children were part of the training data. Sure, you never know who is behind these hit pieces, but there doesn’t really need to be anyone behind it.

        • @CosmoNova · 2 points · 5 months ago
          Bill founded the entire company on monetizing open source code from software enthusiasts across the world. If anyone knows what open source means, it's them.

    • @ArtVandelay · 9 points · 5 months ago
      Holy shit, I just tried it, and this is amazing, thank you!

    • @RustyNova · 2 points · 5 months ago

      Would be cool if it were AI Horde compatible and could just ditch the GPU requirements entirely.

      I don’t think everyone has a GPU that could run Stable Diffusion easily, even less so on laptops.

      • Björn Tantau · 6 points · edited · 5 months ago

        You don’t have to run the AI stuff on the same computer running Krita. At home I have my gaming PC set up for that for the whole family. And if I recall correctly, the plugin also promotes a specific cloud service, but you can enter any URL to a compatible service.
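        Assuming the plugin in question is Krita’s AI Diffusion plugin, which talks to a ComfyUI backend over HTTP, the gaming-PC setup might look roughly like this (the hostname and port are illustrative examples, not values from the comment):

        ```shell
        # On the gaming PC: start ComfyUI listening on the LAN
        # (8188 is ComfyUI's default port)
        python main.py --listen 0.0.0.0 --port 8188

        # On the machine running Krita: in the plugin's connection settings,
        # enter the URL of that server instead of the bundled cloud option, e.g.
        #   http://gaming-pc.local:8188
        ```
        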

      • @[email protected] · 2 points · edited · 5 months ago

        The thing is that AI Horde relies on donated hardware. There are only so many people willing to donate relative to the people who want to use it.

        Vast.ai lets people rent hardware, but not on a per-operation basis. That’s cheaper than buying hardware and keeping it idle a lot of the time, which reduces costs, but it’s still gonna have idle time.

        I think what would be better is some kind of service that can sell compute time on a per-invocation basis. Most of the “AI generation services” do this, but they also entail that you use their software.

        So, it’s expensive to upload models to a card, and you don’t want to have to re-upload a model for each run. But hash the model and remember what was last run on the card. If someone queues a run with the same model again, just use the existing uploaded model.

        Don’t run the whole Stable Diffusion or whatever package on the cloud machine.

        That makes the service agnostic to the software involved. Like, you can run whatever version of whatever LLM software you want and use whatever models. It makes the admin-side work relatively light. It makes sure that the costs get covered, but people aren’t having to pay to buy hardware that’s idle a lot of the time.
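        The caching scheme described above could be sketched like this. Everything here is hypothetical (class and method names, and `upload` standing in for the real, expensive transfer of weights to the card); it only illustrates the hash-and-reuse idea:

        ```python
        import hashlib

        class GpuModelCache:
            """Skip re-uploading a model to a card when the same model ran there last."""

            def __init__(self):
                self.loaded = {}   # card_id -> hash of the model currently on the card
                self.uploads = 0   # counts how many real transfers happened

            @staticmethod
            def model_hash(model_bytes: bytes) -> str:
                # A content hash identifies the model regardless of filename or client.
                return hashlib.sha256(model_bytes).hexdigest()

            def run(self, card_id: str, model_bytes: bytes, invoke):
                h = self.model_hash(model_bytes)
                if self.loaded.get(card_id) != h:
                    self.upload(card_id, model_bytes)   # the expensive step
                    self.loaded[card_id] = h
                return invoke(card_id)

            def upload(self, card_id: str, model_bytes: bytes):
                # Placeholder: a real service would copy the weights to the card here.
                self.uploads += 1


        cache = GpuModelCache()
        model = b"fake-model-weights"
        cache.run("card0", model, lambda card: "image-1")
        cache.run("card0", model, lambda card: "image-2")  # same hash: no second upload
        print(cache.uploads)  # 1
        ```

        Because only the hash and the invocation cross the service boundary, the service never needs to know which diffusion or LLM package produced the request, which is what keeps it software-agnostic.
        
        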

        Might be that some service like that already exists, but if so, I’m not aware of it.