Hello everyone!

My name’s Benjamin, and I’m the developer of ENFUGUE, a self-hosted Stable Diffusion web UI built around an intuitive canvas interface, while still aiming to deliver the power and deep customization of the popular tab-and-slider web UIs.

I’m taking it out of Alpha and into Beta with the v0.2 release, which brings SDXL support while still maintaining most of the 1.5 feature set by letting you configure multiple checkpoints for various diffusion plans. It also includes a ton of changes since 0.1 suggested by other users, like the ability to point ENFUGUE at the directories of other Web UI installations to share models and other files.

This is not monetized software in any way; I simply built the tool I wanted to use, and wanted to share it. Thank you for taking a look!

  • @[email protected] (OP)

    Yes! That is the very next big feature to tackle after just adding macOS support (and the surprise that was needing to add SDXL support). I’ve been trying to weave between addressing bug reports and feature requests while also trying to understand what hardware people are actually trying to use. It seems like I’ve covered the vast majority of use cases for casual tinkerers and self-hosters, so now it’s time to make the Docker build for advanced users and people who want to run this on a remote server.

    In theory, the portable installation should “just work” in Docker, though the NVIDIA runtime could cause trouble. I’ll start publishing Docker images to the repository with 0.2.1.
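
    If you want to check your setup ahead of time, something like the command below will confirm the NVIDIA container runtime is working (the CUDA image tag here is just an example):

        # Sanity check: can a container see the GPU through the NVIDIA runtime?
        # The image tag is only an example; any recent nvidia/cuda tag works.
        docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi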

    Thank you for the feedback!

    • ffhein

      Sounds good, looking forward to trying it! Personally I like to use Docker on my Linux desktop PC for web-based apps. It makes it easy to run and update everything without having to rely on custom installers and updaters, and it usually gives better control over which port to use and where to store data. I’ve been using AbdBarho’s docker files for A1111 and ComfyUI, which make it very easy to share models and other large files between the two.
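
      The general idea is roughly the sketch below, where both UIs bind-mount the same host models directory and each gets its own port (the image names and container paths are placeholders for illustration, not the actual ones from AbdBarho’s files):

          # Two UIs sharing one host models directory via bind mounts.
          # Image names and container paths are placeholders for illustration.
          docker run -d --gpus all -p 7860:7860 \
            -v /srv/sd/models:/models \
            my-a1111-image
          docker run -d --gpus all -p 8188:8188 \
            -v /srv/sd/models:/models \
            my-comfyui-image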

      I’ve used CUDA in Docker quite a lot, and it has even helped me solve problems; e.g. some llama apps needed the CUDA toolkit, which wasn’t available for Fedora 38. I think the biggest challenge with Docker is making sure the right dependencies get built into the image, and that all run-time data is confined to mounted volumes. If you need any help with Docker, let us know; I’m not some kind of super pro, but I have a fair amount of experience with it.
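
      For example, something like this is enough to get a working toolkit without touching the host packages at all (the tag below is just an example):

          # The CUDA toolkit ships inside NVIDIA's devel images, so nvcc is
          # available even when the host distro has no toolkit packages.
          docker run --rm nvidia/cuda:12.2.0-devel-ubuntu22.04 nvcc --version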

      If you’re collecting info about users’ hardware, I have a Ryzen 7 7700X, 32GB RAM, and an RTX 3080 with 12GB VRAM.

      • @[email protected] (OP)

        Hi! The docker version is out! 😁 Just run docker pull ghcr.io/painebenjamin/app.enfugue.ai:latest to get it. There’s some more documentation on ports, volumes, etc. on the wiki.
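
        A rough sketch of what a typical invocation might look like (the port number and volume path here are only examples, check the wiki for the actual values):

            # Run the published image with GPU access; the port mapping and
            # volume path below are examples, not the documented defaults.
            docker run -d --gpus all \
              -p 45554:45554 \
              -v ~/enfugue-data:/data \
              ghcr.io/painebenjamin/app.enfugue.ai:latest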