Prompt

Midjourney: a cozy little cabin with a colorful roof at the top of a cloudy peak, a long twisting slide leads down the side of the peak, a small garden is planted around the house --ar 4:3

Theme

This week's theme is your dream home. Where would you want to live if your imagination were the only limit? A cozy cabin in the middle of nowhere? The gigantic pillow fortress you dreamed of when you were little? Or maybe a majestic tower on an asteroid drifting through space?
Go with whatever you, or your inner child, would love to live in :)

Rules:

  • Follow the community’s rules above all else
  • One comment and image per user
  • Embed image directly in the post (no external link)
  • Workflow/Prompt sharing encouraged (we’re all here for fun and learning)
  • Posts that tie all receive the points
  • The challenge runs for 7 days from now
  • Downvotes will not be counted

Scores

At the end of the challenge each post will be scored:

  • Most upvoted: +3 points
  • Second most upvoted: +2 points
  • Third most upvoted: +1 point
  • OP’s favorite: +1 point
  • Most original: +1 point
  • Last two entries (to compensate for less time to vote): +1 point
  • Prompt and workflow included: +1 point

The winner gets to pick next theme! Have fun everyone!

Previous entries

  • @[email protected]M, 8 months ago

    Okay soooooo, that took a lot longer than I anticipated, but I think I got it. It turns out to be a problem with the VAE encoding process, and it can be handled with the ImageCompositeMasked node, which combines the padded image with the newly outpainted area so that the pre-outpainted area isn’t affected by the VAE. I learned this here: https://youtu.be/ufzN6dSEfrw?si=4w4vjQTfbSozFC6F&t=498. The whole video is quite useful, but the part I linked to is where he talks about that problem.
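    For context, the compositing step is essentially a per-pixel masked blend: keep the original pixels wherever the mask is black and take the VAE-decoded pixels only in the new (masked) region. A minimal NumPy sketch of that idea, with placeholder file names:

    ```python
    import numpy as np
    from PIL import Image

    # Placeholder file names for illustration.
    original = np.asarray(Image.open("padded_original.png").convert("RGB"), dtype=np.float32)
    decoded = np.asarray(Image.open("vae_decoded.png").convert("RGB"), dtype=np.float32)
    mask = np.asarray(Image.open("outpaint_mask.png").convert("L"), dtype=np.float32) / 255.0
    mask = mask[..., None]  # shape (H, W, 1) so it broadcasts over the RGB channels

    # Untouched pixels come straight from the padded original; only the
    # masked (newly outpainted) region comes from the VAE-decoded image.
    composite = original * (1.0 - mask) + decoded * mask
    Image.fromarray(composite.astype(np.uint8)).save("composited.png")
    ```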

    The next problem I ran into was that at around the fourth-from-last outpainting, ComfyUI would stop; it just wouldn’t go any further. The system I’m using has 24 GB of VRAM and 42 GB of RAM, so I didn’t think that was the problem, but just in case I tried it on a beastly RunPod machine with 48 GB of VRAM and 58 GB of RAM. It had the exact same problem.

    To work around this, I first bypassed everything except the original gen and the first outpaint, then enabled each outpaint one by one until I got to the fourth from the last. At that point I saved the output image, bypassed everything except the original gen and the first outpaint, enabled the last four outpaints, and loaded the saved image manually.
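    If you drive ComfyUI from a script instead of the UI, the same split can be expressed as a loop: run the chain in chunks, save the intermediate image, and feed it into the next chunk. A sketch of the pattern, where run_outpaint_stages is a hypothetical stand-in for however you queue your workflow:

    ```python
    from pathlib import Path

    def run_outpaint_stages(input_path: Path, stages: list[int]) -> Path:
        """Hypothetical stand-in: queue a workflow that applies the given
        outpaint stages to the image at input_path and save the result."""
        out = input_path.with_name(f"after_stage_{stages[-1]}.png")
        print(f"outpainting stages {stages}: {input_path} -> {out}")
        return out

    total_stages = 8   # assumed length of the outpaint chain
    chunk_size = 4     # stop before the point where ComfyUI stalled

    image = Path("original_gen.png")
    for start in range(0, total_stages, chunk_size):
        stages = list(range(start + 1, min(start + chunk_size, total_stages) + 1))
        image = run_outpaint_stages(image, stages)
    ```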

    I used DreamShaper XL Lightning because there was no way I was going to wait for 60 steps each time with FenrisXL 😂 I tried two different ways of using the same model for inpainting. The first was the Fooocus Inpaint node together with the Differential Diffusion node. This worked well, but when Comfy stopped working I thought maybe that was the problem, so I swapped those out for some model merging. Basically, it subtracts the base SDXL model from the SDXL inpainting model and adds the DreamShaper XL Lightning model to the difference. This creates a “DreamShaper XL Lightning inpainting model”. The SDXL inpainting model can be found here.
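    That merge is the classic “add difference” recipe. If you’d rather produce the merged checkpoint offline than merge inside the graph, here is a rough sketch with safetensors; all file names are placeholders, and it assumes the three checkpoints share the same key names:

    ```python
    from safetensors.torch import load_file, save_file

    # Placeholder file names; point these at your actual checkpoints.
    base = load_file("sd_xl_base_1.0.safetensors")
    inpaint = load_file("sd_xl_inpainting_0.1.safetensors")
    dreamshaper = load_file("dreamshaperXL_lightning.safetensors")

    # DreamShaper + (inpainting - base): grafts the inpainting delta onto
    # the fine-tune. Keys whose shapes differ (e.g. the inpainting UNet's
    # conv_in, which takes extra mask/latent channels) are taken from the
    # inpainting model so the result still accepts inpainting inputs.
    merged = {}
    for key, tensor in dreamshaper.items():
        b, i = base.get(key), inpaint.get(key)
        if b is not None and i is not None and b.shape == tensor.shape == i.shape:
            merged[key] = tensor + (i.to(tensor.dtype) - b.to(tensor.dtype))
        elif i is not None:
            merged[key] = i
        else:
            merged[key] = tensor

    save_file(merged, "dreamshaperXL_lightning_inpainting.safetensors")
    ```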

    You should be able to use this workflow with FenrisXL the whole time if you want. You’ll just need to change the steps, CFG, and maybe the sampler at each KSampler.

    Image with ImageCompositeMasked: https://files.catbox.moe/my4u7r.png

    Image without ImageCompositeMasked: https://files.catbox.moe/h8yiut.png

    • @Itrytoblenderrender, 7 months ago

      Wow! Thank you for the effort and time you put into this! I will definitely look into the workflow, and model merging sounds very interesting!