Hi Perchance team!

I’ve been delving into the world of AI image generation, and have found that the Perchance generator is head and shoulders above pretty much everything else, and ridiculously fast. I’m very impressed, and was wondering if you might indulge me with a few questions?

To preface, I feel like you can’t be getting good value out of me as a user - I’m frequently generating a ton of images, and only seeing occasional ads. To that end, I’m trying to shift my heavier usage to my own local Stable Diffusion instance, but don’t seem to get the same quality of results.

Completely understand if you want to keep the secret recipe a secret - but if you’re willing to share - what checkpoints/LoRAs/etc. are you using? My best guess is SD1.5 + Reliberate, which gives me better results than the other checkpoints I’ve experimented with, but it’s definitely not the whole picture.

I also wanted to ask if you guys accept donations, or have a way for me to shout you a coffee, beer, Courvoisier, or whatever. :D

  • @perchanceM · 1 year ago

    Yep SD 1.5, and you should be able to replicate the text-to-image-plugin’s results locally by just following vanilla tutorials on /r/stablediffusion or YouTube with pretty much any of the top models on civitai - I’m not doing anything special. Your local results will actually end up being better than the plugin’s, because I have a stupid amount of regex and stuff trying (and somewhat failing) to prevent the model from creating oversexualised stuff for benign prompts, and that almost always comes at a cost of quality/coherence. I’m not the best person to ask about troubleshooting local setups, but I’d advise following a tutorial/guide exactly to start with, and then once you’ve replicated what they’ve shown, start exploring your own prompts, tweaking parameters, etc.
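
    Roughly speaking, that filtering looks like the sketch below - a minimal Python illustration with made-up patterns, not the actual rules:

    ```python
    import re

    # Made-up deny-list patterns, purely illustrative.
    BLOCKED = [
        re.compile(r"\b(nsfw|nude|naked)\b", re.IGNORECASE),
    ]

    # Terms commonly appended to the negative prompt to steer SD 1.5
    # away from oversexualised output on benign prompts.
    SAFETY_NEGATIVES = "nsfw, nude, explicit"

    def sanitize(prompt: str, negative: str = "") -> tuple[str, str]:
        """Strip blocked terms from the prompt and append safety terms
        to the negative prompt. Illustrative only."""
        for pattern in BLOCKED:
            prompt = pattern.sub("", prompt)
        negative = ", ".join(p for p in (negative, SAFETY_NEGATIVES) if p)
        return prompt.strip(" ,"), negative
    ```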

    • @GrthOP · 1 year ago

      Thanks for getting back to me so quickly, even if it’s taken me so long to respond. Whatever non-special things you’re doing seem to work really well! It runs a lot faster than my local instance (suboptimal video card to blame there), and I get really good, consistent base results, which I can then pull into my own instance to do more fun stuff with inpainting, upscaling, and the like. So hopefully that ad revenue is making it worth your while. :D
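
      In case it helps anyone else, the img2img step I use locally looks roughly like this with Hugging Face diffusers (filenames and the prompt are placeholders, and any SD 1.5 checkpoint should work):

      ```python
      import torch
      from diffusers import StableDiffusionImg2ImgPipeline
      from PIL import Image

      # Load an SD 1.5 checkpoint (this is the base model).
      pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # Start from an image saved out of Perchance.
      init = Image.open("perchance_output.png").convert("RGB").resize((512, 512))

      # Low strength keeps the original composition; higher values redraw more.
      result = pipe(
          prompt="the original prompt, plus any tweaks",
          image=init,
          strength=0.4,
          guidance_scale=7.0,
      ).images[0]
      result.save("refined.png")
      ```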

      • @perchanceM · 1 year ago

        Yeah it’s definitely worth investing in a fast graphics card if you’re getting deep into AI stuff, but they’re pricey. Inpainting and image-to-image should be possible on perchance within the next month or so if all goes well. Ad revenue doesn’t cover all the server costs yet, so I pay for a portion of it out of my own pocket, but it’ll eventually be self-sustaining, and it’s not ‘breaking the bank’ for me. It’s much closer to self-sustaining than it was 12 months ago when I made the plugin - the research community has made SD inference a lot more efficient.

    • Ashenthorn · 1 year ago

      @[email protected] Is there a roadmap or existing discussion anywhere about your experiments with the t2i plugin or where you might be going with it? Or for user questions/feedback/requests?

      I’m currently getting some fantastic results with it “as is”… but additional options are always appreciated. =)

    • @tomanivolley · 1 year ago

      Sorry to bug you about this, but do you have a specific model you’d recommend? I’ve tried ~10 different models on civitai and I’m having a hard time replicating the results I get from your 2D Disney option without reusing the seed from an image I’ve created on Perchance - even when I copy/paste your prompts.

      You’re not wrong that some of the results I’ve gotten are just as good in some ways, but it’s really bugging me that I can’t seem to replicate that exact style.

  • @ArtificialScr00b · 1 year ago

    You can hover over the images to see the prompt that was used. Some keywords are added to the prompt on the backend depending on the “Art Style” used, and you can see those, as well as the seed, when you hover.
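
    In other words, the final prompt is basically your text plus a per-style suffix, something like this sketch (the suffixes here are made up - hover over an image to see the real ones):

    ```python
    # Made-up style suffixes; hover over a generated image for the real ones.
    ART_STYLES = {
        "2D Disney": "disney style, 2d animation, clean lines",
        "Painted Anime": "anime, painterly, detailed illustration",
    }

    def build_prompt(user_prompt: str, style: str) -> str:
        # The backend appends the style keywords to whatever the user typed.
        suffix = ART_STYLES.get(style, "")
        return f"{user_prompt}, {suffix}" if suffix else user_prompt
    ```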

    At one time in the past, the generator saved prompt/generation information in the EXIF data of the images. The last image I have saved that contained EXIF data showed the generator was using SD1.5 + Deliberate v2, 20 steps, Euler a, CFG/guidance 7. It really does seem like there are other things going on under the hood, though: if I use those settings in HappyAccidents and input the saved prompt exactly, even including the same seed, from one of the images I saved before the EXIF data was dropped, I can’t replicate the image. The quality is basically the same, but experience tells me that feeding the exact same prompt, seed, and configuration into both should produce the same output, and it doesn’t, which tells me there’s still something hidden somewhere.
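
    For anyone who wants to try those recovered settings locally, a minimal diffusers sketch would look like the following (the model path, prompt, and seed are placeholders). One caveat: different inference stacks implement samplers and seed handling differently, so identical prompt/seed/settings usually won’t reproduce a pixel-identical image across tools - that alone could explain the mismatch:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

    # Deliberate v2 converted to diffusers format; the path is a placeholder.
    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/deliberate-v2", torch_dtype=torch.float16
    ).to("cuda")

    # "Euler a" corresponds to the Euler ancestral scheduler.
    pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

    # Settings recovered from the old EXIF data: 20 steps, CFG 7, fixed seed.
    generator = torch.Generator("cuda").manual_seed(123456789)
    image = pipe(
        prompt="the prompt saved with the image",
        num_inference_steps=20,
        guidance_scale=7.0,
        generator=generator,
    ).images[0]
    image.save("replica.png")
    ```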

    I really wish that saved EXIF data would be brought back. It makes it a lot easier to go back to an image I saved from earlier and tweak or retouch it.
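
    In the meantime, you can at least tag images you save yourself. Stable Diffusion frontends usually store this in a PNG text chunk rather than EXIF proper; here’s a small Pillow sketch, loosely following the AUTOMATIC1111 “parameters” convention (key name and values are illustrative):

    ```python
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Write the generation settings into a PNG text chunk.
    meta = PngInfo()
    meta.add_text(
        "parameters",
        "my prompt\nSteps: 20, Sampler: Euler a, CFG scale: 7, Seed: 12345",
    )
    Image.open("image.png").save("image_tagged.png", pnginfo=meta)

    # Read the settings back later.
    print(Image.open("image_tagged.png").text.get("parameters"))
    ```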