Hi all,

First off, I’d just like to say how blown away I am by the potential of Perchance. I bet to most of you this is baby stuff, but for me it’s my first step into this world, and it’s just incredible.

So I have a question… more about whether it’s even possible; once I know that, I’ll properly wrap my head around the coding of it.

I’m looking to create a short sequence of scenes, like a still animation. Sometimes the background will stay the same but the character would change pose, say. Or the background (a kitchen scene, for example) might change camera angle/view while the character changes position/pose. I’m not looking to create frame-by-frame stuff, just scene changes that retain features throughout. I can totally see it being possible; I was just hoping to hear some advice from people with much more experience than I have.

If any of that doesn’t make sense (most probably!), please just ask and I’ll try to explain it better.

TIA

Sam

P.S. I should probably state that I plan to use text-to-image (t2i) to create the scenes, then overlay/combine the character and adjust accordingly.

  • MindBlown! 🌬️🤯OP

    P.S. I’ve just looked up the site Playground you suggested. The landing page highlights the very thing: two images integrated. I’ve not been on the site, so I have no idea about the UI and what’s involved… but if I’m honest, I quite like the idea of coding it myself and really deep-diving in. Also, I think keeping the images separate might still be handy, I don’t know!? But it’ll be quite fun to explore.

    This whole project has evolved for me… I want to create a portfolio of music compositions for film/TV etc., and I was going to use old silent movies. But then I pondered the idea of creating my own. Animation would be too labour-intensive (I think… well, at this preliminary stage anyway), so I thought about creating stills. I started exploring AI possibilities, discovered Perchance today, and was blown away by it! It fits what I’m looking for perfectly. Coding it bespoke to my project will provide so much versatility and potential. So it looks like I’m back to school!

    Also, I wasn’t sure if the image would differ if I altered the prompt itself, so I’d lose continuity? I was thinking of maybe setting up the generator with conditions covering the alterations, so that only, say, the ‘viewpoint’ changes, while using the same seed? Again, I’m completely new to this and just piecing bits together, so this is most likely a completely arse-about-face way to do it! lol
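    Something like this is what I have in mind, in rough JavaScript terms (all names are made up, purely to illustrate the idea):

    const seed = 1234; // the same seed every time, for continuity
    const base = "cozy kitchen interior, warm morning light, storybook style";
    const viewpoints = ["wide shot", "close-up on the stove", "view from the doorway"];

    // One prompt per scene: only the viewpoint changes, everything else stays fixed.
    const scenes = viewpoints.map(v => ({ prompt: `${base}, ${v}`, seed }));
    console.log(scenes);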

    • 🎲VioneTM

      It would be quite hard in Perchance, since it currently doesn’t have any ‘image-to-image’ or ‘inpainting’ capabilities, which might be what you are looking for. I would suggest looking into creating your own local Stable Diffusion setup (Stable Diffusion being the model that Perchance uses). Other than that, prompting in Perchance is quite customizable; see this prompting guide.

      Unfortunately, we don’t want users posting here to ask for ‘prompts’ or how to get a certain image with the plugin or with a certain generator (see Rule #5), since people would mostly come to think that Perchance is an AI website. It isn’t; it is first and foremost an engine for creating random text generators. We have a channel on Discord that is much more relaxed about asking for image ‘prompts’.

      • MindBlown! 🌬️🤯OP

        Thank you for your very informative reply (although now the cogs are whirring in all manner of directions!).

        I totally understand what you’re saying, and I see that if I were requiring that realm of functionality, i.e. inpainting, then a different approach would be better. I’m intrigued by the suggestion of my own local Stable Diffusion setup. I didn’t even know that was a possibility, and in the long run it may well be the better option even if what I require is available here.

        So, in my thinking (and please forgive my naivety, I’m green as grass at present!): I would generate one image, say the background first, writing the generator to have quite specific conditions, i.e. depth of view, positioning, scale, size, etc. (I’m totally clueless as to the limitations here, so again please correct me if this isn’t possible). Result: one image, the backdrop. Then I would generate the character image, maybe in a whole separate generator, again very specific about scale, pose, positioning, etc., and without a background. So now both images are created in the manner required. My thought was a locked-layer combination!? To combine the two images 🤷🏻‍♂️ and voilà. The next step would be to adjust parameters within each generator, but revolving around the same seed (I might be talking nonsense here as, again, I’m just learning), to change things in the desired way while retaining features, i.e. colours, style, etc., for continuity.
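        In rough JavaScript terms (entirely hypothetical, just to show the shape of the idea), I picture the two layers as something like:

        const seed = 42; // a shared seed, so re-renders stay consistent
        // One spec per layer; each would be rendered separately by the generator.
        const background = { prompt: "kitchen interior, wide angle, empty, no people", seed };
        const character  = { prompt: "chef character standing, arms crossed, plain background", seed };
        // Compositing (layering the character over the backdrop) would happen afterwards.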

        Am I just wishing on a star, do you think, and better suited to my own local setup?

        Also, I’m not seeking any prompt advice here, and I respect that this isn’t the place for that either. I just wanted to reach out to experienced minds to question the plausibility of my thinking, as with my limited knowledge I could only piece together an idea from what is already available and tailor it to my needs. I mean, even if I create two images and use Photoshop to layer them, that’s not an issue. My main focus is the retention of detail whilst manipulating the image.

        Thank you again for a very thought-provoking reply. Off I go to see what a ‘local’ setup looks like! :)

        • 🎲VioneTM

          “I would generate one image, say the background first, writing the generator to have quite specific conditions, i.e. depth of view, positioning, scale, size, etc. (I’m totally clueless as to the limitations here, so again please correct me if this isn’t possible). Result: one image, the backdrop.”

          Images can be fine-tuned with tags (although the AI only tries its best to follow them), so you can change the conditions of the images.

          “Then I would generate the character image, maybe in a whole separate generator, again very specific about scale, pose, positioning, etc., and without a background.”

          You can also do the character in the same generator; there’s no need to code another one, you just need to change the prompt.

          “My thought was a locked-layer combination!? To combine the two images 🤷🏻‍♂️ and voilà.”

          The layering part is quite easy to implement (see this example generator I made a while back), but you would need to edit the character image to be cropped so it can be overlaid successfully on top of the background. If you were to use ‘inpainting’, you could just ‘paint over’ the current image without needing to layer and position multiple images.
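          The layering itself can be as simple as two absolutely-positioned images; a minimal DOM sketch (file names and offsets are just placeholders):

          // Stack a transparent character PNG on top of a background image.
          const stage = document.createElement("div");
          stage.style.cssText = "position:relative; width:512px; height:512px";

          const bg = new Image();
          bg.src = "background.png"; // placeholder file name
          bg.style.cssText = "position:absolute; left:0; top:0";

          const figure = new Image();
          figure.src = "character.png"; // needs a transparent background to overlay cleanly
          figure.style.cssText = "position:absolute; left:180px; top:120px";

          stage.append(bg, figure);
          document.body.append(stage);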

          “The next step would be to adjust parameters within each generator, but revolving around the same seed (I might be talking nonsense here as, again, I’m just learning), to change things in the desired way while retaining features, i.e. colours, style, etc., for continuity.”

          Using the same seed is a good idea for keeping the ‘same’ essence, or a replicable composition (see the prompting guide I linked earlier). But yeah, a local setup with ‘inpainting’ would be better; there are also ‘plugins’ for local setups that let you instruct the AI area by area. Also note that you might need a powerful computer (and graphics card) to run the local models quickly.

          • MindBlown! 🌬️🤯OP

            Ahhh, I see, tags will be involved for sure then. I’ll take a look at your generator too and try to get my head around what’s going on. I wonder: you mentioned a ‘cropped image’ in your reply… it got me thinking about how images are ‘recognised’, and whether the AI could differentiate the character from the background (if the background is given transparency) and create the crop itself? As in, a lasso-plugin type thing that can auto-detect? Just an abstract thought, so I don’t know whether it’s completely ludicrous or not!? 🤷‍♂️

            Yes, the GPU… might be my Achilles heel at present. I’m just running off a laptop, 8GB RAM, kind of crap! Apparently it could run Stable Diffusion, but not SDXL without being slooooow! Would having a local Stable Diffusion setup, and not XL, be pointless?

            • 🎲VioneTM

              I don’t think there is any AI that can auto-crop an image, at least none that is currently available and could be integrated into the site. I think the v1.5 models are the cream of the crop currently, since there are a lot of models for them (and they have less censorship compared to XL, I think). I haven’t made a local setup myself, so I can’t really point you in a direction. I would suggest looking into the StableDiffusion subreddit (or the community here on Lemmy) to start.
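              That said, if you generate the character on a near-solid background, a crude non-AI version of that ‘auto lasso’ is possible with a canvas colour-key. A rough sketch (the background colour and tolerance are guesses you’d tune):

              // Make pixels close to a known background colour fully transparent.
              function removeBackground(img, bgRGB = [255, 255, 255], tolerance = 40) {
                const canvas = document.createElement("canvas");
                canvas.width = img.width;
                canvas.height = img.height;
                const ctx = canvas.getContext("2d");
                ctx.drawImage(img, 0, 0);
                const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
                for (let i = 0; i < pixels.data.length; i += 4) {
                  const distance = Math.hypot(
                    pixels.data[i] - bgRGB[0],
                    pixels.data[i + 1] - bgRGB[1],
                    pixels.data[i + 2] - bgRGB[2]
                  );
                  if (distance < tolerance) pixels.data[i + 3] = 0; // zero alpha
                }
                ctx.putImageData(pixels, 0, 0);
                return canvas; // use as the overlay layer, or export via toDataURL()
              }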

              • MindBlown! 🌬️🤯OP

                I’ve been looking into the image cropping and finding some JavaScript plugin subreddits, so I will delve a little deeper to see what people have been coming up with. I’m also going to try SDXL on my laptop to see what happens… if it takes an age to do anything, I’ll resort to SD. Thanks for all the pointers 👍

    • MindBlown! 🌬️🤯OP

      I would like to have a generator that I can use for future projects too… and just keep developing it as required.

    • allo

      “to create a portfolio of music compositions for film/TV etc”

      do you make music?

      here’s a cute thing:

      setInterval(function () {
        // do something here, e.g. swap to the next scene image
      }, 3000);

      is JavaScript for ‘do something every 3000 milliseconds’.

      if you have a constant BPM in your songs, you can align your ‘animation’ exactly to your song :)
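      for example, at 120 BPM one beat is 60000 / 120 = 500 ms, so (just a sketch; showNextScene is whatever your scene-change function is):

      const bpm = 120;                 // your song's tempo
      const msPerBeat = 60000 / bpm;   // 500 ms per beat at 120 BPM
      setInterval(showNextScene, msPerBeat * 4); // new scene every bar (4 beats)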

      • MindBlown! 🌬️🤯OP

        Yes, I do make music. This is a really cool idea to have in my back pocket… once I’m a bit further down the line, I’ll see if it comes to fruition! Thanks.