Yeah, and how many iterations did they do? How much did they storyboard? Did the AI do the entire thing, and if so, did it follow the traditional concept → storyboard → scene pipeline to generate it?
We have a specific process that's followed when people make films.
Does the AI do something different? Could we look at the tree and arbitrarily pick a different fork if we didn't like a decision, rather than having to ask an entirely new question?
Afaik Seedance only has three modes of generation:
Text to video
First frame + text to video
First frame + last frame + text to video
From what I've seen, you can specify time ranges in the text for certain things, like "1-3s: slow pan in", etc.
People will use something like Google's Nano Banana to generate still frames as a storyboard-like sequence, then have Seedance generate the video for each roughly 12-second portion.
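That workflow is basically prompt assembly per shot. Here's a minimal sketch of what that could look like; the function name, shot structure, and prompt layout are my assumptions for illustration, not Seedance's actual API or prompt spec:

```python
# Hypothetical sketch: turning a storyboard into per-segment prompts for a
# text-to-video model. Names and prompt format are illustrative assumptions.

SEGMENT_SECONDS = 12  # rough clip length people reportedly generate per shot


def build_segment_prompts(shots):
    """Each shot is (description, directives), where directives maps
    second-ranges like "1-3s" to camera moves like "slow pan in"."""
    prompts = []
    for description, directives in shots:
        lines = [description]
        # Inline time-range directives, e.g. "1-3s: slow pan in"
        lines += [f"{span}: {move}" for span, move in directives.items()]
        prompts.append("\n".join(lines))
    return prompts


storyboard = [
    ("Wide shot of a harbor at dawn", {"1-3s": "slow pan in"}),
    ("Close-up of a fisherman's hands", {"4-8s": "hold steady"}),
]

for prompt in build_segment_prompts(storyboard):
    print(prompt)
    print("---")
```

Each resulting prompt (paired with a still frame from the image model as the first frame) would then be sent off as one ~12-second generation.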
Pretty good. I want to know more about the creative process behind it.
Seedance 2.0
It's fascinating.