It doesn’t think. It has no logic skills. It doesn’t understand English or the rules for constructing good literature.
It can spit out sentences all day. But dialog that could be swapped between characters and still function is something it could only luck into. Even with generic throwaway lines, making them interchangeable takes either a very dull story or a good writer. AI can spit out dumb, disjointed, meaningless stories, but it can’t neatly craft meaningful sentences unless you hand it every last bit of context and structure your own prompts well. It’d be less work to just… be creative yourself.
But it generates off what you put in, right? Like, if you fed it a script up to a certain point and said, “write a scene where Mike explains to the group what happened with [X],” it could do that, right?
Because my experience with Volume 2 was that it was almost entirely exposition dumps or recaps. Episode 5 and half of episode 6 are essentially just characters going to different locations and explaining previous events or future plans. Like I said, I’ve never used AI, so I’m not an expert, but from what I’ve seen from ChatGPT, it doesn’t seem impossible to write an outline, write the scenes you’re most interested in, then say, “turn this summary into a full scene” or, “add dialog where these characters explain what happened to them in the Upside Down to Joyce.”
You might want to look up how it works then, because it truly is just glorified autocomplete. It can appear to output some seriously cool things, but remember: the output is based on what the model was trained on. If its output is good, it’s only because the model was trained on a shitload of examples (basically everything they could scrape off the internet, copyright be damned) to produce a well-mapped token graph.
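To make the “glorified autocomplete” point concrete, here’s a toy sketch. This is nothing like a real LLM (which uses a neural network over subword tokens, not raw counts), and the mini-corpus is made up, but it shows the core idea: the model only knows which token tended to follow which, and it predicts accordingly, with zero understanding of what it’s saying.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": a bigram model that only remembers which word
# followed which in its (made-up, hypothetical) training text.
# No grammar, no meaning, just counted associations.
training_text = (
    "the demogorgon attacked the party and the party ran "
    "to the lab and the party fought the demogorgon"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev_word, next_word in zip(training_text, training_text[1:]):
    follows[prev_word][next_word] += 1

def autocomplete(word, length=6):
    """Repeatedly pick the most common next word. That's all it does."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> the party and the party and the
```

Notice it just loops through its strongest association forever, because it has no idea it’s telling a story. An LLM is an enormously scaled-up, statistically smarter version of the same “predict the next token” objective, not a different kind of thing.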
It’s only ever going to be an illusion of intelligence, built on the associations humans have already given to words and other tokenized things. LLMs will never grow past their training data.
While they can give the illusion of intelligence, all they ever do is associate tokens. Sure, they can take large sets of input tokens too, to tie the output more closely to what you want, but it’s all just mathematical associations!
The output works out a lot of the time when you use models and GPUs far bigger than would fit on most anyone’s home computer… but it’s still just associations of tokens. The model doesn’t know that all those tokens paint a picture in which token #3745, the protagonist, has an ongoing motivation to interact with token #3758, the love interest. If your input instructions don’t hammer that association home through sheer repetition, the “AI” will just wander all over the place as far as their relationship is concerned.
Magnify that kind of “vapid story” problem across every aspect of a story that isn’t basically pre-written in the prompt anyway, and it turns out “AI” is actually extremely shit at anything requiring actual intuition, understanding, and basic intelligence, no matter how much electricity is thrown at it!
I mean, I think most of Season 5 was extremely shit at anything requiring actual intuition, understanding, and basic intelligence, so that was kinda my point. Also, the second YouTube result for “using chatgpt to write a screenplay” shows a guy doing basically exactly what I’m describing in parts 5 and 7. I’m not saying the Duffers just said, “ChatGPT, write Season 5 for us,” and it magically generated the screenplay for every episode, but I think they had it generate some dialog, and everything I’ve seen of people using ChatGPT makes me think that’s very possible.
What do you mean? I don’t use ChatGPT, but I’ve seen its output, and it seems more than capable of writing exposition dumps.