Is this thing aware of the thread context here? It must be, talking about memes
Not about the context, but it knows who it’s replying to and what community it’s on. This is its system message:
You are a bot replying to a Lemmy community called "%s". The community is hosted on instance %s. You are hosted on instance %s. The user's display name is %s, their username is %s and they're on instance %s.
The %s gets replaced with correct values based on who (and where) it's replying to.

Edit: Just yesterday I added the ability to maintain history to the underlying code, but it's not yet supported by the bot itself. Given that the hard work is done, it should be fairly easy to implement thread context.
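Filling a template like that could be sketched as follows (a minimal illustration, assuming printf-style `%s` substitution; the function name and argument names are made up, not the bot's actual code):

```python
# Illustrative sketch: filling the bot's system-message template.
# The %s placeholders are substituted in order, printf-style.
SYSTEM_TEMPLATE = (
    'You are a bot replying to a Lemmy community called "%s". '
    "The community is hosted on instance %s. You are hosted on instance %s. "
    "The user's display name is %s, their username is %s "
    "and they're on instance %s."
)

def build_system_message(community, community_instance, bot_instance,
                         display_name, username, user_instance):
    """Substitute the placeholders with values for the comment being answered."""
    return SYSTEM_TEMPLATE % (community, community_instance, bot_instance,
                              display_name, username, user_instance)
```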
Fuck, that’s so cool. You’re so cool
@[email protected] A realistic photo of a cat and shark hybrid.
Here’s your image!
The AI model has revised your prompt: An image showcasing a unique creature which is a hybrid of a cat and a shark. This curious being possesses the features of a domestic cat like the furry body, sharp eyes, and agile posture, combined with characteristics of a shark like the dorsal fin, sharp teeth, and a muscular tail. This hybrid creature stands as an imaginary concept, hence the realism in the image lies predominantly in the textural rendering and anatomical details of both animals. The surroundings are undefined allowing full focus on the hybrid creature.
@[email protected] thanks, I hate it.
Here’s your image!
The AI model has revised your prompt: An illustration captioned ‘thanks, I hate it.’ featuring a comical cartoon character looking disdainfully at an object. The character is a tall, lanky individual with exaggerated facial features, styled in the manner of early 20th century comic strips. A mysterious object, ambiguously resembling a piece of modern art, stands in front of them, causing a clear reaction of confusion and dislike. The colors should be vivid and the overall vibe should maintain a sense of humor and sarcasm.
You did this? I saw some examples in the “show me your best hearse if it were a ___” post, and I’m flabbergasted at the computer-based witchcraft it’s pulling off over there.
What image generator is it using? Is it using chatgpt to expand the prompts?
It's using DALL-E; version 3 also sometimes expands the prompt (I assume it's using GPT in the background to do so, but I haven't checked). The prompt is only printed below the image if it has been changed. I've noticed that if the prompt is already long, it doesn't get changed, but I haven't really confirmed that. You can try it yourself: just tag the bot and it will respond with an image.
It also falls back to DALL-E version 2 to lower the costs and there’s some daily limit (per-user and global).
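The version-3-to-version-2 fallback could look roughly like this with the official `openai` Python client (the `revised_prompt` field is how DALL-E 3 reports a rewritten prompt; the `pick_model` helper, the budget counter, and the limit of 50 are my own illustrative assumptions, not the bot's real logic):

```python
def pick_model(requests_today, daily_dalle3_limit):
    """Fall back to the cheaper DALL-E 2 once the daily DALL-E 3 budget is spent."""
    return "dall-e-3" if requests_today < daily_dalle3_limit else "dall-e-2"

def generate_image(client, prompt, requests_today, daily_dalle3_limit=50):
    """client is an openai.OpenAI instance; returns (image_url, revised_prompt)."""
    model = pick_model(requests_today, daily_dalle3_limit)
    resp = client.images.generate(model=model, prompt=prompt, n=1)
    # DALL-E 3 may rewrite the prompt; DALL-E 2 never sets revised_prompt.
    revised = getattr(resp.data[0], "revised_prompt", None)
    return resp.data[0].url, revised
```

The bot would then only print the prompt back to the user when `revised` is not `None`, matching the behavior described above.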
Very interesting. So all the requests from the bot get sent through one account? Seems like it would get expensive fast as people find out that this option exists.
Yeah, but there's rate limiting in place that makes sure I won't pay more than I'm comfortable with. I'm thinking of making it possible for users to supply their own API key.
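A per-user plus global daily cap, as described, could be sketched like this (the class, the specific limits, and the in-memory storage are assumptions for illustration; a real bot would likely persist the counters):

```python
import time
from collections import defaultdict

class DailyLimiter:
    """Rough sketch of per-user and global daily request caps."""

    def __init__(self, per_user=5, global_cap=200):
        self.per_user = per_user
        self.global_cap = global_cap
        self.day = None
        self.user_counts = defaultdict(int)
        self.total = 0

    def allow(self, user, now=None):
        """Return True and record the request if neither cap is exhausted."""
        ts = now if now is not None else time.time()
        today = time.strftime("%Y-%m-%d", time.gmtime(ts))
        if today != self.day:              # new day: reset all counters
            self.day, self.total = today, 0
            self.user_counts.clear()
        if self.total >= self.global_cap or self.user_counts[user] >= self.per_user:
            return False
        self.user_counts[user] += 1
        self.total += 1
        return True
```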
Heh, in that case you’re gonna like my next bot!
What about the comment thread preceding the prompt? Could you include that too? Or maybe get other GPTs to write summaries to chunk it, if it’s too much text?
It’s possible, I’ll probably do it sooner or later.
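One way to feed the preceding thread to the model, as suggested above, might be to walk the comment chain newest-first and keep only what fits a character budget (purely a sketch of the suggestion, not implemented in the bot; a summarization pass could replace the dropped older comments):

```python
def thread_to_messages(comments, max_chars=4000):
    """Turn a comment chain [(author, body), ...] (oldest first) into chat
    messages, dropping the oldest comments once max_chars is exceeded."""
    messages, used = [], 0
    for author, body in reversed(comments):   # walk newest first
        if used + len(body) > max_chars:
            break                             # older context no longer fits
        messages.append({"role": "user", "content": f"{author}: {body}"})
        used += len(body)
    messages.reverse()                        # restore chronological order
    return messages
```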