• Rikudou_Sage
      4
      edit-2
      1 year ago

      Not about the context, but it knows who it’s replying to and what community it’s on. This is its system message:

      You are a bot replying to a Lemmy community called "%s". The community is hosted on instance %s. You are hosted on instance %s. The user's display name is %s, their username is %s and they're on instance %s.

      The %s gets replaced with correct values based on who (and where) it’s replying to.
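If it helps to picture it, here is a minimal Python sketch of how those %s placeholders might get substituted. The template is the one quoted above; the community, instance, and user values are made-up examples, and the function name is hypothetical:

```python
# Template quoted from the comment above; the six %s slots are filled
# per reply. All concrete values below are invented for illustration.
SYSTEM_TEMPLATE = (
    'You are a bot replying to a Lemmy community called "%s". '
    "The community is hosted on instance %s. You are hosted on instance %s. "
    "The user's display name is %s, their username is %s "
    "and they're on instance %s."
)

def build_system_message(community, community_instance, bot_instance,
                         display_name, username, user_instance):
    """Substitute the %s placeholders with per-reply values."""
    return SYSTEM_TEMPLATE % (
        community, community_instance, bot_instance,
        display_name, username, user_instance,
    )

msg = build_system_message(
    "imageai", "example.instance", "bot.instance",
    "kase", "kase", "user.instance",
)
print(msg)
```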

      Edit: Just yesterday I added the ability to maintain history to the underlying code, but the bot itself doesn’t support it yet. Given that the hard work is done, though, it should be fairly easy to implement thread context.
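Thread context on top of that history support could look something like this sketch; the class and method names are assumptions, not the bot's actual code:

```python
# Hypothetical per-thread history store: keeps the most recent messages
# so earlier comments in a thread can be passed back to the model.
from collections import defaultdict

class ConversationHistory:
    def __init__(self, max_messages=20):
        self.max_messages = max_messages
        self._threads = defaultdict(list)

    def add(self, thread_id, role, content):
        history = self._threads[thread_id]
        history.append({"role": role, "content": content})
        # Keep only the most recent messages to bound prompt size.
        del history[:-self.max_messages]

    def messages(self, thread_id, system_message):
        """Full prompt: system message first, then the stored history."""
        return ([{"role": "system", "content": system_message}]
                + self._threads[thread_id])

h = ConversationHistory(max_messages=3)
h.add("t1", "user", "draw a cat")
h.add("t1", "assistant", "here you go")
h.add("t1", "user", "make it a shark")
h.add("t1", "assistant", "done")
msgs = h.messages("t1", "You are a bot.")
```

With `max_messages=3`, the oldest message ("draw a cat") is dropped and the system message is always prepended.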

      • kase
        3
        1 year ago

        Fuck, that’s so cool. You’re so cool

          • @[email protected]B
            4
            1 year ago

            Here’s your image!

            AI image generated with the prompt from the previous comment


            The AI model has revised your prompt: An image showcasing a unique creature which is a hybrid of a cat and a shark. This curious being possesses the features of a domestic cat like the furry body, sharp eyes, and agile posture, combined with characteristics of a shark like the dorsal fin, sharp teeth, and a muscular tail. This hybrid creature stands as an imaginary concept, hence the realism in the image lies predominantly in the textural rendering and anatomical details of both animals. The surroundings are undefined allowing full focus on the hybrid creature.

              • @[email protected]B
                2
                10 months ago

                Here’s your image!

                AI image generated with the prompt from the previous comment


                The AI model has revised your prompt: An illustration captioned ‘thanks, I hate it.’ featuring a comical cartoon character looking disdainfully at an object. The character is a tall, lanky individual with exaggerated facial features, styled in the manner of early 20th century comic strips. A mysterious object, ambiguously resembling a piece of modern art, stands in front of them, causing a clear reaction of confusion and dislike. The colors should be vivid and the overall vibe should maintain a sense of humor and sarcasm.

          • @Usernameblankface
            2
            1 year ago

            You did this? I saw some examples in the “show me your best hearse if it were a ___” post, and I’m flabbergasted at the computer-based witchcraft it’s pulling off over there.

            What image generator is it using? Is it using ChatGPT to expand the prompts?

            • Rikudou_Sage
              2
              1 year ago

              It’s using DALL-E; version 3 also sometimes expands the prompt (IMO it’s using GPT in the background to do so, but I haven’t checked). The revised prompt is only printed below the image if it has been changed. I’ve noticed that if the prompt is already long, it doesn’t change it, but I haven’t really confirmed anything. You can try it yourself: just tag the bot and it will respond with the image.

              It also falls back to DALL-E version 2 to lower the costs, and there’s a daily limit (per-user and global).
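A cost-driven fallback like the one described might be sketched as below; the per-image prices and daily budget are placeholders, not OpenAI's actual pricing, and the function name is invented:

```python
# Placeholder per-image costs (not real pricing): prefer the newer
# model until the daily budget can no longer cover it, then fall back.
COST_PER_IMAGE = {"dall-e-3": 0.040, "dall-e-2": 0.020}

def pick_model(spent_today, daily_budget):
    """Return which model to use given today's spend so far."""
    if spent_today + COST_PER_IMAGE["dall-e-3"] <= daily_budget:
        return "dall-e-3"
    if spent_today + COST_PER_IMAGE["dall-e-2"] <= daily_budget:
        return "dall-e-2"
    return None  # budget exhausted, refuse the request
```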

              • @Usernameblankface
                2
                1 year ago

                Very interesting. So all the requests from the bot get sent through one account? Seems like it would get expensive fast as people find out that this option exists.

                • Rikudou_Sage
                  3
                  1 year ago

                  Yeah, but there’s rate limiting in place that makes sure I won’t pay more than I’m comfortable with. I’m thinking of making it possible for users to supply their own API key.
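A per-user plus global daily cap of the kind mentioned could look roughly like this; the limits, class name, and reset-at-midnight behavior are all assumptions:

```python
# Toy daily limiter: counts reset when the calendar day changes.
from datetime import date

class DailyLimiter:
    def __init__(self, per_user=5, global_total=100):
        self.per_user = per_user
        self.global_total = global_total
        self._day = date.today()
        self._user_counts = {}
        self._global_count = 0

    def _roll_over(self):
        # Reset all counters on the first request of a new day.
        today = date.today()
        if today != self._day:
            self._day = today
            self._user_counts.clear()
            self._global_count = 0

    def allow(self, user):
        self._roll_over()
        if self._global_count >= self.global_total:
            return False
        if self._user_counts.get(user, 0) >= self.per_user:
            return False
        self._user_counts[user] = self._user_counts.get(user, 0) + 1
        self._global_count += 1
        return True

limiter = DailyLimiter(per_user=2, global_total=3)
allowed = [limiter.allow("alice"), limiter.allow("alice"),
           limiter.allow("alice"),  # denied: alice hit her cap
           limiter.allow("bob"),
           limiter.allow("bob")]    # denied: global cap reached
```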

      • @[email protected]
        2
        1 year ago

        What about the comment thread preceding the prompt? Could you include that too? Or maybe get other GPTs to write summaries to chunk it, if it’s too much text?
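The chunk-and-summarize idea could be sketched like this; `summarize` is a stub standing in for a model call, and every name here is hypothetical:

```python
def summarize(text, max_chars):
    # Placeholder for a model call; just truncates for illustration.
    return text[:max_chars]

def compress_thread(comments, max_chars=500, chunk_size=3):
    """Repeatedly summarize a comment thread in chunks until it fits."""
    text = "\n".join(comments)
    while len(text) > max_chars:
        # Group comments into chunks and summarize each one.
        chunks = [comments[i:i + chunk_size]
                  for i in range(0, len(comments), chunk_size)]
        comments = [summarize("\n".join(c), max_chars // len(chunks))
                    for c in chunks]
        text = "\n".join(comments)
    return text

compressed = compress_thread(["lorem ipsum " * 20] * 8, max_chars=120)
```

With a real model in place of the stub, each pass would replace groups of comments with shorter summaries until the whole thread fits the context budget.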