Example generators made with this plugin:
- https://perchance.org/ai-character-design-example-simple#edit
- https://perchance.org/random-character-chat-example#edit
- https://perchance.org/ai-character-design-example#edit
- https://perchance.org/ai-chat-example#edit
- https://perchance.org/ai-text-plugin-render-example#edit
- https://perchance.org/ai-text-plugin-text-to-speech-example#edit
- https://perchance.org/ai-text-plugin-tester#edit
See the plugin page for more. There will probably be issues/bugs! Thank you in advance to the pioneers who test this and report bugs/issues in these first few days/weeks 🫡
(It was actually possible to discover this plugin a few days ago, but no one made it through all the clues lol ^^ some people did at least figure out the first step)
Just made ai-text-recipes and this template for testing.
I assume that if the AI is generating multiple paragraphs, those paragraphs are ‘chunks’? Also, can we just use the onChunk() function instead of render(), since both are applied on each chunk?
Nice! Thank you for playing around with it.
The chunks are basically words, or chunks of words, but they can be larger than that. E.g. the first chunk is your startWith text if you specified that, and then each subsequent chunk is generally a little piece of text - corresponding to the chunks that are being appended to the output element several times per second.
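If it helps to see what that chunk stream looks like, here’s a minimal sketch of an observer function you could hook up via onChunk. It only assumes the data.fullTextSoFar property that appears in the inline example further down; everything else is illustrative rather than documented plugin behaviour.

```javascript
// Illustrative only: logs each new piece of text as it streams in.
// Assumes onChunk receives an object with a fullTextSoFar property (as in the
// inline example later in this thread); the plugin's exact data shape may differ.
let previousText = "";
function logChunk(data) {
  const newPiece = data.fullTextSoFar.slice(previousText.length);
  previousText = data.fullTextSoFar;
  console.log("chunk:", JSON.stringify(newPiece));
  // With startWith set, the first chunk logged should be your startWith text.
}
```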
The render function is specifically for transforming the output into some different form. Whatever you return from that function is what gets displayed - like in this example where we ask the AI for asterisks around actions (since that would be easy for it to generate) but then “render” that text so that the asterisked parts are italicized via HTML. Getting the AI itself to generate HTML is okay, but it has been trained mostly on text, rather than HTML, so it’s probably better to get it to use a “syntax” that it’s more accustomed to, and then we handle the transformation to HTML ourselves with render.
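As a rough sketch of that asterisks-to-italics idea (the data.fullTextSoFar property name is an assumption borrowed from the onChunk example below, not the plugin’s exact contract; the key point is that whatever render returns is what gets displayed):

```javascript
// Hypothetical render function: turns *asterisked actions* into italics.
// Assumes render receives the generated text via data.fullTextSoFar and that
// the returned string is what gets shown in the output element.
function renderActions(data) {
  return data.fullTextSoFar.replace(/\*(.+?)\*/g, "<i>$1</i>");
}
```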
onChunk doesn’t have any effect on the display of the output unless you specifically write some code to do that. It just allows you to run whatever custom code you want every time a new chunk is received. But yeah, you can definitely just use onChunk if you want to manage the “rendering” yourself (e.g. onChunk: data => outputEl.innerHTML = data.fullTextSoFar.replace(...)), or if you don’t want to change what is displayed, but instead want to do something else for every chunk.
Thanks for the question! I’ve just updated the plugin page with some details on most of the options that are currently available.
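For instance, a hypothetical onChunk handler that leaves the displayed text alone and just reacts to each chunk - the element ids here are made up for illustration, and data.fullTextSoFar is assumed from the inline example above rather than taken from the plugin docs:

```javascript
// Illustrative only: respond to each chunk without changing what's displayed.
// Assumes the page has a scrollable container with id "chat-box" and a counter
// element with id "word-count".
function onNewChunk(data) {
  const words = data.fullTextSoFar.trim().split(/\s+/).filter(Boolean).length;
  document.getElementById("word-count").textContent = words + " words";
  const box = document.getElementById("chat-box");
  box.scrollTop = box.scrollHeight; // keep the latest text in view
}
```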
The list on the plugin page is really helpful! Thanks again for the explanation!
Found the 🦙 but got stuck there! Thanks for this!
Can’t wait to see what you create with this one! Your text-to-image-plugin creations (esp. realistic portraits) are amazing. Let me know if there are any extra prompt options that would make certain common use cases easier (akin to hideStartWith - which I guessed would be something people asked for, but it was just a guess).
Thanks! Just started testing it, and I’m running into some hiccups with network failures.
Woops! Thanks. Should be fixed now. Please keep me updated with any other issues you run into.