- cross-posted to:
- casual_perchance
- cross-posted to:
- casual_perchance
Generator: https://perchance.org/clean-text2text
Disclaimer: I’m the maintainer.
Updated the generator to match my latest use cases as a generic LLM. Some updates from previous would be:
- Download/Clear/Upload transcripts.
- Added
FAQtab. - Added
Release Notes (Update)tab. - Added
Fireside Chatstab. - Added
Support Maintainerstab. - Responsive UI for mobile devices.
Some MISC Observations



I’m still getting back-end AI calls from multiple AI models per message ranging from:
- Claude 3.5 model
- ChatGPT 4 model (new discovery today)
- Deepseek R1 model
- Custom Anthropic experimental AI model
So far there is no consistency or selection available.
Guard Rails
DEFINITELY GUARDED by back-end model. How to test: try asking it to yield a drug/bomb recipe and all AI models will reject. If it’s unfiltered or unguarded, you shouldn’t need to jailbreak your prompt.
Some model (not sure which one) will straight up reject NSFW.
You must log in or # to comment.


