As for the prospect that AI will enable business users to do more with fewer humans in the office or on the factory floor, the technology generates such frequent errors that users may need to add workers just to double-check the bots’ output.
And here we were worrying about being out of work…
Shitty idea: fire the workers anyway and gamify the product so the customer does quality control while earning fake money. If we’re feeling generous we may even award NFT monkeys to the harder working customers.
It’s literally a word slot machine. A spicy autocorrect.
I’ve found it has three great uses:
As a fun “yes man” fiction helper when you can’t get the wording right in a passage.
Interactive choose your own adventure porn literature.
As a teacher, an easy way to spin up quick questions (not answers!) for quizzes and stuff.
Other than that, it’s trash. I tried a free demo of Copilot through work, and it tried to “organize” my PowerPoints. Depending on the topic, it either did nothing or incorrectly “summarized” a section and fucked up the slideshow.
I’ve found some use for it in programming. For example, I can explain what I need (e.g. a loop that reads every line in a file) and it will spit something out that I can try.
The fact that I’m verifying the answers means I catch it making mistakes, but it’s still faster than me doing the task manually.
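To illustrate the kind of thing I mean, here’s a sketch in Python of the “loop that reads every line in a file” example (the filename and contents are made up) — trivial enough to verify by eye, which is exactly why it’s a good fit:

```python
# Make a small file so the example is self-contained.
with open("example.txt", "w") as f:
    f.write("first line\nsecond line\n")

# The actual ask: read every line in a file.
lines = []
with open("example.txt") as f:
    for line in f:                      # iterates the file line by line
        lines.append(line.rstrip("\n")) # drop the trailing newline

print(lines)  # ['first line', 'second line']
```

Anything much bigger than this and checking the bot’s output starts to cost as much as writing it yourself.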
But outside of this it’s not been terribly useful for me. My job has an AI bot trained on our documentation but we don’t really trust it and have to verify its answers anyway.
It’s been a major assist with programming for me too. I’ve asked it for some pretty complicated algorithms and it gets them mostly right out of the box.
Interactive choose your own adventure porn literature.
I tried generating fiction with a couple of the big-name LLMs. They’re not very good because any time there’s any non-trivial conflict, the systems will just refuse to continue. However, they’re very good at spitting out vapid sonnets about non-controversial topics.
You might be able to generate good literature if you created your own system, but then you’d have to explicitly use data stolen from people you admire.
NovelAI has no guardrails. BigName LLMs are trash because they need to be investor-safe.
That said, you do generally need to take the lead, since it’s not gonna just spin up a story entirely on its own. It’s more of a “yeah, and~” than the big instruct models.
And here we were worrying about being out of work…
Tediously fixing botslop all day is another kind of hellscape, but at least we won’t be homeless.
Pretty sure that’s already a reality in Kenya - https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/
More tedious work with worse pay \o/
But outside of this it’s not been terribly useful for me. My job has an AI bot trained on our documentation but we don’t really trust it and have to verify its answers anyway.
My work has one where you can upload a document and ask questions about it. It’s hilariously bad at it. Example:
User: How many Xs does Foo have?
AI: The document doesn’t have any information about how many Xs Foo has.
User: What information does the document have on Foo?
AI: [OK summary which includes “Foo has 12 Xs”]
Everyone I know who has played with it pretty quickly decided not to use it for anything.
I mean… Google’s model can’t spell Strawberry properly.
NovelAI has no guardrails. BigName LLMs are trash because they need to be investor-safe.
Thanks for the tip, I’ll check it out!