I’ve found some use in it when doing programming. For example, I can explain what I need - e.g. a loop that reads every line in a file - and it will spit something out that I can try.
The fact that I’m verifying the answers means I catch it making mistakes, but it’s still faster than me doing the task manually.
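To give a sense of the kind of thing I mean, a "loop that reads every line in a file" request usually comes back as something like the sketch below. The file name `sample.txt` is just a made-up example, and I write a small file first so the snippet stands on its own:

```python
from pathlib import Path

# Create a small sample file so the snippet is self-contained.
# "sample.txt" is a hypothetical name used for illustration.
Path("sample.txt").write_text("first line\nsecond line\n")

# The kind of loop an assistant typically spits out:
# iterating over the file object yields one line at a time.
lines = []
with open("sample.txt") as f:
    for line in f:
        lines.append(line.rstrip("\n"))

print(lines)
```

Even for something this simple I still run it and eyeball the output before trusting it.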
But outside of this it’s not been terribly useful for me. My job has an AI bot trained on our documentation but we don’t really trust it and have to verify its answers anyway.
It’s been a major assist with programming for me too. I’ve asked it for some pretty complicated algorithms and it gets them mostly right out of the box.
My work has one where you can upload a document and ask questions about it. It’s hilariously bad at it. Example:
User: How many Xs does Foo have?
AI: The document doesn’t have any information about how many Xs Foo has.
User: What information does the document have on Foo?
AI: [OK summary which includes “Foo has 12 Xs”]
Everyone I know who has played with it pretty quickly decided not to use it for anything.
I mean… Google’s model can’t spell Strawberry properly.