Generative AI, and ChatGPT in particular, have been catnip to the tech press, the mainstream media, and the conversations of professionals in nearly every…
Oof. I’ve tried it with a few PowerShell things, and it has recommended cmdlets that don’t exist, parameters that don’t exist, or the wrong usage of cmdlets.
It’s really limited to basic, junior-level programming assistance, and even then it’s not 100% reliable. Any time I’ve tried asking it something more advanced, it takes a lot of coaxing to get reasonable code out of it. But it’s sometimes helpful for boilerplating basic code.
Have you tried 3.5 or 4?
I haven’t had many issues in 4. Occasionally it does what you’re saying and I just say “bro, that doesn’t exist” and it’s like “oh, my bad, here you go.” And gives me something that works.
Just yesterday I had 4 make up a Jinja filter that didn’t exist. I told it that and it returned something new that also didn’t work but had the same basic form. 4 sucks now for anything that I’d like to be accurate.
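The failure mode described above (a plausible-looking but unregistered filter) can be caught before rendering. Real Jinja keeps its filters in `Environment.filters` and raises `TemplateAssertionError` for unknown names at compile time; the sketch below is a toy stand-in that doesn't require jinja2 installed, and `KNOWN_FILTERS` is an illustrative subset, not the real built-in list.

```python
# Toy model of Jinja's filter lookup. Real Jinja stores filters in
# Environment.filters; KNOWN_FILTERS here is only an illustrative subset.
KNOWN_FILTERS = {"upper", "lower", "default", "join", "replace"}

def validate_filters(filter_names):
    """Return the suggested filter names that are not registered."""
    return [name for name in filter_names if name not in KNOWN_FILTERS]

# A model-suggested filter like "titlecase_words" would be flagged:
print(validate_filters(["upper", "titlecase_words"]))  # → ['titlecase_words']
```

With real jinja2, the same check is `name in Environment().filters`, run against whatever filter names the model hands back before trusting the template.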
Both models have definitely decreased in quality over time.
What kind of prompts are you giving?
I find results can be improved quite easily with better prompt engineering.
It makes things up whole cloth and it’s the user’s fault for not prompting it correctly? Come on.
It’s not a person. It’s a tool.
I don’t remember what version. I just gave up trying.
Well, don’t expect it to just give magical results without learning prompt engineering and understanding the tools you’re working with.
Set-MailboxAddressBook doesn’t exist.
Set-ADAttribute doesn’t exist.
Asking for a simple command and expecting to receive something that actually exists is magical?
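One cheap defense against hallucinated command names like the two above: verify the name actually resolves before running anything. Inside PowerShell, `Get-Command <name>` is the real check for cmdlets. For ordinary executables on PATH, a minimal Python sketch (function name `missing_commands` is my own, not a library API):

```python
import shutil

def missing_commands(suggested):
    """Return the suggested command names that do not resolve on PATH.

    shutil.which returns None when a name is not an executable on PATH,
    so anything a model invented will show up in the result.
    """
    return [name for name in suggested if shutil.which(name) is None]
```

This only covers PATH executables; cmdlets like `Set-MailboxAddressBook` live in PowerShell modules, so they need the `Get-Command` check instead.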
I used GPT-4 for Terraform and it was kind of all over the place in terms of fully deprecated methods. It felt like a nice jumping-off point, but honestly it probably would’ve been less work to just write it up from the docs in the first place.
I can definitely see how it could help someone fumble through it and come up with something working without knowing what to look for though.
Was also having weird issues with it truncating outputs and needing to split them, but even telling it to split would cause it to kind of stall.