tl;dr: No

Even simple problems are too complicated for it.

  • @[email protected]
    link
    fedilink
    English
    179 months ago

    An LLM works by predicting words; it's nowhere near understanding anything. That it works even remotely well for simple programming problems is impressive, I think.

    • @[email protected]
      link
      fedilink
      English
      09 months ago

      Yeah. I have a buddy at work who thinks it's the greatest thing ever, but I have little faith that the info is the best every time. He uses it like a search engine with a bit more power, and it works alright tbh.

      • @abhibeckert · 5 points · edited · 9 months ago

        You don’t need to “have faith”. Just test the code and find out if it works.

        For example, earlier today I asked ChatGPT to write some JavaScript to make a circle orbit around another circle, calculating the exact position it should be at for a given radius/speed/time. Easy enough to verify that it was working.
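
        The core of that is just basic trigonometry. A minimal sketch of the kind of function involved (the names and parameters here are my own illustration, not the exact code ChatGPT produced):

        ```javascript
        // Position of a body orbiting a centre point (cx, cy) at time t (seconds).
        // radius is in pixels, speed is angular speed in radians per second.
        function orbitPosition(cx, cy, radius, speed, t) {
          const angle = speed * t;
          return {
            x: cx + radius * Math.cos(angle),
            y: cy + radius * Math.sin(angle),
          };
        }
        ```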

        Then I asked it to draw a 2D image of the Earth to put on that circle. I know what our planet looks like, so that was easy to check. I did need to ask several times with different wording to get the style I was looking for… but it was a hell of a lot easier than drawing one myself.

        Then the really tricky part… I asked it how to make a CSS inner shadow that updates in real time as the Earth rotates around the Sun. That would've been really difficult for me to figure out on my own, since geometry isn't my strong point and neither is CSS.
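
        The idea, roughly, is to point an inset box-shadow at the side of the planet facing away from the Sun and update the offsets every frame. A sketch of one way to do it (the element, shadow depth, and colour are assumptions on my part, not the commenter's actual code):

        ```javascript
        // Keep the dark side of a circular planet element facing away from the Sun.
        // angle is the planet's current orbital angle around the Sun (radians).
        function updateShadow(planetEl, angle, depth = 10) {
          // An inset shadow shows up on the side opposite its offset, so point the
          // offset toward the Sun to darken the outward-facing (night) side.
          const dx = -depth * Math.cos(angle);
          const dy = -depth * Math.sin(angle);
          planetEl.style.boxShadow = `inset ${dx}px ${dy}px ${depth * 2}px rgba(0, 0, 0, 0.6)`;
        }
        ```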

        Repeated that for every other planet and moon in our solar system, added some asteroid belts… and I got a pretty sweet representation of our solar system, roughly (though not exactly) to scale and fully animated, in a couple of hours. It would have taken a week if I'd had to use Stack Overflow.
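
        Tying those pieces together is just an animation loop. A rough sketch reusing the two helpers above (the element IDs, sizes, and speeds are made up for illustration):

        ```javascript
        // Advance every body each frame and reposition it around the Sun.
        const sun = { x: 400, y: 300 };
        const bodies = [
          { el: document.getElementById('earth'), radius: 150, speed: 1.0 },
          { el: document.getElementById('mars'), radius: 230, speed: 0.53 },
        ];

        function frame(ms) {
          const t = ms / 1000; // elapsed time in seconds
          for (const body of bodies) {
            const { x, y } = orbitPosition(sun.x, sun.y, body.radius, body.speed, t);
            body.el.style.transform = `translate(${x}px, ${y}px)`;
            updateShadow(body.el, body.speed * t); // same angle orbitPosition computes
          }
          requestAnimationFrame(frame);
        }
        requestAnimationFrame(frame);
        ```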