Right? AI chatbots are so bad with hallucinations. As a programmer I can usually spot the inaccurate code a chatbot is spewing, but these kids won’t be able to spot the inaccuracy unless there’s very strict quality control…which would require human teachers anyway.
And what would the damage be if they have errors in their code? They could learn to find those based on tests.
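A minimal sketch of that idea, with a hypothetical example: even a beginner-level test can flag a subtle bug in generated code. The function names and the off-by-one bug here are made up for illustration.

```python
# Hypothetical chatbot output: an average function with an off-by-one bug.
def buggy_average(values):
    return sum(values) / (len(values) - 1)  # bug: divides by n-1, not n

# What the learner writes after the test flags the problem.
def average(values):
    return sum(values) / len(values)  # correct: divides by n

# A simple check a student could write by hand:
sample = [2, 4, 6]
assert average(sample) == 4        # correct version passes
assert buggy_average(sample) == 6  # buggy version gives the wrong answer
```

The point is just that a known input with a known expected output is enough to catch this class of error, no teacher review of every line required.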
GPT is frigging amazing with coding. Slap down some bullet points about what to do and it shoots out 400 lines of code that do exactly that. Seriously, it is not a god, but it's really far from as bad as people here make it seem.