We have spotted quite a few students using generative AI in their essays this summer and applied standard academic misconduct proceedings, though in most cases the work was so bad they would’ve failed anyway.

Today I learned of one whose use was sufficiently extensive that they will fail their degree.

I am wondering if this is the first time a student has failed a whole degree for using AI?[…]

  • @SaucyGoodness
    6 points · 1 year ago

    Depends on the university policy. To me, there's no difference between certain kinds of AI use and plagiarism. And plagiarism ought to result in expulsion.

    As an instructor, however, it is increasingly difficult to be 100% certain someone is using an LLM. The easy tells are usually shorter paragraphs and a final hedging paragraph (the one paragraph OpenAI included so they won't be liable if shit goes south), but there is still no way to be sure.

    So instead, I just have to begrudgingly nod along as my engineering students dump awful, boring AI texts on me.

    • @twack
      5 points · 1 year ago

      As a current student attending an online class, it's annoying to me as well. Sometimes it's quite obvious when another student is using it, especially in the online discussions. The AI posts include too many detailed concepts for someone who clearly doesn't know what they're talking about, and it comes off as weird. There's also usually a list of nuanced answers that either shouldn't be there or need further explanation.

      However, my entire course consists of weekly discussion boards, 7 unrelated papers, and a 15-20 page final paper. These AI models might eventually force some of the crappier online teachers to actually teach instead of just proctoring a time-wasting exercise, and that would be a good thing. Education should not be exclusively about putting in an adequate amount of time reading and writing papers, and sometimes it seems like that is all it is now.

      • @namelessdread
        2 points · 1 year ago

        You make an excellent point about the impact of ChatGPT and others on written assignments. I’ve been in the online higher ed field designing online courses for almost 10 years now. I also teach online.

        Having students regurgitate the same information over and over in the form of discussions and papers (or other assignments with names like blog, journal, etc.) is starting to become problematic. Students generally don't like this form of assessment, and as noted in this thread, it's very difficult to prove that a student has used ChatGPT.

        The reason this is really exciting for someone like me is that it will drive innovation, whether universities want it or not. We can use other assessment tools that let students test their knowledge and learn by doing. Will all writing assignments go away? Certainly not. But I do think there's a lot of opportunity in online higher education, and I really do think these changes in technology will force new learning modalities to be developed.

        ~ end nerdy online education rant ~

  • @grabyourmotherskeys
    2 points · 1 year ago

    I interviewed someone recently who was obviously using ChatGPT. They did OK at first, but as the questions shifted to home in on various details, the candidate stopped responding, the pauses got longer, and so on. They also found numerous excuses to leave the camera off, etc. (it was a remote interview).