A Massachusetts couple claims that their son’s high school attempted to derail his future by giving him detention and a bad grade on an assignment he wrote using generative AI.

An old and powerful force has entered the fraught debate over generative AI in schools: litigious parents angry that their child may not be accepted into a prestigious university.

In what appears to be the first case of its kind, at least in Massachusetts, a couple has sued their local school district after it disciplined their son for using generative AI tools on a history project. Dale and Jennifer Harris allege that the Hingham High School student handbook did not explicitly prohibit the use of AI to complete assignments and that the punishment visited upon their son for using an AI tool—he received Saturday detention and a grade of 65 out of 100 on the assignment—has harmed his chances of getting into Stanford University and other elite schools.

Yeah, I’m 100% with the school on this one.

  • @Knock_Knock_Lemmy_In · -5 points · 2 months ago

    If I used a calculator on a maths test I should only be penalised if the rules stated no calculators.

    • @GreenKnight23 · 6 points · edited · 2 months ago

      Oxford defines plagiarism as:

      Presenting work or ideas from another source as your own, with or without consent of the original author, by incorporating it into your work without full acknowledgement.

      I think that covers 100% of your argument here.

      LLMs can’t provide references to their source materials without opening the business behind them to litigation. This means the LLM can’t request consent.

      The child, in this case, cannot get consent from the original authors whose content trained the LLM, cannot get consent from the LLM, and incorporated the result of LLM plagiarism into their work while attempting to pass it off as their own.

      The parents are entitled and enabling pricks and don’t have legal ground to stand on.

      • @Knock_Knock_Lemmy_In · 0 points · 2 months ago

        LLMs are certainly trained without consent, but they exist to spot common patterns. An LLM is only likely to plagiarise a given text if that text is similar to lots of other text.

        In fact, the academic practice of references and exact quotes has actually increased the tendency of statistical models to “plagiarise”.

        LLMs will continue to be a useful academic tool. We just have to learn how best to incorporate them into our testing.

        the parents are entitled and enabling pricks and don’t have legal ground to stand on.

        After reading that the exam rules basically said not to use ChatGPT or similar, I completely agree.

    • @[email protected] · 5 points · 2 months ago

      Is AI more like a calculator, or more like copy/pasting Wikipedia articles without attribution?

      • @Knock_Knock_Lemmy_In · 0 points · 2 months ago

        It’s not really a calculator because it gives different answers each time. Newer models can give attribution (e.g. Bing Copilot).

        My opinion is that LLMs are not going to go away. Testing needs to adapt to focus on the human element. Marks are no longer lost for bad handwriting.

        • @[email protected] · 3 points · 2 months ago

          Just like when I was a kid using Wikipedia for research when it wasn’t acceptable, the expectation should be that you use it to understand the material, then follow it to the source material and read that, or at least find a relevant quote that lets you restate what Wikipedia said in your own words with attribution.

          Copying wiki, or copying the output of an LLM, are both similarly academically fraudulent. LLMs are just more likely to also be wrong.

          • @Knock_Knock_Lemmy_In · 1 point · 2 months ago

            Mostly agreed. I think the “in your own words” part will be debated strongly over the next few years. Will proof of writing your own prompt be sufficient?

      • @Knock_Knock_Lemmy_In · -1 point · 2 months ago

        No, you’d be penalising yourself (unless you got the Wolfram Alpha plugin working).

        Professors should be setting exams that ChatGPT can’t hope to solve.

    • @Red_October · 3 points · 2 months ago

      And what if you had an app on your phone that let you just take a picture of the question and write out the answer it gave you? A calculator still requires that you know what to input, and at the level of math where a calculator really is just easy mode, the rules absolutely would prohibit one.

      • @Knock_Knock_Lemmy_In · -1 point · 2 months ago

        And what if you had an app on your phone that let you just take a picture of the question, and write out the answer it gave you?

        At college level, the question setter should ensure they are testing something where this is not possible.