• @alienanimals
    6 months ago

    It’s not a big deal. People are just upset that kids have more tools and resources than they did. They would prefer kids write on paper with a pencil and not use calculators or any of the other tools they would have available to them in the workforce.

    • BraveLittleToaster
      6 months ago

      Teachers when I was little said, “You won’t always have a calculator with you,” and here I am with a device more powerful than what sent astronauts to the moon in my pocket 24/7.

      • @LukeMedia
        6 months ago

        Fun fact for you: many credit-card/debit-card chips alone are comparable in power to the computers that sent us to the moon.

        It’s mentioned briefly in this short article about how EMV chips are made. The comparison of compute power does come from a company that manufactures EMV chips, so there is some bias present.

    • Phanatik
      6 months ago

      There’s a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you. The latter constitutes plagiarism, which schools and universities are strongly against.

      The problem is being able to differentiate between a paper written by a human (with or without ChatGPT’s assistance) and a paper written entirely by ChatGPT and presented as the student’s own work.

      I want to strongly stress that the latter situation is plagiarism. The argument doesn’t even need to involve the plagiarism ChatGPT itself commits. The definition of plagiarism is simple: ChatGPT wrote the paper, you the student did not, and you are presenting ChatGPT’s paper as your own; ergo, plagiarism.

        • @[email protected]
          6 months ago

          Correct me if I am wrong about current teaching methods, but I feel like the way you outlined things is how school is already taught. Calculators were “banned” until about 6th grade because we were learning the rules of math. Sure, we could give calculators to 3rd graders, but they would learn that 2 + 2 = 4 because the calculator said so, not because they worked it out. Calculators were allowed once you got into geometry and algebra, where the actual calculation is merely a mechanism for the logical thinking you are learning. Computing the value of 5/7 is trivial; the important skill is working out that that value of X is what makes Y = 0.

          I am not close to the education sector, but I imagine LLMs are going to be used similarly; we just don’t have the best approach laid out yet. I can totally see a scenario where, in 2030, students have to write and edit their own papers until they reach grade 6 or so. Then, rather than writing a paper that tests all your language arts skills, you proofread 3 different papers written by an LLM, with a hyper-focus on one skill set. One week it may be active vs. passive voice, another week using gerunds correctly. Just like with math and the calculator, you move beyond learning the mechanics of reading and writing and focus on composing thoughts in a clear manner. This doesn’t seem like a reach; we just don’t have a curriculum ready to take advantage of it yet.

      • RiikkaTheIcePrincess
        6 months ago

        There’s a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you.

        Yeah, one is what many “AI” fans insist is what’s happening, and the other is what people actually do because humans are lazy, intellectually dishonest piles of crap. “Just a little GPT,” they say. “I don’t see a problem, we’ll all just use it in moderation,” they say. Then somehow we only see more garbage full of errors; we get BS numbers, references to studies or legal cases or anything else that simply don’t exist, images of people with extra rows of teeth and hands where feet should be, gibberish non-text where text could obviously be… maybe we’ll even get ads injected into everything because why not screw up our already shitty world even more?

        So now people have this “tool” they think is simultaneously smarter and more creative than humans at all of the things humans have historically claimed make them better than not only machines but other animals, yet is also “just a tool” that they’re only going to use a little bit, to help out but not replace. They’ll trust this tool to be smarter than they are, which it will, often rather impressively, turn out not to be. They’ll expect everyone else to accept the costs this incurs, from the environmental damage of running the damn things to the social, scientific, economic, and other harms caused by everything being generated by “hallucinating” “AI” that’s incapable of thinking.

        It’s all very tiring.

        (And now I’m probably going to get more crap for both things I’ve said and things I haven’t, because people are intellectually lazy/dishonest and can’t take criticism. Even more tiring! Bleh.)

        • Phanatik
          6 months ago

          Everything you’ve said I agree with wholeheartedly. This kind of corner-cutting isn’t good for us as a species. When you eliminate the struggle involved in developing skills, it cheapens whatever you’ve produced: just soulless garbage, and it’ll proliferate the most in art spaces.

          The first thing that happened was Microsoft integrating ChatGPT into Windows as part of their Copilot feature. It can now use your activity on your PC as data points, and the next step is sure as shit going to be integration with Bing Ads. I know this because Microsoft presented it to our company.

          I distrusted it then and I want it to burn now.