- cross-posted to:
- technology
Remember that one teacher who made going to school fun and inspired you to pursue your passions? Students at a new charter school in Arizona won’t, because they don’t get to have teachers. Instead, the two hours of academic instruction they receive each day—yes, just two hours—will be directed entirely by AI.
By a 4-3 margin, the Arizona State Board for Charter Schools on Monday approved an application from Unbound Academy to open a fully online school serving grades four through eight. Unbound already operates a private school that uses its AI-dependent “2hr Learning” model in Texas and is currently applying to open similar schools in Arkansas and Utah.
Under the 2hr Learning model, students spend just two hours a day using personalized learning programs from companies like IXL and Khan Academy. “As students work through lessons on subjects like math, reading, and science, the AI system will analyze their responses, time spent on tasks, and even emotional cues to optimize the difficulty and presentation of content,” according to Unbound’s charter school application in Arizona. “This ensures that each student is consistently challenged at their optimal level, preventing boredom or frustration.”
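For what it’s worth, the adaptive behavior described there is a fairly common pattern: track correctness and time on task, then nudge difficulty up or down. Below is a minimal sketch of that idea; every name in it is hypothetical, and none of it is Unbound’s, IXL’s, or Khan Academy’s actual code.

```python
# Hypothetical sketch of an adaptive-difficulty loop like the one the
# application describes: adjust lesson difficulty based on correctness
# and time spent on each task. Not any real product's implementation.

from dataclasses import dataclass


@dataclass
class StudentState:
    difficulty: int = 3  # current level, from 1 (easiest) to 10 (hardest)


def update_difficulty(state: StudentState, correct: bool, seconds_spent: float) -> StudentState:
    """Raise the level after fast, correct answers; lower it when the
    student is wrong or clearly struggling (taking a long time)."""
    if correct and seconds_spent < 30:
        state.difficulty = min(10, state.difficulty + 1)
    elif not correct or seconds_spent > 120:
        state.difficulty = max(1, state.difficulty - 1)
    return state


# Example: a quick correct answer nudges the level up from 3 to 4.
state = update_difficulty(StudentState(), correct=True, seconds_spent=12)
print(state.difficulty)  # 4
```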
This sounds absolutely terrible. So everyone gets their own separate education that no one is vetting? And what you learn could be totally different from what the next person learns.
We’re fucked if this is how kids are being taught going forward.
preventing boredom or frustration
imagine having never experienced these things and then entering the workforce. This is a preventing forest fires situation and I’m here for it.
Do you think the best schools in the country will go this route? If not, why?
No. While the best schools in the country have done a lot to incorporate non-traditional educational tools like Khan Academy, have changed the class-time structure, have moved to more “hands on” methods of communicating ideas, and have been moving away from a dependence on rote cram/purge cycles, they will not be scrapping downtime or stop making kids do things they think are boring. Instead, they have more per-capita resources to coach kids on dealing with frustration, scheduling, boredom, and doing things they don’t like because they have to. They actually have people who will work 1 on 1 with kids on this, or their parents will hire private tutors to work with them on that and on subjects they struggle with. But they know that cutting it out isn’t the solution; the solution is to specifically teach the kids how to deal with it.
They actually have people who will work 1 on 1 with kids on this, or their parents will hire private tutors to work with them on that and on subjects they struggle with. But they know that cutting it out isn’t the solution; the solution is to specifically teach the kids how to deal with it.
I agree, which is what we should be doing with all schools. Training and paying our teachers more while lowering class sizes is the way to go, not replacing teachers with a computer. Supplementing with AI? Yeah, sure. Replacing? Fuck no.
“Dad, I got sent to the principal’s office for breaking my teacher!”
wtf?
This will end well
They still have a curriculum? I wonder how long it will take for the GOP to abandon the concept of school for all altogether.
Didn’t they already do that decades ago?
I have my doubts about this, but it’s an interesting experiment and charter schools are great for that.
Also, the kids aren’t just ignored for the rest of the school day. They spend most of their time being taught by humans.
Spending less time on traditional curriculum frees up the rest of students’ days for life-skill workshops that cover “financial literacy, public speaking, goal setting, entrepreneurship, critical thinking, and creative problem-solving,” according to the Arizona application.
Teachers are replaced by “guides” who lead those workshops.
Edit: One interesting possibility is that simply teaching young kids to interact with computers will in itself be beneficial for them. I introduced my friend’s second grader to Minecraft and he learned a lot of very useful skills because of that. At the beginning he had to be taught to use a mouse, but he got the hang of it quickly and soon he wasn’t just playing the game. He was reading the wiki, watching tutorials on YouTube, etc. That’s learning how to learn, which is arguably more important than learning anything specific.
(Now he’s a 5th grader who wants a 3D printer for Christmas and I suspect that that may somehow be related to Minecraft too. He’s probably a little too young for the printer but I suppose it’s better to start early than to start late. One of my adult friends started being taught how to program when he was nine and he has gone very far, although I’m sure that simply being the sort of person who is capable of learning to program at nine played a huge role in that.)
This charter school’s software is probably not as interesting as Minecraft yet, but that might be the direction where things are headed. A personal tutor that’s infinitely patient opens up interesting possibilities.
You are way too optimistic about this. Have you ever used a spellchecker? This is a terrible idea.
What’s wrong with spellcheckers? The only problem I have had with them is that they’re triggered by technical terms, but that’s just a minor inconvenience.
(Thinking I have the spellchecker on when I don’t and therefore leaving mistakes in can also be a problem, but it’s not strictly the spellchecker’s fault.)
Also, before anyone asks, I do find ChatGPT and similar software quite useful.
The reference to spellchecking was because at the core this is how (very simplistically) LLMs work as well: training on data and predicting the probability of the next word. For some purposes it works great most of the time; for others it’s like using a screwdriver to beat in a nail. It might work sometimes, to some degree, but that’s not what it’s for.
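To make the “probability of the next word” point concrete, here’s a toy illustration that just counts which word follows which in a tiny made-up corpus; real LLMs learn these probabilities with neural networks over tokens, but the objective is the same kind of next-word prediction.

```python
# Toy next-word predictor: count word-to-word transitions in a tiny corpus
# and pick the most probable continuation. Purely illustrative; real LLMs
# learn these probabilities with neural networks over tokens.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1


def predict_next(word: str) -> str:
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"


print(predict_next("the"))  # "cat" -- it followed "the" twice, "mat" once
```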
My opinion is that LLMs are being forced to be solutions in all sorts of places when we’re still trying to figure out their best application. Doing this in a grade-school academic setting is probably not the best idea; such experimental things should filter down from higher education once they work well. This is about money and someone trying to find a simple answer instead of fixing the problem correctly.
The reference to spellchecking was because at the core this is how (very simplistically) LLMs work as well.
That’s not wrong, but it’s pedantic, contrary to popular usage, and irrelevant to the discussion of how LLMs might affect education. (I’m not saying you are pedantic, because you aren’t the one who originally brought it up.) The whole discussion of spellcheckers is irrelevant here.
LLMs are being forced to be solutions in all sorts of places when we’re still trying to figure out their best application
Education doesn’t have a first-mover advantage, but people are excited about AI and I don’t blame them. The risks of this particular attempt are quite low, so while I don’t think I would send a kid to this school myself, I don’t think parents who do are wrong.
filter down from higher education once they work well
I think this technology will be useful in elementary schools before it will be useful in higher education, because college students are more capable of learning without supervision.
Your last point is really the key one, isn’t it? Is an LLM reliable enough to be put in charge of supervising a child’s path of learning? I’ve messed around with local LLMs enough to realize that I’d better double-check everything they give me, since their goal is to tell me what I want to hear, not what is factual.
Rereading that, it occurred to me that it was not very different from the worst of the teachers I had long ago… so take that as a warning, I guess.
Ah, I was thinking of autocorrect on PCs, which generally won’t change what you wrote without your input. I swipe to type on my phone and the phone does often interpret my gestures as a word other than the one that I intended, but my gestures are so imprecise that I think the phone does a remarkably good job even if I do have to proofread afterwards.
I expect that the phones will do better once they have AI capable of noticing things the user clearly didn’t intend to write.
Spellcheck and autocorrect are AI, lol.
I expect that the phones will do better once they have AI capable of noticing things the user clearly didn’t intend to write.
I suppose they’re AI in a very general sense, according to which even simple, deterministic programs, such as one that plays tic-tac-toe, are AI, but they’re not generative models like the software people usually have in mind when they say “AI.”
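For contrast, a classic spellchecker really can be a small deterministic program: compare each word against a dictionary and suggest the closest entries. A toy sketch, with a tiny made-up dictionary and not any real product’s code:

```python
# Toy deterministic spellchecker: flag words missing from a dictionary and
# suggest the closest dictionary entries by string similarity.
# No training, no learning, no text generation involved.

from difflib import get_close_matches

DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}


def check(text: str) -> list[tuple[str, list[str]]]:
    """Return (misspelled_word, suggestions) pairs for unknown words."""
    issues = []
    for word in text.lower().split():
        if word not in DICTIONARY:
            issues.append((word, get_close_matches(word, DICTIONARY, n=3)))
    return issues


print(check("the quik brown fox"))  # [('quik', ['quick'])]
```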
Ummm, dude where are you getting this info? It’s the OG of AI. It learns from your mistakes and teaches the model.
Edit: Do you know what AI is? I suggest looking at what it really does.