Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an “edu” version of a model like ChatGPT or Gemini), usually in the name of preparing kids for the future, despite the fact that no consensus exists on what preparing them for an AI future actually means.

Some educators believe AI is not so different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the “robots in front of the kids so they can learn to dance with them” (paraphrasing Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology whose own experts and developers (including Sam Altman a couple of years ago) warn of the need for serious regulation to avoid potentially catastrophic consequences probably isn’t something we should take lightly. In very important ways, AI isn’t comparable to the technologies that came before it.

The reasoning we’re hearing from educators in favor of AI adoption doesn’t offer very solid arguments for rushing it broadly into virtually every classroom rather than, say, offering optional college courses in AI for those interested. It also doesn’t sound like the sort of rigorous academic vetting many of us would expect of the institutions tasked with the important responsibility of educating our kids.

ChatGPT was released roughly three years ago. Anyone who uses AI regularly recognizes that its actual usefulness is highly subjective. And as much as it might feel like it’s been around for a long time, three years is hardly enough time to get a firm grasp on what something this complex actually means for society or education. It’s a real stretch to say it’s had enough time to establish its value as an educational tool, even if we had clear and consistent standards for its use, which we don’t. We’re still scrambling and debating over how we should be using it at all. We’re still in the AI wild west: untamed and largely lawless.

The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?

The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they’re young, potentially cultivating brand loyalty early. They collect a wealth of highly valuable data on those children. They may even get to experiment on them, as they have been caught doing before. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.

While some may want to believe these companies are doing this as a public service, their track records reveal a more consistent pattern: actions plainly focused on market share, commodification, and the bottom line.

Meanwhile, educators are contending with documented problems in their classrooms as many children seem to be performing worse and learning less.

The way people of all ages use AI has been shown to encourage “offloading” thinking onto it — which is not far from the opposite of learning. Even before AI, test scores and other measures of student performance were plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that, in their current form, may be more of an impediment to learning than an aid.

This approach could leave children even less prepared for the unique and accelerating challenges our world presents, challenges that will demand the very critical-thinking skills currently being eroded, in adults and children alike, by the technologies being pushed as learning tools.

This is one of the many crazy situations happening right now that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given personal experience and what I’ve heard from others. One quick look at the state of society today will tell you that even we adults are increasingly unable to determine what’s real anymore, in large part because of how our technologies influence our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.

I am not personally against AI; I sometimes use open-source models, and I believe there is a place for it if done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we’re hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, as we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?

  • jpreston2005@lemmy.world · 2 months ago

    I gotta be honest. Whenever I find out that someone uses any of these LLMs or AI chatbots, hell, even Alexa or Siri, my respect for them instantly plummets. What these things are doing to our minds is akin to how your diet and cooking habits change once you start using DoorDash extensively.

    I say this with full understanding that I’m coming off as just some luddite, but I don’t care. A tool is only as useful as it improves your life, and offloading critical thinking does not improve your life. It actively harms your brain’s higher functions, making you a much easier target for propaganda and conspiratorial thinking. Letting children use this is exponentially worse than letting them use social media, and we all know how devastating the effects of that are… This would be catastrophically worse.

    But hey, good thing we dismantled the Department of Education! Wouldn’t want kids to be educated! Just make sure they know how to write a good AI prompt, because that will be so fucking useful.

    • Modern_medicine_isnt@lemmy.world · 2 months ago

      That sounds like a form of prejudice. I mean, even Siri and Alexa? I don’t use them, for different reasons… but a lot of people use them as voice-activated controls for lights, music, and such. I can’t see how they are different from the Clapper. As for the LLMs… they don’t do any critical thinking, so no one is offloading their critical thinking to them. If anything, using them requires more critical thinking, because everyone who has ever used them knows how often they are flat out wrong.

      • jpreston2005@lemmy.world · 2 months ago

        Voice-activated light switches that constantly spy on you, harvesting your data for third parties?

        Claiming that using AI requires more critical thinking than not using it is a wild take, bro. Gonna have to disagree hard with all of what you said.

        • Modern_medicine_isnt@lemmy.world · 2 months ago

          You hit on why I don’t use them. But some people don’t care about that, for a variety of reasons. Doesn’t make them less than.

          Anyone who tries to use AI without applying critical thinking fails at their task, because AI is just wrong so often. So they either stop using it, or they apply critical thinking to figure out when the results are usable. But we don’t have to agree on that.

          • jpreston2005@lemmy.world · 2 months ago

            I don’t think using an inaccurate tool gives you extra insight into anything. If I asked you to measure the objects around your house and gave you a tape measure that wasn’t marked correctly, would that make you better at measuring things? We learn by asking questions and getting answers. If the answers given are wrong, then you haven’t learned anything. It, in fact, makes you dumber.

            People who rely on AI are dumber, because using the tool makes them dumber. QED?

            • Modern_medicine_isnt@lemmy.world · 2 months ago

              How about this: I think it is pretty well known that pilots and astronauts are trained on simulations where some of the information they get from “tools” or gauges is wrong. On the surface, this just simulates failures. But the larger purpose is to improve critical thinking. They are trained to take each piece of information in context and, if it doesn’t fit, question it. Sound familiar?

              AI spits out lots of information with every response. Much of it will be accurate. But sometimes there will be a faulty basis that causes one or more parts of the information to be wrong. The wrongness almost always follows a pattern, though. In context, the information is usually obviously wrong. And if you learn to spot the faulty basis, you can even suss out which information is still good. Or you can just tell it where it went wrong, and it will often come back with the correct answer.

              Talking to people isn’t all that different. There is a whole subreddit for the confidently wrong. But spotting when a person is wrong is often harder, because the depth of their faulty basis can be so much deeper than an AI’s. And since they are people, you often can’t politely question the accuracy of what they are saying. Or they are just a podcast… I think you get where I am going.

              • jpreston2005@lemmy.world · 2 months ago

                You are really reaching to justify this stuff; it’s wild. No. I disagree. Using a flawed tool doesn’t increase your critical thinking skills. All it will do is confuse and misinform the vast majority of people. Not everybody is an astronaut.

                • Modern_medicine_isnt@lemmy.world · 2 months ago

                  I didn’t need to reach at all. I brought it down to several simple examples. You just aren’t willing to open your mind and consider it.
                  I 100% agree that it confuses and misinforms many adults. That is why I think it is so important that kids be exposed to it and taught to think critically about what it tells them. It isn’t going to go away. And who knows, they might learn to apply that same critical thinking to what the talking heads on the internet tell them. But even if not, it would be worth it.

        • Modern_medicine_isnt@lemmy.world · 2 months ago

          Did you even read the comment I responded to? “Whenever I find out that someone uses any of these LLMs, or Ai chatbots, hell even Alexa or Siri, my respect for them instantly plummets.”

          They are literally judging someone before they know any details, other than that they use some form of AI at all. It could be a cybersecurity researcher for all the commenter knows.

    • E_coli42@lemmy.world · 2 months ago

      Old man yells at cloud.

      I remember the “ban calculators” crowd back in the day. “Kids won’t be able to learn math if the calculator does all the calculations for them!”

      The solution to almost anything disruptive is regulation, not a ban. Use AI when it can be a learning tool, and redesign school to be resilient to AI when it would not enhance learning. For a start, have more open discussions in class instead of handing kids a sheet of homework that can be done by AI once they get home.

      • lemmy_outta_here@lemmy.world · 2 months ago (edited)

        I remember the “ban calculators” back in the day

        US math scores have hit a low point in history, and calculators are partially to blame. Calculators are good to use if you already have an excellent understanding of the operations. If you start learning math with a calculator in your hand, though, you may be prevented from developing a good understanding of numbers. There are ‘shortcut’ methods for basic operations that are obvious if you are good with numbers. When I used to teach math, I had students who couldn’t tell me what 9 * 25 is without a calculator. They never developed the intuition that 10 * 25 is dead easy to find in your head, and that 9 * 25 = (10-1) * 25 = 250-25.

        • E_coli42@lemmy.world · 2 months ago

          Interesting. The US is definitely not doing a good job at this, then, and needs to revamp its education system. Your example didn’t convince me that calculators are bad for students, but rather that the US schooling system is really bad if it introduces calculators so early that students don’t even have an intuition for 9 * 25 = (10-1) * 25 = 250-25.

      • Jason2357@lemmy.ca · 2 months ago

        Offloading onto technology always atrophies the skill it replaces. Calculators offloaded, very specifically, basic arithmetic. However, math ≠ arithmetic. I used calculators, and I cannot do mental multiplication and division as fast or as well as older generations, but I spent that time learning to apply math to problems, understand number theory, and master more complex operations, including writing source code to do math-related things. It was always a trade-off.

        In Aristotle’s time, people spent their entire education memorizing literature, and the written word offloaded that skill. This isn’t a new problem, but there needs to be something of value to be educated in that replaces what was offloaded. I think scholars are much better trained today, now that they don’t have to spend years memorizing passages word for word.

        AI replaces thinking. That’s a bomb between the ears for students.

        • E_coli42@lemmy.world · 2 months ago

          It doesn’t have to replace thinking if used properly. This is what schools should focus on instead of banning AI and pretending that kids are not going to use it behind closed doors.

          For example, I almost exclusively use Gen AI to help me find sources or as a jumping-off point to researching various topics, rather than as a source of truth itself (because it is not one). This is super useful as it automates away the tedious parts of finding the right research papers to start learning something and gives me more time to focus on my actual literature review.

          If we ban AI in schools instead of embracing it with caution, students won’t learn the skills to use it effectively. They’ll just start offloading their thinking to AI when doing homework.

  • undrwater@lemmy.world · 2 months ago

    I spent some years in classrooms as a service provider when Wikipedia was all the rage. Most districts had a “no Wikipedia” policy, and required primary sources.

    My kids just graduated high school, and they were told NOT to use LLMs (though some of their teachers would wink). Their current college professors use LLM-detection software.

    AI and Wikipedia are not the same, though. Students are better off with Wikipedia, as they MIGHT read the references.

    Still, those students who WANT to learn will not be held back by AI.

    • Jankatarch@lemmy.world · 2 months ago (edited)

      College professors are making homework harsher to make up for the cheating, so students who WANT to learn may actually be held back, in the literal sense.