Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale.

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better. Students who just did their practice problems the old-fashioned way, on their own, matched their test scores.

  • 2ugly2live@lemmy.world · +41 · 8 months ago

    I don’t even know if this is ChatGPT’s fault. This would be the same outcome if someone just gave them the answers to a study packet. Yes, they’ll have the answers because someone (or something) gave them to them, but they won’t know how to get that answer on their own without being taught. Surprise: for kids to learn, they need to be taught. Shocker.

    • Buddahriffic@lemmy.world · +5 / −2 · 8 months ago

      I’ve found ChatGPT to be a great learning aid. You just don’t use it to jump straight to the answers; you use it to explore the gaps and edges of what you know or understand. Add context and details, not final answers.

      • IzzyScissor@lemmy.world · +6 · 8 months ago

        The study shows that once you remove the LLM though, the benefit disappears. If you rely on an LLM to help break things down or add context and details, you don’t learn those skills on your own.

        I used it to learn some coding, but without using it again, I couldn’t replicate my own code. It’s a struggle, but I don’t think using it as a teaching aid is a good idea yet, maybe ever.

        • jpeps@lemmy.world · +1 · 8 months ago

          I wouldn’t say this matches my experience. I’ve used LLMs to improve my understanding of a topic I’m already skilled in, and I’m just looking to understand something nuanced. Being able to interrogate on a very specific question that I can appreciate the answer to is really useful and definitely sticks with me beyond the chat.

    • ameancow@lemmy.world · +14 / −2 · edited · 8 months ago

      The only reason we’re trying to somehow compromise and allow or even incorporate cheating software into student education is because the tech-bros and singularity cultists have been hyping this technology like it’s the new, unstoppable force of nature that is going to wash over all things and bring about the new Golden Age of humanity as none of us have to work ever again.

      Meanwhile, 80% of AI startups sink, and something like 75% of the “new techs” like AI drive-thru orders and AI phone support go to call centers in India and the Philippines. The only thing we seem to have gotten is the absolute rotting destruction of all content on the internet, and children growing up thinking it’s normal to consume this watered-down, plagiarized, worthless content.

    • ChickenLadyLovesLife@lemmy.world · +10 · 8 months ago

      I took German in high school and cheated by inventing my own runic script. I would draw elaborate fantasy/sci-fi drawings on the covers of my notebooks with the German verb declensions and whatnot written all over monoliths or knight’s armor or dueling spaceships, using my own script instead of regular characters, and then have these notebooks sitting on my desk while taking the tests. I got 100% on every test, and now the only German I can speak is the bullshit I remember Nightcrawler from the X-Men saying. Unglaublich!

      • pmc@lemmy.blahaj.zone · +6 · 8 months ago

        Meanwhile the teacher was thinking, “Interesting tactic you’ve got there, admiring your art in the middle of a test.”

        • ChickenLadyLovesLife@lemmy.world · +3 · 8 months ago

          God knows what he would have done to me if he’d caught me. He once threw an eraser at my head for speaking German with a Texas accent. In his defense, he grew up in a post-war Yugoslavian concentration camp.

          • blazeknave@lemmy.world · +1 · 8 months ago

            Must be the same era; my elderly, off-the-boat Italian teacher in 90s Brooklyn used to hit me with his cane.

      • blazeknave@lemmy.world · +2 · 8 months ago

        I just wrote really small on a paper in my glasses case, or hid data in the depths of my TI-86.

        We love Nightcrawler in this house.

    • michaelmrose@lemmy.world · +1 · 8 months ago

      Actually, if you read the article, ChatGPT is horrible at math. A modified version where ChatGPT was fed the correct answers along with the problems didn’t make the kids stupider, but it didn’t make them any better either, because they mostly just asked it for the answers.

  • Insig@lemmy.world · +28 / −1 · 8 months ago

    At work we gave a 16/17-year-old work experience over the summer. He was using ChatGPT and not understanding the code it was outputting.

    In his last week he asked why he was doing a print statement, something like:

    print(f"message {thing}")

    • copd@lemmy.world · +1 · 8 months ago

      I’m afraid to ask, but what’s wrong with that line? In the right context that’s fine to do, no?

      • Insig@lemmy.world · +1 · 8 months ago

        There is nothing wrong with it. He just didn’t know what it meant after using it for a little over a month.
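        For context, that line is a Python f-string: each expression inside the braces is evaluated and spliced into the string at runtime. A minimal sketch (the variable names here are made up, not from the original code):

```python
# f-strings (Python 3.6+) evaluate each {expression} and insert
# the result into the surrounding string.
thing = 42
print(f"message {thing}")        # prints: message 42

# Format specs also work inside the braces:
price = 3.14159
print(f"rounded: {price:.2f}")   # prints: rounded: 3.14
```

        That’s presumably all the intern’s line was doing: interpolating a value into a log message.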

    • trollbearpig@lemmy.world · +7 / −2 · edited · 8 months ago

      If you actually read the article you will see that they tested both allowing the students to ask for answers from the LLM, and limiting the students to asking the LLM for guidance only. In the first case the students did significantly worse than their peers who didn’t use the LLM. In the second they performed the same as students who didn’t use it. So, if the results of this study can be replicated, this shows that LLMs are at best useless for learning and most likely harmful. Most students are not going to limit their use of LLMs to guidance.

      You AI shills are just ridiculous, you defend this technology without even bothering to read the points under discussion. Or maybe you read an LLM generated summary? Hahahaha. In any case, do better man.

    • Ledivin@lemmy.world · +6 / −4 · 8 months ago

      Obviously no one’s going to learn anything if all they do is blatantly ask it for the answer and the written work.

      You should try reading the article instead of just the headline.

  • michaelmrose@lemmy.world · +17 · 8 months ago

    TL;DR: ChatGPT is terrible at math, and most students just ask it for the answer. Giving students the ability to ask something that doesn’t know math for the answer makes them less capable. An enhanced chatbot that was pre-fed with questions and correct answers didn’t screw up the learning process in the same fashion, but it also didn’t help them perform any better on the test, because again they just asked it to spoon-feed them the answer.

    References:

    ChatGPT’s errors also may have been a contributing factor. The chatbot only answered the math problems correctly half of the time. Its arithmetic computations were wrong 8 percent of the time, but the bigger problem was that its step-by-step approach for how to solve a problem was wrong 42 percent of the time.

    The tutoring version of ChatGPT was directly fed the correct solutions and these errors were minimized.

    The researchers believe the problem is that students are using the chatbot as a “crutch.” When they analyzed the questions that students typed into ChatGPT, students often simply asked for the answer.

    • Cryophilia@lemmy.world · +2 · 8 months ago

      Which, in a fun bit of meta, is a decent description of artificial “intelligence” too.

      Maybe the real ChatGPT was the children we tested along the way

  • Cornelius_Wangenheim@lemmy.world · +10 / −1 · 8 months ago

    This isn’t a new issue. Wolfram Alpha has been around for 15 years and can easily handle high-school-level math problems.

  • Maggoty@lemmy.world · +5 / −1 · 8 months ago

    ChatGPT lies, which is kind of an issue in education.

    As far as seeing the answer, I learned a significant amount of math by looking at the answer for a type of question and working backwards. That’s not the issue as long as you’re honestly trying to understand the process.

  • terminhell@lemmy.world · +3 / −1 · 8 months ago

    Maybe, if the system taught more of HOW to think and not WHAT. Basically more critical thinking/deduction.

    This same kinda topic came up back when I was in middle/high school, when search engines became widespread.

    However, LLMs shouldn’t be trusted for factual anything, same as Joe Blow’s blog on some random subject. Did they forget to teach cross-referencing too? I’m sounding too bitter and old, so I’ll stop.

    • ChickenLadyLovesLife@lemmy.world · +1 · 8 months ago

      However, LLMs shouldn’t be trusted for factual anything, same as Joe Blow’s blog on some random subject.

      Podcasts are 100% reliable tho

  • Mr_Dr_Oink@lemmy.world · +2 / −1 · 8 months ago

    Because AI, and Google searches before it, are not a substitute for having knowledge and experience. You can learn by googling something and reading about how it works so you can figure out answers for yourself. But googling for answers will not teach you much. Even if it solves a problem, you won’t learn how, and you won’t be able to fix something in the future without googling the answer again.

    If you don’t learn how to do something, you won’t be experienced enough to know when you are doing it wrong.

    I use Google to give me answers all the time when I’m problem solving. But I have to spend a lot more time after the fact learning why what I did fixed the problem.