College professors are going back to paper exams and handwritten essays to fight students using ChatGPT

The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.

  • HexesofVexes@lemmy.world · 2 years ago

    Prof here - take a look at it from our side.

    Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine whether you did the work or an AI did, and if called into court to certify your expertise, we could not do so beyond a reasonable doubt.

    I am not arguing exams are perfect, mind, but I’d rather doubt a few students’ inability (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).

    Case in point: ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of those students’ ability, but they do suggest the students can obfuscate AI work well.

    • kromem@lemmy.world · 2 years ago

      Is AI going to go away?

      In the real world, will those students be working from a textbook, or from a browser with some form of AI accessible in a few years?

      What exactly is being measured and evaluated? Or has the world changed, and existing infrastructure is struggling to cling to the status quo?

      Were those years of students being forced to learn cursive in the age of the computer a useful application of their time? Or math classes where a calculator wasn’t allowed?

      I can hardly imagine just how useful a programming class might be where you need to write your code on a blank page of paper with a pen and no linters, then.

      Maybe the focus on where and how knowledge is applied needs to be revisited in light of a changing landscape.

      For example, how much more practically useful might test questions be that provide a hallucinated wrong answer from ChatGPT and then task the students to identify what was wrong? Or provide them a cross discipline question that expects ChatGPT usage yet would remain challenging because of the scope or nuance?

      I get that it’s difficult to adjust to something that’s changed everything in the field within months.

      But it’s quite likely a fair bit of how education has been done for the past 20 years in the digital age (itself a gradual transition to the Internet existing) needs major reworking to adapt to changes rather than simply oppose them, putting academia in a bubble further and further detached from real world feasibility.

      • SkiDude@lemmy.world · 2 years ago

        If you’re going to take a class to learn how to do X, but never actually learn how to do X because you’re letting a machine do all the work, why even take the class?

        In the real world, even if you’re using all the newest, cutting edge stuff, you still need to understand the concepts behind what you’re doing. You still have to know what to put into the tool and that what you get out is something that works.

        If the tool, AI, whatever, is smart enough to accomplish the task without you actually knowing anything, what the hell are you useful for?

        • prosp3kt@lemmy.world · 2 years ago

          But that’s actually most of the work we have nowadays. AI is replacing repetitive jobs such as magazine writing or script writing.

      • orangeboats@lemmy.world · 2 years ago

        As an anecdote though, I once saw someone simply forwarding (i.e. copying and pasting) their exam questions to ChatGPT. His answers were just ChatGPT responses, paraphrased to make them look less GPT-ish. I am not even sure whether he understood the questions themselves.

        In this case, the only skill that is tested… is English paraphrasing.

      • HexesofVexes@lemmy.world · 2 years ago

        I’ll field this because it does raise some good points:

        It all boils down to how much you trust what is essentially matrix multiplication, trained on the internet, with some very arbitrarily chosen initial conditions. Early on, when AI started cropping up in the news, I tested the validity of the answers it gave:

        1. For topics aimed at 10–18 year olds, it does pretty well. Its answers are generic, and it makes mistakes every now and then.

        2. For 1st–3rd year degree level, it really starts to make dangerous errors, but it’s a good tool for summarising material from textbooks.

        3. Masters+, it spews (very convincing) bollocks most of the time.

        Recognising the mistakes in (1) requires checking against the course notes, something most students manage. Recognising the mistakes in (2) is often something a stronger student can manage, but not a weaker one. As for (3), you are going to need to be an expert to recognise the mistakes (it literally misinterpreted my own work back at me at one point).
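The “essentially matrix multiplication” characterisation can be made concrete with a tiny sketch (illustrative only — random weights standing in for trained ones, not any real model’s internals): a single neural-network layer is a matrix product plus a nonlinearity, and a language model is mostly a tall stack of such layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # One "layer": matrix-multiply the input by learned weights,
    # add a bias, and apply a ReLU nonlinearity.
    return np.maximum(0.0, x @ W + b)

x = rng.standard_normal((1, 4))                       # toy 4-dim token embedding
Ws = [rng.standard_normal((4, 4)) for _ in range(3)]  # "trained" weights (random here)
b = np.zeros(4)

h = x
for W in Ws:   # a model is, at its core, a stack of these matrix products
    h = layer(h, W, b)
```

Everything an LLM “knows” lives in those weight matrices; the arbitrariness the comment mentions is in how they are initialised before training.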

        The irony is, education in its current format is already working with AI: it’s teaching people how to correct the errors it gives. Theming assessment around an AI is a great idea, until you have to create one (the very fact the field is moving fast means everything you teach about it is out of date by the time a student needs it for work).

        However, I do agree that education as a whole needs overhauling. As for how: maybe fund it a bit better, so we’re able to hire folks to help develop better courses - at the moment, every “great course” you’ve ever taken was paid for in blood (i.e. 50-hour weeks of teaching/marking/prepping/meeting arbitrary research requirements).

        • Armok: God of Blood@lemmy.world · 2 years ago

          (1) seems to be a legitimate problem. (2) is just filtering the stronger students from the weaker ones with extra steps. (3) isn’t an issue unless a professor teaching graduate classes can’t tell BS from truth in their own field. If that’s the case, I’d call the professor’s lack of knowledge a larger issue than the student’s.

          • jarfil@lemmy.world · 2 years ago

            You may not know this, but a Masters is about uncovering knowledge nobody had before, not even the professor. That’s where peer review and shit like LK-99 happen.

            • Womble@lemmy.world · 2 years ago

              It really isn’t. You don’t start doing properly original research until a year or two into a PhD. At best, a masters project is going to do something like take an existing model and apply it to a topic adjacent to the one it was designed for.

    • mrspaz@lemmy.world · 2 years ago

      I recently finished my degree, and exam-heavy courses were the bane of my existence. I could sit down with the homework, work out every problem completely with everything documented, and then sit an exam and suddenly it’s “what’s a fluid? What’s energy? Is this a pencil?”

      The worst example was a course with three exams worth 30% of the grade, attendance 5% and homework 5%. I had to take the course twice; 100% on HW each time, but barely scraped by with a 70.4% after exams on the second attempt. Courses like that took years off my life in stress. :(

    • Smacks@lemmy.world · 2 years ago

      Graduated a year ago, just before this AI craze was a thing.

      I feel there’s a social shift when it comes to education these days. It’s mostly “do a 500–1,000 word essay to get 1.5% of your grade”. The education doesn’t matter anymore, the grades do; if you pick something up along the way, great! But it isn’t that much of a priority.

      I think it partially comes from colleges squeezing students of their funds, and indifferent professors who just assign busywork for the sake of it. There are a lot of uncaring professors that just throw tons of work at students, turning them back to the textbook whenever they ask questions.

      However, I don’t doubt a good chunk of students use AI on their work to just get it out of the way. That really sucks and I feel bad for the professors that actually care and put effort into their classes. But, I also feel the majority does it in response to the monotonous grind that a lot of other professors give them.

      • HexesofVexes@lemmy.world · 2 years ago

        “Avoid at all costs because we hate marking it even more than you hate writing it”?

        An in-person exam can be done in a locked-down IT lab, and this leads to a better marking experience - and, I suspect, a better exam experience!

  • aulin@lemmy.world · 2 years ago

    There are places where analog exams went away? I’d say Sweden has always been at the forefront of technology, but our exams were always pen-and-paper.

  • UsernameIsTooLon@lemmy.world · 2 years ago

    You can still have AI write the paper and then copy it out by hand. If anything, this will make AI harder to detect, because it’s now AI plus human error introduced during transcription, rather than straight copying and pasting.

  • thedirtyknapkin@lemmy.world · 2 years ago

    As someone with wrist and hand problems that make writing a lot by hand difficult, I’m so lucky I finished college in 2019.

  • TropicalDingdong@lemmy.world · 2 years ago

    It just brings into question what the point of exams is.

    AI in its current form is equivalent to the advent of the typewriter. It’s just empowering you to do a whole lot more, a whole lot faster.

    Not using it is dumb.

    • lunarul@lemmy.world · 2 years ago

      AI is a tool that can indeed be of great benefit when used properly. But using it without comprehending and verifying the source material can be downright dangerous (like those lawyers citing fake cases). The point of the essay/exam is to test comprehension of the material.

      Using AI at this point is like using a typewriter in a calligraphy test, or autocorrect in a spelling and grammar test.

      Although asking for handwritten essays does nothing to combat the use of AI. You can still generate the content and then transcribe it by hand.

      • TropicalDingdong@lemmy.world · 2 years ago

        That argument is great until someone gets maimed or killed because the “AI” got it wrong and the user didn’t know enough to realize it.

        You know idiots with AI do that all the time, every day, right?

      • TropicalDingdong@lemmy.world · 2 years ago

        My broader point (in your metaphor) is that calligraphy tests are irrelevant at this point. The world changed. There’s no going back.

        • ratskrad@lemmy.world · 2 years ago

          A calligraphy test is not irrelevant if you are studying to LEARN calligraphy. If you are arguing that calligraphy as a subject doesn’t need to exist, then fine, don’t study it. But you don’t learn it by asking an AI to do it for you.

        • lunarul@lemmy.world · 2 years ago

          Typewriters are also irrelevant today. It was an analogy. I agree that AI can be used in some evaluations, depending what you’re evaluating.

          I allow and encourage Googling for information when I interview software engineering candidates. I don’t consider it “cheating” - on the contrary, being able to unblock themselves is one of the skills they should have. They will be using external help when doing their job, so why should the test be any different?

          But that also reminds me that I actually once had a candidate use generative AI in a coding interview. It did feel like cheating when it was at the level of asking for the full solution, not just help getting unblocked. It didn’t help at all, though, because the candidate didn’t have enough skill to tell the good suggestions from the bad ones, or to see what they needed to iterate on.

    • SocialMediaRefugee@lemmy.world · 2 years ago

      If that is the case, and comprehending the material isn’t necessary, then who needs the students in the first place? Just replace them with AI.

  • ZytaZiouZ@lemmy.world · 2 years ago

    The best part is there are handwriting-generating programs, and even web pages that convert text to G-code, letting you use a 3D printer to write things out. In theory it should be really hard to pass the result off as human-written, let alone as your own handwriting, but I’m sure it will only get better. I think there are even models that try to match a specific person’s handwriting.
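A minimal sketch of the text-to-G-code idea (all stroke data, coordinates, and feed rates here are made up for illustration - real converters use full single-stroke font tables): each character becomes a list of pen-down polylines, emitted as G0 travel moves and G1 drawing moves.

```python
# Hypothetical stroke table: each character maps to pen-down polylines
# in a 1x1 unit box. Only "I" and "T" are defined, as an illustration.
STROKES = {
    "I": [[(0.5, 0.0), (0.5, 1.0)]],
    "T": [[(0.0, 1.0), (1.0, 1.0)], [(0.5, 1.0), (0.5, 0.0)]],
}

def text_to_gcode(text, char_width=1.2, scale=10.0):
    lines = ["G21 ; units: mm", "G90 ; absolute coords"]
    for i, ch in enumerate(text.upper()):
        x0 = i * char_width * scale          # left edge of this character
        for stroke in STROKES.get(ch, []):   # unknown chars are skipped
            (sx, sy), rest = stroke[0], stroke[1:]
            # Travel (pen raised) to the stroke's start point.
            lines.append(f"G0 X{x0 + sx*scale:.1f} Y{sy*scale:.1f} Z5 ; travel, pen up")
            lines.append("G1 Z0 F300 ; pen down")
            # Draw the rest of the polyline with the pen down.
            for (px, py) in rest:
                lines.append(f"G1 X{x0 + px*scale:.1f} Y{py*scale:.1f} F600")
            lines.append("G0 Z5 ; pen up")
    return "\n".join(lines)

gcode = text_to_gcode("IT")
```

The matching-your-own-handwriting part is exactly what this sketch lacks: the stroke table is rigid, whereas a learned model would perturb and personalise each stroke.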

      • CookieJarObserver@sh.itjust.works · 2 years ago

        Nah, I just think of people like me, who literally can’t write by hand - and no, it’s not an education issue; I have a motor impairment in my hands.

          • CookieJarObserver@sh.itjust.works · 2 years ago

            It’s “not diagnosable”; the doctors just told me to “learn to write”…

            Writing isn’t even the only problem; I also have no feel for filigree work in general…

            Nowadays it just doesn’t bother me anymore; besides putting my signature on stuff (tbh I usually use a stamp for that…), I’m not writing anything by hand.

  • Meowoem@sh.itjust.works · 2 years ago

    Only the ones too dumb to incorporate AI usage into their work and grade accordingly. There are going to be a load of kids who aren’t just missing out on learning how best to use modern tools, but who have wasted their time learning obsolete skills.

    Thankfully those kids will be able to get a proper education from AI soon.

    • bean@lemmy.world · 2 years ago

      Cool video, if a bit impractical. Also, teachers don’t have time to play detective with handwriting comparisons xD

  • Chickenstalker@lemmy.world · 2 years ago

    If your exams can be solved by an AI, then your exams are not good enough. How to get around this? Simple: oral exams, aka viva voce. Anyone who has defended their thesis knows the pants-shitting terror this kind of exam instills. It takes longer, but you can truly determine how well the student understands the content.