There’s no way for teachers to figure out if students are using ChatGPT to cheat, OpenAI says in new back-to-school guide

AI detectors used by educators to detect use of ChatGPT don’t work, says OpenAI.

  • PurpleTentacle@sh.itjust.works · 2 years ago

    My wife teaches at a university. The title is partly bullshit:

    For most teachers it couldn’t be more obvious who used ChatGPT in an assignment and who didn’t.

    The problem, in most instances, isn’t the “figuring out” part, but the “reasonably proving” part.

    And that’s the most frustrating part: you know an assignment was AI-written, but there are no tools to prove it, and the university gives its staff virtually no guidance or assistance on the matter, so you’re almost powerless.

      • Brainsploosh@lemmy.world · 2 years ago

        The biggest reason for written exams is bulk processing.

        There are many better ways to demonstrate competency (ask any engineering or medical school), but few as cheap.

      • inspxtr@lemmy.world · 2 years ago

        To add on to the detection issues: international students, students on the spectrum, students with learning disabilities, … can all end up flagged as “AI generated” by AI detectors. Teachers/professors who have gut feelings should (1) reconsider what biases they have about expected writing styles, and (2), like u/mind says, check in with the students.
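
        One likely mechanism behind that bias, hedged and simplified: several popular detectors reportedly score text by its perplexity under a language model, and the plain, formulaic phrasing that careful non-native writers are taught is exactly what scores as “AI-like”. A rough sketch using GPT-2 via the Hugging Face transformers library; the example sentences are made up:

        ```python
        # Hedged sketch of perplexity-based "AI text" scoring, the approach
        # several detectors reportedly use. Lower perplexity = "more AI-like".
        import torch
        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

        def perplexity(text: str) -> float:
            ids = tokenizer(text, return_tensors="pt").input_ids
            with torch.no_grad():
                # With labels=ids the model returns mean cross-entropy;
                # exp(loss) is the perplexity of the text.
                loss = model(ids, labels=ids).loss
            return float(torch.exp(loss))

        # Made-up examples: simple, textbook-correct phrasing (common in
        # careful non-native writing) will typically score lower, i.e.
        # "more AI-like", than quirkier prose.
        print(perplexity("The experiment was conducted and the results were analyzed."))
        print(perplexity("We wrangled the rig all weekend; the data fought us anyway."))
        ```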

    • Uriel238 [all pronouns]@lemmy.blahaj.zone · 2 years ago

      My coven-mate was called in by her college dean, who accused her of faking or plagiarizing her mid-term thesis. (I totally forget what the subject was. This was the late 1980s. She wanted to work in national intelligence.)

      But the thing is, she could explain every part of her rabbit-hole deep dive (which involved trips to several libraries to locate the books themselves, rather than tracking leads through the internet). It was all fresh in her head, and to the shock and awe of her dean and instructor (delight? horror?) it was clear she was just a determined genius doing post-grad-quality work because she pushed herself that hard. And yes, she was out of their league and could probably have written the thesis again if necessary.

      In our fucked-up society, the US has little respect for teachers or even education, so I don’t expect anything real to happen, but this would be grounds to reduce class sizes by increasing faculty size, so that each teacher is familiar with their fifteen students: their capabilities, their ambitions, and their challenges at home. That way, when a kid turns in an AI essay but then can’t explain what the essay says, the teacher can use it as a teachable moment: point out that AI is a springboard, a place to start as a foundation for a report, but that it’s still important for the student to make it their own, and to make sure it comes to conclusions they agree with.

  • T156@lemmy.world · 2 years ago

    It makes some sense. If a tool could reliably detect AI-generated text, that tool would be used to train the model until its output was indistinguishable from regular text, putting us right back where we are now.
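
    A toy illustration of that feedback loop (both functions below are hypothetical stand-ins, not real models or detector APIs): any detector reliable enough to flag AI text is, by construction, reliable enough to steer the generator around itself.

    ```python
    import random

    # Toy stand-in for an LLM: returns one of several phrasings.
    def generate(prompt: str, rng: random.Random) -> str:
        variants = [
            f"{prompt} Moreover, it is important to note that...",
            f"{prompt} Honestly, I think...",
            f"{prompt} In conclusion, the evidence suggests...",
        ]
        return rng.choice(variants)

    # Toy stand-in for a detector: flags stock "AI-sounding" phrases.
    # Returns 1.0 for "definitely AI", 0.0 for "definitely human".
    def detector_score(text: str) -> float:
        tells = ["Moreover", "In conclusion", "it is important to note"]
        return sum(t in text for t in tells) / len(tells)

    # Rejection sampling: resample until the detector itself says the
    # output looks human. A perfect detector is thereby reduced to a
    # free quality filter for whoever wants to evade it.
    def evade(prompt: str, attempts: int = 20, threshold: float = 0.3) -> str | None:
        rng = random.Random(0)
        for _ in range(attempts):
            candidate = generate(prompt, rng)
            if detector_score(candidate) < threshold:
                return candidate
        return None

    print(evade("The essay argues that homework is obsolete."))
    ```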

  • EndOfLine@lemmy.world · 2 years ago

    The core of learning is for students to understand the content being taught. Using tools and shortcuts doesn’t necessarily negate that understanding.

    Using ChatGPT is no different, from an academic evaluation standpoint, than having somebody else do an assignment.

    Teachers should already be incorporating some sort of verbal Q&A session with students to see whether their demonstrated in-person comprehension matches their written comprehension. Though in my personal experience, this very rarely happens.

    • Dojan@lemmy.world · 2 years ago

      That’s going on the supposition that a person just prompts for an essay and leaves it at that, which to be fair is likely the issue. The thing is, the genie is out of the bottle and it’s not going to go back in. I think at this point it’ll be better to adjust the way we teach children things, and also get to know the tools they’ll be using.

      I’ve been using GPT and LLaMA to assist me in writing emails and reports. I provide a foundation, and working with the LLMs I get a good, cohesive output. It saves me time, allowing me to work on other things, and whoever needs to read the report or email gets a well-written document/letter that doesn’t meander the way my writing normally does.

      I essentially write a draft, have the LLMs rewrite the whole thing, and then there’s usually some back-and-forth to get the tone and verbiage right, as well as to trim away whatever nonsense the models made up that wasn’t in my original text. In effect, I act as an editor. Writing is a skill I don’t really possess, but now there are tools to make up for that.

      Using an LLM in that way, you’re actively working with the text, and you’re still learning the source material. You’re just leaving the writing to someone else.
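
      For the curious, that workflow is just a chat loop. A minimal sketch, assuming the OpenAI Python client (any chat-style LLM API works the same way); the model name and draft are purely illustrative:

      ```python
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      draft = """status report draft:
      - migration finished tues, two rollbacks
      - perf regression in search, fix eta friday
      """

      messages = [
          {"role": "system",
           "content": "Rewrite the user's draft as a concise, professional "
                      "status email. Do not add facts that are not in the draft."},
          {"role": "user", "content": draft},
      ]

      response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
      print(response.choices[0].message.content)

      # The back-and-forth described above is just appending follow-up turns
      # ("make the tone friendlier", "cut the second paragraph") to `messages`
      # and calling the API again, trimming anything the model invents.
      ```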

    • BrianTheeBiscuiteer@lemmy.world · 2 years ago

      Don’t know why the downvote(s). Like many great technological advancements, it can be used for good or for malice. AI can definitely be a great boon to society, but one of the unique aspects of this versus something like the computer or vaccines is that the tech is quite new, organizations and governments are scrambling to regulate it, and almost any fool can get their hands on it.

  • TropicalDingdong@lemmy.world · 2 years ago

    Calling it cheating is the wrong way to think about it. If you had a TI 80 whatever in the early 90s, it was practically cheating when everyone else had crap for graphing calculators.

    ChatGPT used effectively isn’t any different from a calculator or an electronic typewriter. It’s a tool. Use it well and you’ll do much better work.

    These hand-wringing articles tell us more about the paucity of our approach to teaching and learning than they do about the technology.

    • Copernican@lemmy.world · 2 years ago

      Do you understand what definitions are in place for authorship, citation, and plagiarism in regard to academic honesty policies?

      • TropicalDingdong@lemmy.world · 2 years ago

        The policies, and more importantly the pedagogy, are out of date and basically irrelevant in an age where machines can and do create better work than the majority of university students. Teachers used to ban certain levels of calculator from their classrooms because using them was considered ‘cheating’ (they still might). Those teachers represent a backwards approach to preparing students for a changing world.

        The future isn’t writing essays independent of machine assistance, just like the future of calculus isn’t slide rules.

        • Copernican@lemmy.world · 2 years ago

          I think a big challenge, or gap, here is that writing correlates with vocabulary and with developing the ability to articulate. It pays off not just in the prose you write, but in your ability to speak, discuss, and present ideas. I agree that AI is a tool we will likely be using more in the future. But education exists to develop skills and knowledge. Does AI help or hinder that goal, if a teacher’s job includes evaluating how much a student has learned and whether they can articulate it?