‘But there is a difference between recognising AI use and proving its use. So I tried an experiment. … I received 122 paper submissions. Of those, the Trojan horse easily identified 33 AI-generated papers. I sent these stats to all the students and gave them the opportunity to admit to using AI before they were locked into failing the class. Another 14 outed themselves. In other words, nearly 39% of the submissions were at least partially written by AI.‘

Article archived: https://web.archive.org/web/20251125225915/https://www.huffingtonpost.co.uk/entry/set-trap-to-catch-students-cheating-ai_uk_691f20d1e4b00ed8a94f4c01

  • hark@lemmy.world
    link
    fedilink
    arrow-up
    6
    ·
    2 hours ago

    But I am a historian, so I will close on a historian’s note: History shows us that the right to literacy came at a heavy cost for many Americans, ranging from ostracism to death. Those in power recognised that oppression is best maintained by keeping the masses illiterate, and those oppressed recognised that literacy is liberation.

    It’s scary how much damage is being done to education, not just from AI but also the persistent attacks on public education in the US over decades, hampering the system with things like No Child Left Behind and diverting funds to private schools with vouchers in the name of “school choice”. On top of that there are suggestions that teachers aren’t even needed and that students could be taught with AI. It’s grim.

  • Cyrus Draegur@lemmy.zip
    link
    fedilink
    English
    arrow-up
    18
    ·
    5 hours ago

    I heard of something brilliant though: The teacher TELLS the students to have the AI generate an essay on a subject AND THEN the students have to go through the paper and point out all the shit it got WRONG :D

  • Empricorn@feddit.nl
    link
    fedilink
    English
    arrow-up
    3
    ·
    5 hours ago

    Curious why you’re only posting the archived version? This article is not paywalled…

  • ThePantser@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    29
    arrow-down
    1
    ·
    10 hours ago

    It should be treated the same as if another student wrote the paper. If it was used as a research tool where you didn’t repeat it word for word then it’s cool, it can be treated like a peer that helped you research. But using it to fully write then it’s an instant fail because you didn’t do anything.

    • RogerMeMore@reddthat.com
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      2 hours ago

      Well said! It’s like plagiarizing from another student. Using AI as a tool is one thing, but completely relying on it to write the paper is cheating in my book.

    • definitemaybe@lemmy.ca
      link
      fedilink
      arrow-up
      5
      arrow-down
      1
      ·
      6 hours ago

      Okay, sure. But how can you identify its use? You’d better be absolutely confident or there are likely to be professional consequences.

      Not to mention completely destroy your relationship with the student (maybe not so relevant to professors, but relationship building is the main job of effective primary and secondary educators.)

      • ThePantser@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        2
        arrow-down
        1
        ·
        6 hours ago

        Have the student submit drafts with the first rough draft written in class and submitted at the end. Then weekly or daily improved drafts. If the finished paper is totally and material different then it’s a red flag. If the student wants to drastically change the paper then the teacher must approve.

        • KairuByte@lemmy.dbzer0.com
          link
          fedilink
          arrow-up
          4
          ·
          5 hours ago

          “Chat GPT, this is my rough draft. I want you to polish it a little and add some, but not a lot. This is meant to be a second pass, not the final draft. Make a couple mistakes on the grammar.”

        • definitemaybe@lemmy.ca
          link
          fedilink
          arrow-up
          5
          ·
          6 hours ago

          Can easily be faked with AI. You can just prompt AI to make progress, drafts, mistakes, fix the mistakes, etc.

          I’ve presented on this at teacher conferences, for what it’s worth. There’s no effective way to detect AI usage accurately when the text-writing process isn’t supervised. The solutions need to accept that reality.

            • definitemaybe@lemmy.ca
              link
              fedilink
              arrow-up
              1
              ·
              3 hours ago

              Or we could focus on the core of teaching, which is building relationships with students. Then, with that rapport, students will trust their teachers when they explain why getting AI to do the work for them is hurting their own education. We can also change our assessment practices, so that students don’t feel the pressure to write a “perfect” essay.

              And, yes; occasionally require students to do a bit of writing with invigilation.

  • korazail@lemmy.myserv.one
    link
    fedilink
    English
    arrow-up
    38
    ·
    edit-2
    11 hours ago

    From later in the article:

    Students are afraid to fail, and AI presents itself as a saviour. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.

    I think this is the big issue with ‘ai cheating’. Sure, the LLM can create a convincing appearance of understanding some topic, but if you’re doing anything of importance, like making pizza, and don’t have the critical thinking you learn in school then you might think that glue is actually a good way to keep the cheese from sliding off.

    A cheap meme example for sure, but think about how that would translate to a Senator trying to deal with more complex topics… actually, on second thought, it might not be any worse. 🤷

    Edit: Adding that while critical thinking is a huge part. it’s more of the “you don’t know what you don’t know” that tripped these students up, and is the danger when using LLM in any situation where you can’t validate it’s output yourself and it’s just a shortcut like making some boilerplate prose or code.

  • SoftestSapphic@lemmy.world
    link
    fedilink
    arrow-up
    30
    arrow-down
    2
    ·
    12 hours ago

    Students would want to learn instead of doing less work if there were incentives to learn instead of just get out with a degree.

    • Auth@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      2
      ·
      5 hours ago

      There are incentives to learn. The smartest kids do far better than the average kid.

    • ulterno@programming.dev
      link
      fedilink
      English
      arrow-up
      13
      ·
      11 hours ago

      It seems AI is putting more light on this problem of the academic system not really being learning oriented.
      Not that it matters. There was already enough light on it and now it’s just blinding.

  • 🍉 Albert 🍉@lemmy.world
    link
    fedilink
    arrow-up
    49
    arrow-down
    1
    ·
    edit-2
    13 hours ago

    I think the only solution is the Cambridge exam system.

    The only grade they get is at the final written exam. all other assignments and tests are formative, to see if they are on track or to practice skills… This way it does not matter if a student cheats in those assignments, they only hurt themselves. Sorry for the final exam stress though.

      • IMALlama@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        5 hours ago

        I took three history classes while I was in college. It’s been a while, but I recall most of them having a paper or two and those papers counting for a pretty big chunk of you grade. The author of the article is a history teacher, so essays make some amount of sense.

        My engineering classes were basically as you described.

    • Taldan@lemmy.world
      link
      fedilink
      arrow-up
      8
      ·
      11 hours ago

      I had a couple classes in college where the grade was 8% homework, 42% midterm, and 42% final exam. Feels a bit more balanced

      I think we should also be adjusting the criteria we use for grading. Information accuracy should be weighted far more heavily, and spelling/grammar being de-prioritized. AI can correct bad spelling and grammar, but it’s terrible for information accuracy

      • definitemaybe@lemmy.ca
        link
        fedilink
        arrow-up
        3
        ·
        7 hours ago

        My math undergrad classes were largely like that, too, and that was before there were smartphone solver apps, let alone “AI”. A typical grade breakdown was 10% assignments, 30% midterm, 60% final in first and second year. Then in third and fourth year, it was entirely midterm + final.

        They gave a few marks for assignments in lower years since high schoolers often come to them thinking the only things that are important are grades, so won’t practice unless it’s for marks. If you haven’t figured out that practice is important by third year…

        And agreed re: changing the focus of our assessment, just like memorizing facts for history “trivia-style” assessment should no longer be used by anyone in a post-search Web 2.0 world. (Although it was never good assessment, regardless.)

      • 🍉 Albert 🍉@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        10 hours ago

        also bad at synthesizing new ideas… however, it is likely that future models will be better at those things.

        then whole situation sucks and I’m glad I’m out of uni.

    • MDCCCLV@lemmy.ca
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      11 hours ago

      Except this is terrible for a lot of people and then only measures how well people do at taking tests.

      • 🍉 Albert 🍉@lemmy.world
        link
        fedilink
        arrow-up
        26
        ·
        edit-2
        14 hours ago

        Meant how the university of Cambridge does their tests. rather than year long exams and assignments being a fraction of your grade. the only grade that matters is that of the final yearly exam. I think it is the only one in the UK that does that. not sure how it works in the rest of the world, but I think that is rare. but likely the only AI proof system.

        IE, you can literally goof off the whole year, not even be in town, and if you show up for the exam and ace it you get a good grade.

        • quick_snail@feddit.nl
          link
          fedilink
          arrow-up
          15
          ·
          13 hours ago

          God that’s great. I’ve failed classes in Uni where I got As on all the tests, just because I didn’t do the homework >:0

          • LifeInMultipleChoice@lemmy.world
            link
            fedilink
            arrow-up
            3
            ·
            13 hours ago

            Classes like Calc 2,3, Thermo/fluid dynamics, chem classes etc never had required homework for me, just suggested. The only classes with required homework were engineering projects building something physical or programs I had to submit in C or what not. I suppose I had a writing course that required I turned in essays, but I don’t know how you’d get around that.

  • Alaknár@sopuli.xyz
    link
    fedilink
    English
    arrow-up
    65
    ·
    16 hours ago

    Let me tell you why the Trojan horse worked. It is because students do not know what they do not know. My hidden text asked them to write the paper “from a Marxist perspective”. Since the events in the book had little to do with the later development of Marxism, I thought the resulting essay might raise a red flag with students, but it didn’t.

    I had at least eight students come to my office to make their case against the allegations, but not a single one of them could explain to me what Marxism is, how it worked as an analytical lens or how it even made its way into their papers they claimed to have written. The most shocking part was that apparently, when ChatGPT read the prompt, it even directly asked if it should include Marxism, and they all said yes. As one student said to me, “I thought it sounded smart.”

    Christ…

    • quick_snail@feddit.nl
      link
      fedilink
      arrow-up
      8
      arrow-down
      2
      ·
      13 hours ago

      Yeah, those students deserve to fail.

      I assume he taught what Marxism is in the class, yeah?

    • rumba@lemmy.zip
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      10
      ·
      15 hours ago

      ". My hidden text asked them to write the paper “from a Marxist perspective”

      Freshmen.

      That’s a dangerous proof.

      He could have said to write from a zagnoore brandle-frujt perspective. Some would have scanned the assignment, ignored the part they didn’t understand, and kept chooching right along. Many students would rather try to figure it out than sound stupid in class or risk the spotlight of social interaction.

      Interrogating each of them on the material is the only safe way.

      • LobsterJim@slrpnk.net
        link
        fedilink
        English
        arrow-up
        1
        ·
        2 hours ago

        Many students would rather try to figure it out than sound stupid in class or risk the spotlight of social interaction.

        This ignores the entire premise of the experiment. The 39% were not interested in learning regardless of the consequences.

      • SippyCup@lemmy.ml
        link
        fedilink
        arrow-up
        18
        ·
        13 hours ago

        The Trojan worked because the students who read the assignment would not have seen the reference to Marxism. Only by copy pasting the text in to another field would that show up.

        • krooklochurm@lemmy.ca
          link
          fedilink
          arrow-up
          18
          ·
          13 hours ago

          The article mentions that he gave them a chance to explain why they chose to write it from a Marxist perspective and none of the students even knew what Marxism was.

          He gave students an out in the event they actually did write from a Marxist pov

        • rumba@lemmy.zip
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          2
          ·
          13 hours ago

          You’re assuming the freshmen would recognize a reference to marxism and not ignore that part because they didn’t understand. It’s what inexperienced students do

          • AlfredoJohn@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            6
            ·
            12 hours ago

            He made it hidden text in the assignment, i.e im guessing he made the text the same color as the background. Only when one copied the text and pasted it as instructions for a prompt would it be seen. Hence it was a Trojan horse on the assignment where those not using ai blatantly would not find it.

          • SippyCup@lemmy.ml
            link
            fedilink
            arrow-up
            4
            ·
            12 hours ago

            I’m not assuming a damn thing. He literally made it impossible to see the reference to Marxism unless you copy pasted the text in to another field. Like you would do if you were feeding it to an AI. Think very small white font.

      • Jankatarch@lemmy.world
        link
        fedilink
        arrow-up
        6
        ·
        edit-2
        4 hours ago

        Honestly interrogating each one on the material would be best but costly. Some classes have 80 students and professors have more than one class.

        Also rest of the article says he asked students objecting accusations to tell what Marxism is and they admitted afterwards.

        Apparently chatgpt asked “are you sure you want marxist perspective” as topic is older and not that directly related to marxism. One said they picked yes because quote, unquote “it sounded smart.”

        • rumba@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          ·
          12 hours ago

          Also rest of the article says he asked students objecting accusations to tell what Marxism is and they admitted afterwards.

          Yes, I read it. All of it.

          He definitely caught a number of them, but he called it proof, it should NOT be treated as proof, an indicator at best. If it were proof he could just fail them all and not catch false positives.

          Totally agree about the number of people to interview being expensive. But it is more adequate as proof.

          What he didn’t do wasn’t wrong, but he can’t count on that to be a point to fail.

          • AlfredoJohn@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            3
            ·
            12 hours ago

            He definitely caught a number of them, but he called it proof, it should NOT be treated as proof, an indicator at best. If it were proof he could just fail them all and not catch false positives.

            So you frequently wrote your papers from a Marxists perspective randomly when it had no relevance to the topic at hand? He hid the text by making it the same color as the background in the assignment so only when one copy and pasted the assignment or attached the file to their prompt would it be picked up. The only real false positives would be those staunchly Marxists students or someone using a screen reader. Which I think if you are inserting Marxism into random essays that are not relevant you probably are going to be bringing it up constantly in every setting you can and would also be able to explain it. It definitely is a reason to be failed on an assignment if he caught you with this.

  • ragepaw@lemmy.ca
    link
    fedilink
    arrow-up
    7
    arrow-down
    2
    ·
    11 hours ago

    Obviously, the inclusion of Marxism is a decent test, but I have taken samples of things I have written years ago and submitted them to see what it would say about my writing.

    It says a high probability of my writing was done by AI, because I do use emdashes, oxford commas, and other such punctuation.

    We can’t trust anything checking to see if something was written by AI, any more than we can trust something written by AI.

    • MDCCCLV@lemmy.ca
      link
      fedilink
      English
      arrow-up
      2
      ·
      11 hours ago

      Anything can be analyzed by a Marxist perspective, which basically just uses a class and imperialism type analysis. If there’s a human society it can be applied.

  • rustydrd@sh.itjust.works
    link
    fedilink
    arrow-up
    101
    ·
    edit-2
    20 hours ago

    In one of my classes, when ChatGPT was still new, I once handed out homework assignments related to programming. Multiple students handed in code that obviously came from ChatGPT (too clean a style, too general for the simple tasks that they were required to do).

    Decided to bring one of the most egregious cases to class to discuss, because several people handed in something similar, so at least someone should be able to explain how the code works, right? Nobody could, so we went through it and made sense of it together. The code was also nonfunctional, so we looked at why it failed, too. I then gave them the talk about how their time in university is likely the only time in their lives when they can fully commit themselves to learning, and where each class is a once-in-a-lifetime opportunity to learn something in a way that they will never be able to experience again after they graduate (plus some stuff about fairness) and how they are depriving themselves of these opportunities by using AI in this way.

    This seemed to get through, and we then established some ground rules that all students seemed to stick with throughout the rest of the class. I now have an AI policy that explains what kind of AI use I consider acceptable and unacceptable. Doesn’t solve the problem completely, but I haven’t had any really egregious cases since then. Most students listen once they understand it’s really about them and their own “becoming” professional and a more fully developed person.

    • killabeezio@lemmy.zip
      link
      fedilink
      arrow-up
      11
      ·
      15 hours ago

      This seems pretty fair and reasonable, although, we should ask why people do this in the first place? Why is there so much pressure to get good or decent grades? If you are just going to college to get a degree and all you want to do is pass, then why go at all?

      College is a broken system right now. If things continue the way they are going, people will just learn how to use AI tools and go find a job. They don’t even need to think for themselves, they can just have a computer do it for them.

      • Bgugi@lemmy.world
        link
        fedilink
        arrow-up
        7
        ·
        12 hours ago

        If you are just going to college to get a degree and all you want to do is pass, then why go at all?

        While I’m about a decade older than the current attendees, “go to college or your life will be terrible” was a persistent and unified message we were forcefed from a very young age. A college degree was billed as a checkpoint to entetr real adult life.

        With that in mind, and another 13 years of compulsory education behind us, why would anybody see college as an opportunity instead of a barrier?

    • baines@lemmy.cafe
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      edit-2
      10 hours ago

      which is funny because reality makes that idea complete bullshit

      leadership doesn’t want professionals, it wants low paid worker drones and ‘good enough’ ai

      10% of your students might go on to be skilled enough to demand a job that respects their abilities, the rest are gonna be employed by tech illiterate boomers (lord these guys don’t want to retire) and will likely be dealing with being forced to use ai

      thankfully i can’t use ai in my work so it’ll be decades before it is even a concern for me directly but i have multiple friends dealing with this issue now

      they are intelligent, well educated, had top grades, their boss is some nepo baby with grand ideas of being the next elon

      *edit

      just read this

      https://mander.xyz/post/42542367

      • That’s a very capitalist view of education. Some people just want to learn, and that’s the point of an education, to enable learning. You might need that piece of paper to get a job in the field you want, and the field you want might prefer a mindless worker drone, but that doesn’t mean that education should cut corners and teach to the job.

        • baines@lemmy.cafe
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          14 hours ago

          sadly yea, i’m not happy about it but pretending education is above it all is doing no one any favors

          i’d love if we had free education and a culture valuing learning for the joy of it in the us but we can’t even manage to agree feeding starving kids is important

          and we just shat all over skilled nurses and the like as non-professional

          https://www.npr.org/sections/shots-health-news/2025/11/25/nx-s1-5619731/medical-nursing-school-loan-limits

          hard to not hold a bleak outlook on all of it

            • baines@lemmy.cafe
              link
              fedilink
              English
              arrow-up
              1
              ·
              10 hours ago

              what an odd thing to hear about new mexico but good, i’m happy for you and i hope improves life for more in your state

          • plyth@feddit.org
            link
            fedilink
            English
            arrow-up
            3
            ·
            13 hours ago

            we can’t even manage to agree feeding starving kids is important

            We don’t have to agree. We essentially live in post democratic times. Some people just have to do it.

            Education can be above it all and make it possible.

              • plyth@feddit.org
                link
                fedilink
                English
                arrow-up
                2
                ·
                13 hours ago

                Because people don’t enact the values of Star Trek. People could create a Starfleet Academy and an organisation with Star Trek values but it seems that all fans are eager for WW3 because that is scripture.

  • mlg@lemmy.world
    link
    fedilink
    English
    arrow-up
    25
    ·
    17 hours ago

    I’m guessing 33 people were too lazy to copy data into a box and relied on ChatGPT OCR lol.

    This was a great article about the use of AI, but I think this also exposed bad/zero effort cheating.

    There’s a reason why even the ye olde Wikipedia copy-pasters would rearrange sentences to make sure they can game the plagiarism checker.

    • Dozzi92@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      14 hours ago

      Microsoft Word had auto summarize as far back as the early 2000s (and probably before then, I only found it my junior or senior year of high school). Plop a wall of text into a word docume, click auto summarize, limit the number of direct quotes to three words max, hit enter. I was clever af (except that one paper on Gorbachev where the paragraph entirely composed of wingdings exposed my strategy a bit), or so I’d have thought.

      Lo and behold my cheating somehow didn’t prepare me for regular life, and so I had to learn lessons in college (the one year that I went), and early on in my career, in my early 20s, that I should’ve learned in school. The lesson was that you sometimes need to just do the fucking work. Hopefully the kids in this experiment learned that, but more than likely they learned to cheat better.

  • Doomsider@lemmy.world
    link
    fedilink
    arrow-up
    16
    ·
    15 hours ago

    Great story with predictable results. Welcome to your AI future where people give their thinking over to machines made by sociopaths.

  • Smoogs@lemmy.world
    link
    fedilink
    English
    arrow-up
    9
    ·
    14 hours ago

    Whatever happened to the tests which you have to sit and do to prove you know the thing you’re writing about?

      • Buddahriffic@lemmy.world
        link
        fedilink
        arrow-up
        2
        ·
        13 hours ago

        Should be pretty easy to detect with the right equipment. Like I assume it’s not doing much of that locally, so a wifi detector and triangulator will catch any wifi devices that are powered on and roaming proctors could carry bluetooth detectors for people trying to evade detection by using short range signals (and I guess processing it on their phone with wifi disabled).

        • quick_snail@feddit.nl
          link
          fedilink
          arrow-up
          1
          ·
          8 hours ago

          Lol wut. You just detected 80 WiFi devices in a room with 3 people. Most of them are outside the classroom

          • Buddahriffic@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            8 hours ago

            … Then they locate the ones inside the room and deal with them accordingly.

            Guessing you didn’t understand the triangulation part of my suggestion, but you can detect the direction of the source of electromagnetic signals (which is what wifi, Bluetooth, and the cellular signal all are). If you have two sensors that can detect direction, then you can use those to get two lines pointing at the source, which will be where those two lines meet.

            Though if one person was moving around with a directional sensor, it would also be able to lead to the location of the device. But having two to triangulate with would allow for quick automated filtering of all the outside devices detected.

            They could also build a faraday cage around the exam area to prevent any outside signal from getting in, at which point they know any detected signal is coming from within. Plus they might catch people just by looking for visible frustration at not being able to connect to the internet.

        • Jankatarch@lemmy.world
          link
          fedilink
          arrow-up
          5
          ·
          12 hours ago

          Institutions don’t want to catch students tho especially for private universities.

          I know at least one college where they give exams on laptops and their anti-cheating system doesn’t work on mac.

          Mac users can open google or chatgpt during their midterm and finals. The school knows, they just enjoy better pass-rates.

          In a more general example, every college including mine give all students premium chatgpt and copilot accounts paid to Microsoft by our tuition.

  • RampantParanoia2365@lemmy.world
    link
    fedilink
    arrow-up
    11
    arrow-down
    8
    ·
    edit-2
    11 hours ago

    I’ve been using it for a personal project, and it’s been wonderful.

    It hasn’t written a word for me. But it’s been really damn helpful as a research assistant. I can have it provide lists of unexplained events by location, or provide historical details about specific things in about 5 seconds.

    And for quicky providing editing advice, where to punch up the language, what I can cut, or communicate more clearly. And I can do that without begging a person for days to read.

    Is it always perfect? Not at all, but it definitely helps overall, when you make it clear to be honest, and not sugar-coat things. It’s definitely mostly mediocre for creative advice, but good for technical advice.

    It’s a tool, and it can be used correctly, or it can be used to cheat.

    • Hoimo@ani.social
      link
      fedilink
      arrow-up
      5
      ·
      7 hours ago

      Do you then check those historical details against trusted sources? If so, how often do they need correction?

    • pumpkin_spice@lemmy.today
      link
      fedilink
      arrow-up
      8
      ·
      8 hours ago

      when you make it clear to be honest

      It has no idea what honesty is. It has no idea what bias is.

      It is fancy auto-complete. And it’s wrong so often (like 40% of the time) that it should not be used to seek out factual information that the prompter doesn’t already know.

      • definitemaybe@lemmy.ca
        link
        fedilink
        arrow-up
        3
        ·
        7 hours ago

        it should not be used to seek out factual information that the prompter doesn’t already know.

        Eh… Depends on the importance and purpose of the information.

        If you’re just trying to generate ideas for fiction from historical precedents, it doesn’t matter if it’s accurate. Or if you’re using it as a starting point, then following the links to check the original source (like I do all the time for Linux terminal commands).

        Hell, I often use Linux terminal commands from Google’s search results AI box—I know enough to be able to parse what AI is suggesting (and identify when the proposed commands don’t make sense), and enough to undo what I’m doing if it doesn’t work. Saves a lot of time.

        Copilot fixed some SQL syntax issues I had yesterday, too. 100% accuracy on that, despite it being a massive query with about a dozen nested subqueries. (Granted, I gave a very detailed prompt…) But, again, this was low stakes–who cares if a SELECT query fails to execute.

    • molave@reddthat.com
      link
      fedilink
      arrow-up
      1
      arrow-down
      1
      ·
      8 hours ago

      And the issue is that for the people who call for a Butlerian Jihad, we are part of the problem.