The email is ominously titled “Final Reminder: The Importance of AI” and flagged “High Importance” when it’s just an ad.

The rest of it goes on about how they got a Wall Street bro to come give a speech about “AI replacing 40% of the jobs.”

Idk why a university likes AI this much either. They even overlook cheating as long as it involves AI.
Students and graders alike openly tell each other that they use it.

At first it felt weird that I was complaining about free accounts and free talks from investors, but then it kinda clicked: these are NOT free either. My tuition is paying for all of this.

Even the bus pass has a 5-day sign-up process to “save money from students not using that service.”

But somehow they just arbitrarily gave everyone multiple premium chatbot accounts anyway.

Am I just paying a 50% ransom to Microsoft for my degree at this point?

Also, the email itself is AI-generated. IT’S FROM THE FUCKING DEAN.

  • cron@feddit.org · 2 days ago

    If they wanted to teach their students about AI, then they should actually teach them and not just provide commercial chatbots.

    The whole field of AI/machine learning is huge and has lots of different applications, and LLMs are just one (hyped) aspect of it.

    • arrow74@lemmy.zip · 2 days ago

      I would love a common-sense approach to AI in teaching. It has some good uses: great for drafting an abstract or formatting a bibliography.

      But you can’t just plug something in and trust the output. It lies and makes stuff up.

      I see it as a tool that, when guided by someone who knows what they are doing, can be helpful and eliminate some work for an individual. I’m tired of the two main sides being “use AI for everything” and “never use AI.” Where’s the middle ground where we accept it’s here, but also acknowledge its massive flaws?

      • PhilipTheBucket@piefed.social · 1 day ago

        One of the best approaches I saw was a teacher who assigned students to have ChatGPT generate a paper on a topic, and then write their own paper critiquing ChatGPT’s paper and identifying the errors that it made.

        • arrow74@lemmy.zip · 1 day ago

          I always thought challenging students to “trick the AI” would be a good assignment. It shows them how the system fails, and I think kids would enjoy tricking the AI.

      • cron@feddit.org · edited · 2 days ago

        What I meant with my comment is that AI is a far broader field than just LLMs. But I see so many proposals that are just a horrible waste of resources.

        For example, image analysis. A friend of mine helped develop specialized tools for glacier analysis via satellite images. They trained a model specifically to analyze satellite imagery and track the “health” of a glacier in near real time.
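
        In code terms, the pattern looks roughly like this; a toy sketch with synthetic patches and a generic classifier, not anything from the actual glacier tool:

        ```python
        # Toy sketch of "train a model on satellite image patches".
        # All data and labels here are synthetic placeholders.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        # Pretend each row is a flattened 8x8 patch from a satellite tile,
        # labeled 1 = glacier ice, 0 = bare rock.
        X = rng.random((500, 64))
        y = (X.mean(axis=1) > 0.5).astype(int)  # stand-in for real annotations

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)
        print(f"test accuracy: {model.score(X_test, y_test):.2f}")
        ```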

        Or take mathematical analysis. Some suggest just throwing a pile of data into an LLM and letting ChatGPT make sense of it. But a far more reasonable approach would be to learn about different statistical models, learn how to use the tools (e.g. Python), and build a verifiable, explainable solution.
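
        A minimal sketch of what I mean, with made-up numbers standing in for the pile of data:

        ```python
        # Minimal sketch: explainable statistics instead of "ask ChatGPT".
        # Synthetic data stands in for whatever numbers you actually have.
        from scipy.stats import linregress

        years = list(range(2000, 2020))
        values = [2.0 + 0.35 * (y - 2000) + ((y * 7) % 5 - 2) * 0.1 for y in years]  # fake trend + noise

        # A plain linear regression: every number is inspectable and
        # reproducible, which is exactly what an LLM's "analysis" is not.
        result = linregress(years, values)
        print(f"slope={result.slope:.3f}  r^2={result.rvalue**2:.3f}  p={result.pvalue:.2g}")
        ```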

        I work in networking and InfoSec, and all the vendors are trying to cram AI chatbots into their firewalls and Wi-Fi access points. But many of the actual challenges are just anomaly detection, or matching series of events against known-bad patterns. And guess what all these tools are not: LLMs. (Except maybe for spam filters; that’s where an LLM might be a good fit, though we don’t need a huge, expensive model for it.)
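
        For example, a classic z-score check already covers a lot of “is this traffic weird?” cases, no LLM involved (the numbers are made-up example data):

        ```python
        # Minimal sketch: network anomaly detection without any LLM.
        # Flags samples that deviate strongly from a recent baseline.
        import statistics

        baseline = [120, 131, 118, 125, 122, 129, 124]  # e.g., requests/min
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)

        def is_anomaly(sample: float, threshold: float = 3.0) -> bool:
            """Z-score test: anomalous if more than `threshold`
            standard deviations from the baseline mean."""
            return abs(sample - mean) / stdev > threshold

        print(is_anomaly(127))  # False: within normal range
        print(is_anomaly(480))  # True: way outside the baseline
        ```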