The email is ominously titled “Final Reminder : The Importance of AI” and flagged “High Importance” when it’s just an ad.

The rest of it goes on about how they got a Wall Street bro to come give a speech about “AI replacing 40% of the jobs.”

Idk why a university likes AI this much either. They even overlook cheating as long as it involves AI.
Students and graders alike openly tell each other that they use it.

At first it felt weird that I was complaining about free accounts and free talks from investors, but then it kinda clicked: these are NOT free either. My tuition is paying for all of this.

Even the bus pass has a 5-day sign-up process to “save money from students not using that service.”

But somehow they just handed everyone multiple premium chatbot accounts anyway.

Am I just paying a 50% ransom to Microsoft for my degree at this point?

Also, the email itself is AI-generated. IT’S FROM THE FUCKING DEAN.

  • luciferofastora@feddit.org · 14 hours ago

    I just recently talked with my InfoSec colleagues about social engineering attacks. “Urgency” is a typical tactic to (try to) make people respond emotionally and drop reason. “Opportunity” builds on that emotional response to create excitement.

    The switch from negative to positive can also help distract from caution. It’s a similar tactic to the one used in advertising, where you present some “problem” (that you might never have had) immediately followed by a “solution” (that you now feel you need in case the problem ever happens).

    In both ads and scams, it won’t work on everyone, particularly not on those who recognise the pattern (which is what those obnoxious IT security trainings are supposed to teach). But in both cases, reaching some people may be enough.

    What I’m saying is: this mail reads like a scam to me, but it’s probably just an ad for a shitty product (the difference being that the former never had any intention of selling you anything, while the latter is just deluded into thinking their product has merit).

    And of course the LLM generating it can’t tell the semantic difference.