The email is ominously titled “Final Reminder : The Importance of AI” and flagged “High Importance” when it’s just an ad.
The rest of it goes on about how they got a Wall Street bro to come give a speech about “AI replacing 40% of the jobs.”
Idk why a university likes AI this much either. They even overlook cheating as long as it involves AI.
Both students and graders openly tell each other that they use it.
At first it felt weird to be complaining about free accounts and free talks from investors, but then it kinda clicked: these are NOT free either. My tuition is paying for all of this.
Even the bus pass has a 5-day sign-up process to “save money from students not using that service.”
But somehow they just arbitrarily gave everyone multiple premium chatbot accounts anyways.
Am I just paying a 50% ransom to microsoft for my degree at this point?
Also the email is AI generated too. IT’S FROM THE FUCKING DEAN.
I just recently talked with my InfoSec colleagues about social engineering attacks. “Urgency” is a typical tactic to try to make people respond emotionally and drop reason. “Opportunity” builds on that emotional response to create excitement.
The switch from negative to positive can also help distract from caution. It’s a similar tactic to the one used in advertising, where you present some “problem” (you might have never had) immediately followed by a “solution” (you might now feel you need in case the problem ever happens).
In both ads and scams, it won’t work on everyone, particularly those recognising the pattern (which is what those obnoxious IT security trainings are supposed to teach). But in both cases, reaching some people may be enough.
What I’m saying is: This mail reads like a scam to me, but is probably just an ad for a shitty product (the difference being that the former never had any intention of selling you something, while the latter is just deluded into thinking their product has merit).
And of course the LLM generating it can’t tell the semantic difference.
I am writing to you today with a sense of urgency and opportunity
What the bloody hell does that mean? You can do something with a sense of urgency, but how do you do something with a sense of opportunity?
You can tell a world-class orator wrote this, can’t you?
There’s even an em-dash in there…
Wonder if they’re selling your coursework at all levels to train the AI.
There’s even an em dash out of place, a telltale sign of AI models.
To be honest, I’m using them too. I don’t know where I picked them up – books, probably – or whether I’m using them right. I don’t think I ever formally learned them, and if I did, it’ll have been in German and I have no idea how that transfers to English. I also produce slop, but that’s on me being an inept writer.
That’s why I find such “that’s gotta be AI” judgements a bit awkward: it might genuinely be a person poorly trying to ape some writing style they saw but never understood. Just because I’m trying to sound “proper” doesn’t mean I’m an AI. My intelligence (or ignorance?) is entirely natural.
Edit: Come to think of it, I might have picked them up from a certain colleague. MBA, (ab-?)uses them all the time, sometimes in places that seem weird to me, and has done so since before LLMs became such a hype. Maybe there’s some school of management writing that LLMs are trying to imitate, but without any actual semantic sense for the context, neither they nor I use them in actually valid constructions?
We hardly ever even use em dashes in normal writing, so who knows where LLMs picked up the tendency. Hyphens, sure, but em dashes?
I have a 1940s cookbook that has a lot of them in it, but they haven’t really been used in modern typesetting since then.
The whole “topic is not just doing x - it’s doing y” framing is a dead giveaway of using an LLM.
No one who frames LLMs as AI should be taken seriously. The only legitimate association between the two is in fiction, ridicule, and jest.
LLMs are, at their core, regurgitation recipes of beige Soylent, refried at vague temperatures in non-deterministic air fryers, which were originally designed as HVAC systems.
I’m sure there are legitimately useful toys LLMs can serve as, but I’ve yet to observe any. One thing is for sure, a tornado in their horse wombs will never birth a submarine.
I’ll take AI seriously once an instance of Natural Intelligence is identified.
I’ve found they make for a great movie search engine when you give them a vague description of the plot, but besides that I haven’t found a use.
I like them for helping me dig through very thorough documentation to help me find the right setting to change, in a tool I rarely touch.
I’ve tried that, but sometimes it makes up settings that don’t exist.
If LLMs could do everything that the CEOs have promised, they wouldn’t be rolling out porn generators to gin up usage.
This is Theranos 2.0, and while LLMs can be a useful tool in some ways, it’s clear to anyone who uses them regularly that the promises don’t jibe with the functionality, and I don’t see how we get there when they’ve already trained on the whole of human knowledge and they still can’t figure out basic queries.
If they could do what is promised, we plebians wouldn’t have access.
Humans will use anything they can for porn
But yeah. OpenAI and co are almost certainly way overvalued.
If they wanted to teach their students about AI, then they should actually teach them and not just provide commercial chatbots.
The whole field of AI/machine learning is huge and has lots of different applications, and LLMs are just one (hyped) aspect of it.
I would love a common sense approach to AI with teaching. It has some good uses. Great for writing an abstract and formatting a bibliography.
But you can’t just plug it in and trust it. It lies and makes stuff up.
I see it as a tool that, when guided by someone who knows what they’re doing, can be helpful and eliminate some work for an individual. I’m tired of the two main sides being “use AI for everything” and “never use AI”. Where’s the middle ground where we accept it’s here, but also acknowledge its massive flaws?
One of the best approaches I saw was a teacher who assigned students to have ChatGPT generate a paper on a topic, and then write their own paper critiquing ChatGPT’s paper and identifying the errors that it made.
Sounds like it’s time to bust out Gemini
I always thought challenging students to “trick the AI” would be a good assignment. It shows them how the system fails, and I think kids would enjoy tricking the AI.
What I meant with my comment is that AI is a far broader field than just LLMs. But I see so many proposals that are just a horrible waste of resources.
For example, image analysis. A friend of mine helped develop special tools for glacier analysis via satellite images. They trained a model specifically to analyze satellite images and track the “health” of a glacier in near real time.
Or take mathematical analysis. Some suggest just throwing a pile of data into an LLM and letting ChatGPT make sense of it. But a far more reasonable approach would be to learn about different statistical models, learn how to use the tools (e.g. Python), and build a verifiable, explainable solution.
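To make that concrete, here’s a minimal sketch of what I mean by a verifiable, explainable solution, using plain ordinary least squares via statsmodels. The file name and column names are made-up placeholders, not from any real dataset:

```python
# Minimal sketch: an explainable alternative to "throw the pile of data at a chatbot".
# Assumes a hypothetical CSV "sales.csv" with made-up columns "ad_spend" and "revenue".
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sales.csv")

X = sm.add_constant(df["ad_spend"])  # explanatory variable plus an intercept term
y = df["revenue"]                    # response variable

model = sm.OLS(y, X).fit()           # ordinary least squares fit

# Every number here is reproducible and has a defined statistical meaning:
# coefficients, standard errors, confidence intervals, R-squared, p-values.
print(model.summary())
```

Nothing magic, but you can show exactly where every number came from, which is the whole point.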
I work in networking and InfoSec, and all the vendors try to add AI chatbots into their firewalls and wifi access points. But many of the challenges are just anomaly detection, or matching series of events to known bad events. And guess what all these AI tools are not: LLMs. (Except maybe for spam filters, that’s where an LLM might be a good fit. But we don’t need a huge, expensive model for this.)
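For illustration, the kind of anomaly detection I mean can be as boring as a z-score check against a baseline, with no language model anywhere. The traffic numbers below are made up:

```python
# Toy sketch: flag anomalous values in a traffic metric with a simple z-score test.
# No LLM involved; the numbers are invented for illustration only.
from statistics import mean, stdev

# Hypothetical bytes-per-minute samples from a quiet baseline period.
baseline = [1200, 1180, 1250, 1190, 1230, 1210, 1220, 1240]
mu = mean(baseline)
sigma = stdev(baseline)

# New samples to check against that baseline.
new_samples = [1215, 1225, 9800, 1205]

for i, value in enumerate(new_samples):
    z = (value - mu) / sigma
    if abs(z) > 3:  # classic three-sigma rule
        print(f"sample {i}: {value} looks anomalous (z = {z:.1f})")
```

Real products are fancier than this, but the core is statistics over telemetry, not a chatbot.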
Are you fucking kidding me? A school supporting AI, its worst enemy? Genuinely disgusting.

4chan reinvented the Torment Nexus argument for 4chan users.
Hello Carol
Final Reminder
promise??
I hope?

I hope someone hits Reply All with this
My sister just went back to school to get her engineering degree and they make them all sign up for a program called handshake? It tries pretty desperately to shill work-from-home jobs training AI. And they literally all read like scams.
Can’t wait to get paid for several months deliberately sabotaging AI until someone realizes what I’m up to
Every US college/community college shills handshake. Mine forced me to burn a vacation day to sit in their orientation and they tried to sell handshake as some unique perk as if I hadn’t seen it at every other school I’d attended. I was furious.
Respond with AI.
What if it says something inappropriate? Doesn’t matter. It’s AI.
Use it against them. Remember the distortion can work with you, as well as against you.
My work was doing the whole “AI AI AI, use AI” blah blah blah thing. I used it at my annual review, where I showed that I should have both an executive title and a salary about $200k higher. I also told my team and my fellow coworkers to do the same.
It’s funny how we haven’t heard boo from the executive about AI for about 3 or 4 months now.
This goes both ways, right? Malicious compliance and all of that.
I did an essay this way for some bullshit AI lessons at uni. Not gonna waste my time on this topic tbh.
You’re exactly right! Thanks for pointing this out! Here are some Amazon affiliate links to click on for things Instagram suggests you’d like to buy:
Lmao, probably replied to the wrong post, sorry xD
Seems suboptimal.
🎖️ Understatement of the year award 🥇
No, suboptimal is just a polite word you use in a “professional” work environment where you can’t say shit and the like. It’s just a synonym at this point.
What kind of job doesn’t let you say shit?
That’s why I said “professional” environment, not professional environment.
Other than meeting with a client or such, I don’t see any reason to keep my mouth shut. Just keep the amount of swearing appropriate.
Gotcha. That’s how I’ve also experienced working. I must’ve misread it while I was half asleep, because I totally didn’t see the quote marks.
Client facing office job, for example.
Basically anything where you’re representing a company and the company wants to portray an image of seriousness.
They even overlook cheating as long as it involves AI.
wut.
This was probably written by someone who thinks using AI at all is cheating. Rather, it’s a practical tool, and if you know how to use it, it makes the job faster and lets you do more. They should be teaching students to use AI - if something can be done better with AI, businesses will want people to use it. Yes, doing some work without it is good - just like we teach kids to do some math without calculators - but at some point we don’t care, because the goal is being able to get shit done efficiently with whatever tools you have.
The point of University is to teach people how to reason. Teaching people how to use AI literally does the exact opposite.
I disagree. Figuring out how to do stuff effectively with AI - and figuring out what can be accomplished with human intelligence augmented with AI is where we need to be.
It’s not like using AI is some elusive skill that needs a whole lot of training; the whole point is that it, in and of itself, responds to natural language and responds in kind.
Now building AI systems, training, prompt stuffing, etc, those do warrant some education, but using them… The biggest thing to teach is that their apparent confidence is untethered from their accuracy, but that hardly calls for a significant role in education.
As with the move to calculators, you’re offloading basic, fundamental tedium to chase larger complexities. The biggest issue there is: what complexities are you chasing with the LLM that the LLM can’t already do itself in an education context? Keep in mind that for evaluation in education we frankly don’t push things into the unknown, because the evaluator has to know that the problem is solvable and how it’s solvable, and the solutions have to be viable as a basis of comparison between students. Pretty much the moment something becomes fodder for education or job interviews, LLMs are going to be pretty good at just doing the whole thing. They’re still moderately useless for my professional area, but they can largely whip up a Slack knockoff with no problem. A colleague who was told to evaluate LLMs “right now” resorted to “interview-ware” for lack of better ideas, and it knocked it out of the park and got him excited; then he put it to work on less well-trodden problems and was shocked that it deteriorated to totally useless right when he actually needed it.
In short, acknowledge the models and platforms and their usages, but I don’t think they should factor significantly for students or job interviews.
weLL acKsHuLLy
* actually
We’re fucked
I’m not so sure in the grand scheme we are.
We are all so burnt out and so jaded, in an economy that’s so flat and moribund, in a world that’s so fucking upside down, that something’s got to give.
These asshole greedy pig CEOs are going to have a complete mess on their hands when this AI bubble (because it’s a bubble) pops. We are going to see companies implode literally overnight, borrowing is going to lock up, and it’s going to fuck up supply chains. At a time when they need people to bear down, everyone’s been forced to RTO against their will, for flat paychecks, while these assholes hoard all the cash flow for themselves at staggering levels.
Personally I think it’s going to make the Great Resignation look like a picnic. I’m not always right, in fact I seldom am, but I’m starting to think the pendulum swings wildly in the employees’ favor on this one. When shit goes sideways, that’s also going to be the moment of the boomer implosion, where all the seventy-year-old execs and the mid-to-late-50s guys and gals leave with the bags, while the job still needs done, now maybe in a challenging environment. But you’ve pissed most of your employees off with this AI shit and their flat paychecks, to the point where sure, they’re hanging on collecting a cheque, but most everybody has quiet quit on you. Could be interesting, maybe.
This was inevitable when all academic institutions were taken over by MBA sociopaths who have no concept of how anything works beyond “line goes up”.
Endowments drive me insane. They don’t get used to do anything. Just giant dragon hoards.
My alma mater could probably give half a million people free tuition by using its endowment, and they have the gall to beg me for donations and I’m still paying off my loans.
Your alma mater sounds like mine. Their endowment funds are billions of dollars large, but they still have the gall to be “all in” on collecting more. All while their staff are forever striking and churning through managers, because they pay absolute shit and do all the gross things to their people that billion-dollar corporations do. I’m disgusted I went there sometimes. That I bought into the bullshit for a little bit. Still have never donated a cent though, and never will.