Maybe they are actively trying to make the crash happen while Trump is still in office to bail them out.
Look on the upside folks, at least RAM, GPU, and storage are wildly expensive. That’s great for the economy and reduces the instances of people being mind-controlled by violent video games!
/s

LET THE WORLD BURN! NO SURVIVORS! NO ECONOMY SECTOR SHALL BE LEFT STANDING!
crash. and. burn.
seems like they ran out of road using AI as an excuse to lay people off and claim “record profits”
Holy shit, really?
And watch them get crazy bonuses anyways and suffer no consequences.
they will just run to trump to beg for bailouts.
The AI tech companies’ losses are too huge for a bailout to get them back to a stable position.
I support abolishing the death penalty except for two cases.
1: war criminals (think what Israel is doing and their disgusting behavior) and anyone committing crimes under the auspices of the state expecting that protection to allow them to escape. 2: high ranking political and economic figures who fuck things up on purpose for profit.
Even mass shooters and serial killers do far less damage to society than a single one of those fucks.
Someone still needs to be the executioner. Nobody should have to carry that burden.
Put them to work. And I mean basic hand labour.
Plowing fields, harvesting crops, building houses, paving roads, etc.
Majority of CEOs tried to use AI on the wrong level of their company: at the bottom.
Majority of CEOs discover they are completely incompetent frauds to the point of literally deserving the death sentence
There, fixed the headline.
Majority of CEOs discover something plebs like you and I knew all along the whole time.
dbzer0 instance is always so brave, or maybe you just don’t feel fear.
Oh I feel fear.
I can just do systemic analysis and cost benefit analysis.
And I can face the facts that I see, the fear that I feel, and be brave enough to say what I really think about it.
Bravery is not a lack of fear, it is being scared as hell and doing the thing anyway.
These people, in the current system, are essentially untouchable.
And if left unchecked, they will kill millions, and then billions of people through their individual and collective compounding incompetence and greed.
The needs of the many outweigh the needs of the few.
The level of schadenfreude I’m feeling is almost lethal.
How could they have possibly thought AI would make them money? Lmfao. It sucks power and water just to give wrong answers or generate “art” with terrible attention to detail…
I do think it’s disingenuous to downplay how effective AI can be. If you ask certain AI a question, it will give you a faster and better answer than using a search engine would, and will provide sources for further reading if requested.
And the art, whilst not as good or as ethical as human art, can still be high quality.
Being against AI is completely valid, but disparaging it with falsehoods does nothing but give the feeling that you don’t know what you’re talking about.
If you ask certain AI a question, it will give you a faster and better answer than using a search engine would, and will provide sources for further reading if requested.
I think that speaks to how bad search engines have gotten, not really to how good AI is. Google used to work. I promise! It used to not just be ads and SEO garbage, if you knew your special search operation functions you could find exactly what you were looking for every time. It’s only because they enshitified the platform that AI search even makes sense to use.
They’ll enshitify AI search soon enough and we’ll be right back where we started.
Sure, but what I am talking about outperforms any search engine in history. If you have a specific question you will get a specific answer with AI, and usually it will be correct. If you use a search engine you can come to the same answer but it will definitely take you longer.
I’m not defending the use of AI, I’m just saying, the quality of them is not the issue. They are becoming extremely high quality with their answers and usefulness. The problem is with the ethics and energy usage.
It used to be that the first couple results would answer the specific question, as long as you knew how to format the question in the correct search terms and with the correct special operations. What might take longer is refining the search to get extremely specific results, but that was usually only necessary if you’re writing a paper or something.
But you shouldn’t just trust whatever the AI says when you’re writing a paper anyway, so that’s not really different.
AI does allow you to skip all that and just ask a plain language question, but search didn’t used to take so long if you knew how to use it. It worked.
Yes it worked, and still required you to dig through the answers to find the answer yourself. That is the difference. AI will search for you and collate the results to give you the definitive answer. I’m not saying searching didn’t work, or doesn’t even work today, I’m just saying AI is more efficient and effective and pretending it isn’t is simply wrong and / or lying.
You shouldn’t just trust whatever the AI says
And you also shouldn’t just trust random things you read on the internet, so I’m not sure exactly what point you are making here. I’ve never advocated for that. I also am not sure why you keep explaining to me how good search engines used to be, seems like a strange aside considering you don’t know how long I’ve been on the internet for.
I can’t tell if you’ve forgotten how good search was, are too young to know better, or were never good at using search.
I’m telling you that you didn’t have to “dig through the answers” if you formatted the search well. It worked. You obviously couldn’t trust everything you read on the internet, but the tricky part was formatting. No digging was required once you were good enough at key words, syntax, and search functions (“” , + - site:). Search results were incredibly efficient and effective. It was amazing.
AI is now maybe as efficient and effective as search results used to be. That’s it. They ruined search and gave us AI.
And they’ll ruin AI too, just you watch.
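For anyone who never learned them, the operators mentioned above read something like this (a hypothetical query; example.com stands in for whatever site you actually wanted):

```
"exact phrase here" -unwantedterm site:example.com
```

Quotes forced an exact match, the minus sign excluded a term, and site: restricted results to one domain.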
You had to “dig through answers” as in, you got your answer, in the form of a website that you then had to click into and scan for the answer.
AI is far more efficient. I can’t tell if you are delusional or just willfully ignorant. Ask a question and in two seconds you have a succinct answer with all of the information that using a search tool (now, and in the past) would provide you.
I also don’t disagree that they will ‘ruin AI’, I’m not defending it or the creators of it in the slightest. I am simply saying it can be an extremely effective tool and it is without a shadow of a doubt better than using a search engine to get the answer to a question.
It’s because they have no idea what “AI” actually is. They think you tell it to make profits, and it just does so. Anyone who has used any kind of “AI” for an hour knows that it’s mostly just shit at everything except the absolutely most basic shit.
They have no respect for the work of their employees, so they thought that they could be easily replaced by a computer program. They were so excited by the prospect of handing over ALL of our jobs to AI, that they far overextended themselves.
Now they are going to crash and burn because they bet AGAINST every worker in America, and LOST.
I hope it hurts them really, really badly. We should respond to their financial pain by laughing at them, and taking away their fortunes and their companies, since they have demonstrated so clearly that they can’t be trusted to handle the American economy responsibly.
A government bailout was ALWAYS the plan.
Some applications of AI are pretty neat. For example the DeepL translation tool. I convinced my employer to spend money on that. And they make 55 million in profits.
But forcing AI down our throats, like Google does with those horrible auto-dubbed videos? There’s no way that will ever be profitable
DeepL isn’t what is being touted as “AI” this week, though. DeepL is based on older translation technology (by which I mean “far more reliable”).
This is a shell game. Every time there’s a wave of “AI” it’s some new tech that shills sell as the answer to “real” computer intelligence. (It can never possibly be this, of course, because we can’t even define intelligence properly, not to mention making an artificial version of it.) There’s certain levels of hype. There’s a bubble (usually far smaller than this one, of course). Then the bubble pops and we enter the next AI Winter.
The small use cases for which the new technology is actually useful, however, lose the AI moniker and are just called “software”. Like what used to be AI for doing dynamic adjustment on pictures for night shots, HDR, etc. is no longer shilled as AI. It’s just … software that my phone has built in.
So currently “AI” means “LLM” outside of some very specific academic environments. DeepL is just software that (mostly kinda/sorta) works.
DeepL is based on an LLM:
https://www.deepl.com/de/blog/next-gen-language-model
Huh. That’s new. When I first tried DeepL it was not LLM-based. That’s an intriguing development.
I will update my mental database accordingly.
Yeah there are definitely some cool uses, it seems like analysis/processing uses are pretty good, but generative ones are not.
The AI models that are used for molecular research are literally remaking our understanding of biology and medicine. I don’t know why the big AI corporations don’t point to that as an example of the benefits of AI. I guess cause that doesn’t help them to exclude the proletariat from their profits.
Probably it has problems of its own, and it likely requires a scientist to fact-check anything the AI produces. It also depends on whether a journal is finicky enough to accept a paper whose experiments were done by AI. Pretty niche; I doubt it’s using the commercial ones like OpenAI’s or Google’s. It’s probably built for the specific purpose of that research field, with a small subset of users, so it’s unlikely to generate profit that way, because that’s a small group of customers using a niche AI, and it’s likely proprietary to the university that made it anyway.
It definitely has its own problems, and the results are thoroughly investigated by the researchers. But yeah it’s very niche, however most models are freely shared between teams. I mean it has to be to get through peer review.
It’s seeing a vision of the future and the technology that will transform it but not having the patience to let it happen and wanting to jump right to printing money. I think the fact that it happened to the internet should show that even incredibly useful tech can go through this process. It happened with video games, too. They see the potential but their eyes are only on the money, so they don’t have the ability to meet that potential.
And in the case of the metaverse, they killed it off entirely by wanting to build a virtual storefront and advertising space before building a virtual space people would want to visit. Facebook thought the idea would sell itself just based on pop culture, despite none of the pop culture versions involving just a headset and trackers to enter a world you can only see and hear, but not touch (even though it might inconsistently react to your touch). The tech wasn’t there but FB had FOMO and wasted billions chasing it anyways.
Exact same thing is happening with AI. LLMs improved by leaps and bounds, and once it was conversational, people with money went all in on the idea of what it could become (and probably still will, just not anytime soon and it won’t be chatbots, actually I suspect we might end up using LLMs to communicate/translate with the real AIs, though they’ll likely be integrated into them because that communication is so useful).
They don’t understand that it takes more than just having a good idea or seeing tech before it explodes. You have to have a passion for the tech itself, the kind of passion that fights the urge to release it early just to make money, not a passion for releasing it regardless, making money sooner, and intending to fix it up later.
It’s why they are trying to shove it into everything despite no one wanting it, because they think the exposure will drive demand, when it’s actually exposure to something desirable that drives real demand. And exposure is instead frustrating or dangerous because it’s often wrong and full of corporate censorship (that hasn’t once been accurate but has always been easy to bypass any time I’ve run into it).
I just wonder if MS bet the farm on it, or only bet a survivable loss. Like is the CEO just worried about his job or the entire company’s future?
I’m starting to think that most “business” leaders have the skills of a Trump. It’s all puffery.
Have you ever talked to a CEO? Like, sit down and talk face to face? They are dumb as rocks. They are dumb as rocks and make all the money, and just move around from company to company, running them into the ground.
What a fucking retarded statement.
Yes. Yes I have. And I agree with you. The “starting to think” is an old Norm Macdonald trope.
I’m starting to think that Norm Macdonald character isn’t a very serious person.
Yes. My last 3 were excellent businessmen and treated employees exceptionally well.
The one before that was a buffoon, but that was a tiny 13-person company. Our main vendor said, “He’s a man who has found great success despite himself.”
- some exceptions for CEOs that actually founded the company they remain in charge of, but they’re in the minority for sure. There’s probably variance by sector as well.
All CEOs need to be [redacted]
Maybe I’m just too close to the non-profit side of things to see it as a simple binary.
“Non-profit” is a tax classification, not a description of their business motives. I’m not saying that all non-profits are bad. They definitely help people. But many are absolutely profit-driven, with their CEOs making a shitload of money. CEOs, by and large, profit off the exploited labor of their employees.
I work in IT, and the accuracy of “The IT Crowd” when it comes to management is scary. How they get anything done is beyond me.
They don’t get anything done; they just ask other people to do stuff. Their job is to nag you and stress you out until you do stuff.
During America’s “great” years a lot of the research and development was guided by long-term USA policy. We are no longer guided by any sense of what’s to come. It’s a greedy, grab-all free-for-all. No rules, no restrictions, just have fun and make lots of money.
If Trump can graduate with an Ivy League degree, that says a lot about the (troglodyte) Ivy League grads.
Ivy league degrees are only impressive if you’re poor. Any idiot multimillionaire or above can buy one for themselves.
We said the same thing about Bush! Clearly they lost their luster damn near a century ago.
Ivy League schools were never about education. They’ve always about making connections with other elites.
and suckering in the actual talent/smart kids, to be exploited by those elites with connections
The only thing I have used AI for is making furry porn. And even then I am really bored with it.
it always ends up making all kinds of porn.
My man.
Sounds good. Then, they’ll finally move away from AI and we will all stop having AI being shoved down our throats. I’m sick and tired of all these AI chatbots in places where we don’t even need them.
“Instead of looking for other avenues for growth, though, PwC found that executives are worried about falling behind by not leaning into AI enough.”
Sunk cost fallacy at work
Oh no, they are scared shitless of what happened to companies that didn’t survive the shift to digital around the 2000s.
The truth is, many companies didn’t attempt that transition and disappeared, or went from their peak to being second class. But also, lots of companies poured in large amounts of money the wrong way and the same thing happened. Guess history repeats itself, and every CEO is finding out they didn’t get where they are because they’re smarter than their peers, the way they strongly believed before.
It’s gambling all the way down, thinking they will be the ones to win big while everyone else fails.
And they never add anything of use. They are like an incredibly sophisticated version of Clippy… and just as useless.
Clippy being useless was okay because it was the 2000s. In this time and age though? Meh.
Also, people HATED Clippy. They always hated AI.
I was thinking about this recently… in the early 2000s, for a short time, there was this weird chatbot craze on the internet… everyone was adding them to web pages like MySpace and free hosting sites…
I feel like this has been the resurrection of that, but on a whole other level… I don’t think it will last. It will find its uses, but shoving glorified auto-suggest down people’s throats is not going to end up anywhere helpful…
An LLM has its place in an AI system… but without having reason it’s not really intelligent. It’s like how you would decide what to say next in a sentence, but without the logic behind it.
The logic is implicit in the statistical model of the relationships between words, built by ingesting training materials. Essentially the logic comes from the source material provided by real human beings, which is why we even talk about hallucinations: most of what is output is actually correct. If it was mostly hallucinations, nobody would use it for anything.
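To illustrate what a “statistical model of the relationships between words” means, here’s a toy next-word predictor built from raw bigram counts. This is nothing like a real transformer LLM; it’s just the crudest possible version of “the logic comes from the source material” (the training text and function names are made up for this sketch):

```python
from collections import Counter, defaultdict

# Toy "training corpus" — in a real LLM this would be terabytes of text.
training_text = "the cat sat on the mat the cat ate the fish"

# Count which word follows which in the training material.
follows = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Pick the statistically most common successor.
    # No reasoning happens here — just counting.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often in the corpus
```

The point of the sketch: the model never “understands” cats or fish; any apparent logic in its output is inherited from patterns in what humans wrote.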
No you can’t use logic based on old information…
If information changes between variables a language model can’t understand that, because it doesn’t understand.
If your information relies on x being true, then when x isn’t true the AI will still say it’s fine, because it doesn’t understand the context.
Just like it doesn’t understand things like not to do something.
Well people use it and don’t care about hallucinations.