How could they have possibly thought AI would make them money? Lmfao. It sucks power and water just to give wrong answers or generate “art” with terrible attention to detail…
I do think it’s disingenuous to downplay how effective AI can be. If you ask certain AI a question, it will give you a faster and better answer than using a search engine would, and will provide sources for further reading if requested.
And the art, whilst not as good or as ethical as human art, can still be high quality.
Being against AI is completely valid, but disparaging it with falsehoods does nothing but give the feeling that you don’t know what you’re talking about.
I think that speaks to how bad search engines have gotten, not really to how good AI is. Google used to work. I promise! It wasn’t always just ads and SEO garbage; if you knew your search operators, you could find exactly what you were looking for every time. It’s only because they enshittified the platform that AI search even makes sense to use.
They’ll enshittify AI search soon enough, and we’ll be right back where we started.
Sure, but what I am talking about outperforms any search engine in history. If you have a specific question you will get a specific answer with AI, and usually it will be correct. If you use a search engine you can come to the same answer but it will definitely take you longer.
I’m not defending the use of AI; I’m just saying the quality is not the issue. These tools are producing extremely high-quality answers and are genuinely useful. The problem is the ethics and the energy usage.
It used to be that the first couple of results would answer the specific question, as long as you knew how to phrase it with the right search terms and operators. What might take longer was refining the search to get extremely specific results, but that was usually only necessary if you were writing a paper or something.
But you shouldn’t just trust whatever the AI says when you’re writing a paper anyway, so that’s not really different.
AI does let you skip all that and just ask a plain-language question, but search didn’t use to take so long if you knew how to use it. It worked.
Yes, it worked, and it still required you to dig through the results to find the answer yourself. That is the difference. AI will search for you and collate the results to give you the definitive answer. I’m not saying searching didn’t work, or doesn’t work today; I’m just saying AI is more efficient and effective, and pretending it isn’t is simply wrong and/or lying.
And you also shouldn’t just trust random things you read on the internet, so I’m not sure exactly what point you’re making here. I’ve never advocated for that. I’m also not sure why you keep explaining how good search engines used to be; it seems like a strange aside, considering you don’t know how long I’ve been on the internet.
I can’t tell if you’ve forgotten how good search was, are too young to know better, or were never good at using search.
I’m telling you that you didn’t have to “dig through the answers” if you formatted the search well. It worked. You obviously couldn’t trust everything you read on the internet, but the tricky part was the formatting. No digging was required once you were good enough with keywords, syntax, and search operators (“”, +, -, site:). Search results were incredibly efficient and effective. It was amazing.
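For anyone who never learned those operators, combining them is mechanical. Here’s a minimal sketch of how a precise query could be assembled (the helper name and structure are my own illustration, not any search engine’s API):

```python
def build_query(phrase=None, require=None, exclude=None, site=None):
    """Compose a search query using the classic operators:
    "..." exact phrase, +term required, -term excluded, site: domain restriction."""
    parts = []
    if phrase:
        parts.append(f'"{phrase}"')       # match this exact phrase
    for term in (require or []):
        parts.append(f'+{term}')          # term must appear in results
    for term in (exclude or []):
        parts.append(f'-{term}')          # term must not appear
    if site:
        parts.append(f'site:{site}')      # restrict results to one domain
    return ' '.join(parts)

# e.g. hunting an exact error message on one site, minus forum noise:
query = build_query(phrase="segmentation fault (core dumped)",
                    require=["gdb"], exclude=["reddit"],
                    site="stackoverflow.com")
```

Pasting a query like that into the old Google got you a handful of exact matches instead of pages of loosely related results.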
AI is now maybe as efficient and effective as search results used to be. That’s it. They ruined search and gave us AI.
And they’ll ruin AI too, just you watch.
You had to “dig through answers” in the sense that you got your answer in the form of a website, which you then had to click into and scan for the answer.
AI is far more efficient. I can’t tell if you are delusional or just willfully ignorant. Ask a question and in two seconds you have a succinct answer with all of the information that using a search tool (now, and in the past) would provide you.
I also don’t disagree that they will ‘ruin AI’, I’m not defending it or the creators of it in the slightest. I am simply saying it can be an extremely effective tool and it is without a shadow of a doubt better than using a search engine to get the answer to a question.
It’s because they have no idea what “AI” actually is. They think you tell it to make profits, and it just does so. Anyone who has used any kind of “AI” for an hour knows that it’s mostly just shit at everything except the absolute basics.
They have no respect for the work of their employees, so they thought that they could be easily replaced by a computer program. They were so excited by the prospect of handing over ALL of our jobs to AI, that they far overextended themselves.
Now they are going to crash and burn because they bet AGAINST every worker in America, and LOST.
I hope it hurts them really, really badly. We should respond to their financial pain by laughing at them, and taking away their fortunes and their companies, since they have demonstrated so clearly that they can’t be trusted to handle the American economy responsibly.
A government bailout was ALWAYS the plan.
Some applications of AI are pretty neat. For example the DeepL translation tool. I convinced my employer to spend money on that. And they make 55 million in profits.
But forcing AI down our throats, like Google does with those horrible auto-dubbed videos? There’s no way that will ever be profitable.
DeepL isn’t what is being touted as “AI” this week, though. DeepL is based on older translation technology (by which I mean “far more reliable”).
This is a shell game. Every time there’s a wave of “AI”, it’s some new tech that shills sell as the answer to “real” computer intelligence. (It can never possibly be that, of course, because we can’t even define intelligence properly, let alone make an artificial version of it.) There are certain levels of hype. There’s a bubble (usually far smaller than this one, of course). Then the bubble pops and we enter the next AI Winter.
The small use cases for which the new technology is actually useful, however, lose the AI moniker and are just called “software”. Take the dynamic adjustment of pictures for night shots, HDR, etc.: it used to be sold as AI, but it’s no longer shilled that way. It’s just… software that my phone has built in.
So currently “AI” means “LLM” outside of some very specific academic environments. DeepL is just software that (mostly kinda/sorta) works.
DeepL is based on an LLM:
https://www.deepl.com/de/blog/next-gen-language-model
Huh. That’s new. When I first tried DeepL it was not LLM. That’s an intriguing development.
I will update my mental database accordingly.
Yeah there are definitely some cool uses, it seems like analysis/processing uses are pretty good, but generative ones are not.
The AI models used for molecular research are literally remaking our understanding of biology and medicine. I don’t know why the big AI corporations don’t point to that as an example of the benefits of AI. I guess because that doesn’t help them exclude the proletariat from their profits.
It probably has problems of its own, and it will likely require a scientist to fact-check anything the AI produces. It also depends on whether a journal is finicky enough to accept a paper whose experiments were done by AI. Pretty niche; I doubt it’s using commercial models like OpenAI’s or Google’s. It’s probably built for the specific purposes of that research field. That’s a small subset of users, so it’s unlikely to generate profit that way, because that’s a small group of customers using a niche AI, and it’s likely proprietary to the university that made it anyway.
It definitely has its own problems, and the results are thoroughly investigated by the researchers. But yeah, it’s very niche, though most models are freely shared between teams. They have to be to get through peer review.
It’s seeing a vision of the future and the technology that will transform it, but not having the patience to let it happen, and wanting to jump straight to printing money instead. The fact that this happened to the internet shows that even incredibly useful tech can go through this process. It happened with video games, too. They see the potential, but their eyes are only on the money, so they don’t have the ability to meet that potential.
And in the case of the metaverse, they killed it off entirely by wanting to build a virtual storefront and advertising space before building a virtual space people would want to visit. Facebook thought the idea would sell itself just based on pop culture, despite none of the pop culture versions involving just a headset and trackers to enter a world you can only see and hear, but not touch (even though it might inconsistently react to your touch). The tech wasn’t there but FB had FOMO and wasted billions chasing it anyways.
The exact same thing is happening with AI. LLMs improved by leaps and bounds, and once they were conversational, people with money went all in on the idea of what they could become (and probably still will become, just not anytime soon, and it won’t be chatbots; I actually suspect we might end up using LLMs to communicate with the real AIs, though they’ll likely be integrated into them, because that communication is so useful).
They don’t understand that it takes more than just having a good idea or spotting tech before it explodes. You have to have passion for the tech itself: a passion that fights the urge to release it early just to make money, not a passion for releasing it regardless, cashing in sooner, and intending to fix it up later.
It’s why they’re trying to shove it into everything despite no one wanting it: they think the exposure will drive demand, when it’s actually exposure to something desirable that drives real demand. Instead, the exposure is frustrating or dangerous, because the output is often wrong and full of corporate censorship (which hasn’t once been accurate, but has always been easy to bypass any time I’ve run into it).
I just wonder whether MS bet the farm on it or only bet a survivable loss. Like, is the CEO just worried about his job, or about the entire company’s future?