Are all uses of AI out of the question?
I understand most of the reasoning around this. Training AI models requires gigantic datacenters that consume copious amounts of resources (electricity and water), making those resources more expensive for everyone, for something that doesn’t have many benefits. If anything, it feels like it’s fueled a downturn in the quality of content, intelligence, and pretty much everything else.
With the recent job market in the US I’ve had little to no choice in what I work on, and I’ve ended up working on AI.
The more I learn about it, the angrier I get about things like generative AI, i.e. the open-ended stuff like generating art or prose. AI should be a tool to help us, not take away the things that make us happy so others can make a quick buck taking a shortcut.
It doesn’t help that AI is pretty shit at these jobs, spending cycles upon cycles of processing just to come up with hallucinated slop.
But I’m beginning to think that when it comes to AI refinement there’s actually something useful there. The idea is to use heuristics that run on your machine to reduce the context and the number of iterations/cycles that an AI/LLM spends on a specific query, thus reducing the chance of hallucinations and stopping the slop. The caveat is that this can’t be used for artistic purposes, as it requires you to be able to digest and specify context and instructions, which is harder to do for generative AI (maybe impossible; I haven’t gone down that rabbit hole because I don’t think AI should be generating any type of art at all).
The ultimate goal behind refinement is making existing models more useful, reducing the need to come up with new models every season and consume all our resources to generate more garbage.
And then making the models themselves need less hardware and fewer resources when executed.
I can come up with some examples if people want to hear more about it.
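But as a quick taste, here’s a very rough sketch of the kind of thing I mean (my own toy example with made-up names; real heuristics would be smarter): score document chunks against the query locally, and only send the top few to the model, so it chews through far less context.

```python
import re
from collections import Counter

def tokenize(text):
    """Cheap local tokenizer -- no model calls, just a regex."""
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query_tokens, chunk):
    """Crude relevance score: how often query words appear in the chunk."""
    counts = Counter(tokenize(chunk))
    return sum(counts[t] for t in query_tokens)

def refine_context(query, chunks, keep=3):
    """Keep only the few chunks that look relevant to the query,
    instead of shipping everything to the LLM."""
    q = set(tokenize(query))
    ranked = sorted(chunks, key=lambda c: score(q, c), reverse=True)
    return ranked[:keep]

# Usage: send only refine_context(question, all_chunks) to the model,
# instead of the whole document pile.
```

The point is that the filtering runs cheaply on your own machine, so the expensive model never sees the irrelevant bulk of the context in the first place.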
How do you all feel about that? Is it still a hard no for AI in that context?
All in all hopefully I won’t be working much longer in this space but I’ll make the most of what I’m contributing to it to make it better and not further reckless consumption.
Nope. Machine learning has been a legit comp sci field for decades. Tons of really cool applications in medicine, for example.
For me, the current “AI” frenzy has a couple core problems. Avoid those, and you’re good.
- LLM chat bots being treated as intelligent, when they fundamentally can NEVER be intelligent. (i.e. researchers have determined that there IS no solution to hallucinations without starting over from scratch with a different modeling strategy.)
- Training machine models on public, or even copyrighted, works, and then thinking it’s okay to use that for personal profit. In particular, the stark inequality in how copyright is enforced for normal people but ignored for private “AI” enterprises.
- The capacity of “AI” tools to cripple human potential, rather than reinforce or elevate it, and the fact that so many people confuse the former for the latter. Like, I can genuinely relate to the idea of having artistic ideas in your head and not having the skills to bring them into reality, and seeing image generators as a way to fill in the gap. There IS a promotion of creativity there, but being able to run prompts through an image generator is NOT the same thing as spending years developing actual skill, and too many people choosing the “AI” route would be a net negative for humanity. Similarly (and more intimately for me, as a software developer), having too many people rely too heavily on “AI” tools in software development is going to produce a generation of HORRIFICALLY incompetent developers. And that’s not just theoretical; we’re already seeing the impact of overreliance on “AI” in the industry.
One thing that rings true with me: the process of creation transforms the creator. Generative AI takes that away.
The human learns nothing, thinks critically about nothing, explores nothing. They are unchanged, with nothing to show for their effort, and ultimately replaceable.
Generative AI turns the human gift of creativity and reasoning into a mindless act of commoditized consumption.
The economic side of things (and companies lying about what AI actually is)
I believe it’s good for accessibility use (if it works well and is actually helpful for disabled people) and language translation, maybe even grammar refinement (non-native speakers often hesitate to write due to lack of confidence, etc.).
Not sure about anything else; overall, in its current state it’s a resource-wasting dumpster fire IMO. Too few good uses considering the costs.
I mainly care about chat bot AI being used as a consultant. AI using human language seems to be way too convincing for the average person even though it’s wrong over 50% of the time. Looking at how people are using it is a perfect example: instead of building a specific ML pipeline for the task, they just pipe it into ChatGPT. Copilot in Excel is an example, but on Hacker News these “solutions” pop up daily.
AI for analysis (neural networks or machine learning) has been used for years and it’s a good tool to validate or confirm some data. It’s a useful tool meant for humans.
Chatbots and waifu generators are only used by billionaires to brainwash people who are already brainwashed by TV and social media, and it’s very bad.
Also LLMs hallucinate and choke on their own vomit. Who can accept such an unreliable application?
Pretty much sums it up perfectly.
Also worth mentioning that machine learning has been a thing for ages and has only recently fallen under the same umbrella as generative dogshit like LLMs. Obviously there are some similarities under the covers, but for all intents and purposes they’re hardly the same thing.
TBH, pseudo-scientific terminology like “machine learning” is a big part of how we got pseudo-scientific terminology like “AI”. Nothing scientifically worthwhile comes from fake anthropomorphism. Just more funding. Ofc they’ll concoct another meaningless term (“ML” -> “AI”) when the money starts running out. Pure grift.
Oh absolutely. It’s all semantic bullshit designed intentionally to blur the lines and create buzzwords.
It’s a useful tool meant for ~~humans~~ genocide, surveillance, etc.:
- https://www.aljazeera.com/news/2024/4/4/ai-assisted-genocide-israel-reportedly-used-database-for-gaza-kill-lists
- https://www.aclu.org/news/privacy-technology/machine-surveillance-is-being-super-charged-by-large-ai-models
- https://newrepublic.com/article/202565/flock-safety-police-surveillance-dystopia
- ETC
I think there’s a pretty big problem in CS research right now in that very few people are developing algorithms anymore. People get to a black box and throw a gradient descent solver at it to approximate what’s needed, rather than working out how to actually solve the problem.
So while, yes I agree, I think it’s being overused even for analysis.
I’m sure there is still a lot of work on classical algos, but the current “throw more processing power at the problem” wave surely eats away at serious research. Let’s hope the crash and winter come soon, and not too suddenly.
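To make the contrast concrete, here’s a toy example (mine, not from the comment above): fitting a straight line has an exact closed-form solution, but the fashionable move is to reach for an iterative gradient-descent loop anyway.

```python
import numpy as np

# Toy data: y = 3x + 1 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3 * x + 1 + rng.normal(0, 0.5, 100)
X = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]

# Working the problem out: closed-form least squares, one solve
w_exact, *_ = np.linalg.lstsq(X, y, rcond=None)

# The black-box route: iterate gradient descent on the same loss
w = np.zeros(2)
lr = 0.01
for _ in range(5000):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad

print(w_exact)  # exact answer
print(w)        # approximation after 5000 iterations
```

For two parameters the difference is trivial, but the same “just run a solver on it” habit carries over to problems where the exact structure actually matters.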
Medical and chemical research. Or anything else that might actually result in new concepts from the thing just trying every possible combination until something sticks. Which doesn’t include how LLMs are used by average everyday people (i.e. as a chatbot or an information-gathering tool).
Not at all. But it is almost exclusively abused. It literally makes people dumber if they use it to learn something.
Some examples:
- Dictation and summaries. AI “note-takers” are being built into videoconferencing platforms and this is super helpful, not only for later reference in a search, but also for a supervisor to just get a quick glimpse at what’s going on with employees.
- Searching a private database. Google Drive search is ironically horrendous. Like if you type in the exact name of a folder, it will show you a bunch of other shit and not that folder. Meanwhile I can ask Gemini a very specific question whose answer may be in a single cell of some spreadsheet somewhere, and it will not only give me the answer but also link to the associated spreadsheet and even the specific cell where it found it.
It just doesn’t have nearly as many applications as the techbros would dupe you into believing. And it’s producing harm on a massive scale, in so many different ways.
I’m actually not convinced by even the first example you give. I’ve yet to read an “AI” summary of a meeting that felt like a good summary. I presume it’s because they lack any concepts and are purely going off which words trigger other words, and so don’t have a way to check the value or relevance of what’s being logged the way a person would.
They’re not supposed to be good. Just some simple cliffnotes. Much better than nothing. And much faster than actually attending every meeting.
They seem to work reasonably well in my experience.
When my boss tried to get us to use AI that was one of the apps. In my experience it was VERY good at summarizing the content-free meetings of typical corporate bullshit, but it was IMPOSSIBLY bad at meetings that were actually productive. It latched onto the wrong things at a staggering rate.
I’ll pay more attention. I’ve skimmed them a few times but I didn’t care enough to actually read them.
I suspect that’s why people think they work so well. On a superficial scan they seem OK. They’re pretty coherent and on a quick pass it’s easy to miss the important details.
It’s just if you use them to go back over things that were discussed (in meetings with actual content, I mean, not in the usual corporate bloviation) you start seeing them derive incorrect conclusions (like an infamous example where it summarized us as having decided on an action we’d explicitly rejected), or focus on minutiae of fringe elements of the main discussion while barely mentioning the main topic.
It depends on what you think AI is. Give me a definition or specific examples. If you don’t, you’re peddling snake oil.
The main problem with “AI” is that it doesn’t exist. It’s a grift based on pseudo-science and hype.
As far as the diverse technologies falsely labelled as “AI”, they’ll largely suffer the same fate as any technology under capitalism: The technology will be used for resource extraction, extreme privilege, and violent enforcement.
I agree that some of these technologies are actually useful and getting better. For example, I enjoy search summaries even though they’re not always perfect. However, there is absolutely no “intelligence” involved in these computer programs. Furthermore, I think the applications to grifting, violence, surveillance, cops/prisons, genocide, etc. far outweigh the almost insignificant progress in search summaries, auto-complete, chatbots, content generation, and other so-called “AI” programs. The term “AI” is used to lump these diverse programs together in order to claim collective success and deny failure, and to promote genocidal applications under the cover of applications for generating pictures of cats.
I prefer more scientifically accurate terms like statistics, big data, etc., but they are much less useful for grifting and they expose the underlying grift (e.g. stealing/spying on data, using math/computation instead of magic, the value of individual programs, etc.). The term “AI” is used to obscure the scientific reality. None of these companies want to talk about where their data comes from or how it’s manipulated. It’s just “intelligence”, smh. “AI” is a whitewashing term meant to obscure the many underlying problems, so more scientifically valid terms are almost completely avoided.
Personally I am a big fan of AI in multiple contexts, but the cons just outweigh the pros completely. Still, here are some great things that can be done with it:
- Image upscaling. Upscaling algorithms are always a best guess and will blur a lot of things. That’s where AI shines. It can make crisp edges, and if you keep it to 2x, the hallucinations are minimal. I personally use Waifu2x; it was made before the AI boom and has always been reliable.
- Copilot. Not Microsoft’s “Copilot”, but an AI acting as your copilot: something that reviews your stuff and gives suggestions. You can take those suggestions or not, but it still relies on you to apply them. As an artist among non-artist friends, not having anyone point out minor details or mistakes makes it hard to improve. With AI, you just throw your art at it and say “make it better”. The AI gives back an alternative version, and you can choose to add some detail it added. But you remain in control. There was an interesting talk about this at a Blender conference that I can’t find anymore, but it illustrates the idea perfectly. I never actually used that technique, but if AI weren’t so shitty I would.
Did you know the Defense Department asked for nearly 300 billion dollars for their own AI implementation? Do you think drones should use AI? The surface level use is just marketing. They’re trying to put it everywhere and 99% of the uses are way worse than deluding people with chatbots.
Yes, drones should use AI, but just like every other use case, the key question is where and when. AI should never make a decision to kill someone.
AI is great for navigating, summarizing status, distinguishing targets, and deciding when to highlight something of interest for the operator. There’s no reason I shouldn’t be able to tell a drone AI to fly in the vicinity of terrorist base x and notify me when something of interest happens. It should figure out how to get there, figure out how to be discreet, figure out how to avoid attacks/collisions, and maybe coordinate with its buddies for better coverage.
I also like the descriptions I’ve seen of a “loyal wingman”. If a pilot flies into a combat area, his drone wingmen should be able to keep up, stay stealthy, avoid attacks, and notify on alerts. From the pilot’s perspective, he should just have more weapons at his disposal without worrying about carrying them all or flying the aircraft that does. But the pilot decides, the pilot presses the button, the pilot is accountable.
Or if you’re talking about personal drones: if I’m some sort of streamer, yes, my drone AI ought to be able to keep me in view and try to get some good video while I do whatever I’m doing.
AI regularly confuses simple items as guns. As a former drone operator for the Air Force I wouldn’t trust any of those metrics to be true. Especially when it’s the difference between dropping a bomb or not.
I once read some details about automated X-ray reading, where you have similar life-or-death concerns. But the goal for the AI was simply to highlight areas that looked suspicious, and it was still up to a human to read it.
I remember it included statistics that it resulted in both better accuracy and efficiency. More correct in less time.
Obviously there is a human tendency to just go with what was circled, so your process needs to encourage the human to look carefully.
I also remember that news story. After more research it turned out that AI had just gotten really good at finding the control samples. Nothing about the ‘breakthrough’ was applicable to real life.
This exactly. These "AI"s are basically the epitome of studying to pass the test, not learning.
To clarify, this is about LLMs and generative image creation. Other applications and technology are probably generally outside the scope of this community.
There are lots of other technologies that would’ve once been called AI, until we figured out how to do them. These are all fine.
There are a handful of problems these two specific technologies share which do not look like they’re likely to be solved sufficiently anytime soon:
- LLMs are predicting the next word that fits. If the answer to a given problem isn’t prevalent enough in the data, or some of the randomness inherent in the system makes a wrong answer fit the specific phrasing better, you may get an inaccurate result (see the toy sketch after this list). These errors may be difficult to detect and make these technologies difficult to use safely for practical applications where being right, being safe, or simply not wasting the time of those around you are important.
- Provenance of the training materials. It’s matching patterns found in existing works. That’s part of how you get realistic results, but it also restricts the creation of truly novel works. Even if you can get around that, there’s still:
- A misunderstanding of what art is, and why we engage with it. Part of what makes art valuable is that it’s a window into another human’s brain. This is a conflict we’ve run into before with technologies like cameras, but there’s still intentionality in shot choice, and the camera acting in predictable ways that allow the machine to disappear from the end result. This lack of the core of what makes art valuable makes creative applications nonviable for the moment.
- These are being pushed into varying aspects of our lives by the hype of how close they look to solving real world problems. But until these issues are fixed, none of the products that are being pushed will really address the needs that they’re supposed to or are ready for production environments. There absolutely are exciting developments, but they’re kind of happening off to the side in much more specialized areas, like the geometry solver from Google. If these things were still confined to R&D, I bet communities like this wouldn’t exist. Maybe all the hype and funding will help uncover enough similar applications quick enough to make it all worth it, but I very much doubt it.
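To illustrate the first bullet, here’s a toy sketch (invented probabilities, not any real model) of how sampling randomness alone can surface a wrong answer that merely “fits”:

```python
import random

# Made-up next-token probabilities a model might assign after
# "The capital of Australia is" -- numbers invented for illustration.
candidates = {
    "Canberra":  0.55,  # correct, and the most probable
    "Sydney":    0.35,  # wrong, but "fits" the phrasing well
    "Melbourne": 0.10,
}

def sample_next(dist):
    """Pick one token at random, weighted by its probability."""
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Run it a few times: wrong answers come out a sizable fraction
# of the time, purely because of the sampling randomness.
print([sample_next(candidates) for _ in range(10)])
```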
There are more issues like rate of improvement appearing to taper off extremely hard, power consumption of training destabilizing local electrical grids and worsening droughts, AI related companies having overinflated market caps and making up too large a chunk of the stock market which risks another financial crisis, AI psychosis, our educational system not being set up to deal with students having easy access to plausible looking work without mental exertion or learning on their part, and probably others that I’m forgetting at the moment.
For me the “Fuck” in “Fuck AI” stands for “Fuck the misuse caused by shitty companies”. I’m not the only one who thinks that way. The worst part is that, unlike the numerous previous bubbles, things are simply going too far with this one. And this is just the worst possible timing for this kind of shit to happen, especially in this wild, uncontrolled way. Many sectors will end up being pushed back by years because of this automated-slop gold rush, and disinformation will rule the net.
I’ve yet to meet a person who is truly completely anti-AI. Instead, most people just seem completely over it. It’s mislabelled, exaggerated, forced into every single physical and digital product, nearly every ad, the companies are all investing in it with circular valuations between themselves, we have to opt-out (if there even is an option), and it’s just not very useful for most applications. It all feels like a massive Ponzi scheme. And all of this without actually making a profit!? When it pops, this bubble is going to destroy the economy…
All uses of degenerative “AI” are basically out of the question. Not just on ethical grounds or moral grounds or environmental grounds (all of which have strong arguments against degenerative “AI”) but because they simply cannot scale. In terms of investments, stock, etc. “AI” firms are something like 20% of the US economy right now (!) BUT in terms of revenue none of them cover even 1% of their costs to run. There’s no business model out there where people are going to pay, in aggregate, a hundred times what they’re paying now for the rather low-grade output these machines can provide just for these companies to reach a break-even point.
There’s no way forward. The bubble will pop (and the longer it takes the more damage it’s going to cause the US economy given that it’s already the largest bubble in history and puffing up larger daily) and when it does, as with previous bubbles, little niches will develop where degenerative “AI” has some utility at more modest, plausible scales (likely privately-run custom-trained models) just like the previous five (or is it six; I’ve lost count?) “AI” winters.
As for a “hard no”, I don’t care how “capable” the technology becomes: if it’s trained on wholesale theft of others’ works, that’s a hard no anyway.