Are all uses of AI out of the question?
I understand most of the reasoning around this. Training AI models requires gigantic datacenters that consume copious amounts of resources (electricity and water), making those resources more expensive for everyone, all for something that doesn't have much benefit. If anything, it feels like it's fueled a downturn in the quality of content, of intelligence, of pretty much everything.
With the recent job market in the US I've had little to no choice in what I work on, and I've ended up working on AI.
The more I learn about it, the angrier I get about things like generative AI, meaning the open-ended stuff like generating art or prose. AI should be a tool to help us, not take away the things that make us happy so others can make a quick buck taking a shortcut.
It doesn't help that AI is pretty shit at these jobs, spending cycle upon cycle of processing just to come up with hallucinated slop.
But I'm beginning to think that when it comes to AI refinement there's actually something useful there. The idea is to use heuristics that run on your machine to reduce the context and the number of iterations/cycles an AI/LLM spends on a specific query, which reduces the chance of hallucinations and stops the slop. The caveat is that this can't be used for artistic purposes, since it requires you to be able to digest and specify context and instructions, which is harder to do for generative AI (maybe impossible; I haven't gone down that rabbit hole because I don't think AI should be generating any type of art at all).
The ultimate goal behind refinement is making existing models more useful, reducing the need to come up with new models every season and consume all our resources generating more garbage.
And, beyond that, making the models themselves need less hardware and fewer resources when they run.
I can come up with more examples if people want to hear more about it, but here's a rough sketch of the kind of heuristic I mean.
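This is just a toy sketch in Python; the specifics (the term-overlap scoring, the `call_llm` stand-in) are made-up illustrations, not a real implementation. The point is that the filtering is cheap and runs locally before anything hits the model:

```python
# Toy illustration of "refinement": cheap heuristics run locally to shrink
# the context before the model ever sees it, so the LLM spends fewer
# cycles per query and has less irrelevant material to hallucinate from.
import re
from collections import Counter


def _terms(text: str) -> list[str]:
    return re.findall(r"[a-z0-9]+", text.lower())


def refine_context(query: str, chunks: list[str], keep: int = 3) -> list[str]:
    """Rank candidate context chunks by term overlap with the query, keep the top few."""
    query_counts = Counter(_terms(query))

    def score(chunk: str) -> int:
        # Free-to-compute relevance: count query terms that appear in the chunk,
        # weighted by how often each term occurs in the query.
        chunk_terms = set(_terms(chunk))
        return sum(n for term, n in query_counts.items() if term in chunk_terms)

    ranked = sorted(chunks, key=score, reverse=True)
    return [c for c in ranked[:keep] if score(c) > 0]


def answer(query: str, docs: list[str]) -> str:
    # Smaller, on-topic prompt -> fewer tokens burned, less room for slop.
    context = refine_context(query, docs)
    prompt = (
        "Answer using ONLY the context below.\n"
        + "\n---\n".join(context)
        + f"\n\nQuestion: {query}"
    )
    return call_llm(prompt)  # hypothetical stand-in for whatever model API you use
```

A real version would use smarter (but still local and cheap) scoring, but even something this dumb cuts the prompt down to the handful of chunks that actually matter.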
How do you all feel about that? Is it still a hard no for AI in that context?
All in all, hopefully I won't be working in this space much longer, but while I am, I'll make the most of what I'm contributing by trying to make it better rather than fueling further reckless consumption.


Yes, drones should use AI, but just like every other use case, the key criterion is where and when. AI should never make the decision to kill someone.
AI is great for navigating, summarizing status, distinguishing targets, and deciding when to highlight something of interest for the operator. There's no reason I shouldn't be able to tell a drone AI to fly in the vicinity of terrorist base x and notify me when something of interest happens. It should figure out how to get there, how to stay discreet, how to avoid attacks/collisions, and maybe how to coordinate with its buddies for better coverage.
I also like the descriptions I've seen of the "loyal wingman" concept. If a pilot flies into a combat area, his drone wingmen should be able to keep up, stay stealthy, avoid attacks, and flag anything noteworthy. From the pilot's perspective, he should just have more weapons at his disposal without worrying about carrying them all or flying the aircraft that does. But the pilot decides, the pilot presses the button, and the pilot is accountable.
Or if you're talking personal drones: if I'm some sort of streamer, then yes, my drone's AI ought to be able to keep me in view and try to get some good video while I do whatever I'm doing.
AI regularly mistakes simple items for guns. As a former drone operator for the Air Force, I wouldn't trust any of those metrics to be true, especially when it's the difference between dropping a bomb or not.
I once read some details about automated X-ray reading, where you have similar life-or-death concerns. But the goal for the AI was simply to highlight areas that looked suspicious; it was still up to a human to read the scan.
I remember it included statistics showing that this resulted in both better accuracy and better efficiency: more correct reads in less time.
Obviously there's a human tendency to just go with what was circled, so your process needs to encourage the human to still look carefully.
I also remember that news story. After more research, it turned out the AI had just gotten really good at finding the control samples. Nothing about the 'breakthrough' was applicable to real life.
This exactly. These "AIs" are basically the epitome of studying to pass the test, not actually learning.