Are all uses of AI out of the question?

I understand most of the reasoning around this. Training AI models requires gigantic datacenters that consume copious amounts of resources (electricity and water), making them more expensive for everyone, for something that doesn’t have many benefits. If anything, it feels like it’s fueled a downturn in the quality of content, intelligence, and pretty much everything else.

With the recent job market in the US I’ve had little to no choice in what I work on, and I’ve found myself working on AI.

The more I learn about it, the angrier I get about things like generative AI, meaning open-ended tasks like generating art or prose. AI should be a tool that helps us, not something that takes away the things that make us happy so others can make a quick buck by taking a shortcut.

It doesn’t help that AI is pretty shit at these jobs, spending cycle upon cycle of processing just to come up with hallucinated slop.

But I’m beginning to think that when it comes to AI refinement there’s actually something useful there. The idea is to use heuristics that run on your own machine to reduce the context and the number of iterations/cycles that an AI/LLM spends on a specific query, thus reducing the chance of hallucinations and stopping the slop. The caveat is that this can’t be used for artistic purposes, as it requires you to be able to digest and specify context and instructions, which is harder to do for generative AI (maybe impossible; I haven’t gone down that rabbit hole because I don’t think AI should be generating any type of art at all).
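To make the idea concrete, here’s a deliberately minimal sketch of what a local refinement heuristic could look like: score candidate context chunks against the query with plain keyword overlap and only hand the top-scoring ones to the model. The function names, the scoring rule, and the sample chunks are all my own illustration, not from any real library or product.

```python
def score_chunk(query: str, chunk: str) -> int:
    """Count how many words the query and the chunk share (case-insensitive)."""
    query_words = set(query.lower().split())
    chunk_words = set(chunk.lower().split())
    return len(query_words & chunk_words)


def refine_context(query: str, chunks: list[str], keep: int = 2) -> list[str]:
    """Keep only the `keep` most relevant chunks, dropping zero-overlap ones.

    Everything discarded here is context the LLM never has to process,
    which is the whole point: fewer tokens in, fewer cycles spent,
    less room for the model to hallucinate around irrelevant text.
    """
    ranked = sorted(chunks, key=lambda c: score_chunk(query, c), reverse=True)
    return [c for c in ranked[:keep] if score_chunk(query, c) > 0]


chunks = [
    "The billing service retries failed charges three times.",
    "Our logo uses the corporate blue palette.",
    "Refunds are processed by the billing service within 24 hours.",
]
print(refine_context("how does the billing service handle refunds", chunks))
```

In a real setup you’d want something smarter than word overlap, but the shape is the same: cheap local filtering first, expensive model inference second.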

The ultimate goal behind refinement is making existing models more useful, reducing the need to come up with new models every season and consume all our resources generating more garbage.

And then making the models themselves need less hardware and fewer resources when executed.

I can come up with some examples if people want to hear more about it.

How do you all feel about that? Is it still a hard no for AI in that context?

All in all, hopefully I won’t be working in this space much longer, but I’ll make the most of what I’m contributing to make it better rather than to further reckless consumption.

  • paraplu@piefed.social · 3 days ago

    To clarify, this is about LLMs and generative image creation. Other applications and technology are probably generally outside the scope of this community.

    There are lots of other technologies that would’ve once been called AI, until we figured out how to do them. These are all fine.

    There are a handful of problems these two specific technologies share which do not look like they’re likely to be solved sufficiently anytime soon:

    • LLMs are predicting the next word that fits. If the answer to a given problem isn’t prevalent enough in the data, or some of the randomness inherent in the system makes a wrong answer fit the specific phrasing better, you may get an inaccurate result. These errors may be difficult to detect, which makes these technologies hard to use safely for practical applications where being right, being safe, or simply not wasting the time of those around you matters.
    • Provenance of the training materials. It’s matching patterns found in existing works. That’s part of how you get realistic results, but it also restricts the creation of truly novel works. Even if you can get around that, there’s still:
    • A misunderstanding of what art is, and why we engage with it. Part of what makes art valuable is that it’s a window into another human’s brain. This is a conflict we’ve run into before with technologies like cameras, but there’s still intentionality in shot choice, and the camera acting in predictable ways that allow the machine to disappear from the end result. This lack of the core of what makes art valuable makes creative applications nonviable for the moment.
    • These are being pushed into varying aspects of our lives by the hype of how close they look to solving real-world problems. But until these issues are fixed, none of the products being pushed will really address the needs they’re supposed to, or be ready for production environments. There absolutely are exciting developments, but they’re kind of happening off to the side in much more specialized areas, like the geometry solver from Google. If these things were still confined to R&D, I bet communities like this wouldn’t exist. Maybe all the hype and funding will help uncover enough similar applications quickly enough to make it all worth it, but I very much doubt it.

    There are more issues beyond those: the rate of improvement appears to be tapering off extremely hard; the power consumption of training is destabilizing local electrical grids and worsening droughts; AI-related companies have overinflated market caps and make up too large a chunk of the stock market, which risks another financial crisis; AI psychosis; our educational system isn’t set up to deal with students having easy access to plausible-looking work without mental exertion or learning on their part; and probably others that I’m forgetting at the moment.