Are all uses of AI out of the question?

I understand most of the reasoning around this. Training AI models requires gigantic datacenters that consume copious amounts of resources (electricity and water), making those resources more expensive for everyone, all for something that doesn't offer much benefit. If anything, it feels like it's fueled a downturn in the quality of content, intelligence, and pretty much everything else.

With the recent state of the job market in the US, I've had little to no choice about what I work on, and I've ended up working on AI.

The more I learn about it, the angrier I get about things like generative AI: open-ended tasks like generating art or prose. AI should be a tool that helps us, not one that takes away the things that make us happy so others can make a quick buck by taking a shortcut.

It doesn't help that AI is pretty shit at these jobs, burning cycle after cycle of processing just to produce hallucinated slop.

But I'm beginning to think that when it comes to AI refinement there's actually something useful there. The idea is to use heuristics that run on your own machine to reduce the context and the number of iterations/cycles an AI/LLM spends on a specific query, which in turn reduces the chance of hallucinations and cuts down on the slop. The caveat is that this can't be used for artistic purposes, since it requires you to be able to digest and specify context and instructions, which is harder (maybe impossible) to do for generative AI. I haven't gone down that rabbit hole, because I don't think AI should be generating any kind of art at all.
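To make that concrete, here's a minimal sketch of the kind of local heuristic I mean. It's purely illustrative (the function names and the keyword-overlap scoring are my own, not any real library): a cheap filter that runs on your machine, ranks candidate context chunks against the query, and keeps only the best few, so whatever ends up in the prompt is smaller and more relevant before any model ever runs.

```python
# Hypothetical example of local "refinement": trim candidate context
# with a cheap keyword-overlap heuristic before building an LLM prompt,
# so the model gets less input and needs fewer iterations.

def score_chunk(query: str, chunk: str) -> float:
    """Fraction of the query's words that also appear in the chunk."""
    q_words = set(query.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / len(q_words) if q_words else 0.0

def refine_context(query: str, chunks: list[str], keep: int = 2) -> list[str]:
    """Keep only the `keep` highest-scoring chunks; runs entirely locally."""
    ranked = sorted(chunks, key=lambda c: score_chunk(query, c), reverse=True)
    return ranked[:keep]

chunks = [
    "How to configure the database connection pool",
    "Release notes for version 2.3",
    "Database pool sizing and connection timeouts",
]
slim = refine_context("database connection pool size", chunks)
# Only the two database-related chunks survive; the release notes are dropped.
```

In practice you'd use something smarter than word overlap, but the point is that the filtering is deterministic, inspectable, and costs almost nothing compared to extra model iterations.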

The ultimate goal behind refinement is to make existing models more useful, reducing the need to come up with new models every season and consume all our resources generating more garbage.

And then to make the models themselves need less hardware and fewer resources when executed.
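One common technique in that direction (my example, not something the post specifies) is weight quantization. The numbers below are back-of-envelope assumptions, not benchmarks, but they show why storing each weight in fewer bits cuts the memory a model needs at inference time:

```python
# Rough illustration (assumed figures): memory needed just for a model's
# weights, ignoring activations and KV cache, at different precisions.

def weight_memory_gb(params_billions: float, bytes_per_weight: float) -> float:
    """Approximate weight storage in GB for a model of the given size."""
    return params_billions * 1e9 * bytes_per_weight / 1e9

fp16_gb = weight_memory_gb(7, 2.0)   # 16-bit floats: 2 bytes per weight -> 14.0 GB
int4_gb = weight_memory_gb(7, 0.5)   # 4-bit quantized: 0.5 bytes per weight -> 3.5 GB
```

So a hypothetical 7B-parameter model drops from roughly 14 GB to 3.5 GB of weight storage when quantized from 16-bit to 4-bit, which is the difference between needing a datacenter GPU and running on a consumer machine (with some quality trade-off).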

I can come up with some examples if people want to hear more about it.

How do you all feel about that? In that context, is AI still a hard no?

All in all, hopefully I won't be working in this space much longer, but I'll make the most of what I'm contributing to it by trying to make it better rather than fuel further reckless consumption.

  • ZDL@lazysoci.al · 3 days ago

    All uses of degenerative “AI” are basically out of the question. Not just on ethical grounds or moral grounds or environmental grounds (all of which have strong arguments against degenerative “AI”) but because they simply cannot scale. In terms of investments, stock, etc. “AI” firms are something like 20% of the US economy right now (!) BUT in terms of revenue none of them cover even 1% of their costs to run. There’s no business model out there where people are going to pay, in aggregate, a hundred times what they’re paying now for the rather low-grade output these machines can provide just for these companies to reach a break-even point.

    There’s no way forward. The bubble will pop (and the longer it takes, the more damage it’s going to cause the US economy, given that it’s already the largest bubble in history and puffing up larger daily), and when it does, as with previous bubbles, little niches will develop where degenerative “AI” has some utility at more modest, plausible scales (likely privately-run, custom-trained models), just like after the previous five (or is it six? I’ve lost count) “AI” winters.

    As for a “hard no”, I don’t care how “capable” the technology becomes: if it’s trained on wholesale theft of others’ works, that’s a hard no anyway.