It’s worth reading the article rather than just trying to answer the headline, but the short answer is: the people that own it.
Keep in mind that what we’re calling “AI” isn’t artificial general intelligence (cf. Kryten, Data, or R2-D2). The most visible AI is a Large Language Model- basically a predictive algorithm that goes through its training material and says, “99% of the time, when someone says ‘69’, people respond ‘nice’; therefore, when people say ‘69’, I should respond with ‘nice’.”
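A toy sketch of that idea in Python (the “corpus” here is made up, and real LLMs predict tokens with neural networks rather than lookup tables, but the “most statistically likely response” mechanic is the same):

```python
from collections import Counter, defaultdict

# Count how often each reply follows each message in a (made-up) corpus,
# then always emit the statistically most common reply.
corpus = [("69", "nice"), ("69", "nice"), ("69", "lol"), ("hello", "hi")]

replies = defaultdict(Counter)
for message, reply in corpus:
    replies[message][reply] += 1

def respond(message: str) -> str:
    # Pick whichever reply showed up most often after this message.
    return replies[message].most_common(1)[0][0]

print(respond("69"))  # -> "nice"
```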
Or, with AI image gen, it knows that when someone asks it for an image of a hand holding a pencil, it looks at all the artwork in its training database and says, “this collection of pixels is probably what they want”.
But the models don’t know why 69 is nice, nor what a hand is. They just spit out the proper response based on statistical probability.
The thing is that the ‘proper’ response can be weighted by giving priority to certain responses- or rejecting certain responses- based on whatever motives the owner has. Take Grok as an example, and its blatant framing of Musk as the Greatest Man Who Ever Lived™. Whoever weighted those responses failed to consider what happens when you ask if Musk is the best Nazi, or whatever. You’ll notice those responses suddenly changed after people started figuring out how to game the prompts to get them.
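Mechanically, that kind of thumb-on-the-scale looks something like this (the scores and the bias table are invented for illustration; real systems do it with system prompts, fine-tuning, or output filters):

```python
# Toy illustration: the model scores candidate responses, and the owner
# adds a bias on top before the best-scoring one is picked.
candidates = {
    "Musk is a flawed CEO": 0.71,
    "Musk is the greatest man alive": 0.28,
}

owner_bias = {"Musk is the greatest man alive": 0.60}  # thumb on the scale

weighted = {text: score + owner_bias.get(text, 0.0)
            for text, score in candidates.items()}
print(max(weighted, key=weighted.get))  # the bias flips which answer wins
```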
AI chatbots are the mouthpiece of whoever owns them… and that gives a level of sophistication that we’ve never seen before in the billionaires’ attempts to manipulate us.
Or, with AI image gen, it knows that when someone asks it for an image of a hand holding a pencil, it looks at all the artwork in its training database and says, “this collection of pixels is probably what they want”.
This is incorrect. Generative image models don’t contain databases of artwork. If they did, they would be the most amazing fucking compression technology, ever.
As an example model, FLUX.1-dev is 23.8GB:
https://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main
It’s a general-use model that can generate basically anything you want. It’s not perfect and it’s not the latest & greatest AI image generation model, but it’s a great example because anyone can download it and run it locally on their own PC (and get vastly better results than ChatGPT’s DALL-E model).
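For the curious, running it locally looks roughly like this with Hugging Face’s diffusers library (a sketch: it assumes you’ve accepted the model license on Hugging Face and have a GPU with enough VRAM):

```python
import torch
from diffusers import FluxPipeline

# Download (once) and load the FLUX.1-dev weights, then generate an image.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")  # needs a fairly beefy GPU; offloading options exist too

image = pipe(
    "a hand holding a pencil, photorealistic",
    num_inference_steps=28,
    generator=torch.Generator("cpu").manual_seed(42),  # fixed seed = reproducible
).images[0]
image.save("hand.png")
```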
If you examine the data inside the model, you’ll see a bunch of metadata headers and then an enormous array of arrays of floating point values. Stuff like [[0.0213, -0.1096, 0.0417, …], …]. That is what a generative image AI model uses to make images. There’s no database to speak of.

When training an image model, you need to download millions upon millions of public images from the Internet and run them through their paces against an actual database like ImageNet. ImageNet contains lots of metadata about millions of images, such as their URLs, bounding boxes around parts of the image, and keywords associated with those bounding boxes.
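You can check the “no database, just floats” claim yourself with the safetensors library; here’s a minimal sketch (the filename is illustrative, pick any of the downloaded weight files):

```python
from safetensors import safe_open

# Open a downloaded weight file and list what's inside: just named tensors
# of floating point values, no images anywhere.
with safe_open("flux1-dev.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:      # first few tensor names
        tensor = f.get_tensor(name)
        print(name, tuple(tensor.shape), tensor.flatten()[:4])
```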
The training is mostly a linear process, so the images never really get loaded into a database; they just get read, along with their metadata, into a GPU, which does some machine learning stuff to generate arrays of floating point values. Those values ultimately end up in the model file.
It’s actually a lot more complicated than that (there are pretraining steps and classifiers and verification/safety stuff and more) but that’s the gist of it.
I see soooo many people who think image AI generation is literally pulling pixels out of existing images but that’s not how it works at all. It’s not even remotely how it works.
When an image model is being trained, any given image might modify one of those floating point values by like ±0.01. That’s it. That’s all it does when it trains on a specific image.
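A toy illustration of how small one example’s contribution is (made-up numbers and plain SGD on a single “weight”, nothing model-specific):

```python
import torch

# One "weight" and one training example. A single gradient step nudges the
# weight by a tiny amount; the example itself is never stored anywhere.
weight = torch.tensor([0.5000], requires_grad=True)
target = torch.tensor([0.5230])  # stand-in for "what this image teaches"

loss = ((weight - target) ** 2).mean()
loss.backward()
with torch.no_grad():
    weight -= 0.1 * weight.grad  # learning rate 0.1

print(weight)  # moved by roughly 0.005, not "a copy of the image"
```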
I often rant about where this process goes wrong and how it can result in images that look way too much like specific images in the training data, but that’s a flaw, not a feature. It’s something that every image model has to deal with, and it will improve over time.
At the heart of every AI image generation run is a random number generator. Sometimes you’ll get something similar to an original work, especially if you generate thousands and thousands of images. That doesn’t mean the model itself was engineered to do that. Also: a lot of that kind of problem happens in the inference step, but that’s a really complicated topic…
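That random number generator is literally where a diffusion image starts: pure noise that the prompt then steers. A sketch (the tensor shape is illustrative):

```python
import torch

# Image generation starts from random noise; the prompt then steers the
# denoising. Same seed + same prompt + same settings = same image.
generator = torch.Generator().manual_seed(42)
latent = torch.randn((1, 16, 64, 64), generator=generator)  # shape is illustrative
print(latent.flatten()[:4])  # the "seed" of the final image is just noise
```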
This is incorrect. Generative image models don’t contain databases of artwork. If they did, they would be the most amazing fucking compression technology, ever. …snip… The training is mostly a linear process, so the images never really get loaded into a database; they just get read, along with their metadata, into a GPU, which does some machine learning stuff to generate arrays of floating point values. Those values ultimately end up in the model file.
Where does it get read from? A database, right? Yeah. That’s called a database. It may not be a massive repository of art to rival the Vatican’s secret collection, but it is a database of digital art.
As for it being complex… yeah. That’s why I kept it simple and glossed over all the complex stuff that’s not really, you know, relevant to the question of who owns it.
I did stumble on an interesting AI use that seems super legit for creatives:
There’s an AI-powered app for a specific brand of guitar amplifier. If you want your guitar to sound like a particular artist or a particular song, you tell it via a natural language input and it does all the adjustments for you.
You STILL have to have the personal talent to, you know, PLAY the guitar, but it saves you hours of fiddling with dials and figuring out what effects and pedals to apply to get the sound you’re looking for.
Video, same player, same guitar, same amp, multiple sounds:
https://youtube.com/shorts/wsGj4zsfOuQ
From a purely artistic perspective, this would be like asking AI for a Pantone or RGB palette set for a specific work of art. All it’s doing is telling you the colors so you can avoid doing all the research and mixing yourself.
How you USE those colors? That’s on you!
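And you don’t even need AI for the palette trick; it’s a few lines of Python with k-means clustering (a sketch, assuming Pillow and scikit-learn are installed; the filename is a placeholder):

```python
# Pull the dominant colors out of an image by clustering its pixels.
from PIL import Image
import numpy as np
from sklearn.cluster import KMeans

pixels = np.asarray(Image.open("artwork.png").convert("RGB")).reshape(-1, 3)
palette = KMeans(n_clusters=5, n_init=10).fit(pixels).cluster_centers_
print([tuple(int(c) for c in color) for color in palette])  # five RGB tuples
```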
AI was trained on massive amounts of stolen data, and it could only happen because of Big Data. By definition, AI is the evil child of surveillance capitalism.
Therefore the ones who control it are the surveillance capitalists.

Kind of a crappy article:
“If it’s a truly transformative technology the manner of the transformation and the values encoded into it will be set now. That is a political question.”
Which the author pretty much completely ignores…
I wanna know what movie that silver-dressed dude with the gun is from.
Billionaires.