The term AI itself is a shifting of goalposts. What was AI 50 years ago* is now AGI, so now this shit gets called AI even though it’s nothing of the sort. And everybody’s falling for the hype: governments, militaries, police forces, care providers, hospitals… not to mention the insane amounts of energy & resources this wastes, and other highly problematic, erm, problems. What a fucking disaster.
If it weren’t for those huge caveats I’d be all for it. Use it for what it can do (which isn’t all that much), and research it. But don’t fall for the shit some tech bro envisions for us.
* tbf fucking around with that term probably isn’t a new thing either, and science itself is divided on how to define it.
The current situation is a bubble based on an overhyped extension of the cloud-compute boom. Nearly a trillion dollars of capital expenditure over the past 5 years from the major tech companies has gone into chasing this white whale and filling new data centers with Nvidia GPUs, while revenue caps out at maybe $45 billion annually across all of them for “AI” products and services. And that’s before even talking about ongoing operating costs: power for the data centers, wages for the people running them, or the wages of the people developing services to run on them.
None of this is making any fucking profit, and every attempt to find new revenue either increases their costs even more or falls flat on its face the moment it actually ships. No one wants to call it out at higher levels because Nvidia is holding up the whole fucking stock market right now, and Nvidia crashing out because everyone stopped buying new GPUs would hurt everyone else’s growth narrative.
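Just to put those two numbers side by side (and to be clear, the ~$1T capex and ~$45B/yr revenue above are rough estimates, not audited figures), here’s a quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope: how long would current "AI" revenue take to pay back the
# build-out, even if every dollar of revenue were pure profit?
# (Both figures are the rough estimates from the comment above, not audited numbers.)
capex_total = 1_000_000_000_000   # ~$1T of data-center capex over ~5 years
annual_revenue = 45_000_000_000   # ~$45B/yr of "AI" revenue across the major players

years_to_recoup = capex_total / annual_revenue
print(f"~{years_to_recoup:.0f} years to recoup")   # ~22 years
```

And that’s pretending power, wages, and GPU replacement cycles cost nothing, which is exactly the point.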
We called the basic movement of the grabbers in Defender “AI” to distinguish it from the fixed movement patterns in Space Invaders. We still call that AI in modern videogames.
It’s also the other way around: what was called AI in the past is now called bots. Simple algorithms that approximate the appearance of intelligence, even the earliest chess engines, were also called AI.
And all those uses are correct, because AI is a broad field. We should just use the more specific terms these days though: machine learning, LLM, Bayesian networks, etc.
Agreed. But most people have neither the time nor the capacity to track all of these specifics, so popular discussions of AI-related technologies inevitably break down into a mud pit of people talking past each other about different topics.
Which, if you think about it, is true of most public discussions about any complex topic. It almost invariably devolves into a miscommunication or a discussion about semantics.
People have the capacity to track genres and whatnot; what’s so different about this?
I think people could understand it if it were explained properly, but unfortunately journalists rarely dive deeply enough to do that. It really doesn’t need to get too involved:
machine learning - tell an algorithm what it’s allowed to change and what a “good” output looks like, and it’ll handle the rest to find the best solution
Bayesian networks - the probability of an event given a previous event; this is the underpinning of LLMs
LLM - similar to Bayesian networks, but with a lot more data
And so on. If people can associate a technology with common applications, it’ll work a lot more like genres and people will start to intuit limitations of various technologies.
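If it helps, here’s a toy sketch (mine, in Python, and a gross simplification) of that “probability of the next thing given the previous thing” idea. Real models use vastly more data and context, but the generate-one-word-at-a-time loop is the same basic shape:

```python
# Toy bigram text generator: count which word tends to follow which word,
# then generate text by repeatedly sampling a likely next word.
# A real LLM conditions on far more than one previous word; this just makes
# the "predict the next token" idea concrete.
import random
from collections import defaultdict, Counter

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog . the dog chased the ball .").split()

# The "training": count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:
            break
        # Sample the next word in proportion to how often it followed the last one.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))   # e.g. "the cat sat on the rug . the dog"
```

Swap the tiny corpus for a huge chunk of the internet and condition on far more than one previous word, and you’re gesturing, very loosely, at what an LLM does. Once you’ve seen the loop, the limitations get a lot easier to intuit.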
What’s different is that most people will see it as “tech stuff” and mentally file it in a drawer with spare extension cords and adapters. They don’t care to deeply study or catalog things. Nerds care about that, and most people here, including me, are nerds, but most people are not nerds and consider learning to be a form of torture.
People writ large don’t care about proper genre labels either; they just kinda pick a vibe and guess off of it. Look at all the -core suffixed aesthetic names that cropped up in the last decade.
Yeah, I think it’s unfortunate that tech is something people refuse to learn about. I’ve been able to explain technical topics to less technical people; they just need to care.
For example, I’m into finance, and I’ve been able to explain pretty complex topics (compounding, Social Security benefits, derivatives, etc.) to people with no background in a way that gives them a high-level sense of how things work. They may not be able to trade options or predict portfolio performance, but they can at least tell if their “financial advisor” knows their stuff.
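(Compounding, for what it’s worth, fits in a three-line sketch; the numbers here are made up for illustration:)

```python
# Hypothetical: $1,000 left alone at 7% a year; each year's growth earns growth too.
balance = 1_000
for _ in range(30):
    balance *= 1.07
print(f"${balance:,.0f}")   # ~$7,612 after 30 years, vs. $3,100 with simple interest
```

That’s the kind of high-level picture I mean: no formulas, but you can see why starting early matters.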
Learning a bit about the key technologies can help cut through the BS from marketing departments. But as soon as I mention something remotely technical, people shut down. If people understood that LLMs basically do statistical word association to generate text from a prompt, they wouldn’t believe the lies that claim they “think.” Just a little bit of high-level knowledge would change it from “magic” to a sometimes useful everyday tool.
True! I was referring to some strictly scientific definitions, but of course there have always been popular/broader ones.
You’re not wrong, but that’s also a bit misleading. “AI” is all-encompassing, while terms like AGI and ASI are subsets. From the 1950s onward, AI was expected to evolve quickly as computing evolved, but that never happened. Instead, AI mostly topped out at decision trees, like those used for AI in videogames. ML pried the field back open, but not in the ways we expected.
AGI and ASI were coined in the early 2000s to set the goal of human-level intelligence apart from other kinds of AI, like videogame AI. This is a natural result of the field advancing in unexpected, divergent directions. It’s not meant to move the goalposts, but to clarify future goals against past progress.
It is entirely possible that we develop multiple approaches to AGI that necessitate new terminology to differentiate them. It’s the nature of all evolution, including technology and language.
It’s pretty clear your understanding of the history of computer science comes from Star Wars.