- cross-posted to:
- [email protected]
US senator Bernie Sanders amplified his recent criticism of artificial intelligence on Sunday, explicitly linking the financial ambition of “the richest people in the world” to economic insecurity for millions of Americans – and calling for a potential moratorium on new datacenters.
Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN’s State of the Union that he was “fearful of a lot” when it came to AI. And the senator called it “the most consequential technology in the history of humanity” that will “transform” the US and the world in ways that had not been fully discussed.
“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”



Weren't the reasoning models the breakthrough in AI's ability to reason and understand?
AI has solved 50-year-old grand challenges in biology. AlphaFold has predicted the structures of nearly all known proteins, a feat of “understanding” molecular geometry that will accelerate drug discovery by decades.
We aren’t just seeing a “faster horse” in communication; we are seeing the birth of General Purpose Technologies that can perform cognitive labor. Stagnation is unlikely because, unlike the internet (which moved information), AI is beginning to generate solutions.
- Protein folding solved at near-experimental accuracy, breaking a 50-year bottleneck in biology and turning structure prediction into a largely solved problem at scale.
- Prediction and public release of structures for nearly all known proteins, covering the entire catalogued proteome rather than a narrow benchmark set.
- Proteome-wide prediction of missense mutation effects, enabling large-scale disease variant interpretation that was previously impossible by human analysis alone.
- Weather forecasting models that outperform leading physics-based systems on many accuracy metrics while running orders of magnitude faster.
- Probabilistic weather forecasting that exceeds the skill of top operational ensemble models, improving uncertainty estimation, not just point forecasts.
- Formal mathematical proof generation at Olympiad-level difficulty, producing verifiable proofs rather than heuristic or approximate solutions.
- Discovery of new low-level algorithms, including faster sorting routines, that were good enough to be merged into production compiler libraries.
- Discovery of improved matrix multiplication algorithms, advancing a problem where progress had been extremely slow for decades (the classic trick these schemes build on is sketched after this list).
- Superhuman long-horizon strategic planning in Go, a domain where brute-force search is infeasible and abstraction is required.
- Identification of novel antibiotic candidates by searching chemical spaces far beyond what human-led methods can feasibly explore.
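To make the matrix multiplication item concrete, here is a minimal sketch of the classic result the machine-discovered schemes build on; this is Strassen's 1969 algorithm, not the newly found ones. It multiplies two 2×2 matrices with 7 scalar multiplications instead of the schoolbook 8, and applying that saving recursively is what lowers the cost of large products.

```python
# Strassen's 2x2 scheme: 7 multiplications instead of 8.
# Illustrative only; the recently discovered algorithms find further
# reductions of the same kind for larger block sizes.

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using 7 scalar multiplications."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    return [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4, m1 - m2 + m3 + m6],
    ]

if __name__ == "__main__":
    # Schoolbook result for comparison: [[19, 22], [43, 50]]
    print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
```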
Thank you for raising these points. Progress has certainly been made, and in specific applications AI tools have resulted in breakthroughs.
The question is whether that progress is transformative or just an incremental improvement, i.e. a faster horse.
I would also argue that there is a significant distinction between predictive AI systems used for analysis and LLMs. The former has been responsible for the majority of the breakthroughs in applied AI, yet the latter is getting all the recent attention and investment.
It's part of the reason why I think the current AI bubble is holding back AI development. So much investment is being made for the sake of extracting wealth from individuals and investment vehicles, rather than in something that will be beneficial in the long term.
Predictive AI (the “old” AI) is certainly a transformative technology, as it has already proven over the last 40 years.
I would argue that what most people call AI today, LLMs, is not going to be transformative. LLMs do a very good imitation of human language, but they completely lack the ability to reason beyond the information they are trained on. There has been some progress with building specific modules for completing certain analytical tasks, like mathematics and statistical analysis, but not in the ability to reason.
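To illustrate what “specific modules” means here, a minimal, purely hypothetical sketch: the language model's only job is to pick a tool, and the actual number-crunching is done by ordinary deterministic code. `fake_llm_decide_tool` is an invented stand-in, not a real model or API.

```python
# Illustrative tool-delegation sketch: the "LLM" only routes the request;
# a deterministic module does the analytical work.
import statistics

def fake_llm_decide_tool(prompt: str) -> str:
    """Made-up stand-in for a model deciding which tool to call."""
    if "average" in prompt or "mean" in prompt:
        return "stats.mean"
    if "standard deviation" in prompt:
        return "stats.stdev"
    return "no_tool"

def run_with_tools(prompt: str, data: list[float]) -> str:
    tool = fake_llm_decide_tool(prompt)
    # The numeric result comes from the deterministic module,
    # not from next-token prediction.
    if tool == "stats.mean":
        return f"mean = {statistics.mean(data)}"
    if tool == "stats.stdev":
        return f"stdev = {statistics.stdev(data)}"
    return "model answers directly (and may get the numbers wrong)"

if __name__ == "__main__":
    print(run_with_tools("What is the average of these values?", [2.0, 4.0, 9.0]))
```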
It might be possible to do that through brute force in a sufficiently large LLM, but I strongly suspect global computing power falls short by a few orders of magnitude of a mammalian brain and the number of connections it can make.
But even if we could, we would also need to improve power generation and efficiency by a few orders of magnitude as well.
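One rough back-of-envelope way to make that gap concrete (every number below is a loose, commonly quoted estimate, and comparing parameters to synapses is itself a crude analogy): a human-scale mammalian brain is usually put at around 10^14 synapses running on about 20 W, while the largest current language models are in the 10^11 to 10^12 parameter range and are trained on clusters drawing tens of megawatts.

```python
# Back-of-envelope comparison; every figure is a rough public estimate,
# and "parameters vs. synapses" is only a loose analogy.
import math

brain_synapses = 1e14            # human-scale mammalian brain, order of magnitude
brain_power_w = 20.0             # metabolic power of the brain, roughly 20 W

llm_parameters = 1e12            # upper end of publicly discussed model sizes
training_cluster_power_w = 2e7   # a large training cluster, tens of megawatts

scale_gap = math.log10(brain_synapses / llm_parameters)
efficiency_gap = math.log10(
    (brain_synapses / brain_power_w) / (llm_parameters / training_cluster_power_w)
)

print(f"connection-count gap: ~{scale_gap:.0f} orders of magnitude")
print(f"connections-per-watt gap: ~{efficiency_gap:.0f} orders of magnitude")
```

On these assumptions the raw connection count comes out roughly two orders of magnitude short and connections-per-watt roughly eight, which is the shape of the gap described above.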
I would love to see the AI bubble pop, so that the truly transformative work can progress, rather than the current “how do we extract wealth” focus of AI. So much of what is happening now is the same as the dot-com bubble, but at a much larger scale.