US senator Bernie Sanders amplified his recent criticism of artificial intelligence on Sunday, explicitly linking the financial ambition of “the richest people in the world” to economic insecurity for millions of Americans – and calling for a potential moratorium on new datacenters.

Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN’s State of the Union that he was “fearful of a lot” when it came to AI. And the senator called it “the most consequential technology in the history of humanity” that will “transform” the US and the world in ways that had not been fully discussed.

“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”

  • SapphironZA@sh.itjust.works · 10 hours ago

    People forget that a very short time ago there was no internet, and knowledge and fast communication were rare and slow.

    Nothing in the last 500 years has changed our society so much, in such a short time.

    AI has been around for decades now. The main recent breakthrough has been in its ability to imitate a human conversation. Until a similar breakthrough happens in its ability to reason and understand, it will continue to stagnate.

    Right now AI development needs the current bubble to pop before significant progress can be made.

    • fonix232@fedia.io · 12 hours ago

      LLMs don’t just imitate human speech. They do much more in application - and that IS already displacing people, people who can’t just “find a new job”. People in call centers, (remote) customer support, personal assistants, and so on.

      And then we haven’t even touched on how it’s changing IT. Software development alone is seeing massive changes, with more and more code being AI-generated and more and more functionality being offloaded to AI, which improves individual performance and lets companies cut down on their workforce. The issue with that? There aren’t enough employers who could pick up those displaced people.

      Oh and then we haven’t addressed the fact that this AI displacement is also affecting future generations of these jobs. In software development, there’s already a shift from interns and juniors to AI, because it’s cheaper. This means that out of 100 fresh starters, maybe, maybe ten will get the chance to actually gain experience and progress anywhere, the rest are being discarded because AI is cheaper and “better” at those tasks.

      Previous industrial shifts have caused similar displacement, but those were slow processes. The best-known example would be the Luddites going against the mechanical loom. And while the Luddites weren’t right in the end (handmade clothing has gone up in price, and the displaced workers were retrained to manage the looms), it was also a slow process: the looms themselves were expensive and took time to replace manual workers, so not all textile factories could afford them, and there was demand for the increased capacity.

      Compare it with today’s AI shift, and there’s a clear distinction - within 3-4 years of LLMs showing up, we are on the verge of a potential societal collapse due to everyone and their mum trying to implement AI everywhere, even (especially!) in places it’s not needed. This speed, this adoption rate is simply not sustainable without planning for the displaced people. Because if UBI doesn’t happen, we’re truly looking at the most exposed bottom ~30% of earners (and even a big number of high earners!) not having any sort of income or the ability to get income, and things will mirror the situation a century ago, kick-starting another great depression but exacerbated by factors like much lower property ownership (yay private equity buying up residential properties to rent them out at extortionate prices), much higher cost of living, and so on.

      And we all know what the effects of the Great Depression culminated into. War, famine, ruin.

    • Deacon@lemmy.world · 15 hours ago

      I sincerely believe that our advancement in technology has outpaced our evolution and we are simply not equipped to wield it yet.

      • fonix232@fedia.io · 12 hours ago

        The issue is that our political systems still haven’t caught up to the internet - they’re at least 30 years behind everything.

        This means that any effective change comes much more slowly than the advancements being made, making them incredibly hard to legislate. To this day the US is debating whether using copyrighted material for training AI is a breach of copyright or not. Hell, some morons are even claiming that artists shouldn’t have the right to licence their own art under a no-AI licence!

      • SapphironZA@sh.itjust.works · 10 hours ago

        It’s certainly apparent in how much our brains are struggling to process the level of information we have access to.

        You can see it in many areas of society: people have given up on fact-based reasoning and are simply following “vibes” in their decision-making, which is a lot “cheaper” for the brain.

    • krooklochurm@lemmy.ca · 15 hours ago

      The thing is that technology is not linear.

      That could happen tomorrow.

      It might never happen.

      It likely isn’t going to happen with LLMs but the next big breakthrough could happen at any time. Or never.

      • Ach@lemmy.world · 14 hours ago

        I very respectfully but firmly disagree.

        Human progression isn’t just advancing, it’s accelerating. If you were born in 1700 and died in 1775, basically everything at the time of your death was identical to the time of your birth.

        If you were born in 1900 and died in 1975, you were born to horse-drawn carriages and died after seeing a man walk on the moon.

        Now, even though our “AI” isn’t real AI, just language models, it can still crunch numbers historically fast. So the acceleration is objectively going to accelerate.

        • krooklochurm@lemmy.ca · 14 hours ago

          I sincerely don’t understand how anything you wrote disagrees with my comment about technological advancement not being linear.

          • Ach@lemmy.world · 14 hours ago

            Fair point, sorry - I didn’t word it well. My bad.

            You seem to think something bad might happen, I think that stone is already moving and can’t be stopped.

            • krooklochurm@lemmy.ca · 14 hours ago

              It might or it might not.

              I agree that it likely will, given the insane progression in AI models of every kind and the absurd amount of money being invested in them, but it’s not a certainty.

              LLMs are likely a dead end but anyone that thinks the buck stops there is an idiot.

              • Ach@lemmy.world · 14 hours ago

                I’d have to disagree that LLMs are a dead end. They aren’t actual AI, but they can crunch data at a rate that will make them a bridge to actual AI. I guess I see them as a very dangerous and inevitable stepping stone.

                LLMs will be able to crunch raw numbers to make actual AI possible, IMHO.

                • krooklochurm@lemmy.ca · 14 hours ago

                  You keep saying “number crunching” - GPUs crunch numbers, CPUs crunch numbers, AI models ARE numbers.
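
To make that point concrete: a neural-network “model” is literally arrays of numbers, and running it is just arithmetic on those arrays. A minimal sketch with toy, made-up sizes (nothing from any real model):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # "learned" weights: just numbers
b = rng.standard_normal(3)        # "learned" bias: more numbers
x = rng.standard_normal(4)        # an input, encoded as numbers

# One layer of inference = multiply, add, clamp. Real models stack
# many such layers, but it is all arithmetic on stored numbers.
h = np.maximum(x @ W + b, 0.0)
print(h.shape)
```

The point of the sketch: there is no separate “reasoning machinery” to inspect, only the weight arrays and the arithmetic applied to them.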

                  • Ach@lemmy.world · 13 hours ago

                    Are you denying that it can be done faster now? And that even if it can’t, people with money believe it can and are funding it?

                    This is moving fast my dude. Look at how fast a term from Terminator made it into our daily lives.

    • auraithx@lemmy.dbzer0.com · 12 hours ago

      The reasoning models were the breakthrough in its ability to reason and understand?

      AI has solved 50-year-old grand challenges in biology. AlphaFold has predicted the structures of nearly all known proteins, a feat of “understanding” molecular geometry that will accelerate drug discovery by decades.

      We aren’t just seeing a “faster horse” in communication; we are seeing the birth of General Purpose Technologies that can perform cognitive labor. Stagnation is unlikely because, unlike the internet (which moved information), AI is beginning to generate solutions.

      1. Protein folding solved at near-experimental accuracy, breaking a 50-year bottleneck in biology and turning structure prediction into a largely solved problem at scale.

      2. Prediction and public release of structures for nearly all known proteins, covering the entire catalogued proteome rather than a narrow benchmark set.

      3. Proteome-wide prediction of missense mutation effects, enabling large-scale disease variant interpretation that was previously impossible by human analysis alone.

      4. Weather forecasting models that outperform leading physics-based systems on many accuracy metrics while running orders of magnitude faster.

      5. Probabilistic weather forecasting that exceeds the skill of top operational ensemble models, improving uncertainty estimation, not just point forecasts.

      6. Formal mathematical proof generation at Olympiad level difficulty, producing verifiable proofs rather than heuristic or approximate solutions.

      7. Discovery of new low-level algorithms, including faster sorting routines, that were good enough to be merged into production compiler libraries.

      8. Discovery of improved matrix multiplication algorithms, advancing a problem where progress had been extremely slow for decades.

      9. Superhuman long-horizon strategic planning in Go, a domain where brute force search is infeasible and abstraction is required.

      10. Identification of novel antibiotic candidates by searching chemical spaces far beyond what human-led methods can feasibly explore.
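
For context on items 7 and 8: the long-standing human baseline in matrix multiplication is Strassen’s algorithm, which multiplies 2x2 blocks with 7 scalar multiplications instead of the naive 8 (applied recursively, this drops the exponent below 3). A minimal sketch of the classic trick, not the AI-discovered variants:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices A and B (lists of lists) using only
    7 multiplications via Strassen's identities."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the 7 products into the 4 entries of A @ B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Progress past results like this had stalled for decades, which is why machine-discovered improvements in the same family were notable.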

      • SapphironZA@sh.itjust.works · 10 hours ago

        Thank you for raising these points. Progress has certainly been made, and in specific applications AI tools have resulted in breakthroughs.

        The question is whether it was transformative or just an incremental improvement, i.e. a faster horse.

        I would also argue that there is a significant distinction between predictive AI systems applied to analysis and the use of LLMs. The former has been responsible for the majority of the breakthroughs in applied AI, yet the latter is getting all the recent attention and investment.

        It’s part of the reason why I think the current AI bubble is holding back AI development. So much investment is being made for the sake of extracting wealth from individuals and investment vehicles, rather than in something that will be beneficial in the long term.

        Predictive AI (old AI) overall is certainly going to be a transformative technology as it has already proven over the last 40 years.

        I would argue that what most people call AI today, LLMs, is not going to be transformative. It does a very good imitation of human language, but it completely lacks the ability to reason beyond the information it is trained on. There has been some progress with building specific modules for completing certain analytical tasks, like mathematics and statistical analysis, but not in the ability to reason.
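
The “specific modules” pattern can be sketched as a dispatcher that routes arithmetic to exact code instead of asking the language model to guess at it. Everything here is a hypothetical illustration (the function names are made up, and the model call is stubbed out):

```python
import ast
import operator

# Map AST operator node types to exact Python arithmetic.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_math(expr: str) -> float:
    """Safely evaluate a basic arithmetic expression via the AST,
    refusing anything that is not numbers and + - * /."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(query: str) -> str:
    # Route math-looking queries to the exact tool; everything else
    # would go to the language model (stubbed out here).
    try:
        return str(eval_math(query))
    except (ValueError, SyntaxError):
        return "(handed to the language model)"

print(answer("12 * (3 + 4)"))  # 84
```

This is the design choice behind such modules: the model only decides *where* to send the question, while the exact answer comes from ordinary deterministic code.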

        It might be possible to do that through brute force in a sufficiently large LLM, but I strongly suspect we lack the global computing power by a few orders of magnitude before we get to a mammalian brain and the number of connections it can make.

        But even if you could, we would also need to improve power generation and efficiency by a few orders of magnitude.

        I would love to see the AI bubble pop, so that the truly transformative work can progress, rather than the current “how do we extract wealth” focus of AI. So much of what is happening now is the same as the dot-com bubble, but at a much larger scale.