• HakFoo@lemmy.sdf.org · 14 points · 14 hours ago

    But what about this promise makes it so uniquely seductive?

    There are a million guys with ideas for cars that will go 750 km on a thimbleful of Fresca, robot butlers that can’t turn evil because they don’t have red LEDs in the eye positions, and 200:1 data compression as long as you never have to decompress it. They must all be looking at Altman and company and asking where their bubbles are.

    I sadly suspect the charm is “we can sack some huge percentage of workers if it delivers”

    • sqw@lemmy.sdf.org · 0 points · 17 minutes ago

      if firing people is the ultimate good, maybe we can get the corpos behind UBI so nobody cares too much about getting fired?

    • AmbitiousProcess (they/them)@piefed.social · 16 points · 13 hours ago

      But what about this promise makes it so uniquely seductive?

      Part of it is, as you pointed out, just the elimination of costly labor. That’s a capitalist’s wet dream. But the main thing that makes it attractive as a slick, highly marketable investment vehicle is that AI models are inherently black boxes.

      There are ways to examine how they work (for example, researchers found that the parts of an LLM that “understand” one topic, like money, can also simultaneously “understand” related concepts, like value, credit, etc.), but we can’t truly comprehend everything about them. It would be like looking at a math problem billions of equations large and assuming we could hold the whole thing perfectly in our heads and do the mental math to solve it. We can’t.

      That means that instead of seeing “here’s our robot, here’s what it can currently do, here are the components that could be upgraded or replaced, X is an issue it faces because of Y” and so on, you get “it’s not good at this yet, but it will be if you just throw a few billion dollars more of compute at it, we promise this time.”

      Problems are abstracted away into “something that will fix itself later” or something that “just happens, but we’ll find a way to fix it,” rather than any kind of mechanical constraint a VC fund manager might be able to understand.

    • FishFace@piefed.social · 3 points · 10 hours ago

      It’s that LLM output looks like human writing, so it seems like they might be able to do anything a person can.

    • Godort@lemmy.ca · 8 points · 14 hours ago

      I sadly suspect the charm is “we can sack some huge percentage of workers if it delivers”

      It’s that, and a really impressive working prototype.

    • WanderingThoughts@europe.pub · 2 points · 12 hours ago

      And because the rest of the market is really slow, with returns barely above inflation, so it’s not really worth investing in, while AI is going like it’s the good ol’ days. That’s how the money boys see it, anyway.