• PieMePlenty@lemmy.world · 18 hours ago

    Actually, I think the profit motive will correct the mistakes here.
    If AI works in their workflow and uses less energy than before… well, that’s an improvement. If it uses more energy, they’ll revert because it makes less economic sense.
    This doesn’t scare me at all. Most companies strive to stay as profitable as possible, and if doing a 1+1 calculation with AI costs a company more money, they’ll find a cheaper way… like the calculator they used before.
    We’re just nearing the peak of the Gartner hype cycle, so it seems like everyone is doing it and it’s being sold at a loss. This will correct.

    • sobchak@programming.dev · 9 hours ago

      I think the profit motive is what’s driving it in many cases. That is, shareholders have an interest in AI companies/products and are pressuring the other companies they hold stakes in to use those AI services, increasing their profit. Profit itself is inefficiency (in a perfect market, profit approaches zero).

    • thedeadwalking4242@lemmy.world · 17 hours ago

      You put too much faith in people to make good decisions. This could decrease profits by a wide margin and they’d keep using it. Tbh, some would stick with the decision even if it threw them into the red.

      • skulblaka@sh.itjust.works · 10 hours ago

        Personally I am stoked to see multiple multi-billion dollar business enterprises absolutely crater themselves into the dirt by jumping on the AI train. Walmart can no longer track their finances properly or reconcile budget vs. expenditures? I sleep. They were getting too big and stupid anyway.

    • Echo Dot@feddit.uk · 15 hours ago

      You have more faith in people than I do.

      I have managers who get angry if you point out problems with their ideas, so we have to implement those ideas even though they will cause the company to lose money in the long run.

      Management isn’t run by bean counters (if it were, it wouldn’t be so bad); management is run by egos in suits. If they’ve staked their reputation on AI, they will dismiss any and all information that says their idea is stupid.

      • Log in | Sign up@lemmy.world · 10 hours ago

        AI is strongly biased towards agreeing with the user.

        Human: “That’s not right, 1+2+3=7”
        AI: “Oh, my bad, yes I see that 1+2+3=15 is incorrect. I’ll make sure to take that on board. Thank you.”
        Human: “So what’s 1+2+3?”
        AI: “Well, let’s see. 1+2+3=15 is a good answer according to my records, but I can see that 1+2+3=7 is a great answer, so maybe we should stick with that. Is there anything else I can help you with today? Maybe you’d like me to summarise the findings in a chart?”

    • hayvan@feddit.nl · 10 hours ago

      A problem right now is that most models are subsidized by investors. OpenAI’s 2024 numbers were something like $2bn revenue vs. $5bn expenses. All in the hope of being the leader in a market that may not exist.

    • Natanael@infosec.pub · 17 hours ago

      The problem is how long it takes to correct for stupid managers. Most companies aren’t fully rational; it’s only when you look at long-term averages that the various stupidities usually cancel out (unless they bankrupt the company).

      • Echo Dot@feddit.uk · 15 hours ago

        unless they bankrupt the company

        Even then it’s not a guarantee. They just get one of their government buddies to declare them too important to the economy (reality is irrelevant here) and get a massive bailout.

    • theparadox@lemmy.world · 16 hours ago

      This doesn’t scare me at all. Most companies strive to stay as profitable as possible and if a 1+1 calculation costs a company more money by using AI to do it, they’ll find a cheaper way

      This sounds like easy math, but AI isn’t just about using more or less energy. Its stated goal is to replace people, and people have a much, much more complicated cost formula, full of subjective measures. An AI doesn’t need health insurance. An AI doesn’t make public comments on social media that might reflect poorly on your company. An AI won’t organize and demand a wage increase. An AI won’t sue you for wrongful termination. An AI can be blamed for a problem, and the problem written off as “growing pains”.

      How long will the “potential” of the potential benefits encourage adopters to give it a little more time? How much damage will it do in the meantime?