• Rivalarrival@lemmy.today · 22 hours ago

    I see absolutely no reason why they should invest time in rearchitecting the conceptual boundaries of how private a user’s interaction with an AI is.

    Profit.

    If AIs use as much power and as many resources as we’ve been led to believe, there are massive cost savings to be had by simulating multiple bots on a single instance instead of actually running multiple bots. If they’ve budgeted to earn a profit from operating 25 independent bots, what are they earning by running only one and claiming it is 25?

    There is very little chance that this degree of optimization hasn’t been employed.
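
    To make that concrete, here is a minimal sketch of the consolidation (in Python; `complete()` and every other name is hypothetical, not any vendor’s actual API) - one shared backend, with each “bot” reduced to a persona prompt plus its own conversation history:

    ```python
    # Hypothetical sketch: one shared model serving N "independent" bots.
    # complete() stands in for the single real inference backend.

    from dataclasses import dataclass, field

    def complete(persona: str, history: list[str], message: str) -> str:
        # Placeholder for the one expensive model every "bot" shares.
        return f"[reply in the voice of {persona!r}]"

    @dataclass
    class BotSession:
        persona: str                      # e.g. "You are independent assistant #7."
        history: list[str] = field(default_factory=list)

        def ask(self, message: str) -> str:
            reply = complete(self.persona, self.history, message)
            self.history += [message, reply]
            return reply

    # 25 "independent" bots for the price of one backend.
    bots = [BotSession(f"You are independent assistant #{i}.") for i in range(25)]
    ```

    The only per-bot cost in this sketch is a prompt string and a history list; the expensive part - the model - is shared.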

    • silasmariner@programming.dev · 21 hours ago

      For shared note-taking in a controlled environment - yes. It transparently happens anyway. For dynamic environments hosted by a separate service, it’s a whole other can of worms. You now seem to be fairly clear you’re talking about the former, largely doable and indeed mostly implemented case. Nothing interesting left to talk about, then.

      • Rivalarrival@lemmy.today · 20 hours ago

        You now seem to be fairly clear you’re talking about the former, largely doable and indeed mostly implemented case.

        Quite the reverse, actually. That “dynamic” environment hosted by a separate service is not nearly as significant as you portray it. The entire point of a meeting is for every observer to share the same experience.

        Again, it is completely trivial for the underlying AI to recognize that it has been asked to sit in on the same meeting and to act as the personal representative of each of 25 separate people.

        If you’re under the impression that there is a personal, private relationship between an individual and an AI instance, I suggest you disabuse yourself of that notion. If there is any distinction, it is only because the underlying AI has been instructed to schizophrenically simulate it.
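
        As a sketch of how little “recognition” that actually requires (all names hypothetical; this is ordinary request coalescing, not a claim about any specific product): key the expensive work by meeting ID and fan the one result out to every user-scoped session.

        ```python
        # Hypothetical sketch: 25 user sessions "attending" the same meeting
        # collapse onto one underlying job, keyed by meeting ID.

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def process_meeting(meeting_id: str) -> str:
            # The expensive part (transcription/inference) runs once per
            # meeting, no matter how many user sessions asked for it.
            return f"shared transcript of {meeting_id}"

        def personal_view(user_id: str, meeting_id: str) -> str:
            # Each "personal AI" is a thin per-user wrapper around the
            # single shared result.
            return f"{user_id}'s notes: {process_meeting(meeting_id)}"

        # 25 people, each with "their own" AI, in the same meeting:
        views = [personal_view(f"user-{i}", "meeting-42") for i in range(25)]
        # The expensive job executed exactly once.
        assert process_meeting.cache_info().misses == 1
        ```

        In this sketch the per-user “relationship” is the one-line wrapper; any distinction between users exists only because the code fabricates it.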

        • silasmariner@programming.dev · 11 hours ago

          Lol, well I can’t say I’m surprised your bullishness on the matter persists - although I would argue that A) the current architectural model is a single process running the translation in a user-scoped session, and B) given that a bot literally can’t recognise anything, any such implementation would be entirely conventional engineering, not waves-hands AI voodoo magic. So I don’t share your stance. But also I genuinely don’t care, so that’s as much attention as I’m prepared to give this particular thought experiment.