• porcoesphino@mander.xyz · 8 hours ago

    Strongly disagree with the TLDR thing

    At least, the iPhone notification summaries were bad enough that I eventually turned them off (though I periodically check them). And while I was working at Google, you couldn’t really turn off the genAI summaries of internal things (which evangelists kept adding to everything), and I rarely found them useful. Well… they’re useful if the conversation is really bland, but then the conversation should usually be in some thread elsewhere; if there was something important, I don’t think the genAI systems were very good at highlighting it.

    • ctrl_alt_esc@lemmy.ml · 4 hours ago

      Completely agree, those summaries are incredibly bad. I was recently looking for some information in Gemini meeting notes and just couldn’t find it, even though I was sure it had been talked about. Then I read the transcript itself and realised that the artificial unintelligence had simply left out all the most important bits.

    • brucethemoose@lemmy.world · edited · 7 hours ago

      The iPhone models are really bad. They aren’t representative of the usefulness of bigger ones, and it’s inexplicably stupid that Apple doesn’t let people pick their own API as an alternative.

    • MagicShel@lemmy.zip · 7 hours ago

      You can disagree, but I find it helpful for deciding whether I’m going to read a lengthy article or not. Also, if the AI picks up on a bunch of biased phrasing or any of a dozen other signs of poor journalism, I can go into reading something (if I even bother to at that point) with an eye toward the problems in the article. Sometimes that helps when an article is trying to lead you down a certain path of thinking.

      I find I’m better at picking out the facts from the bias if I’m forewarned.

    • FauxLiving@lemmy.world · 7 hours ago

      iPhone notification summaries were made with GPT-3.5, I believe (maybe even the -turbo version).

      It doesn’t use reasoning, so with very short outputs it can produce wild variations: there aren’t many previous tokens to steer the LLM in the right direction in kv-space, so you’re more at the whims of the temperature setting (randomly sampling the next token from the softmaxed distribution the LLM outputs).
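
      A minimal sketch of that sampling step (illustrative only; the logits and temperatures below are made up): temperature scales the logits before the softmax, so short, low-context outputs can swing a lot between runs.

      ```python
      # Temperature sampling sketch: scale logits, softmax, draw the next token.
      # Higher temperature flattens the distribution, so with little prior
      # context the chosen token varies a lot between runs.
      import numpy as np

      def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
          scaled = logits / max(temperature, 1e-8)       # temperature scaling
          scaled -= scaled.max()                         # numerical stability
          probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
          return int(np.random.choice(len(probs), p=probs))

      # Toy example: four candidate tokens, same logits, different temperatures.
      logits = np.array([2.0, 1.5, 0.3, -1.0])
      print(sample_next_token(logits, temperature=0.2))  # almost always token 0
      print(sample_next_token(logits, temperature=1.5))  # much more variable
      ```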

      You can take those same messages and plug them into a good model and get much higher quality results. But good models are expensive and Apple is, for some reason, going for the budget option.
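
      For illustration, a rough sketch of plugging the same messages into a bigger hosted model. This is hypothetical: the model name, prompt, and client setup are assumptions, not anything Apple actually does.

      ```python
      # Hypothetical sketch: send raw notification texts to a larger hosted model
      # for a one-line summary. Assumes the openai package is installed and
      # OPENAI_API_KEY is set; the model name is a stand-in for "a good model".
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      notifications = [
          "Your package was delivered to the front door at 2:14 PM.",
          "Reminder: dentist appointment tomorrow at 9:00 AM.",
      ]

      response = client.chat.completions.create(
          model="gpt-4o-mini",  # illustrative choice of a stronger model
          messages=[
              {"role": "system", "content": "Summarize these notifications in one short sentence."},
              {"role": "user", "content": "\n".join(notifications)},
          ],
      )
      print(response.choices[0].message.content)
      ```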

      • brucethemoose@lemmy.world · 3 hours ago

        AFAIK some outputs are made with a really tiny/quantized local LLM too.

        And yeah, even that aside, GPT 3.5 is really bad these days. It’s obsolete.