“It’s safe to say that the people who volunteered to ‘shape’ the initiative want it dead and buried. Of the 52 responses at the time of writing, all rejected the idea and asked Mozilla to stop shoving AI features into Firefox.”

  • Tenderizer78@lemmy.ml · 2 days ago

    I considered using AI to summarize news articles that don’t seem worth the time to read in full (the attention industrial complex is really complicating my existence). But I turned it off and couldn’t find the button to turn it back on.

    • fodor@lemmy.zip · 2 days ago

      If you need to summarize the news, which is already a summary of an event containing the important points and nothing else, then AI is the wrong tool. A better journalist is what you actually need. The whole point of good journalism is that it already did that work for you.

      • Tenderizer78@lemmy.ml · 2 days ago

        I have a real journalist, but this is more on the “did you know this was important” side. Like how it’s fine to rinse your mouth out after brushing your teeth, but if your water isn’t fluoridated then you probably shouldn’t (which I got from skimming the article for the actionable information).

    • rozodru@pie.andmc.ca · 2 days ago

      you have to be REALLY careful when asking an LLM to summarize news, otherwise it will hallucinate whatever it believes sounds logical and correct. you have to point it directly to the article, ensure that it actually reads it, and then have it summarize. and honestly, at that point… you might as well read it yourself.

      And this goes beyond just summarizing articles: you NEED to provide an LLM a source for just about everything now. Even if you tell it to research a solution to a problem online, it’ll often pull up irrelevant links and build its answer on those because, again, to the LLM they make the most sense, when in reality they have nothing to do with your problem.

      At this point it’s an absolute waste of time using any LLM, because within the last few months all models have noticeably gotten worse. Claude.ai is hardly worth bothering with, as 8 times out of 10 its solutions are hallucinations, and recently GPT5 has started “fluffing” solutions with irrelevant information or info-dumping things that have nothing to do with your original prompt.