• ProbablyBaysean@lemmy.ca
    24 hours ago

    Talking about rubber duck intelligence: recent iterations of LLMs use a two-step “think, then respond” process, and the thinking phase is literally a rubber duck. I downloaded a local LLM with this feature and ran it, and the CLI didn’t hide the “thinking” once it was done. The end product was better quality than when it tried to spit out an answer immediately (I toggled thinking off and it was definitely dumber), so I think you’re right for the generation of LLMs before “thinking”.
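    For anyone curious, here’s a rough sketch of what that two-step looks like if you wire it up by hand with the ollama Python client. The model name and prompts are just placeholders, and built-in “thinking” models do this internally in a single call rather than as two explicit ones; this is only meant to illustrate the idea:

    ```python
    # Manual "think, then respond" sketch using the ollama Python client.
    # Assumptions: Ollama is running locally and the model below is pulled.
    import ollama

    MODEL = "llama3"  # placeholder: any locally available chat model

    def think_then_respond(question: str) -> str:
        # Pass 1: ask the model to reason out loud, rubber-duck style,
        # without committing to a final answer yet.
        thinking = ollama.chat(
            model=MODEL,
            messages=[{
                "role": "user",
                "content": "Think step by step about this problem. "
                           "Do not give a final answer yet.\n\n" + question,
            }],
        )["message"]["content"]

        # Pass 2: feed that reasoning back in and ask for only the answer.
        answer = ollama.chat(
            model=MODEL,
            messages=[
                {"role": "user", "content": question},
                {"role": "assistant", "content": thinking},
                {"role": "user", "content": "Now give only the final answer."},
            ],
        )["message"]["content"]
        return answer

    if __name__ == "__main__":
        print(think_then_respond("What is 17 * 24?"))
    ```

    Skipping the first call is basically what toggling thinking off does, which matches the quality drop I saw.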

    • queermunist she/her@lemmy.ml
      22 hours ago

      That’s why I’m saying this might be an upgrade from a rubber duck. I’ll wait for some empirical evidence before I accept that it’s definitely better than a rubber duck, though, because even with “thinking” it might cause tunnel vision for people who use it to bounce ideas off. As long as the LLM is telling you that you’re inventing a new type of math, you won’t stop to think of something else.