• jj4211@lemmy.world

    As to why not just do it: the problem is that an LLM will generate *something* even when it doesn't know the correct answer. You don't want an agentic AI to go to town unsupervised, because when it screws up, whatever it did may be hard or impossible to undo.

    This specific demo worked, but it's a crapshoot whether a given scenario will, because an LLM "failure" still produces output, and nothing in the loop knows that the output is wrong.
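
    A minimal sketch of the guardrail that point implies: gate anything the model proposes behind a reversibility check before executing it. Everything here (propose_action, the command strings, the naive allowlist) is a hypothetical illustration, not any real agent framework's API.

    ```python
    import subprocess

    def propose_action(task: str) -> str:
        """Stand-in for an LLM call: it always returns *something*, even
        when it has no idea -- the failure mode described above."""
        return "rm -rf ./build"  # plausible-looking, possibly irreversible

    def is_reversible(command: str) -> bool:
        """Crude allowlist: refuse anything we can't undo. A real system
        would need something far stronger than substring matching."""
        destructive = ("rm ", "drop ", "delete ", "truncate ")
        return not any(word in command.lower() for word in destructive)

    def run_agent_step(task: str) -> None:
        command = propose_action(task)
        if not is_reversible(command):
            # Escalate to a human instead of letting the agent go to town.
            print(f"BLOCKED (needs human review): {command}")
            return
        subprocess.run(command, shell=True, check=True)

    run_agent_step("clean up old artifacts")
    ```

    The point of the sketch is that the check has to live outside the model: since the model can't flag its own failures, something deterministic has to decide what it's allowed to execute.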