Can’t it be trained to do that?
Sort of. There’s a relatively new class of LLMs, often called “tool-aware” models, which you can instruct to use tools like a calculator or some other external program. As far as I know, though, the LLM has to be told to go out and use that external thing; it can’t make that decision entirely on its own.
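For concreteness, here’s a minimal sketch of what that scaffolding around the model can look like. Everything in it is illustrative: `call_model` is a hypothetical stub standing in for whatever chat API you use, and the `CALC(...)` convention is made up for this example; real systems use structured function-calling formats instead.

```python
import ast
import operator
import re


def call_model(prompt: str) -> str:
    # Hypothetical stub for a real chat-completion API. A tool-aware
    # model, prompted to request a calculator, might behave like this:
    if "Tool result:" in prompt:
        return "1234 * 5678 = " + prompt.rsplit("Tool result: ", 1)[1]
    return "To answer that I need CALC(1234 * 5678) first."


# A tiny safe arithmetic evaluator serving as the "calculator" tool.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}


def calculator(expr: str):
    def walk(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)


def run_with_tools(user_input: str) -> str:
    reply = call_model(user_input)
    # The scaffolding, not the model, detects the tool request,
    # runs the tool, and feeds the result back for a final answer.
    match = re.search(r"CALC\((.+?)\)", reply)
    if match:
        result = calculator(match.group(1))
        reply = call_model(f"{user_input}\nTool result: {result}")
    return reply


print(run_with_tools("What is 1234 * 5678?"))
# -> 1234 * 5678 = 7006652
```

The key point is visible in `run_with_tools`: the decision to execute the calculator lives in the outer loop, not in the model’s weights.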
Can the model itself be trained to recognize mathematical input, invoke an external app, parse the result, and feed that back into the reply? No.
Can you create a multi-layered system that uses some trickery to achieve this effect most of the time? Yes. That’s what OpenAI and Google are already doing: they recognize certain features of the user’s input and change the system prompt to force the model to output Python code or Markdown notation, which your browser then renders using a different tool.
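A toy version of that routing layer might look like the sketch below. The `looks_like_math` heuristic and both prompt strings are invented for illustration; they’re not what OpenAI or Google actually ship, and production systems typically use trained classifiers rather than regexes.

```python
import re

BASE_PROMPT = "You are a helpful assistant."
MATH_PROMPT = (
    "You are a helpful assistant. When the user asks a math question, "
    "reply only with a Python code block that computes the answer."
)


def looks_like_math(user_input: str) -> bool:
    # Crude feature check: digits around arithmetic operators, or
    # keywords like "calculate"/"solve". Illustrative only.
    if re.search(r"\d\s*[-+*/^]\s*\d", user_input):
        return True
    return any(w in user_input.lower() for w in ("calculate", "solve", "integral"))


def build_request(user_input: str) -> list[dict]:
    # Swap the system prompt based on recognized input features, steering
    # the model toward emitting code that a separate tool then runs/renders.
    system = MATH_PROMPT if looks_like_math(user_input) else BASE_PROMPT
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_input}]


print(build_request("What is 17 * 42?")[0]["content"])   # math prompt
print(build_request("Tell me a joke.")[0]["content"])    # base prompt
```

The model never “knows” a router exists; it just sees a different system prompt, which is why this counts as trickery layered around the model rather than the model making the decision itself.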