Quantised models can be surprisingly small. And even if Apple aren't targeting general-purpose LLMs for local use, more specific/tailored models absolutely can run on device.
That said, given the precedent set by Siri, their next evolution of Siri into an LLM will almost certainly require a network connection and be executed server side.
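To put rough numbers on "surprisingly small": weight storage scales linearly with bit width, so halving precision halves the footprint. A back-of-envelope sketch (the 3B parameter count and bit widths are illustrative assumptions, not anything Apple has announced):

```python
# Approximate weight storage for a model at various quantisation levels.
# Ignores overhead such as per-group scales and zero-points.
def model_size_gb(params: float, bits: int) -> float:
    """Weights only: params * bits, converted from bits to gigabytes."""
    return params * bits / 8 / 1e9

params = 3e9  # a hypothetical 3B-parameter on-device model
for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {model_size_gb(params, bits):.1f} GB")
```

At 4-bit that hypothetical 3B model fits in about 1.5 GB, which is the kind of footprint that makes on-device inference plausible on a phone.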