If an LLM can’t be trusted with a fast food order, I can’t imagine what it is reliable enough for. I really was expecting this to be the easy use case for these things.
It sounds like most orders still worked, so I guess we’ll see if other chains come to the same conclusion.
The idea of anomaly detection is to project each input into a (high-dimensional) numeric space. From the training data alone, you can then see where the projections cluster and fit a high-dimensional “boundary”: everything inside is known and good, everything outside is unknown and possibly bad. Since orders come in relatively slowly, a human would be able to check for false positives and override the computer’s decision.
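A minimal sketch of that approach, assuming orders are encoded as per-item quantity vectors and using scikit-learn’s `OneClassSVM` to fit the boundary. The menu, training orders, and `nu` value are all made up for illustration:

```python
# Minimal sketch: one-class anomaly detection over order vectors.
# Everything concrete here (menu, orders, nu) is an assumption for
# illustration, not anyone's actual setup.
import numpy as np
from sklearn.svm import OneClassSVM

MENU = ["burger", "fries", "shake", "nuggets"]

def encode(order: dict) -> np.ndarray:
    """Project an order into numeric space: one dimension per menu item."""
    return np.array([order.get(item, 0) for item in MENU], dtype=float)

# Historical valid orders are the training data; the fitted model wraps
# a boundary around the region they occupy.
train_orders = [
    {"burger": 1, "fries": 1},
    {"burger": 2, "fries": 2, "shake": 2},
    {"nuggets": 6, "shake": 1},
]
X = np.stack([encode(o) for o in train_orders])

# nu bounds the fraction of training points allowed outside the boundary.
model = OneClassSVM(nu=0.05, gamma="scale").fit(X)

incoming = {"burger": 18000, "fries": 18000}
if model.predict(encode(incoming).reshape(1, -1))[0] == -1:
    print("outside the boundary -- flag for human review")
```

`predict` returns -1 for points outside the fitted boundary, so flagged orders can be routed to a human rather than auto-rejected, which is the override step described above.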
By the way, an ideal training set is preprocessed: duplicates are removed, and new orders are added by recombining parts of individual orders.
For example, if we have 3 orders, we could recombine their parts to create new plausible orders, and so on, and so forth. A naive variant is just taking the power set of all valid orders; a sketch follows below.
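A sketch of that augmentation, with made-up orders standing in for the originals. The recombination pools items across valid orders, and the naive variant enumerates the power set of those items:

```python
# Sketch of training-set augmentation by recombination. The three
# orders are made up; the originals aren't shown in the thread.
from itertools import chain, combinations

orders = [
    ("burger", "fries"),
    ("nuggets", "shake"),
    ("burger", "shake", "fries"),
]

# Deduplicate: treat orders as sorted tuples so permutations collapse.
unique_orders = {tuple(sorted(o)) for o in orders}

# Recombine: pool every item that appears in some valid order.
items = sorted({item for order in orders for item in order})

# Naive variant: every non-empty combination of known items becomes a
# "valid" order. This grows as 2^n and admits combinations nobody has
# actually ordered, which is why it's naive.
def power_set(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(1, len(xs) + 1))

augmented = unique_orders | set(power_set(items))
print(len(augmented), "orders after augmentation")
```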
This is more complicated than just validating against the available menu items, the available modifications, and the quantity limits, all of which are already available through the app/online ordering.
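A sketch of that rule-based check, with a hypothetical menu and per-item limits (the app’s real rules aren’t shown in the thread):

```python
# Sketch of the rule-based alternative: validate against a known menu
# and per-item quantity limits. Menu contents and limits are made up.
MENU_LIMITS = {"burger": 10, "fries": 10, "shake": 5, "nuggets": 8}

def validate(order: dict) -> list:
    """Return a list of rule violations; empty list means the order passes."""
    errors = []
    for item, qty in order.items():
        if item not in MENU_LIMITS:
            errors.append(f"unknown item: {item}")
        elif qty > MENU_LIMITS[item]:
            errors.append(f"{item}: quantity {qty} exceeds limit {MENU_LIMITS[item]}")
    return errors

print(validate({"burger": 2, "shake": 999}))
# ['shake: quantity 999 exceeds limit 5']
```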
That doesn’t prevent someone from ordering “everything” at max quantity, which is almost certainly a “malicious” order.
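Concretely, using the same hypothetical limits as the sketch above, an everything-at-max order sails through per-item checks:

```python
# Every item at its per-item maximum passes rule checks individually,
# yet the aggregate is absurd. Limits are the same made-up ones as above.
MENU_LIMITS = {"burger": 10, "fries": 10, "shake": 5, "nuggets": 8}

max_order = {item: limit for item, limit in MENU_LIMITS.items()}
assert all(qty <= MENU_LIMITS[item] for item, qty in max_order.items())

# A per-item check never sees the total; catching this needs an
# aggregate rule or a learned boundary like the one sketched earlier.
print(sum(max_order.values()), "items total, all individually 'valid'")
```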