Yeah, the Turing test wasn’t a great metric; the result depends on who’s doing the testing. Some people were probably fooled by ALICE or ELIZA (the “doctor” one), which were pretty much long switch blocks that repeated the user’s input back at them.
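For what it’s worth, that whole trick fits in a couple dozen lines. Here’s a minimal sketch in Python, assuming nothing beyond the standard library; the patterns and the reflection table are made up for illustration, not ELIZA’s actual DOCTOR script:

    # A few hard-coded patterns, plus reflecting the user's own words
    # back at them as a question.
    import re

    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

    RULES = [
        (re.compile(r"i feel (.+?)[.!?]*$", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.+?)[.!?]*$", re.I), "How long have you been {0}?"),
        (re.compile(r"because (.+?)[.!?]*$", re.I), "Is that the real reason?"),
    ]

    def reflect(text):
        # Swap first and second person so the echo reads as a question
        # about the user rather than a statement about the bot.
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

    def respond(utterance):
        for pattern, template in RULES:
            m = pattern.search(utterance)
            if m:
                return template.format(*(reflect(g) for g in m.groups()))
        # Fallback: bounce the input straight back -- "why?" is always valid.
        return "Why do you say that " + reflect(utterance.rstrip(".!?")) + "?"

    print(respond("I feel like my code is judging me."))
    # -> Why do you feel like your code is judging you?

The fallback is doing most of the work, which is kind of the point.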
Kinda like how “why?” is pretty much always a valid response, and repeating it is more a sign of cheekiness than of a lack of intelligence.
I feel like it’s increasingly a test applicable to humans rather than to machines. Are you original enough that you couldn’t be replaced by a language model?
I’m not sure I like to think about it.