So much for buttering up ChatGPT with ‘Please’ and ‘Thank you’
Google co-founder Sergey Brin claims that threatening generative AI models produces better results.
“We don’t circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence,” he said in an interview last week on All-In-Live Miami. […]
It’s not that they “do better”. As the article points out, these AI models are parrots that combine information in different ways, and using threatening language in the prompt leads them to combine information differently than a non-threatening prompt would. Just because you receive a different response doesn’t make it better. If 10 people were asked to retrieve information from an AI by coming up with a prompt, and 9 of them got basically the same information because they used a neutral prompt while 1 person threatened the AI and got something different, that doesn’t necessarily make that one person’s info better. Sergey seems to be defining “better” as getting the unique response, but if that response is inaccurate or incorrect, is it really better?