I think this one’s getting downvoted by people who haven’t read the article. The argument is that because LLMs respond like people with anxiety, depression, and PTSD, and because people with those conditions interact with LLMs, the LLMs are likely to exacerbate those symptoms in the humans who interact with them. The researchers weren’t trying to fix the LLMs through therapy.
I think people object to articles anthropomorphizing LLMs and generative AI.
People here are less likely to read articles whose headlines do so.
Clickbaity title.