No, it’s pretty much the opposite. As it stands, one of the biggest problems with ‘AI’ is that people perceive it as an entity saying something meaningful. Phrasing LLM output as ‘I think…’ or ‘I am…’ makes it easier for people to assign meaning to the semi-random outputs, because it suggests there is an individual whose thoughts are being verbalized. That framing is part of the trick the AI bros are pulling. Making it harder for the outputs to keep up the pretense of sentience would, I suspect, make them less harmful to people who engage with them in a naive manner.











The magical IT field is emitted in a cardioid aligned with the generator’s forward axis, and its strength decays exponentially with distance from the generator. This is why almost all IT problems solve themselves the moment you put down whatever you were working on and actually start walking over to help.