Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often “because people believed they were talking to something that had no ulterior agenda.” She called this accumulated record of personal disclosures “an archive of human candor that has no precedent.”
She also drew a direct parallel to Facebook’s early history, noting that the social media company once promised users control over their data and the ability to vote on policy changes. Those pledges eroded over time, Hitzig wrote, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite.

What system? It’s an LLM. A black box. Any cybersecurity accomplishments are rendered useless with LLMs. That’s why you should never use agent-based applications for anything important. Not now. Not ever.
You can always prompt-inject your way out of any guardrails if you are persistent enough. It might become too bothersome at some point to use on a daily basis, but it will never be completely fixed. Right now it’s fairly easy, and there are plenty of free alternatives.
It’s also unlikely you’ll get banned for removing ads this way. Websites already detect whether you have an ad blocker installed, but the only ones that actively do something about it are a dying breed, like newspapers. If Google and Facebook aren’t banning users en masse for using ad blockers, then AI companies won’t ban you for a little anti-capitalist role play.
I have the right to decide what does and doesn’t show on my screen.