Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often “because people believed they were talking to something that had no ulterior agenda.” She called this accumulated record of personal disclosures “an archive of human candor that has no precedent.”

She also drew a direct parallel to Facebook’s early history, noting that the social media company once promised users control over their data and the ability to vote on policy changes. Those pledges eroded over time, Hitzig wrote, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite.

  • rozodru@piefed.world · 2 hours ago

    Pretty much. You’ll see it firsthand with the vibe coding shit. Sure, the basics will work, but none of it will scale, it’ll be full of exploits, and it’s just garbage all around. But most people simply don’t know better and trust whatever the LLM spits out.

    I mean, people hail Claude as the best out there, but if you know any better and you’ve spent any time with it, you’d know it’s 100% useless. Not a single solution it spits out these days is correct, and Claude Code has become noticeably worse.