A feature Google demoed at its I/O confab yesterday, which uses its generative AI technology to scan voice calls in real time for conversational patterns associated with financial scams, has sent a collective shiver down the spines of privacy and security experts. They warn that the feature is the thin end of the wedge: once client-side scanning is baked into mobile infrastructure, it could usher in an era of centralized censorship.

Apple abandoned a plan to deploy client-side scanning for CSAM (child sexual abuse material) in 2021 after a huge privacy backlash. However, policymakers have continued to heap pressure on the tech industry to find ways to detect illegal activity taking place on its platforms. Any industry move to build out on-device scanning infrastructure could therefore pave the way for all sorts of content scanning by default, whether government-led or tied to a particular commercial agenda.

Meredith Whittaker, president of the U.S.-based encrypted messaging app Signal, warned: “This is incredibly dangerous. It lays the path for centralized, device-level client side scanning.

“From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w[ith] seeking reproductive care’ or ‘commonly associated w[ith] providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”
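Whittaker's objection is structural rather than about this feature in particular. A toy sketch makes the point: suppose, hypothetically, that on-device scanning boils down to matching a call transcript against a list of patterns and flagging matches locally. Google has not published its implementation, and the patterns, function names, and threshold below are invented for illustration only.

```python
# Hypothetical sketch of client-side scanning: a toy classifier that runs
# entirely on-device against a call transcript. The pattern list and
# threshold are illustrative assumptions, not Google's actual system.
import re

SCAM_PATTERNS = [
    r"\bgift cards?\b",
    r"\bwire (?:the )?money\b",
    r"\byour account (?:has been|is) compromised\b",
    r"\bverification code\b",
]

def flag_transcript(transcript: str, threshold: int = 2) -> bool:
    """Return True if enough scam-associated patterns appear.

    Everything happens locally; nothing leaves the device. The privacy
    concern is not any single call, but that the same hook could be
    repointed at a different pattern list by whoever controls updates.
    """
    hits = sum(
        bool(re.search(p, transcript, re.IGNORECASE)) for p in SCAM_PATTERNS
    )
    return hits >= threshold

flag_transcript("Your account has been compromised, go buy gift cards")  # flags
flag_transcript("Hi mom, are we still on for dinner at 7?")  # does not flag
```

The design choice critics focus on is visible even in this sketch: the scanning hook and the content policy are separable, so replacing `SCAM_PATTERNS` with any other list (the kinds of topics Whittaker names, for instance) requires no new infrastructure at all.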

    • lud@lemm.ee · 6 months ago

      Lol, I’m not saying Google is or isn’t doing that. I’m just saying that you are just spewing bullshit without any evidence whatsoever.