cross-posted from: https://programming.dev/post/37278389
Optical blur is an inherent property of any lens system and is challenging to model in modern cameras because of their complex optical elements. To tackle this challenge, we introduce a high‑dimensional neural representation of blur—the lens blur field—and a practical method for acquisition.
The lens blur field is a multilayer perceptron (MLP) designed to (1) accurately capture variations of the lens 2‑D point spread function over image‑plane location, focus setting, and optionally depth; and (2) represent these variations parametrically as a single, sensor‑specific function. The representation models the combined effects of defocus, diffraction, and aberration, and accounts for sensor features such as pixel color filters and pixel‑specific micro‑lenses.
We provide a first‑of‑its‑kind dataset of 5‑D blur fields—for smartphone cameras, camera bodies equipped with a variety of lenses, etc. Finally, we show that acquired 5‑D blur fields are expressive and accurate enough to reveal, for the first time, differences in optical behavior of smartphone devices of the same make and model.
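For a feel of what that parametric representation might look like in code, here is a minimal sketch of a blur-field MLP, assuming PyTorch. The 7-D input layout (position, focus, depth, kernel offset, color channel), the network sizes, and the names `BlurFieldMLP` / `render_psf` are illustrative assumptions, not the paper’s exact architecture.

```python
# Minimal sketch of a lens-blur-field MLP (illustrative, not the paper's
# exact architecture). One network maps a query -- image-plane position
# (x, y), focus setting, depth, an offset (u, v) inside the PSF kernel,
# and a color-channel index -- to the PSF value at that offset, so a
# single function parameterizes the whole spatially varying blur.
import torch
import torch.nn as nn


class BlurFieldMLP(nn.Module):
    def __init__(self, hidden: int = 256, layers: int = 4):
        super().__init__()
        dims = [7] + [hidden] * layers + [1]  # (x, y, focus, depth, u, v, ch) -> value
        net = []
        for i in range(len(dims) - 1):
            net.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                net.append(nn.ReLU())
        self.net = nn.Sequential(*net)

    def forward(self, query: torch.Tensor) -> torch.Tensor:
        # Softplus keeps predicted PSF values non-negative.
        return nn.functional.softplus(self.net(query))


def render_psf(model, x, y, focus, depth, channel, ksize=21):
    """Evaluate the predicted PSF kernel at one image location."""
    off = torch.linspace(-1.0, 1.0, ksize)
    u, v = torch.meshgrid(off, off, indexing="ij")
    q = torch.tensor([x, y, focus, depth, 0.0, 0.0, channel]).repeat(ksize * ksize, 1)
    q[:, 4] = u.flatten()
    q[:, 5] = v.flatten()
    psf = model(q).reshape(ksize, ksize)
    return psf / psf.sum()  # normalize so the kernel conserves energy
```

Training would fit such a network to measured PSFs across the frame and focus stack; the per-channel input is what would let it capture color-filter and micro-lens effects.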
That means we could get better image processing by applying this analysis in reverse. It would also reduce this type of fingerprinting.
It’s old news that you should never use the same camera for two images that need separate identities.
The same applies to radio transmitters and probably to every analogue stage: the microphone, the preamp, the ADC.
Anything that doesn’t operate in a purely digital domain is most likely traceable, and I wouldn’t be surprised if proprietary software like Adobe’s started embedding hidden fingerprints into its files to “enforce their copyright” or “better collaborate with law enforcement”.
I tend to complain that ROMs like GrapheneOS don’t allow spoofing the IMEI, which should be basic functionality of every privacy-focused phone. Yet if you need real privacy, the electronic “fingerprint” of the radio itself is probably enough to track someone if they really want to.
There’s also a technique called ENF analysis where someone’s time and location can be tracked just by analyzing the oscillations of the utility power’s frequency picked up in a recording.
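(For anyone curious, a rough sketch of how that ENF, electrical network frequency, extraction could work, assuming numpy/scipy. The 60 Hz target, the 8-second analysis windows, and the 1-sample-per-second reference log are illustrative assumptions.)

```python
# Rough sketch of ENF (electrical network frequency) analysis: track the
# tiny drift of the 50/60 Hz mains hum captured in a recording, then match
# that drift curve against a grid-frequency log to estimate when the
# recording was made. All parameters here are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft


def extract_enf(audio: np.ndarray, sr: int, mains_hz: float = 60.0) -> np.ndarray:
    # 1. Isolate a narrow band around the nominal mains frequency.
    sos = butter(4, [mains_hz - 1.0, mains_hz + 1.0],
                 btype="bandpass", fs=sr, output="sos")
    hum = sosfiltfilt(sos, audio)
    # 2. 8 s windows give 0.125 Hz resolution; a 1 s hop gives one
    #    frequency estimate per second.
    f, t, Z = stft(hum, fs=sr, nperseg=8 * sr, noverlap=7 * sr)
    band = (f > mains_hz - 1.0) & (f < mains_hz + 1.0)
    # 3. Peak frequency per frame is the ENF drift curve.
    return f[band][np.argmax(np.abs(Z[band, :]), axis=0)]


def match_offset(enf: np.ndarray, grid_log: np.ndarray) -> int:
    # Slide the extracted curve along a reference log (assumed sampled at
    # 1 Hz) and return the offset in seconds with the smallest mismatch.
    errs = [np.mean((grid_log[i:i + len(enf)] - enf) ** 2)
            for i in range(len(grid_log) - len(enf))]
    return int(np.argmin(errs))
```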
> The same applies to radio transmitters and probably to every analogue stage: the microphone, the preamp, the ADC.
exactly, and that’s why when you buy any halfway decent mic there’s the option to buy them in matched sets: they’ll have come off the production line together, so their imperfections are as close to each other as possible and they sound as identical as they can.
> Anything that doesn’t operate in a purely digital domain is most likely traceable
at this point I believe that digital is easier to trace, as every device that has ever connected to the Internet (or connected to a device that has) has probably been bugged
It’s news to me. Do you have any further reading about it you can share?
> It’s old news that you should never use the same camera for two images that need separate identities.
Sanitize metadata and EXIF data?
That’s probably enough to stop your online mates from doxing you, but a powerful enough adversary can trace the unique, nuanced fingerprints that a camera lens introduces into a picture and compare them with images from other sources like social media.
There are many steps that can introduce patterns: the way the lens blurs, as explained in the article; sensor readout noise patterns; a speckle of dust; scratches. I bet chromatic aberrations also differ between copies of the same lens.
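The best-studied of those sensor fingerprints is PRNU (photo-response non-uniformity). A minimal sketch of the idea, assuming numpy/scipy and float grayscale images; a Gaussian filter stands in here for the wavelet denoisers used in the actual forensics literature:

```python
# Sketch of PRNU camera fingerprinting: each sensor has pixel-level
# sensitivity variations, so averaging the noise residuals of many images
# from one camera leaves a reference pattern, and a new image is matched
# by correlating its residual against that pattern.
import numpy as np
from scipy.ndimage import gaussian_filter


def residual(img: np.ndarray) -> np.ndarray:
    # Noise residual = image minus its denoised version (float input).
    return img - gaussian_filter(img, sigma=2.0)


def fingerprint(images: list) -> np.ndarray:
    # Scene content averages out; the sensor pattern remains.
    return np.mean([residual(im) for im in images], axis=0)


def match_score(img: np.ndarray, fp: np.ndarray) -> float:
    # Normalized cross-correlation; a high score suggests the same sensor.
    r = residual(img).ravel()
    f = fp.ravel()
    r = r - r.mean()
    f = f - f.mean()
    return float(np.dot(r, f) / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))
```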
Does this blur persist through digital (automatic) post-processing? Or is it otherwise still uniquely traceable?
i’d guess that the digital processing is a well-known transformation, so you can account for it
after all, modern post-processing on a phone is (afaik) done on the raw sensor data, so it uses a lot more data than is actually stored in the JPEG: it probably leads to more information being available than if it weren’t done (more shadow detail rather than crushed blacks, etc.)
Is it possible to use some kind of random noise algorithm to modify the image so that devices can’t be uniquely identified like this anymore? Or would that not work?
The noise would have to be strong enough to obscure the lens’s aberrations, and that would be an obnoxious amount of noise. Instead, I think a better solution is to strategically add micro-distortion to make identification ambiguous/inconclusive.
perhaps simply putting something like cling wrap over the lens and moving it for each photo would be enough: it adds some scratches and roughness that change slightly each time you move it
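(A minimal sketch of the micro-distortion idea from the comment above, assuming numpy/scipy. The half-pixel amplitude and the smoothing scale are unvalidated guesses; whether this actually breaks PSF matching would need real testing.)

```python
# Minimal sketch of random sub-pixel micro-distortion: warp the image by a
# smooth random displacement field of less than a pixel, in the hope of
# scrambling the lens-blur fingerprint while staying visually invisible.
# The 0.5-pixel amplitude is a guess, not a validated setting.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates


def micro_distort(img, amplitude=0.5, smooth=30.0, rng=None):
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    # Smooth random displacement fields, one per axis, capped at `amplitude`.
    dy = gaussian_filter(rng.standard_normal((h, w)), smooth)
    dx = gaussian_filter(rng.standard_normal((h, w)), smooth)
    dy *= amplitude / (np.abs(dy).max() + 1e-12)
    dx *= amplitude / (np.abs(dx).max() + 1e-12)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([yy + dy, xx + dx])
    if img.ndim == 2:
        return map_coordinates(img, coords, order=1, mode="reflect")
    # Warp each color channel with the same displacement field.
    return np.stack([map_coordinates(img[..., c], coords, order=1, mode="reflect")
                     for c in range(img.shape[2])], axis=-1)
```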
This was mentioned in Little Brother by Cory Doctorow. So, what do we do about it?
Have a coordinated volunteer project where people print and photograph a special patterned image designed to map the blur and other aberrations of their particular lens. With hundreds of thousands of samples, we train a micro-distortion ML model that subtly shifts and distorts the pixels just enough to make positive lens identification impossible. Then have something auto-apply this filter (and discard the originals) on every picture before it even has a chance of being uploaded to the cloud.
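To make the “special patterned image” part concrete, here is a small sketch that renders a printable dot-grid target, assuming Pillow. Each photographed dot approximates a point source, so its recorded shape samples the local PSF; the page size and spacing are arbitrary choices.

```python
# Small sketch of a printable calibration target: a grid of isolated black
# dots on white. Photographed, each dot acts as an approximate point
# source, so its recorded shape samples the local PSF at that position.
# Dimensions are arbitrary (A4 at 300 DPI).
from PIL import Image, ImageDraw


def dot_grid(width=2480, height=3508, spacing=120, radius=2):
    target = Image.new("L", (width, height), 255)
    draw = ImageDraw.Draw(target)
    for cy in range(spacing // 2, height, spacing):
        for cx in range(spacing // 2, width, spacing):
            draw.ellipse([cx - radius, cy - radius, cx + radius, cy + radius], fill=0)
    return target


dot_grid().save("blur_calibration_target.png")
```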
As much as this can be a privacy issue, couldn’t it also be a feature for telling AI images apart from real photos?
Just my guess. I could be wrong:
As the lens blur is mathematically fairly simple and spread across the whole image, it’s likely already replicated consistently by AI, in a similar way to real photos.
It’s easier for generative AI to spot, “understand”, and replicate a mathematical pattern than the number of fingers on a hand or limbs on a body.
It also helps that the current generation of image generation models essentially work by “deblurring” some random noise. Having a blur in the resulting image just means the model has to do less work in a sense.
Also a guess: isn’t a hand, or any biological form, also the result of a mathematical pattern?
I do see how AI could replicate “a” blur, but what it might not be able to do (yet) is replicate the unique blur of a specific device.
So maybe you couldn’t prove that something is AI, but you could use the physical lens as proof that it is not.
You wouldn’t share your physical lens for high-risk work (i.e. where you are anonymous) and since there’s no way to know whether a specific “blur” was produced by a physical lens or by AI, this won’t help in proving if something is AI.
Hands appear differently in different positions all over the frame, so I maintain the hand pattern is less consistent and harder to replicate than lens blur.
But you’re right: since the blur is a fingerprint, you can match it to a lens and prove a photo is real that way.
It could be a useful tactic, since most AI detection so far is about finding and proving fakes.
Everything leaves behind a unique pattern.
Splendid
So you’re saying, always ‘scratch’ your lens and get a repair shop to replace it with a generic lens. And if possible, get the CCD changed to a compatible one as well.
…for every photo
That’s a challenge, but even moving to a “generic” lens should help reduce identifiability.
Why would a generic lens be any better? These distortions are part of the lens design and manufacturing. Arguably, a lower quality lens would be easier to identify.
But not identifiable to a specific type of phone.
I didn’t see anything to suggest that these aberrations indicate anything about a type of phone? They’re unique to each lens…
Ah I see. Do non phone specific generic lenses exist? They all seem pretty specialized to me.
I know that a lot of the cheap Android handsets, which we mostly encounter as prepaid, have interchangeable camera bits.