![](https://lemmy.ca/pictrs/image/0a59bb89-bf0b-412c-b7b4-83247b4158bb.jpeg)
![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
The use of CSAM in training generative AI models is an issue no matter how these models are being used.
Mastodon: @[email protected]
This is tough. The goal should be to reduce child abuse, but it’s unknown whether AI-generated CSAM will increase or reduce it. It will likely encourage some individuals to abuse actual children, while for others it may satisfy their urges so they don’t abuse children. Like everything else with AI, we won’t know the real impact for many years.
Jokes on you Slack, I’m not intelligent!
Religion has competition now, AI is also profiting from hallucinations that lots of people believe as fact.
Haha, that’s exactly where I stopped reading too. I have emails older than this system, and they weren’t stored on floppies 😂
I can’t verify this story with any reputable sources. Is this real or just boomerbait?
Lol exactly, how dare you have a nuanced opinion!
I thought Twitter’s infrastructure was going to collapse within weeks after Musk made all those cuts and changes, and I was obviously wrong: it didn’t collapse. I’m not speaking to the user experience on Twitter, but from a purely infrastructure perspective, Musk was right and I was wrong.
And does that make Mistral the new OpenAI?
The FAA failed to regulate Boeing. I’m pro-regulation and pro laws that protect people’s privacy. And if this company and the individuals within it break the law, they should receive appropriate punishments, with fines tied to international revenue.
My point is that the laws should relate to privacy independent of the technology. The “ban facial recognition” narrative misses the point and doesn’t address the threats. Facial recognition technology can be used in ways that don’t threaten individuals’ privacy, and non-facial-recognition technologies can be a threat to individual privacy.
It’s cynical to assume this company is violating privacy with no evidence. But it’s fair to say there need to be greater punishments and regulations.
I doubt they would implement this on every vending machine. They can still derive useful analytics data from a smaller sample size.
I have in other sections of this thread. I don’t want to copy and paste but I’m happy to answer any specific questions.
You pretend to care about consent and privacy and then mention my daughter by name here. You’ll notice I share photos and details about my daughter from accounts on servers I control. There is an implicit agreement in the fediverse to respect people’s privacy. I obviously don’t rely on that implicit agreement, because some people do unethical things, as demonstrated in your post. I protect my daughter from legitimate online privacy and security threats; I don’t play privacy and security theatre.
This vending machine is taking biometrics off of everyone who walks past
You have no evidence of this and there is no mention of this in the article. This also doesn’t make any sense from an implementation perspective.
GDPR doesn’t apply in Canada unless you are trying to do business in Europe.
You’re correct that GDPR doesn’t apply in Canada; it’s just that GDPR is usually the strictest compliance regime, so it’s common for companies to meet it as a minimum.
Compliance only matters if you can’t afford a fine.
GDPR fines can be tied to global revenue.
When your beliefs don’t align with the facts, consider changing your beliefs instead of doubling down on your opinions, making things up, and doing unethical things. Please do better.
Lol yeah, if the easily checked facts don’t align with their beliefs, people caught in groupthink double down on those beliefs. Denying reality is easier than changing beliefs. It’s the same reasoning skill Trump supporters use 😅
This type of analysis is cheap nowadays. You could easily fit a model to extract demographics from an image on a Jetson Nano (basically a Raspberry Pi with a GPU). Models have gotten more efficient while hardware has also gotten cheaper.
Marketing is often targeted, especially online (which is a huge privacy issue). I would guess they are using the data from these vending machines to measure the success of their marketing campaigns.
Consent is a requirement for GDPR compliance. They are likely taking an image from the camera, extracting semantic attributes from the image, and then discarding the image. The length of time the individual is standing there making the purchase is likely longer than the image is stored in memory while extracting the attributes.
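If that’s the design, a minimal sketch of such an extract-and-discard pipeline might look like the following. To be clear, this is an assumption for illustration only: the attribute names, the `extract_attributes` stub, and the overall flow are hypothetical, not the vendor’s actual implementation.

```python
from dataclasses import dataclass, asdict

@dataclass
class Attributes:
    # Hypothetical coarse, non-identifying labels; the real schema
    # (if one exists) is unknown.
    age_band: str
    gender_guess: str

def extract_attributes(frame: bytes) -> Attributes:
    # Stand-in for a small on-device model (e.g. a face-attribute CNN).
    # A real system would run inference on the frame and return only labels.
    return Attributes(age_band="25-34", gender_guess="unknown")

def record_purchase(frame: bytes, sku: str, log: list) -> None:
    attrs = extract_attributes(frame)
    # Only the derived labels and the purchase are kept; the raw image
    # is never written to disk and goes out of scope immediately.
    log.append({"sku": sku, **asdict(attrs)})

log: list = []
record_purchase(b"\x00" * (640 * 480), sku="cola-355ml", log=log)
print(log[0])  # {'sku': 'cola-355ml', 'age_band': '25-34', 'gender_guess': 'unknown'}
```

The point of the sketch is that the analytics record contains no image data at all, which is consistent with the image living in memory for less time than the purchase itself takes.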
Arguing that I have no concept of digital privacy because I choose to share my name and face is an ignorant statement and demonstrates how little you understand online privacy. For context, I work in tech in Canada and deal with GDPR and other compliance regimes. I understand the technology, the risks, and the attack vectors. These vending machines are not a serious threat to individuals’ privacy. Facebook, Google, and Amazon are serious threats. Focus your energy on the actual risks instead of making uninformed comments.
That’s not true. They’re likely using a model that identifies some demographic attributes and associating those with a purchase. It’s 2024; this can all be done on the machine. The machine doesn’t need to store individuals’ data. If the vending machine were storing enough data to identify individuals, it wouldn’t be GDPR compliant.
I’d step back at that launch