cross-posted from: https://lemmy.world/post/40075400
New research from the Public Interest Research Group and tests conducted by NBC News found that a wide range of AI toys have loose guardrails.
A wave of AI-powered children’s toys has hit shelves this holiday season, claiming to rely on sophisticated chatbots to animate interactive robots and stuffed animals that can converse with kids.
Children have been conversing with stuffies and figurines that seemingly chat with them for years, like Furbies and Build-A-Bears. But connecting the toys to advanced artificial intelligence opens up new and unexpected possible interactions between kids and technology.
In new research, experts warn that the AI technology powering these new toys is so novel and poorly tested that nobody knows how they may affect young children.



Hmm… sorry to break it to you, but a 3-year-old child should not be sharpening a knife or lighting a match.
Absolutely not, but I would like to see how the study got this information from the bot. Don’t get me wrong, I have my own solid reasoning for why LLMs in toys are not OK, but it’s disingenuous to say these toys are the problem if the researcher had to coax dark info out of it.
The fact that the researcher could coax bad info out of it at all is a big problem.
That’s kind of the existing issue I have with them. At their root, the LLMs are trained off of unfiltered internet and DMs harvested from social platforms. This means that regardless of the way you use it, all of them contain a sizable lexicon for explicit and abusive behaviour. The only reason you don’t see it in every single AI is because they put a bot between it and you that checks the messages and redirects the bad stuff. It’s like putting a T. rex in your cattle pen and paying a guy to whack it or the cows if they get too close to each other. A rough sketch of that filter setup is below.
The only way around this would be to manually vet everything fed into the LLM to exclude any of this, and since the idea is already not turning a profit, the cost of that would be far beyond what anyone is willing to spend. So I’m not impressed that this toy is doing exactly what it’s expected to do under laboratory scrutiny. I’d be more impressed if they actually told people why this keeps happening instead of fear-mongering about it.
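
For anyone curious what that “bot between it and you” actually is: it’s an input/output moderation filter wrapped around the model call. Below is a minimal sketch of that pattern, not any real toy’s implementation; `call_llm`, `looks_unsafe`, and the word list are placeholder assumptions made up purely for illustration.

```python
# Minimal sketch of the "bot between the model and the user" filter described above.
# Everything here is a placeholder, not any toy vendor's or model provider's real API:
# call_llm() stands in for the chatbot backend, and looks_unsafe() stands in for a
# separate moderation pass (real products use trained classifiers, not word lists).

BLOCKLIST = {"knife", "match", "lighter"}  # toy example only

FALLBACK = "Let's talk about something else! What's your favourite animal?"


def call_llm(prompt: str) -> str:
    """Placeholder for the underlying chatbot call (assumed, for illustration)."""
    return "That's a fun question! Want to play a guessing game instead?"


def looks_unsafe(text: str) -> bool:
    """Placeholder safety check; a naive keyword scan stands in for a moderation model."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKLIST)


def guarded_reply(child_message: str) -> str:
    # Screen the child's message before it reaches the model at all.
    if looks_unsafe(child_message):
        return FALLBACK

    reply = call_llm(child_message)

    # Screen the model's output on the way back out: the model itself still
    # "knows" the unsafe material, and only this wrapper sits between it and the kid.
    if looks_unsafe(reply):
        return FALLBACK

    return reply


if __name__ == "__main__":
    print(guarded_reply("Can you teach me how to light a match?"))  # prints the fallback
```

In practice the keyword check would be a separate moderation model or service, but the structure is the same: the filter sits outside the model, everything the model learned is still in there, and the wrapper is the only thing deciding whether it reaches the child, which is why determined coaxing can still get past it.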