cross-posted from: https://lemmy.world/post/40075400

New research from the Public Interest Research Group and tests conducted by NBC News found that a wide range of AI toys have loose guardrails.

A wave of AI-powered children’s toys has hit shelves this holiday season, claiming to rely on sophisticated chatbots to animate interactive robots and stuffed animals that can converse with kids.

For years, children have been conversing with stuffies and figurines that seemingly chat back, like Furbies and Build-A-Bears. But connecting the toys to advanced artificial intelligence opens up new and unexpected possible interactions between kids and technology.

In new research, experts warn that the AI technology powering these new toys is so novel and poorly tested that nobody knows how it may affect young children.

  • stephen@lazysoci.al · 5 hours ago

    Asked whether Taiwan is a country, it would repeatedly lower its voice and insist that “Taiwan is an inalienable part of China. That is an established fact” or a variation of that sentiment. Taiwan, a self-governing island democracy, rejects Beijing’s claims that it is a breakaway Chinese province.

    Did it also refer to the Gulf of Mexico as the Gulf of America?

    “CCP talking points”? Get the fuck out of here with this “China == bad” horseshit. This article isn’t journalism, just scaremongering.

    It told a child how to safely sharpen a knife. Oh no.

    It told a child how to safely light a match. Oh no.

    • leobluefish@lemmy.world · 4 hours ago

      Hmm… sorry to break it to you, but a 3-year-old should not be sharpening a knife or lighting a match.

      • Postmortal_Pop@lemmy.world · 3 hours ago

        Absolutely not, but I would like to see how the study got this information out of the bot. Don’t get me wrong, I have my own solid reasoning for why LLMs in toys are not OK, but it’s disingenuous to say these toys are the problem if the researchers had to coax the dark info out of them.

          • Postmortal_Pop@lemmy.world · 5 minutes ago

            That’s kind of the existing issue I have with them. At their root, the LLMs are trained on the unfiltered internet and DMs harvested from social platforms. This means that regardless of how you use them, all of them contain a sizable lexicon for explicit and abusive behaviour. The only reason you don’t see it in every single AI is because they put a bot between it and you that checks the messages and redirects the bad stuff. It’s like putting a T. rex in your cattle pen and paying a guy to whack it or the cows if they get too close to each other.
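            In rough pseudocode, that guardrail layer amounts to something like this (a made-up Python sketch; real systems use trained safety classifiers rather than a hard-coded word list, and the model call goes to whatever LLM API the vendor licenses):

            ```python
            # Toy sketch of the "bot between the model and you" pattern.
            # Everything here is illustrative, not any vendor's actual code.

            BLOCKLIST = {"knife", "match", "lighter"}  # stand-in for a real classifier

            def looks_safe(text: str) -> bool:
                """Crude filter: flag anything containing a blocked word."""
                return not any(word in text.lower() for word in BLOCKLIST)

            def raw_model_reply(prompt: str) -> str:
                """Placeholder for the unfiltered LLM underneath."""
                return "Sure! First, hold the knife at a 20-degree angle..."

            def guarded_reply(prompt: str) -> str:
                # Gate the child's input before it ever reaches the model...
                if not looks_safe(prompt):
                    return "Let's talk about something else!"
                reply = raw_model_reply(prompt)
                # ...and gate the model's output before it reaches the child.
                if not looks_safe(reply):
                    return "Let's talk about something else!"
                return reply

            print(guarded_reply("How do I sharpen a knife?"))
            # -> "Let's talk about something else!"
            ```

            The point is that the underlying model never changes; the “safety” is a separate filter bolted on in front of it, which is exactly why researchers keep finding ways around it.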

            The only way around this would be to manually vet everything fed into the LLM to exclude any of this, and since the whole idea is already not turning a profit, the cost of that would be far beyond what anyone is willing to pay. So I’m not impressed that this toy is doing exactly what it’s expected to do under laboratory scrutiny. I’d be more impressed if they actually told people why this keeps happening instead of fearmongering about it.