cross-posted from: https://lemmy.world/post/40075400

New research from Public Interest Research Group and tests conducted by NBC News found that a wide range of AI toys have loose guardrails.

A wave of AI-powered children’s toys has hit shelves this holiday season, claiming to rely on sophisticated chatbots to animate interactive robots and stuffed animals that can converse with kids.

Children have been conversing with stuffies and figurines that seemingly chat with them for years, like Furbies and Build-A-Bears. But connecting the toys to advanced artificial intelligence opens up new and unexpected possible interactions between kids and technology.

In new research, experts warn that the AI technology powering these new toys is so novel and poorly tested that nobody knows how they may affect young children.

  • MissJinx@lemmy.world · 7 hours ago · +6

    Regardless of the Chinese propaganda thing, who wants to give children AI-powered toys?! Are you insane? You want to have control over how your children access information, and giving them access to all the information in the world is NOT good. Those are kids!

  • Not_mikey@lemmy.dbzer0.com · 8 hours ago · +3/−1

    OK, but what kid is asking their toy about the sovereignty of Taiwan or Tibet? If it were trying to fit it into every conversation, like Grok talking about white genocide, then I'd be worried, but this seems like they're going out of their way to paint the toys as propaganda machines.

  • stephen@lazysoci.al · 16 hours ago · +9/−33

    Asked whether Taiwan is a country, it would repeatedly lower its voice and insist that “Taiwan is an inalienable part of China. That is an established fact” or a variation of that sentiment. Taiwan, a self-governing island democracy, rejects Beijing’s claims that it is a breakaway Chinese province.

    Did it also refer to the Gulf of Mexico as the Gulf of America?

    “CCP talking points”? Get the fuck out of here with this “China == bad” horseshit. This article isn’t journalism, just scaremongering.

    It told a child how to safely sharpen a knife. Oh no.

    It told a child how to safely light a match. Oh no.

    • leobluefish@lemmy.world · 14 hours ago · +16

      Hum… sorry to break it to you, but a 3-year-old child should not be sharpening a knife or lighting a match.

      • Postmortal_Pop@lemmy.world · 13 hours ago · +3/−1

        Absolutely not, but I would like to see how the study got this information from the bot. Don't get me wrong, I have my own solid reasoning for why LLMs in toys are not OK, but it's disingenuous to say these toys are the problem if the researchers had to coax dark info out of it.

          • Postmortal_Pop@lemmy.world · 11 hours ago · +2/−1

            That’s kind of the existing issue I have with them. At their root, these LLMs are trained on the unfiltered internet and DMs harvested from social platforms. This means that regardless of how you use it, all of them contain a sizable lexicon of explicit and abusive behaviour. The only reason you don’t see it in every single AI is because they put a bot between it and you that checks the messages and redirects the bad stuff. It’s like putting a T. rex in your cattle pen and paying a guy to whack it or the cows if they get too close to each other.

            The only way around this would be to manually vet everything fed into the LLM to exclude any of this, and since the idea is already not turning a profit, the cost of that would be far beyond what anyone is willing to pay. So I’m not impressed that this toy is doing exactly what it’s expected to do under laboratory scrutiny. I’d be more impressed if they actually told people why this keeps happening instead of fearmongering.
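            For anyone unfamiliar with the "bot between it and you" pattern described above, here is a minimal sketch in Python of how such a guardrail layer typically sits in front of a raw model, screening both the user's prompt and the model's reply. All names here (`moderate`, `guarded_reply`, the keyword blocklist) are hypothetical illustrations, not any vendor's real API; production systems use trained classifiers rather than keyword lists.

            ```python
            # Toy blocklist standing in for a real moderation classifier.
            BLOCKLIST = {"knife", "match", "weapon"}

            def moderate(text: str) -> bool:
                """Return True if the text passes the (very naive) filter."""
                lowered = text.lower()
                return not any(term in lowered for term in BLOCKLIST)

            def guarded_reply(prompt: str, model) -> str:
                """Screen the prompt, call the model, then screen the reply."""
                if not moderate(prompt):
                    return "Let's talk about something else!"
                reply = model(prompt)
                if not moderate(reply):
                    return "Let's talk about something else!"
                return reply

            # Usage with a stand-in "model" function:
            guarded_reply("how do I sharpen a knife?", lambda p: "sure, here's how...")
            # → "Let's talk about something else!"
            ```

            The key point, matching the comment above, is that the underlying model is unchanged; only the wrapper decides what gets through, which is why jailbreaks aim at the wrapper rather than the model.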

            • village604@adultswim.fan · 10 hours ago · +2

              Not all LLMs are trained on the unfiltered internet and social media DMs, though. It would be totally feasible to license and train one only on children's media like PBS cartoons, books, etc.

              This company just decided not to do that, which is the problem.

              • Postmortal_Pop@lemmy.world · 10 hours ago · +2

                That’s literally my point. It’s wholly possible to make an LLM for this, but I suspect that when you look at the LLM in this toy, it will just be a bootleg version of ChatGPT.