Josh was at the end of his rope when he turned to ChatGPT for help with a parenting quandary. The 40-year-old father of two had been listening to his “super loquacious” four-year-old talk about Thomas the Tank Engine for 45 minutes, and he was feeling overwhelmed.
“He was not done telling the story that he wanted to tell, and I needed to do my chores, so I let him have the phone,” recalled Josh, who lives in north-west Ohio. “I thought he would finish the story and the phone would turn off.”
But when Josh returned to the living room two hours later, he found his child still happily chatting away with ChatGPT in voice mode. “The transcript is over 10k words long,” he confessed in a sheepish Reddit post. “My son thinks ChatGPT is the coolest train loving person in the world. The bar is set so high now I am never going to be able to compete with that.”
Parent here ✋
I don’t do this, and I think it’s a shame that some parents do. But I also don’t want to jump the gun and blame the parents.
Parenting is hard! Kids are burdensome! That is kind of the point.
But also, parents cannot be “on” 100% of the time. “Good parenting” is a privilege for those with the capacity not to be completely exhausted just by surviving day to day with a family.
So I don’t 100% blame the parents. Sure, some people just shouldn’t be parents, but you often don’t know that until it’s too late. Sometimes parents make bad decisions, but it’s usually due to external pressures.
Which brings me to who I do blame: the corporations and other orgs putting AI out there as an option for parents and kids when it really shouldn’t be.
This is like marketing cigarettes to kids.
I also blame health orgs for not standing up and yelling “this is harmful!” At pediatric checkups, they ask questions about exercise, screen time, and other health-related activities. If an answer raises a red flag, you’re given literature about it. The same should be true of pediatric AI use.