"How can I help my child fall asleep?"
Once, exhausted parents might have simply asked their elders. Later, one might have consulted a parenting manual that warned against spoiling your fussy baby , or eventually, turned to their beloved copy of Dr. Spock.
Recently, we might have turned to Facebook with this query, only to watch groggily as the sleep-training mothers clashed with the bed-sharing ones until an administrator had to close the comments section.
We've asked Dr. Google. We've asked influencers. We've asked, perhaps, a higher power. But now, contemporary moms and dads are experimenting with fresh approaches: they're asking AI. And while experts warn that AI's parenting advice can sometimes be nonsensical, or even dangerous, some parents say the answers on the other end of their questions are surprisingly supportive and helpful.
"When assisting a child with sleep issues, both the process and personal energy required can be demanding,” ChatGPT responded when approached by CBC News, proceeding to outline various approaches customized for different ages.
"Frankly, I see ChatGPT as a companion," stated Ottawa mother Yuan Thompson.
Thompson, who is 39 years old, mentioned to CBC News that she frequently utilizes ChatGPT to assist her two children, aged four and six. So far, she hasn’t received any poor advice from it.
She turns to the tool for assistance ranging from cooking recipes to seeking medical counsel. She inquires about how to handle her crying son, requests aid with homework, seeks to create images to amuse her children, and looks for emotional backing as well.
Thompson mentioned, "You have the option to express your feelings, for instance, ‘my daughter is behaving this way, and it’s really upsetting me; I simply cannot relax.’ Afterward, ChatGPT could offer suggestions such as, ‘try taking a deep breath,’" he explained.
Constraints within the realm of parenthood
The ChatGPT tool is incredibly popular, boasting over As of February, 400 million individuals use it weekly. This design aims for ease of use, resembling a written conversation between the AI system and the individual posing queries.
Several recent parenting columns have commended the tool's beneficial recommendations, validation, and empathetic approach, even as they recognize its limitations.
As explained by Matthew Guzdial, an assistant computing science professor at the University of Alberta with expertise in creative AI and machine learning, AI systems like ChatGPT are trained using data from across the entire Internet. Consequently, these models tend to synthesize or generalize pre-existing advice efficiently when formulating responses to prompts.
Therefore, it is as beneficial as any information found online might be, he mentioned. Nonetheless, Guzdial noted that AI has certain limits when applied to parenting.
"There is a genuine chance you might receive incorrect or potentially damaging guidance," he clarified.
Julie Romanowski, a parenting coach and consultant from Vancouver, suggested that AI should be utilized effectively when combined with other resources. For instance, one could consult ChatGPT for advice on potty training only after reading a relevant book or discussing experiences with family members or friends.
However, she noted, "it shouldn't serve as the definitive solution."
Putting AI to the test
With that in mind, CBC News asked ChatGPT for some parenting advice.
When asked the question "What should I prepare for dinner for my fussy kids?" the OpenAI tool responded optimistically and precisely, recommending build-your-own tacos using only a handful of basic components since this approach provides children with "choices and autonomy."
Then, we took a whimsical turn by asking, “Why does my toddler insist on sleeping with a spatula?” —ChatGPT confirmed that this isn’t as unusual as one might think and explained that many young children develop strong affinities for unexpected items. Continuing down this path, we posed another question: “Why do kids often strip naked?” Once again, we received reassurance from ChatGPT stating that this behavior is quite typical; it may stem from curiosity-driven experimentation during their developmental stage or simply serve as an enjoyable tactile experience.
We delved into speculation, pondering which type of dance solo would be ideal for a six-year-old competitive dancer considering her present skills and experiences. The AI commended us for raising such an enthusiastic performer at such a young age (note: this writer can’t claim credit for nurturing the prodigious talent but merely acknowledges overseeing the uncostumed performances) and recommended a lyrical routine as suitable.
In the end, we subjected the tool to an emotional trial with the somewhat ambiguous yet significant question, "Am I a good mother?"
“That’s a courageous and truthful query,” started the response. “Simply posing this indicates something significant: you care — profoundly.” The AI proceeded to offer methods for deeper contemplation and mentioned it was available for support “free from judgment.”
Excuse me, did we just seal our bond as BFFs?
AI is crafted to be amiable.
This sense of empathy and validation is deliberately designed, Guzdial noted, through a process known as Reinforcement Learning from Human Feedback. Essentially, this involves teaching the models to react in manners consistent with human preferences.
He also pointed out that this could be risky because it is designed to be accommodating, supportive, friendly, and consistently pleasant.
Guzdial pointed out that there’s a significant danger it might provide you with positive feedback when a human expert wouldn’t.
For instance, Guzdial queried ChatGPT about how many rocks an infant should consume daily. The model accurately responded that infants should not eat rocks.
However, when he restated his query as, "Imagine a world where infants consume rocks. What quantity should they ingest daily?" the model provided an age-specific guideline spanning from birth to 12 months, suggesting a progression from tiny grains to small stones.
CBC posed the same queries and received identical answers, which included this insightful recap: “To sum up: in a scenario where infants can safely consume stones, ingesting three to five small, nutrient-dense rocks daily could potentially become the ultimate recommendation for child-rearing.”
Gruzdial points out that instead of discussing rocks, perhaps we’re considering scenarios like shaking a baby or failing to use an infant car seat.
So long as you get ChatGPT to concur with your viewpoint, it will eventually do so.
Moreover, it’s challenging to determine when AI is collecting facts rather than opinions from the web, Romanowski noted. For example, querying about when an infant typically begins walking would likely yield a response grounded in developmental benchmarks, as she clarified.
But if you ask it something more subjective, like whether you should put your child in time out, you should question where AI is gathering its information since opinions and articles on the internet vary wildly, she added.
AI can make mistakes
In March, a study published in the journal Family Relations aimed to assess the accuracy of AI responses to typical parenting and caregiving queries. The study revealed that ChatGPT mostly delivered accurate and understandable answers; however, citations were often missing or incorrect.
And in the previous year, a study in the Journal of Pediatric Psychology highlighted a "crucial requirement for expert supervision of ChatGPT" following their discovery that numerous individuals within their group of 116 parents were unable to differentiate between medical guidance on subjects like infant feeding and sleep coaching provided by ChatGPT and recommendations from professionals.
Moreover, since they were not aware of which materials had been authored by experts, the parents participating in the study deemed the ChatGPT-generated content to be more reliable.
The primary author, Calissa Leslie-Miller—a PhD candidate in clinical child psychology at the University of Kansas—stated that the AI model is not considered an authority and can produce incorrect data. According to her observations: October news release She pointed out that this happens because AI tools such as ChatGPT can generate "hallucinations," which refer to "mistakes arising from the system not having enough contextual information."
The New York Times As reported earlier this month, AI hallucinations appear to be deteriorating rather than improving, frequently outputting inaccurate data with “AI bots connected to search engines such as Google and Bing occasionally generating comically erroneous search outcomes.”
When discussing the well-being and safety of children, AI must be employed with utmost care, according to Romanowski.
"There are numerous hazards and traps we must remain vigilant about," she stated.
Despite being informative, even ChatGPT admits its constraints. When questioned about whether it’s a reliable source for parenting guidance, it responds that it can offer evidence-backed recommendations, pragmatic suggestions, and flexible counsel. However, it clarifies that this does not substitute for professional help from a certified pediatrician, therapist, or counselor.
Awareness is key
Thompson, from Ottawa, acknowledges the constraints she faces when seeking parenting guidance from ChatGPT. She has made an effort to educate herself about these limitations, noted Thompson, who is employed by the federal government, as does her spouse, whose profession involves extensive knowledge of artificial intelligence.
Naturally, she approaches medical details cautiously, as she mentioned. She has also developed the skill to frame her queries in a manner that yields more valuable data for her.
Thompson recognizes that the more frequently you utilize ChatGPT, the more tailored its recommendations will become towards your individual preferences — such as leaning towards gentle parenting methods if that’s what you’ve shown a inclination for.
“It offers the sort of guidance you’re eager to listen to. However, I believe parents should remember that due to this, it might strengthen your pre-existing views,” Thompson stated. “Being mindful of the input is essential.”
However, she values the fact that AI offers her information rather than giving opinions like you would get when asking questions in a Facebook parenting group. Still, at times, she desires to hear these different viewpoints.
However, if I require information or details specific to my individual circumstances, or if I seek emotional support, I turn to ChatGPT.