Guidelines to Protect Kids from Graphic Social Media Content


With growing concern about graphic videos linked to global conflicts spreading on social media platforms, schools, psychologists, and safety groups are urging parents to take control. By applying certain restrictions and turning off specific apps on their children’s phones, parents can do a great deal to protect their children’s mental health.

“The internet is a double-edged sword – while it can be a great learning tool, it can also expose children to content that can be harmful and disturbing. Parents and guardians are responsible for ensuring a safe digital environment for their children.”

The recent attacks in Israel, which have been followed by a significant amount of graphic content shared across social media platforms, have been particularly stressful for minors. Research has linked exposure to such violent content to a cycle of harm to mental health, and the American Psychological Association has warned about the psychological impacts of ongoing violence.

Understanding the Impact

Alexandra Hamlet, a clinical psychologist based in New York City, explains that children who come across upsetting content inadvertently are likelier to feel worse than those who choose to engage with potentially problematic content. Compared to adults, children lack the emotional control to turn off content they find triggering. Their insight and emotional capacity to make sense of what they see are not yet fully formed, and their ability to articulate what they have seen and how it affects them is still limited.

Implementing Restrictions

If completely deleting social media apps isn’t feasible, there are other ways to restrict or closely monitor a child’s social media use. Parents can start with the parental control features built into their child’s mobile operating system: Apple’s Screen Time on iOS and Google’s Family Link app on Android both help parents manage a child’s phone activity and can restrict access to certain apps.

Guardrails can also be set up directly within the social media apps themselves. TikTok, for instance, offers a Family Pairing feature that lets parents and guardians link their accounts to their child’s account. From there, they can restrict the child’s ability to search for content, limit content that may not be appropriate, or filter out videos with specific words or hashtags.

Meta, which owns Facebook, Instagram, and Threads, offers an educational hub for parents. The hub provides resources, tips, and articles from experts on user safety, along with a tool that allows guardians to monitor how much time their kids spend on Instagram and set time limits.

YouTube, owned by Google, works with the Family Link tool to let parents set up supervised accounts for their children, set screen time limits, and block certain content. YouTube Kids also provides a safer space designed for younger viewers.

Communicating with Kids

According to Hamlet, families should consider creating a policy in which every member agrees to delete their apps for a set period. She suggests framing the idea as an experiment in which everyone shares how going without the apps has affected them over time. Family members may find they feel less anxious and overwhelmed, which could prompt a family vote to keep the apps deleted.

If there’s resistance, Hamlet advises reducing the time spent on apps and agreeing on a specific number of minutes of daily use. She also suggests a trade-off: in exchange for being allowed those minutes on their apps, the child agrees to a daily check-in to discuss any harmful content they were exposed to that day.

Role of Social Media Companies

Social media platforms also bear responsibility for user safety. TikTok, for instance, has increased the resources dedicated to preventing violent, hateful, or misleading content. Meta has set up a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to monitor and respond to the current situation. YouTube has proactively removed harmful videos and remains vigilant, ready to take action quickly across its platform.


