
Teenagers need security and privacy on social media, and AI can help

(Afsaneh Razi, Assistant Professor of Informatics, Drexel University)

Pennsylvania, February 1 (The Conversation) Meta announced on January 9, 2024, that it will protect teen users on Instagram and Facebook by preventing them from viewing content the company deems harmful, including material related to suicide and eating disorders. The move comes at a time when federal and state governments have increased pressure on social media companies to provide safeguards for teenagers. Yet teens turn to their peers on social media for support they can't get anywhere else, and efforts to protect them may inadvertently make it harder for them to get that help.

Congress has spoken out several times in recent years about social media and the risks it poses to young people. The CEOs of Meta and several other social media companies were called to testify before the Senate Judiciary Committee about those risks. The committee's chairman and ranking member, Senators Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.), respectively, said in a statement ahead of the hearing that tech companies are finally being forced to acknowledge their failures to protect children. I am a researcher who studies online safety. My colleagues and I study teens' social media interactions and the effectiveness of platforms' efforts to protect users. Research shows that while teens face danger on social media, they also receive support from peers, especially through direct messaging. We have identified steps that social media platforms can take to protect users while also preserving their online privacy and autonomy.

What are teens facing?

The prevalence of risks for adolescents on social media is well established. These risks range from harassment and bullying to poor mental health and sexual exploitation. Investigations have revealed that companies like Meta knew their platforms exacerbated mental health problems, helping make youth mental health one of the priorities of the US Surgeon General. Much of this evidence, however, comes from self-reported data such as teen online safety surveys, and there is a need for further study of young people's real-world private interactions and their own perspectives on online risks.

To address this need, my colleagues and I collected a large dataset of young people's Instagram activity, including more than 7 million direct messages. We asked young people to describe their conversations and flag the messages that made them feel uncomfortable or unsafe. Using this dataset, we found that direct conversations can be crucial for young people seeking support on issues ranging from daily life to mental health concerns. Our findings show that young people used these channels to discuss their public interactions in more depth. In settings built on mutual trust, teens felt safe asking for help.

Research shows that the privacy of online conversations plays an important role in young people's online safety, and that a significant share of harmful interactions on these platforms takes the form of private messages. The unsafe messages flagged by users in our dataset included harassment, sexual messages, sexual solicitation, nudity, pornography, hate speech, and the sale or promotion of illegal activities. However, it has become more difficult to use automated technology to detect and prevent online risks to teens as platforms have come under pressure to protect user privacy. For example, Meta has implemented end-to-end encryption for all messages on its platforms to ensure that message content is secure and accessible only to the participants in a conversation. Additionally, the steps Meta has taken to keep content related to suicide and eating disorders away from teens apply to publicly posted content, even when it is shared by a teen's friend. Meta's content strategy does not address the unsafe private conversations that teens engage in online.

Striking a balance

In this situation, the central challenge is to protect young users without invading their privacy. To that end, we conducted a study to find out how little data is needed to detect unsafe messages. We wanted to understand how different characteristics, or metadata, of risky conversations, such as the length of the conversation, the average response time, and the relationships between the participants, could feed machine learning programs that detect these risks. For example, previous research has shown that risky conversations tend to be short and one-sided, as when strangers make unwanted advances. We found that our machine learning program was able to identify unsafe conversations in 87% of cases using only conversation metadata. However, analyzing the text, images, and videos in a conversation remains the most effective way to identify the type and severity of the risk.
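As an illustration, here is a minimal sketch in Python of how a metadata-only detector of this kind might be trained. It is not the code from our study; the file name and feature columns (message_count, avg_response_time_s, one_sidedness, participants_are_connected) are hypothetical stand-ins for the kinds of conversation-level features described above.

```python
# Minimal sketch (illustrative, not the study's actual pipeline): train a
# classifier that flags unsafe conversations from metadata alone, assuming a
# labeled table with one row per conversation and no message content.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical metadata file and feature names.
df = pd.read_csv("conversation_metadata.csv")
features = ["message_count", "avg_response_time_s",
            "one_sidedness", "participants_are_connected"]
X, y = df[features], df["flagged_unsafe"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# A tree ensemble handles mixed-scale metadata features without rescaling.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Report precision and recall for the "unsafe" class on held-out conversations.
print(classification_report(y_test, model.predict(X_test)))
```

The key point of the sketch is that the classifier never sees message text, images, or video; it only sees conversation-level statistics.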

These results highlight the importance of metadata for distinguishing unsafe conversations, and they could serve as guidelines for platforms designing artificial intelligence risk detection. Platforms could use high-level features such as metadata to block harmful content without scanning that content. For example, a persistent harasser whom a young person wants to avoid produces a telltale metadata pattern: repeated, brief, one-sided communications between unconnected users, which an AI system could use to block the harasser. Ideally, young people and their caregivers would be given the option to turn on encryption, risk detection, or both, so that they can decide for themselves on the trade-off between privacy and safety.
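To make the harasser example concrete, here is a small, purely illustrative Python sketch of a metadata-only rule for that pattern. The field names and thresholds are hypothetical, not values from our research.

```python
# Illustrative sketch: flag the pattern described above (repeated, brief,
# one-sided messages between unconnected users) using metadata only.
from dataclasses import dataclass

@dataclass
class ConversationMetadata:
    message_count: int            # total messages exchanged
    sender_share: float           # fraction of messages sent by one party
    avg_message_length: float     # average characters per message
    participants_connected: bool  # do the accounts follow each other?

def looks_like_persistent_harassment(meta: ConversationMetadata) -> bool:
    """Flag conversations whose metadata matches the harassment pattern,
    without ever reading the message content. Thresholds are arbitrary
    illustrations."""
    return (meta.message_count >= 10
            and meta.sender_share >= 0.9
            and meta.avg_message_length < 40
            and not meta.participants_connected)

# Example: a dozen short messages, all from a stranger, would be flagged.
print(looks_like_persistent_harassment(
    ConversationMetadata(12, 1.0, 18.5, False)))  # True
```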
