Meta, the parent company of Instagram, is moving to shield teenage users from “sextortion” scams on its platform.
As part of this effort, the company is developing new tools to combat such exploitation.
Among these tools is an AI-powered “nudity protection” feature designed to automatically blur images containing nudity when sent to minors.
Additionally, Meta plans to give users safety tips and guidance for recognizing and avoiding these scams.
The urgency of such protective measures is underscored by alarming statistics: in 2022, approximately 3,000 young people in the United States fell victim to sextortion scams.
This troubling reality has prompted action not just from Meta but also from authorities, with over 40 states in the US filing lawsuits against Meta, accusing the tech giant of profiting from the suffering of children.
In response to mounting pressure and growing concerns, Meta unveiled a series of initiatives in January aimed at safeguarding users under the age of 18.
These measures include tightening content controls and enhancing parental oversight tools.
The “nudity protection” tool is the most technically notable of these measures: it uses on-device machine learning to analyze images. Because the analysis runs on the user’s device, Meta emphasizes that its access to these images is strictly limited unless users report them.
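Meta has not published implementation details, but the general pattern it describes, classify locally and blur before display, can be illustrated with a minimal Python sketch. Everything specific here is an assumption: nudity_score is a hypothetical stand-in for an on-device model, and the 0.8 threshold is invented for illustration.

```python
from PIL import Image, ImageFilter

# Hypothetical confidence cutoff, not Meta's actual value.
BLUR_THRESHOLD = 0.8

def nudity_score(image: Image.Image) -> float:
    """Hypothetical stand-in for the on-device classifier.

    A real client would run a compact neural network locally, so no
    image data leaves the device at this step. Returning a fixed
    value keeps the sketch self-contained and runnable.
    """
    return 0.9

def prepare_for_display(path: str) -> Image.Image:
    """Blur the image client-side if the local model flags likely nudity."""
    image = Image.open(path)
    if nudity_score(image) >= BLUR_THRESHOLD:
        # Heavy Gaussian blur so the content is unrecognizable until
        # the recipient explicitly chooses to reveal it.
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image
```

The design choice worth noting is that the classification result never leaves the device; under this approach, the server only learns about an image if the recipient chooses to report it.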
Furthermore, Meta intends to employ AI algorithms to identify accounts responsible for sending illicit content and impose restrictions on their interactions with young users.
This proactive approach comes after former Facebook product manager Frances Haugen leaked internal research in 2021 showing that Meta was aware of the harm its platforms caused to the mental well-being of young people.