Instagram is testing other age verification tools with AI

Instagram is testing new ways to verify the ages of people using its service, including an artificial intelligence tool that scans faces, an option to have mutual friends vouch for a user’s age, and the ability to upload an ID.

But the tools aren’t being used, at least not yet, to block kids from the popular photo and video sharing app. The current test only involves checking if a user is 18 years or older.

The use of AI to scan faces, particularly of teenagers, set off some alarm bells Thursday, given Instagram parent Meta’s checkered history when it comes to protecting user privacy. Meta emphasized that the technology used to verify people’s ages cannot recognize a person’s identity, only estimate their age. Once the age verification is complete, Meta said, it and Yoti, the AI contractor it worked with to perform the scans, will delete the video.

Meta, which owns both Facebook and Instagram, said that starting Thursday, anyone attempting to change their date of birth on Instagram from under 18 to 18 or older must verify their age using one of these methods.

Meta continues to face questions about the negative impact its products, particularly Instagram, are having on some teenagers.

Technically, kids must be at least 13 years old to use Instagram, similar to other social media platforms. But some get around this by either lying about their age or letting a parent do it. Teens ages 13 to 17 have additional restrictions on their accounts — for example, adults they’re not connected with can’t message them — until they turn 18.

Using uploaded IDs isn’t new, but the other two options are. “We give people a variety of options to verify their age and see what works best,” said Erica Finkle, Meta’s director of data stewardship and public policy.

In order to use the face-scanning option, a user must upload a video selfie. That video is then sent to Yoti, a London-based startup that uses people’s facial features to estimate their age. Finkle said Meta isn’t yet trying to use the technology to identify users under 13, because it doesn’t store data on that age group, which would be needed to properly train the AI system. But if Yoti estimates that a user is too young for Instagram, they’ll be asked to prove their age or have their account removed, she said.

“It never clearly identifies anyone,” said Julie Dawson, Yoti’s chief policy and regulatory officer. “And the image will be deleted immediately once we’ve done it.”

Yoti is one of several biometrics companies in the UK and Europe benefiting from the push for stronger age verification technology to block children from accessing pornography, dating apps and other adult internet content, not to mention alcohol and other age-restricted items in physical stores.

Yoti has worked with several major UK supermarkets on face-scanning cameras at self-checkout counters. It has also started age-verifying users of the youth-oriented French video chat app Yubo.

While Instagram is likely to keep its promise to delete an applicant’s facial images and not attempt to use them to recognize individual faces, the normalization of face scanning raises other societal concerns, said Daragh Murray, a senior lecturer at the University of Essex Law School.

“It’s problematic because there are a lot of known biases in trying to classify people based on things like age or gender,” Murray said. “You’re essentially looking at a stereotype, and people just differ so much.”

A 2019 US agency study found that facial recognition technology often performs unevenly depending on a person’s race, gender or age. The National Institute of Standards and Technology found higher error rates among the youngest and oldest people. There isn’t yet such a benchmark for facial analysis that estimates age, but Yoti’s own published analysis of its results shows a similar trend, with slightly higher error rates for women and people with darker skin tones.

Meta’s face-scanning move is a departure from what some of its tech competitors are doing. Microsoft announced Tuesday it would no longer provide its customers with facial analysis tools that “purport to infer emotional states and identity attributes such as age or gender,” citing concerns about “stereotyping, discrimination, or unfair denial of services.”

Meta itself announced last year that it was shutting down Facebook’s facial recognition system and deleting the faceprints of more than 1 billion people after years of scrutiny by courts and regulators. But it signaled at the time that it wouldn’t abandon facial analysis entirely, moving away from the broad labeling of social media photos that helped popularize the commercial use of facial recognition, toward “narrower forms of personal authentication.”
