As Australia bans social media for kids under 16, age-assurance tech is in the spotlight



Age assurance, an umbrella term that refers to technologies for verifying, estimating, or inferring an internet user’s age, is being thrust into the global spotlight thanks to a blanket ban on social media use for people under 16 in Australia.

The law, which is expected to come into force in Australia in November 2025, will require social media platforms to take “reasonable steps” to verify users’ ages and prevent minors from accessing their services.

The legislation was passed before key details were defined — such as the definition of “reasonable steps.”

Australia will try out age-assurance technologies next year to help regulators (its eSafety Commissioner is the relevant body) set some of the key parameters. This trial is likely to be closely watched elsewhere, too, given widespread concerns about the impact of social media on kids’ well-being.

Similar countrywide bans could follow elsewhere, which would also require platforms to adopt age-assurance technologies, setting the sector up for growth.

Companies offering services in this area include the likes of U.S. identity giant Entrust (which acquired U.K. digital ID startup Onfido earlier this year); German startup veteran IDnow; U.S. firm Jumio, which started out as an online payments company before pivoting to digital identity services; Estonia-based Veriff; and Yoti, a 10-year-old U.K. player, to name a few.

Yoti confirmed to TechCrunch that it will be taking part in the Australian trial, saying it will seek to have its facial age estimation tech, Digital ID app, ID document verification, and liveness detection tested.

The term “liveness” refers to digital ID verification technology used to detect whether the person pictured on an ID document, for example, is the same person sitting in front of the computer trying to access a service. It typically relies on AI-based analysis of a video feed of the user (looking at things like how light plays on their face as they move).

The three types of age assurance

The Australian trial is being overseen by a U.K. not-for-profit, the Age Check Certification Scheme (ACCS), which does compliance testing and certification for providers of age-assurance technology.

“We are an independent, third-party conformity assessment body that tests that ID and age check systems work,” explains ACCS’ CEO and founder, Tony Allen. “We do ID verification, age verification, age estimation, testing and analysis of vendor systems all over the world. So this project was very much up our street.”

While the Australian trial is grabbing headlines at the moment, he says the ACCS is doing age-assurance testing projects “all over the world” — including in the U.S., Europe, and the U.K. — predicting the technology is “definitely coming” to much more of the internet soon.

Per Allen, age assurance breaks down into three different areas: age verification, age estimation, and age inference.

Age verification confirms the user’s exact date of birth, for example by matching the person to a government-issued ID or obtaining the information from their bank or health records.

Age estimation provides an estimate or range, while inference relies on other confirmed information — like a person holding a bank account, credit card, mortgage, or even a pilot’s license — to demonstrate that they are older than a certain age. (A minor certainly isn’t going to have a mortgage, for example.)
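To make the distinction concrete, here is a minimal, hypothetical sketch of how a platform might act on whichever of the three signal types it has available. It is written in Python purely for illustration; the names (AgeSignal, meets_minimum_age) are invented and not drawn from any vendor’s actual API.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AgeSignal:
    # Exactly one field is populated, depending on which method was used.
    verified_dob: Optional[date] = None      # age verification: exact date of birth
    estimated_age: Optional[float] = None    # age estimation: e.g. from a facial scan
    inferred_adult: Optional[bool] = None    # age inference: e.g. user holds a mortgage

def meets_minimum_age(signal: AgeSignal, minimum: int = 16) -> bool:
    """Return True only if the available signal shows the user is old enough."""
    if signal.verified_dob is not None:
        today = date.today()
        age = today.year - signal.verified_dob.year - (
            (today.month, today.day) < (signal.verified_dob.month, signal.verified_dob.day)
        )
        return age >= minimum
    if signal.estimated_age is not None:
        return signal.estimated_age >= minimum
    if signal.inferred_adult is not None:
        return signal.inferred_adult  # e.g. a mortgage holder is assumed to be an adult
    return False  # no usable signal: fail closed
```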

At its most basic, an age gate that asks users to self-declare their date of birth (i.e., “self-declaration”) technically falls under age assurance. However, such an unsophisticated measure is unlikely to suffice for the Australian law as it’s exceptionally easy for children to circumvent such mechanisms.

More robust measures, increasingly targeted based on signals like behavioral triggers, could end up being required for compliance both in Australia and in other places where kids go online. The U.K. regulator Ofcom, for example, is pushing platforms for better age checks as it works to implement the Online Safety Act, while the European Commission is using the bloc’s Digital Services Act to lean on major porn sites to adopt age-verification measures that better protect minors.

The precise methods in Australia are yet to be determined, with social media giant Meta continuing to lobby for checks to be baked into mobile app stores in a bid to avoid having to implement the tech on its own platforms. Allen expects a mix of approaches.

“I would expect to see age verification, age estimation, and age inference. I think we’ll see a mix of all of those,” he says.

Privacy in demand

Allen explains that privacy has become a selling point for newer forms of age assurance.

“Age verification has been around for years and years and years,” he suggests. “Online it’s been around since gambling went online in the 1990s. So the process is nothing new — what’s new in the last few years has been working out how to do it in a privacy-preserving way. So instead of taking a regular picture of your passport and attaching it to an email and sending it off into the ether and hoping for the best, the tech now is much more designed around privacy and around security.”

Allen downplays privacy concerns over data being shared inappropriately, saying that “generally” speaking, third-party age-assurance providers will only return a yes/no response to an age-check request (e.g., “Is this person over 16?”), minimizing the data passed back to the platform and shrinking privacy risks.
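In practice, that data-minimizing pattern might look something like the hypothetical exchange below, sketched in Python. The endpoint, token, and field names are assumptions for illustration, not any provider’s real API.

```python
import requests  # assumption: the platform queries the provider over HTTPS

def is_over_16(provider_url: str, session_token: str) -> bool:
    """Ask a third-party age-assurance provider a yes/no question.

    The platform never receives the user's date of birth, ID document,
    or face scan -- only a boolean answer to "is this person over 16?".
    """
    response = requests.post(
        f"{provider_url}/age-check",          # hypothetical endpoint
        json={"session": session_token, "threshold": 16},
        timeout=10,
    )
    response.raise_for_status()
    return bool(response.json().get("over_threshold", False))
```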

Allen argues that wider concerns over age assurance as a vector to enable mass surveillance of web users are misplaced.

“That’s people who just don’t understand how this technology works,” he claims. “It doesn’t create anything that you can carry out surveillance on. None of the systems that we test have that central database concept or tracking concept, and the international standard specifically prohibits that happening. So there’s a lot of myths out there about what this tech does and doesn’t do.”

Growing industry

Yoti declined to “second-guess” the trial results ahead of time, or the “methods or what thresholds” that Australian lawmakers may deem “proportionate” to set in this context. But the industry will be watching closely to see how much margin for error is allowed with techniques like facial age estimation, where the user is asked to show their face to a camera.

Low-friction checks like this are likely to be attractive for social media firms — indeed, some platforms (like Instagram) have already tested selfie-based age checks. It’s a lot easier to convince camera-loving teens to take a selfie than it is to make them find and upload a digital ID, for example. But it’s not clear if lawmakers will allow them.

“We do not know yet if the regulator will set no buffer, or a 1-, 2- or 3-year buffer for facial age estimation,” Yoti told us, making the case for more wiggle room around the margin of error for facial age checks. “They may consider that if there are fewer government-issued document alternatives for 16-year-olds, with high security levels no buffer is proportionate.”
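As a rough illustration of what a “buffer” means here: a facial age gate compares the estimated age against the legal threshold plus whatever buffer the regulator sets, and users who fall inside the buffer would typically be routed to a stricter check rather than rejected outright. The sketch below is a hypothetical Python illustration of that logic, not Yoti’s or the regulator’s actual rules.

```python
def passes_facial_age_gate(estimated_age: float,
                           legal_minimum: int = 16,
                           buffer_years: float = 0.0) -> bool:
    """Gate on an estimated age plus a regulator-set buffer.

    With a 2-year buffer, a user must be estimated at 18+ to pass a 16+ check;
    anyone estimated below that would be escalated to a stricter method
    (e.g. an ID document check) instead of being waved through.
    """
    return estimated_age >= legal_minimum + buffer_years
```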

With increasing attention from lawmakers, Allen expects more age assurance technologies and companies will pop up in the coming years.

“There’s an open call for participation [in the Australian age assurance trial] so … I think there’ll be all sorts coming out,” he suggests. “We see new ideas. There’s one around at the moment about whether you can do age assurance from your pulse … Which is interesting. So we’ll see whether that develops. There’s others around, as well. Hand movement and the geometry of your fingers is another one that we’ve been seeing recently.”



