On December 10, Australia will drop the hammer and become the first country in the world to ban social media accounts for under-16s.
All the major platforms (Facebook, Instagram, Snapchat, TikTok, YouTube, Reddit, X, and Threads) are caught in the dragnet.
And they face fines of up to $50 million if they stuff it up.
This is a first-of-its-kind law - and unsurprisingly, parents, pundits and puppets from across the political and lobbyist spectrum are flooding the zone with opinions.
What’s right, what’s fair, what’s best for kids, or what’s just cruel for young people.
But one area where it’s truly pioneering is that it shifts responsibility away from parents and lands it squarely on the platforms.
It’s not parents having sleepless nights about what to allow any more. Or kids needing to be trusted to do the right things when out of sight.
It’s about the platforms being forced to take responsibility and then paying a heavy cost if they fail.
That creates a global precedent that’s rattled Big Tech and set them against each other.
It’s also very scary for wealthy investors at the top end of town who’ve become used to double-digit growth built on privacy-intruding ads and addictive algorithms.
Facing up to those two titans would make many regulators cower, but not Australia’s eSafety Commissioner Julie Inman Grant. She’s cut from a different cloth.
She’s already sparred with Musk. She’s worked inside Big Tech, so she knows the Playbook, and she once considered a career in the CIA.
Now she’s blending her life’s experience and her insider tech nous to call BS on what can and can’t be done.
But that’s come at a price.
In this podcast, she reveals Big Tech tried to smear her reputation, dug into her past, doxxed her personal details, and in the most disturbing case, targeted her kids.
But she has stood strong. She commissioned one of the largest investigations into how well tech can decipher a child’s age.
Her conclusions?
The tech exists and works - at least well enough to get started.
Kids must be protected - even if that means we learn along the way, and
It’s time tech giants - who make trillions from extracting data - are forced to do the heavy lifting and pay the consequences.
Julie Inman Grant joins me today to unpack how the biggest experiment in privacy and online safety in a generation is going to work in practice.
And why the techies are so rattled...
Ricky: Thank you for joining me. I noticed that Meta just began sending out notes to Australian teens warning them these changes are on the way. Then Snap followed up.
So, all this is happening right against the backdrop of the US losing its antitrust case to break up Meta and limit its control just yesterday.
But right now, Australia - and you personally - are the epicentre. Tell us what’s happening here.
Inman Grant: The eSafety Commissioner was set up in 2015 as the world’s first online safety regulator. It started small with a cyberbullying scheme.
We are also the hotline for Australia for reports of child sexual abuse material, and over time our scope has expanded quite a bit.
We hear from the public about abuse, and we serve as a safety net when platforms fail to act or when a child is being cyberbullied.
It might be when intimate images or deepfakes are being shared, and we have a 98 per cent success rate in getting that material taken down.
We’re also dealing with a lot more graphic terrorism and violent content now, like the Charlie Kirk assassination, a beheading in Dallas that was all over Twitter and Meta...
We deal with that in real time. We’ve established powers around codes and standards, we’re notifying sites that pair children with paedophiles, and more and more AI stuff.
By March, no AI companion or chatbot will be able to serve outputs or content that deals with paedophilia, violence, sexually explicit material, suicidal ideation, self-harm or disordered eating.
We do all of that. That’s our protection and prevention division. We have a huge team dealing with research and evidence and what we call co-design.
We also talk to children about what they want, what they need to know, and we listen to them tell us the language they want that will resonate with them.
And then we have another, unique division that’s focused on proactive and systemic change. This is where our safety by design initiative fits in.
We put the burden on the platforms themselves to assess the harm and build the safety protections in at the front end, rather than retrofitting when there are problems.
I think most people would understand that. It’s akin to requiring cars to embed seat belts. I see this as the analogy to that. Why should technology be excepted?
When we export cars, we expect them to meet Australian safety standards. Why isn’t it the same with technology? It’s time for Big Tech to have its seatbelt moment.
We also need to be an anticipatory regulator, so we are looking for what’s coming down the track 18 months to two years from now.
In 2020, we released a report on trends in deepfakes. No-one in the mainstream media picked it up, but we must do this work because we need to think about how these harms might manifest.
We need to always ensure our next legislation covers the synthetic generation of child sexual abuse material, for example, and deepfake-based image abuse.
We have the tools today to deal with that. Add it all up, and we’re embedding safety in different ways through prevention, protection and proactive change.
Ricky: That’s a lot. Wow. So, tell me about the social media changes you are making.
Inman Grant: The Social Media Minimum Age Bill is very specific. It’s an age restriction bill for social media.
It puts the onus on platforms to prevent any Australian under 16 from having or holding a social media account.
There are some exemptions, so it’s not an absolute ban. We’re referring to it not as a ban, but as a social media delay.
There are reasons for that.
The idea is to keep children away from the harmful and deceptive design features that are baked into these social media platforms.
To ensure they are safe from social media as their prefrontal cortex is developing. This is when they’re developing critical reasoning, digital resilience, impulse control...
These platforms are designed to keep them scrolling, to keep them entrenched and entranced.
These are powerful forces that children can’t fight alone, but we don’t want to cut off their digital lifelines either...
Parliament decided to create exemptions for some messaging platforms, and for some online gaming platforms.
Also for anything dealing with education, or mental health support.
But back to the Social Media Minimum Age Bill. It comes into force on December 10.
Ricky: I’ve followed a lot of these initiatives around the world for years. Whenever they come up, the area is flooded with noisy opinions.
Some are good, others bad, mischievous and downright malign. This is a significant effort by Australia to effect meaningful change. How are you cutting through the noise?
Inman Grant: When I walked through the door nine years ago, we were 30 people.
Now we’re about 250. We’re still small compared to, say, the European Commission or the UK, but we are punching above our weight.
And it is really important to cut through the noise to create the right set of expectations.
I’m optimistic we are creating significant normative change.
I mentioned earlier that we refer to this as a social media delay because that reflects how as parents we talk to our kids.
We don’t say you’re never going to have a phone, or you’re never going to be on social media. We say that we understand your friends are on it and you can feel excluded, but you’re not ready yet.
This gives parents that agency back. Suddenly they are not the only ones telling their children they aren’t meant to be on the platforms until they’re 16.
It takes away that element of FOMO (fear of missing out) which I think a lot of parents are genuinely worried about.
We did research last year and found that 84 per cent of eight to 12-year-olds in Australia already had social media accounts.
And in 80 per cent of cases, when we asked whether their parents knew, they said yes.
When we asked if their parents helped them set it up, most said yes too.
I guess we’re all conflicted as parents and - you and I are both in that spot - where we do worry about overuse of technology.
We worry about the harms that our children might experience, but ultimately, we don’t want them to be left out. This takes that away.
And we must acknowledge that the biggest transition will be for young people aged 13 to 15 using social media who will have it effectively taken away.
But they’ll still have messaging and gaming that they can turn to connect and have interpersonal interaction.
They are Generation Alpha; they will experience the most impact, and the benefits will be long term.
“The babies of today will maybe grow up in a more digital-free future without the expectation that they’re all going to be on social media.”
What we’re also doing is looking comprehensively at the impacts and gathering an evidence base. There basically hasn’t been one up to now.
We’re collecting baseline data now with academics in Australia and around the world, including The Stanford Social Media Lab as our lead partner.
They are specialists in mental health and wellbeing and their intersection with technology.
We will be assessing whether kids get more sleep. Is there less stress and tension in the household about device use? Do school test scores improve? Are they getting out more, reading more, playing more sport, interacting more?
Some of the schools found when they introduced school phone bans that they needed more equipment because kids weren’t texting across the playground. They were playing.
Ricky: Australia’s proving itself a pioneer in digital change. It had the first News Media Bargaining Code that forced tech to pay for news.
But for many, there’s a fear about going first. I don’t really buy that. Someone has to or nothing changes.
But I think I am hearing you accept that everything won’t be perfect on day one, but we have to do something to learn and improve as we go. Is that a fair summary?
Inman Grant: Yes. We even pioneered setting up an online safety regulator. For the first seven years, I wrote the playbook because there were no guideposts.
I often liken it to riding a bike up Mont Ventoux (an infamous 1,912 metre brutal and iconic climb in the Tour de France) and getting the gravel in my face.
We were the pointy end of the spear, and other companies and countries watched us closely.
When I saw there was going to be a critical mass of other online safety regulators, we formed a group in 2022 called the Global Online Safety Regulators Network.
My colleagues just met in Europe last week. Today, we’re building a network of regulators.
And we need to, right, because the internet doesn’t stop at the border. It’s a global entity. Almost all of my regulatory targets are based overseas.
You can’t be parochial and sit in an ivory tower not engaging, not co-operating, not collaborating, not learning from one another.
Ricky: Helping a child in your neighbourhood is good, but helping children in the world feels like a mission.
But as I said earlier, I’ve tracked so many of these, and what’s emerged is a Big Tech Playbook designed to deflect, disarm and deter.
I’m sure you see it too. It has adeptly sidestepped responsibility for years by arguing it’s not its job. It’s just the platform, and it’s the parents who are responsible.
At the football field every weekend, I hear conversations among fellow parents who feel guilty about their children’s usage.
I also see the platforms deflect. Why go after us, they ask? All platforms are to blame. You need to change everyone.
The goal is to frighten off regulators because the scale of the problem seems so large.
When those tactics don’t work, they go to the next chapter and point fingers at app stores, arguing they are best placed to introduce age gates.
That means tackling Google and Apple - the biggest and most powerful targets - and the hardest to overcome. It’s a classic Playbook.
Meta did it today. I saw Mia Garlick, Meta’s regional policy director quoted in Australia’s Mi3 saying:
“Parents remain important partners in promoting the appropriate use of technology within their households.”
Meta has long pushed for a different model: Mandatory parental approval at the app-store level, which is already in place in several US states.
The company argues this offers stronger privacy protections and avoids forcing teens to repeatedly verify their age across multiple platforms.
There it is again. Not us guv’nor, look over there, but it seems you’re not allowing that this time. Do you take their protests as a signal you’re doing the right thing?
Inman Grant: Yeah. Absolutely. I was in the tech industry as a lobbyist in 1995 working for Microsoft, so I helped write that playbook.
When YouTube put out a study recently showing how much its creators contribute to the economy... there it was again.
During the Microsoft antitrust trial (25 years ago), I worked with economists who created well-reasoned narratives about why we weren’t a monopolist.
These tactics have all been done, but what’s interesting about the Online Safety Act - and I credit (former Communications Minister) Paul Fletcher and (former Prime Minister) Malcolm Turnbull who were the architects - is that both had worked in technology.
They made sure it was written into the legislation that the e-Safety Commissioner had to be someone with experience in online safety but also significant standing inside tech.
They didn’t want a bureaucrat who didn’t understand how these companies think and failed to recognise the strategies they would bring.
It also means that I understand what their limitations are, so when I’m asking them for things, or developing regulatory guidance, I know intuitively how they’ll react.
When we did extensive testing of a range of technologies (for age recognition) we found many had a high technical readiness level and were privacy preserving.
Even with that evidence, I didn’t tell them they needed to use those technologies. The companies (Meta etc) would have gone nuts, and I knew I shouldn’t assume anything.
They all have proprietary technologies they’re using today for age inference. They already know a lot about their underage users; they can target ads with precision.
Some of them will use third-party tools, but it’s not my job to pick winners and losers in the age assurance area.
What we did was give them a bunch of evidence so they couldn’t say there were no good solutions. We proved that there were upfront.
What I have asked them is to take a waterfall approach. That means they can use their proprietary technologies, which are probably telling them a lot.
And they can supplement them with a digital or government ID - though IDs can’t be the only option - or they can use a third party that’s been tested.
But it’s up to the tech companies. What I want to see is that it’s systemically working and effective.
And on December 10, I want to see that there’s been meaningful effort made to deactivate and remove under-16s’ accounts.
And I know they are going to miss some. The problem won’t magically disappear overnight, so we have a plan for that too.
Each company will need to have a very discoverable, responsive user reporting scheme.
There are lots of parents out there who are going to want to dob their kids in. I’ve also said that if the tech companies over-block, they need to ensure they have an appeals process so people have recourse.
The third thing I’ve told them is that we know VPNs will be an issue (to obscure locations).
We fully expect that teenagers are going to be creative and use generative AI or wear masks or use highly realistic gaming graphics to spoof the systems.
“But the burden is on them - the tech companies - to prevent circumvention and to report back to us. And we will want to see constant improvement.”
Inman Grant: What you saw today Ricky was Meta doing exactly what we asked it and the other companies to do.
It was communicating in a compassionate and kind way to the under 16 users it had identified that this is happening.
We all know this is going to be a massive transition for some kids.
But tech keeps sort of doing the apology rounds. They say: You’ve chosen such a novel age - between 13 and 15 - it would have been easier with 18, or 13…
After that, I wrote to Meta’s chief safety officer and told her I had confidence that the age inference technologies they’d been using for years were highly effective.
And that they had already been using Yoti (a UK age identification tech firm the Australian government has been working with).
I was clear with her that I didn’t expect perfection, but I did expect it to improve its classifiers and AI tools over time. This is not set and forget.
I don’t only want them to be deactivating, I want them to put in systems, processes and technologies to prevent future under-16s from setting up accounts.
Ricky: It’s ironic. Meta built its reputation on moving fast and breaking things. You’re asking them to move fast and break things - but fix them as they go.
They should be good at that, no?
I also read your technical documentation last night and was very interested in your waterfall concept.
The idea is that certain technologies use facial recognition or similar to identify the likelihood of a child being in a certain age bracket, but it’s not perfect.
If there’s a doubt, the test can waterfall to the next test, and then another and another until it meets the bar for being sure enough.
It can begin with inference, but it might be a fake picture, so the test waterfalls to needing identifying documents and so on.
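The waterfall Ricky describes can be sketched in code. This is a hypothetical illustration only - the check names, ordering and confidence threshold are the author’s assumptions, not eSafety’s specification or any platform’s actual implementation:

```python
# Hypothetical sketch of the "waterfall" age-assurance approach described above.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AgeCheckResult:
    likely_under_16: bool
    confidence: float  # 0.0 to 1.0

def waterfall_age_check(
    checks: list[Callable[[], Optional[AgeCheckResult]]],
    confidence_bar: float = 0.9,
) -> str:
    """Run cheaper, less intrusive checks first; fall through to the
    next check only when the current one can't meet the confidence bar."""
    for check in checks:
        result = check()
        if result is None:  # this check is unavailable for this user
            continue
        if result.confidence >= confidence_bar:
            return "under_16" if result.likely_under_16 else "over_16"
    # No check was confident enough: escalate, e.g. to an ID document check
    return "escalate_to_id_verification"

# Illustrative ordering: proprietary behavioural inference first,
# then facial age estimation; documents only if neither is confident.
checks = [
    lambda: AgeCheckResult(likely_under_16=False, confidence=0.6),   # inference
    lambda: AgeCheckResult(likely_under_16=False, confidence=0.95),  # face estimate
]
print(waterfall_age_check(checks))  # → "over_16"
```

The design point is that the most privacy-preserving signal is tried first, and a user only reaches the most intrusive step (documents) when everything upstream is uncertain.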
Then there’s an appeals process, and those need to be run by the tech platforms, and you say they must reply, and be nice, and respond in a timely manner.
I liked that you add to an inference by using the fact that this child is in a junior school every day between 8.45 and 3.30, so there’s a signal there.
But ultimately the onus is on the tech company, which has some of the most sophisticated demographic, psychographic and biographic data in history.
And that comes with teeth. Tell me about the penalties?
Inman Grant: After the law was passed, I started having conversations with the senior leaders and the companies that would be captured.
As the regulator, I must work within the definitions and the legislation I’ve been given.
But one of the challenges was that I wasn’t given the powers to declare who was in and out.
And we were pretty certain that very few companies were going to self-assess and put themselves in the bracket.
So, we created a self-assessment tool for them and then did our own legal assessment.
That’s how we arrived at the companies we thought were captured by the definition in the legislation.
And I’ve been at pains to say that this isn’t a safety bill. It isn’t. It’s an age restriction bill.
We assessed Roblox as being primarily an online gaming platform. That didn’t mean it was safer, but it was not what the legislation was designed to do.
That’s why we’ve used other tools in the toolkit. We used our codes and standards to start negotiations with them.
And based on the threat of $49.5 million fines, Roblox has agreed to now protect every child’s account at the highest level of privacy default.
It will no longer allow adults to contact children without explicit parental consent, and they are going a step beyond and doing age assurance, which they should be doing.
I guess that’s my long way of saying that you don’t use a wrench when you need a screwdriver. We’ve got a toolkit.
What we saw through this assessment process is what I would describe as a lot of shapeshifting since we began in 2021.
YouTube and Snap, as well as Instagram and Facebook, initially identified as social media sites.
When this all came along, Snap was suddenly a camera app, and then a peer messaging app.
YouTube was a video sharing platform, but more like a streaming platform. Pinterest was a visual search engine.
So we went through a painstaking process to understand what their sole and significant purpose was and whether it was to encourage online social interaction.
That’s the language in the legislation that we had to work with, and we have arrived at a place where some have said they don’t agree, but they will comply.
I expect that some of the companies we’ve named won’t comply, and there are still a couple of weeks left. We could see a legal challenge.
Some could do nothing and wait until I take enforcement action and then try to challenge the law and eSafety on that basis.
Ricky: I saw Snapchat was in The Guardian today arguing it should not be covered by the ban - because its primary role is messaging, which is exempt.
A spokesman was quoted saying: “Snapchat is and has always been a visual messaging app, primarily used for connection with your closest friends and family.”
And YouTube wasn’t in initially, and then you added it later. Tell me why.
Inman Grant: There was a lot of back and forth during deliberation of the bill and YouTube was in, then it wasn’t in.
Then when the final bill was tabled in Parliament, YouTube was given an exemption.
I thought TikTok said it best, and it was the first time I had seen Snap, TikTok and Meta actually aligned on something.
They said that banning social media and not including YouTube was like banning soft drinks but allowing Coca-Cola to still be drunk.
I sort of looked at the fairness element. Part of the legislation said that I had to put forward independent safety advice, which I did at the end of June.
One of the first things I said was that no company should be given a specific named exemption.
The primary reason is that a platform could be relatively safe today, but if it eviscerates all its trust and safety personnel or rolls back content moderation - or becomes like the new Sora app, creating hyper-realistic AI videos without guardrails - it could be a very unsafe platform tomorrow.
So we did research, and found 75 per cent of kids were experiencing harm on social media, and the highest proportion - 40 per cent - experienced it on YouTube.
And it does have opaque algorithms, and it is largely user-generated content, and it isn’t classified or moderated before publication. It has autoplay and endless scroll.
It’s the same with Snap: ephemeral media, Snap Maps, Snap Streaks. This is all gamification - a way to keep kids snapping.
These things, in their entirety, give them the characteristics that parliament defined as social media.
Ricky: Commissioner, I know you need to get back to your day job because I sense you have a lot on.
But I want to leave you with this insight from Australia’s former competition regulator Rod Sims.
He told me that tech companies are unafraid of enforcers, because enforcers only apply the laws we already have. What frightens them are regulators who push for new laws.
I heard a little bit of that from you in your last statement. Do you think that’s fair, and right?
Inman Grant: I’ve seen some of the platforms behave very badly, even hiring people to try to undermine my credibility and background me to conservative news media.
I don’t know if it was payback or a way to try to weaken my position as a regulator, but they also tried to go after my kids.
I went to one of the editors involved and they agreed that it was grubby, so they (the platforms) are really concerned about this legislation.
They see it as the first domino. And they are powerful. They have a lot of money and armies of lawyers.
They also have access to the greatest minds and very advanced technology so it’s disappointing to me that they would choose that path rather than doing what’s in the realm of the possible.
So yeah, it’s been pretty ugly. You need to have a fair amount of resilience to be a tech regulator. I’ll tell you that much.
Ricky: Well, they don’t have you. And they don’t have the most powerful and motivated army in the world which to my mind is all the parents. Thank you.