Online Safety and the Death of Digital Freedom
I want to start by acknowledging the primary talking point used to justify the Online Safety Act (OSA) and the vaguely defined role of the Australian eSafety Commissioner. Exposing children (or anybody, for that matter) to immoral content or behaviour online is bad. The fact that aspects of being online cause significant harm to the mental health of young people is bad. No rational human being would support depraved and/or exploitative behaviour, online or not.
For the sake of brevity, I will be referring to this as "GROSS" content, meaning content Generally Regarded as Obviously Super Shitty. This is a catch-all for images, videos, text, systems, and other harmful content/behaviour meant to exploit, degrade, mistreat, deceive or depress. All the sorts of things that the general population would deem unacceptable in the real world.
I don't support or promote GROSS content, I don't fuck with anybody who gets off on it, I wish it wasn't a problem we had to deal with but alas, we do, that's life.
So What is the Online Safety Act?
According to the official esafety.gov.au website:
The Online Safety Act 2021 is new legislation that makes Australia’s existing laws for online safety more expansive and much stronger.
Now that's a fantastic way of saying nothing at all, and frankly the full explanation they provide is an equally fantastic amount of waffling that doesn't paint a clear picture of the plan. So, I will do my best to summarise the Act and the Commission briefly and objectively.
The Online Safety Act aims to apply broadly to:
- All Australians, including adults and children
- All online platforms, including social media, messaging apps, app stores, search engines, ISPs, and hosting services
The Online Safety Act represents a fundamental centralization of authority under the eSafety Commissioner, placing Australia among the most aggressive jurisdictions for online content regulation. While framed as protective, it introduces broad, fast-acting enforcement powers with limited checks, applied across all layers of the digital experience.
The Online Safety Act straddles the line between its stated goal of harm minimisation and its potential for censorship, overreach, and regulatory capture, particularly as its wording extends far beyond just the worst forms of GROSS content.
As objectively as possible, the OSA is a system of moderation and control blanketed across all Australians and all of the internet, with a small body of unelected folk responsible for determining what content falls under the reach of the eSafety Commissioner.
Growing Up Online
I have the privilege of being one of the earliest Gen Z'ers, which means I grew up side by side with the state of technology we know today. I was a toddler during the wild and loose years of the internet, a naive primary-school kid during the peak of Web 2.0 (the period when the internet began collecting information from users instead of just distributing it) and a tastefully-rebellious teen during the rise of social media. I experienced all the highs and lows of life online. I circumvented all the "filters" my poor tech-challenged parents tried to implement, and I saw all the GROSS content an adolescent shouldn't really see. And yet, here I am: a complete, kind and rational human with a loving community, functional in an ever-changing world and happy with my own state of affairs. If I had to put a finger on what made the difference for me, why I've personally managed to remain of relatively sane mind in the presence of this technological ghoul, I think it would be understanding.
Being close to technology in a professional context has exposed me to the thought processes behind the construction of our digital worlds. Software businesses (particularly the most influential Silicon Valley "heroes") are under significant pressure to grow and profit, to capture as wide a user base as possible and keep it on the platform, so the company can provide good returns for its vested interests and providers of capital. From a founder's point of view, it's hyper-growth or certain doom. Knowing how these companies think about their products, and having learnt the techniques they use to achieve their goals, has made me quite cognizant of how this shapes our experience online. Reaction and extremism draw the most attention, so that's what is presented to us most frequently. That's what we engage with, and we let it change how we think and act. It alters our world-view and self-image, and it leverages our biological tendencies to push us towards engagement, buying advertised products and generating more revenue for tech stakeholders.
This is all to say I think the only true solution is elevating societal understanding of technology, the nature of being online and how to separate ourselves as humans from the personas and stories on the internet. The ham-fisted approach of mass-surveillance, digital identification and centralised content moderation is so far off the mark and ultimately doomed to fail in my opinion.
The Argument Against an eSafety Commissioner
To me, the argument against the Online Safety Act and Julie Inman Grant's "eSafety Commissioner" is laughably simple. So simple, in fact, that even the blissfully tone-deaf eSafety Commissioner's own website publishes articles that contradict its own goals. Here's a snippet pulled directly from their blog post titled When ‘love’ becomes control.1
What is coercive control?
Coercive control is a pattern of behaviour that manipulates, intimidates and dominates another person – often in intimate or dating relationships.
It can include:
- controlling who someone talks to or spends time with
- monitoring their messages and social media
- isolating them from family and friends
- making them feel afraid, confused, or dependent.
Unlike physical abuse, coercive control is often invisible. It erodes a person’s independence and sense of self over time.
Why it matters
Young people are especially vulnerable – and many don’t recognise coercive control when it’s happening.
For example, only 2 in 5 Australian young people (18-24 years) understand the term ‘coercive control’.
At the same time, many believe controlling digital behaviours – such as constant texting or location tracking – are seen as signs of love and care.
This lack of knowledge and confusion makes it easier for abuse to hide in plain sight. And as technology becomes more embedded in relationships, it’s helping coercive control evolve in new and dangerous ways.
Okay, hold on... let's go back to those points on what "coercive control" is: controlling who someone talks to or spends time with, monitoring their messages and social media, making them feel afraid, confused, or dependent. If you ask me, implementing the Online Safety Act and eSafety Commissioner would require doing all of those things, to everyone, all of the time. If the eSafety Commissioner's own website already defines those behaviours as abusive, why would we then push forward with an official government organisation doing them to an entire country of people? Are you able to see how this makes no sense at all?
So when does love become control? You argue that this is for the protection of the vulnerable online, but there's an age-old idiom: the road to hell is paved with good intentions. We are right on the edge of love and control, and Julie my dear, you're enforcing control that we can all see clear as day. It's hypocrisy in its purest form. Your "love" has now manifested as control over a whole country of people, the large majority of whom are rational, law-abiding adults who deserve the free will to act online as they please without being monitored at all times.
They Taught Us Why This is Bad in High-School
Having to read Orwell's 1984 is a trauma shared by most Australians. The book was a staple of high-school English essays, and many disengaged students dragged themselves through its pages to crap out some useless essay on the meaning within. This has always had me wondering: why has it been so easy for a government to quietly roll out systems so comically reminiscent of 1984 that you'd think a copy of it was being used as the playbook in Political Science degrees?
Maybe the problem wasn’t Orwell’s warning, but the way it was delivered. We were given the text at the exact age we were most disengaged from politics, most indifferent to authority and least likely to care about abstract ideas like freedom of expression. Instead of instilling vigilance, the book became background noise. So when the real-world equivalents show up, we’re already numb to them.
What the Act and Commission Actually Achieve
They can market it as protecting children all they like; the fact is that children will always be exposed to the gritty nature of reality, and we need to be taught how to cope with it like mature humans. This facade of protection is not protection at all; it's control, dictation of thought and behaviour, and a denial of free will. The only destination this path leads to is constant surveillance, invasion of privacy and authoritarian censorship of what you see, hear, and do online.
Portraying it as "protection" is an insult to the public. By enforcing this for those underage, you have to enforce it for every adult too; otherwise you can't prove who is an adult and who isn't. By enforcing your "protective control" on every online citizen, you're essentially saying that we aren't capable of protecting ourselves online. Personally, I find that ridiculous, and I think most people with common sense would agree.
I think the only point propping up this Act and this government body is "think of the children", which is deeply misguided and historically known as an emotionally-charged argument used to justify overreach, censorship and control by governments. You don't protect children by cotton-wooling them from reality; besides, kids will find a way around anything. GROSS content exists, Julie, whether you like it or not. You're never going to squash it all, because it's a Hydra you'll be fighting forever, and you'll be fighting it at the expense of every human right. If you really want to protect the children, teach them emotional maturity, teach them compassion and empathy. A child (or adult) equipped with emotional maturity, compassion and empathy will disengage from GROSS content, because they won't fucking enjoy it! I think "the children" is purely used as an emotional shield to justify a gross attempt at authoritarian control of the online population.
Burning Our Money
When I say "Our Money" I mean taxpayer dollars. The eSafety Commission and the expansion of the OSA are funded entirely with public money from the hard-working Australians who pay their taxes. Generally we'd like to see this money spent wisely: improving our transport infrastructure, supporting the sick and elderly, feeding and clothing those in need, improving public services and setting our country up for a prosperous future. Instead of a valuable spend of our earnings, we are watching in real time as the government pisses tens of millions of dollars away on this solution, with around AU$134 million slated over four years.2
On the surface, one might think "Hey, spending public funds on improving the safety of people online sounds okay to me", and to be honest, if it were that easy, I'd probably agree. The problem is: how in the hell is anyone actually going to regulate and enforce the OSA? If you're telling me you can moderate every person on every website or digital service, from every device, anywhere in the country, all the time, without sending Australia back into the literal dark ages, my only response is "what are you smoking and where can I find some?".
I'm not sure anybody involved in the creation and proliferation of the OSA truly understands the scale of the issue or how futile this solution is. Part of me wants to see the eSafety Commissioner come into effect just to enjoy the show of it failing miserably, but I can't in good conscience stay silent as this mess unfolds, because I can think of many better ways to spend this amount of funding. Imagine deploying $134M into education and support services for digital literacy and wellbeing online. Or don't, and let me imagine a better way to spend that bag of money for you.
- Throw $40M to fund the nationwide rollout of comprehensive digital citizenship curriculum, from primary through to Year 12, co-developed by educators, psychologists, youth advocates, and technologists. Teach them concepts like:
- Critical thinking and media literacy
- Recognising manipulation, grooming, scams, and propaganda
- Navigating online identity, peer pressure, and content boundaries
- Dealing with exposure to disturbing material
- Put $25M into supporting grassroots and community organisations that already work in mental health, family violence, gendered harm, and youth counselling by funding:
- Online outreach teams
- Peer-led support forums
- Localised education sessions
- Rapid-response mental health and legal support for victims of image-based abuse and online stalking
- Offer $20M in grants to small startups, researchers, and non-profits building alternative safety tools and ethical technologies to encourage development of:
- Decentralised moderation frameworks
- Consent-aware image sharing tools
- Real-time emotional wellbeing plugins for social media
- Open-source safety infrastructure for youth platforms
- Put $15M into the hands of youth creators, artists, musicians, and designers to make authentic media about the online experience:
- YouTube series, TikToks, comic books, games
- Candid guides on GROSS content, consent, empathy, and the realities of the net
- Fund a national youth advisory board to oversee messaging
- Provide $18M to fund long-term independent research through unis and think tanks on:
- The actual exposure rate and impact of harmful content
- What interventions work across cultures and demographics
- How misinformation spreads, and how to build cultural immune systems to it
- What harms are caused by over-policing the web
- Instead of a centralised censorship body, use $15M to fund a lean, cross-functional response unit focused only on the worst-of-the-worst harms (CSAM, terrorism, etc.). Give it powers narrowly tailored to emergency takedown and survivor support, not broad content regulation. Critically, it should operate transparently and with legal oversight.
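For the record, those line items can be tallied against the cited AU$134M figure. A minimal sketch (the category labels are just shorthand for the bullets above):

```python
# Sanity-check the proposed allocations against the cited AU$134M
# eSafety budget (all figures in AU$ millions, over four years).
allocations_m = {
    "digital citizenship curriculum": 40,
    "grassroots and community organisations": 25,
    "alternative safety tooling grants": 20,
    "youth creators and authentic media": 15,
    "long-term independent research": 18,
    "narrow worst-of-the-worst response unit": 15,
}

budget_m = 134
spent_m = sum(allocations_m.values())
remaining_m = budget_m - spent_m

print(f"allocated: ${spent_m}M, remaining: ${remaining_m}M")
# allocated: $133M, remaining: $1M
```

So the six proposals above account for $133M of the $134M, with $1M to spare.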
All of that action comes to $133M, leaving $1M of the budget to fill any gaps. To me, the OSA and eSafety Commissioner are a lazy, misguided solution that sets the stage for abuse of authority in the future. Maybe our current government wouldn't misuse this power, but once the system exists, all it takes is one ideological shift in a new ruling party for it to be weaponised against citizens of the state.
Rather than throwing taxpayer money at a reactive, surveillance-based, unelected body tasked with silencing a firehose of global content, the funding could instead cultivate an educated, emotionally capable, and tech-savvy population. A population that doesn’t need censorship to stay safe online, because it understands the ecosystem it lives in.
To The Prime Minister of Australia
Mate, what are you doing with this? I don't believe you're a bad guy, nor do I think you're an idiot, so I beg you to come to your senses and see how heavily the cons of this approach outweigh the pros. Pathos (the appeal to emotion) is a powerful persuasive technique, and its use on the public to sell the OSA and eSafety Commissioner has gone way too far.
Weaponising "the children" is blinding the public to what really comes out of this: systemic surveillance, censorship and control of your citizens. You need to have our best interests in mind. Living online is relatively new to humans, and I tell you now, Anthony: you don't need to provide the fish, you need to teach the people how to fish. Teach them how to be good people online, how to disengage from GROSS content and how to use the internet for the positive utility it can provide. And it can only provide that utility if it isn't being controlled by a centralised authority. An internet experience controlled by a centralised authority can only be described as a propaganda machine, a significant setback for our country and a huge waste of taxpayer resources.
If Australia is serious about building a safer internet, it won’t come from a centralised authority, it’ll come from decentralised trust, transparency, and education.
Scrap this act, mate; don't fall for the sunk-cost fallacy. This effort is misguided, the folk responsible are far too ignorant of what "safety" online actually means, and their efforts will be a detriment to what it means to live in a free Australia. Throwing this away is a net positive, and there are far more effective approaches to tackling the problem of health and safety online.