As a bit of police theatre, it was outstanding.
Police responded to a complaint about a home in Wanaka flying an allegedly racist red flag with an ominous black insignia inscribed within a white circle.
After investigating, police noted that flying the flag of the fictional Klingon Empire from Star Trek might attract unwanted attention from the United Federation of Planets, but the flag itself was not racist.
The New Zealand Herald’s reporter enjoyed some banter with the police about their interactions with Star Trek’s Federation and whether they might replace tasers with phasers anytime soon.
But nobody seemed to ask why the police were involved in the first place. Flags, however evil and racist the regimes they represent, are legal. So is the flag of the Black Power gang.
It was also a timely warning about overreach when it comes to speech and content regulation. Police, under pressure to do something about plainly legal activity that causes offence, will respond. If it turns out simply to be the Klingon flag, a good time can be had by all. But good times are far from guaranteed.
Sometime in the coming weeks, the Department of Internal Affairs will release its proposed new framework for online content regulation.
It will be worth reading carefully as it will affect what you are allowed to read, watch, hear, say, and write.
The Cabinet Paper initiating the Content Regulatory Review noted potential inconsistencies in approach across the existing pieces of legislation regulating content, and difficulties in keeping up with a changing environment.
The Minister suggested the review focus on harm-minimisation while attempting to bring greater consistency in approach across different ways of delivering content.
There are always opportunities for improvement.
The current model is a bit risky. A poor appointment as Chief Censor or as President of the Film and Literature Board of Review can cause harm. In 2015, the Board's then-President, Don Mathieson, issued an interim ban on the young adult novel Into the River.
Stronger governance around these kinds of decisions would be welcome.
Without a stronger statement of problem definition, we will have to wait for the draft framework to see where DIA is taking things.
Since the review was initiated, the main technology platforms have worked with Netsafe to establish a digital code of conduct around potentially harmful material. The opt-in Code requires platforms to have a content policy, and it provides a complaints mechanism if a platform fails to follow its own policy.
The Code drew criticism from InternetNZ and others for inadequate community engagement. How the draft framework considers and incorporates voluntary measures like the opt-in Code will be interesting. But we should expect continued pressure from those who see greater harm in banning too little content than in banning too much.
At the same time, the government and media outlets like Stuff have become increasingly concerned about misinformation. There are a lot of crazy theories online. And it can be hard to pull people out of deep rabbit holes.
But regulation targeting misinformation is horribly fraught.
To take a simple and self-interested example, the Ministry of Health and the Director-General of Health maintained, through the first half of 2021, that saliva-based PCR testing was unreliable. I wrote columns contradicting the Podium of Truth. And I was right, because I had been listening to the experts.
Similarly, Newsroom’s Marc Daalder regularly pointed out cases where Ministry of Health guidance amounted to “dangerous misinformation”.
Would such reporting become riskier under a government-appointed arbiter of Truth?
So there will be a lot to watch for in the coming framework.
How will it draw the line between content that is offensive but legal and content that is objectionable and forbidden?
Does it require proactive identification and removal of content that could be illegal, or measures for responding to complaints about content, or both? In either case, would the cost of those measures lock in an advantage for larger platforms better able to comply with requirements?
And how might any of it guard against police overreach?
Last week, UK police arrested anti-monarchy protestors for holding signs urging the abolition of the monarchy: ridiculous overreach, suppressing freedom of speech on public order grounds.
And it was hardly a one-off. UK police have regularly engaged in censorious overreach. Even Graham Linehan, creator of Father Ted, got a visit at home on a Sunday morning because someone had not liked one of his tweets.
In New Zealand, the legality of flying offensive flags is not disputed. But you might still get a visit from the police. We should be careful in how any proposed framework for online content regulation draws lines.
Even if the framework sets a high bar before misinformation can be forbidden, the effect could be more broadly chilling.