For the last little while, I’ve been pondering a legal policy problem about censorship. To spoil the punchline of this post: the puzzle is how to facilitate a discussion about banned content with the wider public without disseminating the banned content itself. I have been playing around with Twitch on my LegalReckoning profile for precisely this reason, but it turns out to be surprisingly difficult as the recent Elon drama demonstrates… in a weird way.
Quick background for those not terminally online:
Flight path data is public data. You can lawfully find details about which planes are flying where. Due to problems with government regulations, there are people called ‘billionaires’ who sometimes own their own planes. When their jets take to the air, data about their flights is available to the public and, therefore, you have a reasonable chance of knowing where the billionaire is.
This gets us to the first big interesting question: what should you be permitted to do with public information? If it’s public, is its use a ‘no holds barred’ situation? Some people–not unreasonably–think that this is perfectly fine. Once information is public, you shouldn’t be able to control its use.
The problem with this view is that we are getting better and better at linking open source information to uncover details about individuals that, maybe, they don’t want made public. We also have information about people that they probably don’t realise they’re making public. The issue is that the information is public, but it’s not convenient to use. Maybe we want to limit how people facilitate access to public information.
I don’t really have a horse in the race on that question. I see merits and problems with various positions and am not entirely sure where I would want to draw the line.
In the specific case before us, we have a few chunky bits. Elon Musk–the new CEO of Twitter–was asked a few weeks ago whether a Twitter account that published information about Elon’s jet from the public data was safe under Twitter’s rules. Musk said yes. Recently, Musk has made the claim that one of his children was attacked and that the attack was related to the flight path data. Some people (on fairly spurious grounds indistinguishable from the reasoning of conspiracy theorists) reject Musk’s account of things; the reasonable person, on the other hand, would say: ‘Yeah, that’s not outside the realm of possibility.’
Musk wanted to limit the availability of flight path data on Twitter. Zap, zap, anybody who was publishing flight path data on Twitter got suspended. As if by magic, all of the people who had spent the last year screaming their guts out about ‘free speech’ only being about the relationship between the State and the individual suddenly felt that free speech issues also applied to private companies and to what they allowed to be published on their platforms.
Once Twitter started zapping the data, people tried to find ways around the censorship: linking to other sites that had the data. Those accounts got zapped too, including accounts belonging to journalists who had linked to the data. This secondary point fuelled a few hours of rhetoric about Musk being sensitive to criticism, when it was always more plausible that Twitter zapped accounts based on the linked data rather than the identity of the account holder.
If the whole thing sounds incredibly ‘High School Drama’… yeah.
But the end of the pathway above takes us to somewhere interesting: how do you discuss banned content?
It is no surprise to anybody who’s listened to me talk about literally any topic that I am quite the fan of censorship. The data consistently shows that censorship works, and people opposed to censorship tend to craft incredibly specious reasons for rejecting the data.
Most reasonable people think that some level of censorship is reasonable. There is content so abhorrent that we as a society say: ‘Look, it’s simply a crime to publish that.’ My concern is that we preserve what we mean by ‘abhorrent’ and don’t let the scope of that term creep to things that we just find unpleasant or obnoxious. In Australia, abhorrent material regulation has not always been thought through especially well–it’s usually triggered by some event, such as the broadcast of the terrorist event in Christchurch. And the challenge is how we develop the area of law without it being a series of reactions to specific events.
If somebody says that they are fine with that kind of abhorrent material being widely available, then I don’t think we are occupying the same moral universe. The question isn’t whether or not there should be censorship; the question is always the limits of what we censor.
And this contains within it the seed of the problem: what are those limits?
In the case of the flight data, we had people who wanted to talk about whether this data should be censored or restricted. But how do you have that chat without pointing to the data? How do you have that chat without indirectly pointing people towards that data? If you want to read (at least some of) the journalists who got banned in the most charitable possible light, you could argue that this was the conversation they were trying to have: by pointing to where you could find the content, you facilitate a discussion about what is and is not acceptable speech.
Except, of course, this defeats the purpose of limiting the unacceptable speech.
In the early 2000s, Margaret Pomeranz was able to host an illegal screening of a film that the Australian Classification Board had refused classification (and, thereby, prohibited from view in Australia). Pomeranz is obviously on the other side of the censorship debate from me, but screening the banned film meant that we put into debate what the limits of censorship should be. That is, unless you have those moments of protest so that people can challenge the law either within the legal system or through public debate, you can’t check that the legal framework actually meets public expectations.
Pomeranz was able to host the screening because there was the infrastructure to do so (she screened it at Town Hall). Twenty years on, we have fewer and fewer spaces in which to have these discussions about banned content, and more of the spaces that we do have are privatised and virtualised. Do I think that Musk went out of his way to create a blacklist of people who criticised him? Absolutely not. Do I think he did the moderating equivalent of ‘Ctrl+F, delete’? Absolutely yes. When more of our discussion spaces are online, we are more subject to algorithmic censorship and, as a result, less able to challenge that censorship.
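That ‘Ctrl+F, delete’ style of moderation is easy to caricature in code. The sketch below is purely illustrative–the blocklist, the posts, and the matching rule are all my inventions, not anything Twitter actually runs–but it shows why pattern-based enforcement looks indistinguishable from targeting critics: a rule that suspends on a substring match catches a journalist who merely links to the data, while leaving a pure critic untouched, regardless of anyone’s intent.

```python
# Purely illustrative 'Ctrl+F, delete' moderation pass.
# BLOCKED_PATTERNS and the posts are hypothetical examples.

BLOCKED_PATTERNS = ["elonjet", "flight-tracker.example"]  # hypothetical blocklist


def should_suspend(post_text: str) -> bool:
    """Suspend on any substring match, ignoring who posted it or why."""
    text = post_text.lower()
    return any(pattern in text for pattern in BLOCKED_PATTERNS)


posts = [
    ("journalist", "Reporting on the bans: the data is at flight-tracker.example"),
    ("critic", "I think this moderation decision is wrong and hypocritical."),
]

for author, text in posts:
    # The journalist is caught by the link; the critic sails through.
    print(author, should_suspend(text))
```

The point of the toy is that the rule never inspects the author at all, which is why ‘they banned journalists for criticism’ and ‘they banned accounts that linked the data’ can describe the same mechanical sweep.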
Awkwardly for me, my favoured form of censorship (where the person doesn’t actually know that they’re being censored) performs even worse on this analysis: not only are you completely unable to challenge the censorship through public opinion, but it is also less likely that there will be public debate about censored content when you struggle to detect that anything has been censored at all.
There was a lot of silliness in the ‘Musk suspended journalist accounts’ debate. Musk at one point referred to the public data about his jet as ‘assassination coordinates’. Journalists shared a screenshot of a media figure openly encouraging people to evade the content block while inexplicably commenting: ‘What did he do to deserve suspension?’ Neither ‘side’ of the discussion really had much interest in presenting the other ‘side’ in a fair light, such that casual observers came to the incorrect view that Musk was banning people for criticising him rather than for sharing the blocked content. It was not public debate at its finest.
But hidden under the dreck is an important question: how do we have the discussion about where the line on censored content should be drawn? I’m currently playing around with how we have a discussion about where the line is drawn from a legal perspective using Twitch but, again, a lot of effort has to go into working out what will get my account banned. As we saw in the E-E-E-Elon and the Jet drama, it’s not enough that content is legal; it also has to comply with terms of service… which is often harder to deduce. But it’s a discussion worth having, and worth having seriously, without the emotional baggage of sentiment towards Musk.