The Yin And Yang Of Content Moderation In The Age Of Netflix

Media producers and regulators need to take a considered approach to how they portray suicide in drama and film. Campaigners can learn from other public health debates, such as that over the portrayal of cigarette smoking.

Controversy over Netflix and one of its latest hit shows was sparked by a study published last month suggesting that teen drama ‘13 Reasons Why’ may have increased suicidal ideation in viewers.

The study was published the same day as the news that Apple had been pressured into removing VPNs from its China App Store, making it even harder for residents of the world’s most populous country to access content from the likes of Netflix, Facebook and YouTube.

On the surface, these two stories seem to have only a passing relevance to one another, but they both feed into broader discussions about censorship and the extent to which content ought to be restricted. Before we get onto that, let’s consider the Netflix study.

Published in the Journal of the American Medical Association (JAMA), the study found that Google search queries involving suicide increased by 19% during the days following the release of 13 Reasons Why, which centres around a teen suicide.

Members of the media were quick to draw their own conclusions from the paper, with The Telegraph suggesting that the Netflix show “should be withdrawn” due to the possibility that it is “driving young people to consider suicide”. Meanwhile, the journal article’s authors suggested that shows ought to be evaluated before release to identify potential risks, and that troubling scenes ought to be removed retrospectively — something that is done in China.

Before rooting for all-out censorship, it’s worth considering that the study’s findings were not all bad. Search queries following exposure to the show also increased for ‘suicide hotlines’ and ‘suicide prevention’, indicating “elevated suicide awareness”. While various conclusions could be drawn from this, at the very least it suggests that the show increased interest in suicide prevention and help-seeking. This fits with the narrative of public health campaigners who advocate openness with young people about challenging issues such as sexual health, provided that balanced, good-quality information is made available. It might have been interesting, therefore, for the study to have considered which particular resources the search queries led to, and how useful they might have been. Information of this kind could be used to help funnel search queries (and distressed individuals) towards useful resources – something that Google has shown interest in facilitating since as early as 2010, and that Facebook is beginning to take more seriously.

Stepping back from the specifics of the case, the idea that there can be utility in being exposed to unsettling art and media illustrates a fundamental belief in western culture; one not typically found in Asia. In western societies, we tend to interpret the Three Wise Monkeys and the aphorism ‘see no evil, hear no evil, speak no evil’ as indicative of wilful blindness and a turning of one’s back on moral duties. By contrast, in eastern cultures such as those of China and Japan, there is a sense that unnecessary exposure to vice serves to perpetuate unhelpful emotions and ideas. The truth perhaps lies somewhere between these two positions.

While westerners may criticise Chinese censorship, most of us still believe in limits when it comes to exposure to content; for instance, we don’t necessarily want young children to be needlessly exposed to graphic violence. Nor is exposure to morally challenging material useful if it comes at us so frequently as to leave us desensitised or fatigued.

How do we balance these positions from the perspective of public policy? How do we allow art to be expressed freely without it causing needless harm to those exposed to it? This question strikes at the heart of the Netflix debate about the portrayal of suicide. The authors of the journal paper suggest that content producers ought to follow WHO media guidelines on suicide, but these guidelines refer only to news and documentary media, not to fictional content. In fact, while there is plenty of literature guiding news media on how to cover suicide, guidance for fictional content is virtually non-existent.

Some of the non-fiction guidelines remain relevant in the world of fiction. Before or after the credits, films and dramas can provide helpline numbers and appropriate ‘factfiles’ that help balance views and educate the audience (examples of positive moderation). But leaving aside debates about the ‘philosophy of art’, morally dubious ideas and characters who engage in unwise activities (and say untrue things) are often seen as an important feature of artistic, fictional content, and this is what sets it apart from non-fiction.

In developing guidance for fictional content, we might do well to look to parallel cases of other public health concerns, such as smoking. Studies over the past 10 years have linked exposure to smoking in films with increases in adolescent smoking. As a result, lobbying groups have been pushing for the film industry to award R ratings to films that feature smoking prominently. While this remains an ongoing struggle, it seems reasonable that impressionable children should be protected by ‘parental guidance’ when it comes to exposure to potentially harmful health behaviours. As for how age restrictions can be enforced on digital devices, this is something for technology companies to figure out, and figure it out they will if regulators pressure them.

There are no easy solutions for content moderation, but it should be clear that rather than panicking and sliding towards Chinese-style censorship, we ought to pursue a pragmatic middle ground based on evidence and compromise; one that champions emotionally challenging art while also providing protection, guidance and support for those who need it.
