Charalampos Kliaris

Towards a harm-based approach to online content

Updated: Dec 25, 2021

A review of the government’s Online Harms White Paper

It goes without saying that developing effective social media regulation will be a massive challenge. As more threats to users emerge, expectations for responsible technology have intensified, but the UK is still a long way from enforcing effective measures to tackle online harm. Last year the government introduced plans for a statutory duty of care for online harm reduction in the Online Harms White Paper (OHWP)[1]. This paper examines the concerns that arise from the government’s OHWP and asks whether the measures suggested can offer adequate protection for individuals affected by potentially harmful online behaviour. It does so by discussing the arguments surrounding the proposed imposition of a duty of care on social media companies.

The White Paper and the Duty of Care

Paragraph 3.1 of the OHWP states that the government will place a statutory duty of care on relevant companies to take steps to prevent reasonably foreseeable harm from occurring as a direct consequence of activity on their services. The OHWP fails to specify exactly what steps companies might be required to take, but it does recognise in paragraphs 3.16 and 8.1 that companies contribute to the development of the problem and must therefore act early in the product design process to prevent harm. It is worth noting that certain design choices made by social media giants such as Facebook are not neutral: they shape how content is posted and shared. Applying the design principle from paragraphs 3.16 and 8.1 could therefore mitigate the problem at its source. Be that as it may, the OHWP provides no information on the steps companies must take to apply this principle, so the point it is trying to make gets lost.

When doing so, the government can draw guidance from Professor Lorna Woods and former UK government senior civil servant William Perrin, whose article supports the idea that the UK should impose a duty of care on social media companies[2]. They argue that a broad, general and future-proof approach to safety is required, one that goes far beyond the traditional duty of care. The article recognises that a social media duty of care must work in many different circumstances, so trying to produce sets of rules for each one is not a realistic option. Perrin draws attention to the actual desired outcome, the prevention of harm, rather than regulating each step of how to get there. One can argue that this is a very pragmatic approach: its simplicity and generality will save time, be more cost-effective than regulating each step, keep pace with the fast development of social media, and reduce the risk of straying into impermissible legal complexity and vagueness.

For the reasons above, one may argue that the OHWP lacks basic conceptual clarity. There is no explanation of how this undefined duty of care will apply to relevant companies. For example, there is no indication that users will be able to bring a claim against a company that acted negligently and failed to satisfy the duty of care. As the OHWP stands, the duty of care is nothing but a confusing label, unsuitable in its current form, rather like trying to fit a square peg into a round hole.

The White Paper and the Regulator

The OHWP provides that Ofcom may be appointed as the independent regulator in charge of overseeing the systems and processes of the relevant companies. In a response to the consultation on the OHWP, ARTICLE 19 (an international human rights organisation) favoured this approach because Ofcom carries out risk-based investigations similar to those that could be envisaged in the area of online harm[3]. Professor Lorna Woods and William Perrin also support Ofcom acting as the regulator, since a newly established body with no track record would take a long time to earn a sufficient reputation[4]. The government may also seek guidance from France’s Private Members’ Bill, which aims to regulate online hate speech. The Bill is currently going through the National Assembly and intends to grant the Conseil Supérieur de l’Audiovisuel (an independent body similar to Ofcom) powers to examine whether relevant companies are removing enough illegal content[5].

The White Paper and the Codes of Practice

In the OHWP the government laid down codes of practice to implement the duty of care. These could prove very useful for relevant companies making design choices to satisfy their duty of care, as they will have a clear picture of what is expected of them. The challenges in this area arise from the way the codes are drafted. The government stated that there will not be a code of practice for each category of harm, since this would impose an unreasonable legal burden on relevant companies. Be that as it may, the OHWP refers to a more proactive approach regarding a variety of different forms of content, not just content relating to serious harm such as child abuse and terrorism. This may be somewhat worrying, as it could be taken to require upload filtering, general monitoring and take-down mechanisms. Such measures would clearly run against Article 10 of the European Convention on Human Rights (freedom of expression), as they imply that bundles of content will be taken down if they do not meet the company’s standards even when the content is legal. The OHWP does recognise that Article 15 of the e-Commerce Directive prohibits general monitoring, but it offers no explanation of how it intends to resolve this conflict. Brexit also poses a risk here, as Article 15 could be overlooked when EU law is converted into domestic UK law.

The OHWP strongly emphasises that children should be afforded a higher level of protection than adult users. It refers to evidence gathered by the ICO illustrating standards that aim to protect children’s online privacy. Again, sufficient guidance on how to achieve this, or how to implement standards to protect children, is not provided. Professor Woods recommends that the government set out a list of key harms to act as examples for relevant companies, together with effective explanations of how companies can act to prevent an activity before it becomes an offence[6]. This may be a solution, but caution will be required in its drafting to ensure harmonisation between its wording, the ECHR and other fundamental protections.

For the reasons above, one may argue that the government should not aim to regulate online harm through the codes of practice provided in the OHWP. It should instead aim to lay down transparent procedures for the removal of illegal content, with due process and consideration of Article 10 of the ECHR.


The OHWP is a step in the right direction and contains some useful proposals, such as the appointment of Ofcom as regulator. However, much of the paper lacks much-needed clarification and adaptation. The government should further explain the definitions of the suggested ‘duty of care’ and ‘reasonably foreseeable harm’. It should publish a more detailed proposal addressing issues not covered in the OHWP, such as jurisdiction and enforcement powers, with explanations of the steps relevant companies will be required to take to satisfy the proposed requirements. It should also draft its proposals with due process, aiming to balance the rights of all users. This will be rather difficult, as in the online world one man’s protest is another man’s riot. For this reason the government should reflect further on which buttons to push to bring about the biggest and most appropriate change. For example, should priority be given to freedom of expression, to privacy, or to tackling the spread of misinformation? And what will the consequences be of choosing one option over another?

[1] Department for Digital, Culture, Media and Sport and Home Office, Online Harms White Paper (2019)
[2] W Perrin, ‘Reducing harm in social media through a duty of care’ (Carnegie UK Trust, 8th May) accessed 13 June 2020
[3] ARTICLE 19, ‘Response to the Consultations on the White Paper on Online Harms’ (ARTICLE 19, June) accessed 19 June 2020
[4] L Woods, W Perrin and M Walsh, ‘The Online Harms White Paper: a summary response from the Carnegie UK Trust’ (Carnegie UK Trust, 18th June) accessed 18 June 2020
[5] Article 4 of PROPOSITION DE LOI visant à lutter contre la haine sur internet, version of 30 June 2019
[6] L Woods, W Perrin and M Walsh, ‘The Online Harms White Paper: a summary response from the Carnegie UK Trust’ (Carnegie UK Trust, 18th June)

