Types of content the OSA is regulating

The types of content that the Online Safety Act and Ofcom are regulating

There are two types of content to consider, each with its own assessment to undertake and its own set of steps to perform:

  • Illegal content and illegal harms,
  • Harmful content (content harmful to children, which is distinct from illegal content)

Illegal content and illegal harms

In this section we specifically cover illegal content and illegal harms, which your safety duties require you to protect users from.

With regard to the gaming industry, the types of illegal content most relevant and most likely to occur are:

  • Offences against children: child sexual exploitation and abuse (CSEA), offences relating to child sexual abuse material (CSAM), and grooming.
  • Threats (including hate),
  • Abuse and insults (including hate),

For further reading and full context, please see the following link: Ofcom: Protecting people from illegal harms online, Illegal content judgements guidance (ICJG).


Child sexual exploitation and abuse (CSEA): Offences relating to child sexual abuse material (CSAM).

5.20 Content which ‘incites’ a child under 16 to engage in sexual activity is illegal content, even where the incitement takes place but the sexual activity does not. This means that where providers do not have information about whether or not a child has been caused to participate or engage in sexual activity offline, the content is illegal if it incites (i.e. encourages or assists) them to do so.

5.22 If pornographic content is sent to a child under 16 years, it will be reasonable for service providers to infer that the child has been caused to watch it.


Inferring the potential victim’s age as under 16

5.23 For content to amount to these offences, the communication must involve a child under the age of 16. In order to protect children from online harms, the Act requires providers of services that are likely to be accessed by children to use age estimation or age verification measures.

5.24 Reasonable grounds to infer that a potential victim is a child should be presumed to exist where:

b) Information from age estimation or age verification measures (‘age assurance measures’) indicates that the potential victim in the image is aged under 16.
c) The potential victim of grooming states in a report or complaint that they are aged under 16 or were aged under 16 at the time when the potentially illegal content was posted.
d) Account information indicates that the potential victim is aged under 16, except where the subject concerned has been using the service for more than 16 years.
e) A person other than the potential victim states in a report or complaint that the potential victim is aged under 16 or was aged under 16 at the time when the potentially illegal content was posted. This applies unless:

   i. Information from age estimation or age verification measures (‘age assurance measures’) indicates that the potential victim is aged 16 or over; or
   ii. The potential victim stated in a report or complaint that they were aged 16 or over at the time the potentially illegal content was posted.
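For teams building moderation tooling, the presumption above can be read as a precedence check over the age signals a service already holds. The following is a minimal sketch of that reading in Python; AgeSignals, presume_under_16, and every field name are our own illustrative assumptions rather than terms from the Act or the ICJG, and the function is not a substitute for the full guidance.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AgeSignals:
    """Hypothetical bundle of age-related signals a provider might hold
    about a potential victim (all field names are illustrative only)."""
    age_assurance_estimate: Optional[int] = None   # from age estimation/verification
    victim_reported_under_16: bool = False         # victim's own report or complaint
    victim_reported_16_or_over: bool = False       # victim stated 16+ at the time
    account_info_under_16: bool = False            # e.g. declared date of birth
    account_age_years: float = 0.0                 # how long the account has existed
    third_party_reported_under_16: bool = False    # someone else's report or complaint

def presume_under_16(s: AgeSignals) -> bool:
    """Sketch of the presumption in ICJG 5.24(b)-(e); not legal advice."""
    # (b) age assurance indicates the potential victim is under 16
    if s.age_assurance_estimate is not None and s.age_assurance_estimate < 16:
        return True
    # (c) the potential victim themselves reports being under 16
    if s.victim_reported_under_16:
        return True
    # (d) account information indicates under 16, unless the account has
    #     been in use for more than 16 years (which would contradict it)
    if s.account_info_under_16 and s.account_age_years <= 16:
        return True
    # (e) a third party reports the victim as under 16, unless age assurance
    #     indicates 16 or over (i) or the victim stated they were 16+ (ii)
    if s.third_party_reported_under_16:
        assurance_says_16_plus = (
            s.age_assurance_estimate is not None and s.age_assurance_estimate >= 16
        )
        if not assurance_says_16_plus and not s.victim_reported_16_or_over:
            return True
    return False
```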


Adult to child only offences

5.29 If consideration of the offences above has not resulted in the content being judged to be illegal content, but the content is of a sexual nature and involves a child who can be reasonably inferred to be under 16, the provider should next consider the age of the potential perpetrator.


Sexual communication with a child

5.30 Content will be illegal where it amounts to sexual communication with a child. In order for content to be illegal under this offence, there must be reasonable grounds to infer that all of the following are true:

a) the communication involves at least one child under the age of 16 (the potential victim(s)) and at least one adult aged 18 or over (the potential perpetrator);
b) the adult aged 18 or over intends to communicate with the child;
c) the communication is either itself sexual, or was intended to encourage the child to make a sexual communication;
d) the adult in question did not reasonably believe that they were communicating with a person aged 16 or over; and
e) the communication was for the purposes of sexual gratification of the adult in question.

5.33 Communication should be considered sexual where any part of it relates to sexual activity or where any part of it is what a reasonable person would consider to be sexual. It is not necessary to infer that the adult in question themselves believed the communication to be sexual.

5.34 Communication which encourages a child to communicate in a sexual way is encompassed within this definition.

5.35 The medium of the communication is irrelevant when judging whether content is illegal: written messages, audio, video and images may all be considered to amount to sexual communication with a child. This means that the sending of sexualised imagery (for example, an image, video or gif depicting sexual activity) will be captured (although it is likely to have been caught by the ‘sexual activity’ offences above). Likewise, content communicated via permanent means (for example, in a comment on a photo that stays on the service unless the user/service makes a decision to remove it) or via ephemeral means (for example, an audio message in a virtual environment) may amount to sexual communication with a child. Content posted in these settings will be illegal if it amounts to any of the offences set out below.

5.36 Service providers may be most likely to encounter such content via direct or group messages but should also be aware of the risk of this offence manifesting in illegal content in other ways such as via comments or livestreams, via gaming platforms, or in immersive virtual reality environments.
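Because paragraph 5.30 sets out cumulative conditions, it lends itself to a checklist. The Python sketch below shows one way a provider might record and combine those inferences; the dataclass, its field names, and meets_sexual_communication_test are hypothetical and purely illustrative, and any real judgement still rests on the "reasonable grounds to infer" standard described in the ICJG.

```python
from dataclasses import dataclass

@dataclass
class CommunicationAssessment:
    """Hypothetical record of the inferences made about a single
    communication (all field names are illustrative only)."""
    victim_inferred_under_16: bool
    perpetrator_inferred_18_or_over: bool
    adult_intended_to_communicate_with_child: bool
    communication_sexual_or_encourages_sexual_reply: bool
    adult_reasonably_believed_16_or_over: bool
    purpose_was_sexual_gratification: bool

def meets_sexual_communication_test(a: CommunicationAssessment) -> bool:
    """Sketch of the cumulative conditions in ICJG 5.30(a)-(e); every
    condition must hold for the content to be judged illegal under this
    offence. Not legal advice."""
    return (
        a.victim_inferred_under_16                               # (a) child under 16 involved
        and a.perpetrator_inferred_18_or_over                    # (a) adult aged 18+ involved
        and a.adult_intended_to_communicate_with_child           # (b)
        and a.communication_sexual_or_encourages_sexual_reply    # (c)
        and not a.adult_reasonably_believed_16_or_over           # (d)
        and a.purpose_was_sexual_gratification                   # (e)
    )
```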


Illegal content: Threats, abuse and harassment (including hate)

Overview of themes relating to illegal content and illegal harms:

Note: This is not the full list. It is a reduced list of the offences most likely to be encountered within games content via user-to-user communication.

3.1 The priority offences set out in Schedule 7 of the Online Safety Act (‘the Act’) which relate to threats, abuse and harassment overlap with one another to a significant degree. For the purposes of this chapter, we therefore approach them based on theme, rather than offence by offence.

The themes are:

b) Threats (including hate), encompassing:

i) threatening behaviour which is likely to cause fear or alarm
ii) threatening behaviour which is likely to cause harassment or distress

c) Abuse and insults (including hate), encompassing:

i) abusive behaviour which is likely to cause fear or alarm
ii) abusive behaviour which is likely to cause harassment or distress

3.2 Suspected illegal content may include more than one of these themes. It may well also need to be considered under other categories of priority offences; in particular: terrorism, CSAM (for example, when a child is being blackmailed), grooming, image-based sexual offences (including intimate image abuse) or foreign interference and the non-priority false communications offence.
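When tagging reports internally, it can help to mirror Ofcom's theme-based approach rather than tracking each offence separately. The enum below is an illustrative sketch of such tags (the names are our own shorthand, not Ofcom terminology); per paragraph 3.2, a single item of content may carry more than one tag and may still need review under other priority offences.

```python
from enum import Enum, auto

class Theme(Enum):
    """Illustrative internal tags mirroring the ICJG themes in 3.1(b)-(c).
    The names are our own shorthand, not Ofcom terminology."""
    THREAT_FEAR_OR_ALARM = auto()           # 3.1(b)(i)
    THREAT_HARASSMENT_OR_DISTRESS = auto()  # 3.1(b)(ii)
    ABUSE_FEAR_OR_ALARM = auto()            # 3.1(c)(i)
    ABUSE_HARASSMENT_OR_DISTRESS = auto()   # 3.1(c)(ii)

# Per 3.2, suspected illegal content may engage several themes at once,
# so a report would be tagged with a set of themes rather than one value.
example_report_tags: set[Theme] = {
    Theme.THREAT_FEAR_OR_ALARM,
    Theme.ABUSE_HARASSMENT_OR_DISTRESS,
}
```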


Threats or abusive behaviour likely to cause fear or alarm:

3.22 It is not necessary that a person actually suffered fear or alarm from content being posted, only that it was likely to cause a ‘reasonable person’ to suffer fear or alarm. A ‘reasonable person’ is someone who is not of abnormal sensitivity. However, the characteristics of the person targeted are relevant. A reasonable person who is threatened because of characteristics they have (for example, race, sexuality, religion, gender identity or disability) is more likely to feel threatened.

3.23 The mere fact that a person has complained about content is not sufficient to show that a reasonable person would be likely to suffer fear or alarm. In considering whether a reasonable person would be likely to suffer fear or alarm, a number of contextual factors are relevant; the ICJG sets these out in full.


Threats or abusive behaviour likely to cause harassment or distress:

3.33 Distress involves an element of real emotional disturbance or upset. The same is not necessarily true of harassment. A person may be harassed, without experiencing any emotional disturbance or upset. However, although the harassment does not have to be grave, it should also not be trivial. When the UK courts are considering these offences, this is the test a jury is asked to apply, and so it is right for providers to take a common-sense view of whether they have reasonable grounds to infer that the content they are considering meets this test.

3.34 Service providers should consider any information they hold about what any complainant has said about the emotional impact of the content in question and take a common-sense approach about whether it is likely to cause harassment or distress. If the content expresses racial hatred or hatred on the basis of other protected characteristics, it is far more likely to cause harassment or distress. Certain words carry greater force depending on who they are used against. The volume of the content concerned, or repetition of the conduct, may make it more likely content will cause harassment or distress. Offences which involve repeated instances of behaviour are also considered in this chapter; see paragraphs 3.107-3.108.



Harmful content

In this section we cover harmful content, which your children's safety duties require you to protect users, and children in particular, from.

Harmful content falls into three main high-level categories:

  • Primary priority content (PPC)
  • Priority content (PC)
  • Non-designated content (NDC)

In the following section we explore each category of content, covering all types of PPC and the types of PC most likely to be relevant to the majority of games.

For a shorthand reference, see Table 1.1 (content harmful to children covered in the guidance, as defined in the Act) in Ofcom's Guidance on content harmful to children.

For full context, please read all of the following: Ofcom: Protecting children from harms online, all 6 volumes.


Primary priority content (PPC)

  • Pornographic content
  • Suicide content (which encourages, promotes, or provides instructions for suicide),
  • Self-injury content (which encourages, promotes, or provides instructions for an act of deliberate self-injury),
  • Eating disorder content (which encourages, promotes, or provides instructions for an eating disorder or behaviours associated with an eating disorder)

Priority content (PC)

Content which is abusive or incites hatred,

Content which is abusive and targets any of the following characteristics:

  • Race,
  • Religion,
  • Sex,
  • Sexual orientation,
  • Disability, or,
  • Gender reassignment,

Content which incites hatred against people:

  • Of a particular race, religion, sex, or sexual orientation,
  • Who have a disability, or,
  • Who have the characteristics of gender reassignment.

Violent content

  • Content which encourages, promotes, or provides instructions for an act of serious violence against a person.
  • Content which:
    • Depicts real or realistic serious violence against a person,
    • Depicts real or realistic serious injury of a person in graphic detail,
  • Or, content which:
    • Depicts real or realistic serious violence against an animal,
    • Depicts real or realistic serious injury of an animal in graphic detail,
    • Realistically depicts serious violence against a fictional creature, or the serious injury of a fictional creature in graphic detail.

Bullying content

Content may, in particular, be bullying content if it is content targeted against a person which:

  • Conveys a serious threat,
  • Is humiliating or degrading,
  • Forms part of a campaign of mistreatment.

Non-designated content (NDC)

Any other type of content not mentioned here that presents a material risk of significant harm to an appreciable number of children in the UK.

NDC is not addressed in the Guidance on Content Harmful to Children but is addressed in the Children’s Register. In accordance with their children’s risk assessment duties, service providers are required to consider types of content that may be harmful to children beyond the designated harms specified by the Act.

Providers should refer to the Introduction in Section 1 of the Children’s Register and the Children’s Risk Assessment Guidance for further detail on how they should consider NDC with regard to their children’s risk assessments duties.
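As a practical aid to a children's risk assessment, the three high-level categories above can be represented as labels attached to the kinds of content a game is likely to encounter. The mapping below is a hypothetical Python sketch; the enum, its values, and the content-kind keys are our own shorthand and should be replaced by the categories identified in your own risk assessment.

```python
from enum import Enum

class HarmCategory(str, Enum):
    """Illustrative labels for the harmful-content categories discussed
    above (PPC/PC/NDC); the string values are our own shorthand."""
    PPC = "primary_priority_content"
    PC = "priority_content"
    NDC = "non_designated_content"

# Hypothetical mapping from the kinds of content a game is most likely to
# encounter to the category they fall under; extend it per your own
# children's risk assessment.
GAME_CONTENT_KINDS: dict[str, HarmCategory] = {
    "pornography": HarmCategory.PPC,
    "suicide_encouragement": HarmCategory.PPC,
    "self_injury_encouragement": HarmCategory.PPC,
    "eating_disorder_content": HarmCategory.PPC,
    "abuse_or_incitement_to_hatred": HarmCategory.PC,
    "violent_content": HarmCategory.PC,
    "bullying_content": HarmCategory.PC,
    # anything else posing a material risk of significant harm to children:
    "other_material_risk_to_children": HarmCategory.NDC,
}
```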


❗️

Summary of the types of content that the Online Safety Act and Ofcom are regulating

Summary of content types for video games:

Although there are a lot of content types to cover, most of the behaviours in relation to games can be distilled into three main groups. We cover functionalities in full detail in the coming section, although we touch on them here.

  1. Toxic players who send messages that:

Primary Priority Content:

  • Encourage or provide instructions on suicide (tell people to kill themselves, especially where they do so enthusiastically and in detail).
  • Encourage or provide instructions on self-harm (tell people to hurt themselves, especially where they do so enthusiastically and in detail).

Priority Content:

  • Abuse - generally abusive content targeting specific protected characteristics.

  • Bullying - humiliating or degrading content, and/or conveying a serious threat.

  2. Predators who message/communicate with children (CSEA, CSAM, and grooming)

Content which incites a child under 16 to engage in sexual activity, even where the sexual activity itself does not take place.

Sexual communication with a child where:

  • The child is under 16,

  • The perpetrator is over 18,

  • The adult intends to communicate with the child,

  • The communication is either itself sexual, or was intended to encourage the child to make a sexual communication,

  • The adult did not reasonably believe the person they were communicating with was 16 or over,

  • The communication was for the purposes of sexual gratification of the adult in question.

  3. Violent content within the game itself which may not be age-appropriate

Violent content relating to people:

  • If your game has content which depicts real or realistic serious violence against a person,
  • or the real/realistic serious injury of a person in graphic detail,

Violent content relating to animals, including fictional creatures:

  • Depicts real or realistic serious violence against an animal,
  • Depicts real or realistic serious injury of an animal in graphic detail,
  • Realistically depicts serious violence against a fictional creature, or the serious injury of a fictional creature in graphic detail.