Risk factors specific to games
Ofcom have established risk factors for service providers.
The three that are most relevant to games are:
- Service type
- User base
- Functionalities
The nice thing here is that almost all games will fall into the same categories:
- You are all games (service type),
- User base will vary depending on the game and genre,
- Almost all games share the same risk factors with regard to functionality:
- Anonymous profiles (anyone can create an account and it's not tied to them as a person),
- User networking and user connections (users who don't know each other can come across each other),
- The big one: user communication (users can message and/or chat).
Service Type
There are lots of service types, but as you’re reading this, we will assume that you are a game.
Summary of service type
You are a game, and games have been highlighted as a high-risk service type.
User base
Age is a key component to your user base.
For the full read and context on age, see "17. Recommended age groups" (page 311) of the Children's Register of Risks.
Additionally, to support age being a key factor specifically within games, Ofcom has undertaken research and found (see "Children's online behaviours and risk of harm" within the Children's Register of Risks):
- (1.38) Once they reach the age of 13, more than 80% of children play online video games.
- (1.39) 7-18-year-olds spend an average of 3.4 hours a day online.
- (1.45b) Many children play online games which may bring them into contact with strangers, including adults.
- Three quarters (75%) of children aged 8-17 game online,
- 25% play with people they do not know outside the game.
- Additionally, 24% chat to people through the game who they do not know outside of it.
- When prompted, 62% of parents whose 3-17-year-old played games online expressed concern about their child talking to strangers while gaming (either within the game or via a chat function) and 54% were concerned that their child might be bullied.
The age bands are as follows:
- 0-5 years: Pre-literate and early literacy.
- 6-9 years: Core primary school years.
- 10-12 years: Transition years.
- 13-15 years: Early teens.
- 16-17 years: Approaching adulthood.
As a note, these age bands align with the Information Commissioner's Office (ICO) Age Appropriate Design Code, on the basis of evidence linking certain online behaviours to age and developmental stage. Additionally, Ofcom mention that they created these age groups with consideration of life stages, online presence, parental involvement, and age-specific risks.
Additional and important commentary from Ofcom regarding age and age bands includes:
17.1 As mandated by the Online Safety Act 2023 (the Act), user-to-user services must assess “the level of risk of harm to children presented by different kinds of content that is harmful to children, giving separate consideration to children in different age groups”. There are similar requirements for search services to consider children in different age groups.
17.2 The Act also imposes a number of safety duties requiring services likely to be accessed by children to manage and mitigate risks of harm from content that is harmful to children. This includes, in relation to user-to-user services, operating a service using proportionate systems and processes designed to:
(i) prevent children of any age from encountering primary priority content that is harmful to children, and
(ii) protect children in age groups judged to be at risk of harm (in the risk assessment) from encountering priority content that is harmful to children and non-designated content.1573
0-5 years: Pre-literate and early literacy.
A time of significant growth and brain development for very young children. Children of this age are heavily dependent on their parents, with parental involvement substantially influencing their online activity.
Age-specific risks
17.15 Just by being online, children in this age group are at risk of encountering harmful content. As children use devices or profiles of other family members, this may lead to a risk of encountering age-inappropriate content, including harmful content, as recommender systems1587 recommend content on the basis of the search and viewing history of the other user(s).
17.16 The use of child-specific or restricted-age services does not guarantee that children will necessarily be protected from harmful content. It is possible that children may be more likely to use these services unsupervised. There have been cases of bad actors in the past using child-friendly formats, such as cartoons on toddler-oriented channels, to disseminate harmful content on child-specific services.1588
6-9 years: Core primary school years.
After starting mainstream education, children become more independent and increasingly go online. Parents create rules to control and manage their children’s online access and exposure to content.
Age-specific risks
17.24 Some children in this age group are starting to encounter harmful content, and this exposure has the potential for lasting impact. Research by the Office of the Children’s Commissioner for England found that, of the children and young people surveyed who had seen pornography, one in ten (10%) had seen it by the age of 9. Exposure to pornography at this age carries a high risk of harm. For example, older children reflect on being deeply affected by sexual or violent content they encountered when they were younger, which may have been more extreme than they anticipated (in some cases the child had looked for the content, and in other cases it had been recommended).1600
17.25 Children are also being exposed to upsetting behaviour online. Over a fifth (22%) of 8-9-year-olds reported that people had been nasty or hurtful to them, with a majority of these children experiencing this through a communication technology such as messaging or social media.1601
17.26 As with the younger age group, the use of family members’ devices or profiles may lead to a risk of encountering age-inappropriate content, including harmful content. Recommender systems present content on the basis of various factors, including the profile of the user and the search and viewing history of any user(s) of that account/profile. For example, we heard from children who had been shown harmful content via an auto-play function on a social media service when using their parent’s phone and account.1602
10-12 years: Transition years.
A period of rapid biological and social transitions when children gain more independence and socialise more online. Direct parental supervision starts to be replaced by more passive supervision approaches.
Age-specific risks
17.33 More independent use of devices, and a shift in the type of parental supervision, as well as increased use of social media and messaging services to interact with peers, creates a risk of harmful encounters online. Children may start to be more exposed to, or more aware of, bullying content online, with 10-12-year-olds describing how they feel confused when trying to distinguish between jokes and ‘mean behaviour’ online.1613 Due to the rapid neurological development taking place in the teenage brain at this point, the psychological impacts of bullying can last into adulthood.1614 Research has found that of the children who have seen online pornography around one in four (27%) had encountered it by the age of 11.1615
17.34 Despite a 13+ minimum age restriction for many social media sites, 86% of 10-12-year-olds say they have their own social media profile.1616 Our research estimates that one in five (20%) children aged 8-17 with an account on at least one online service (e.g., social media) have an adult profile, having signed up with a false date of birth. Seventeen per cent of 8-12-year-olds have at least one adult-aged (18+) profile.1617 Alongside this, 66% of 8-12-year-olds have at least one profile in which their user age is 13-15 years old.1618
17.35 Evidence suggests that 11-12 is the age at which children feel safest online. A report by the Office of the Children’s Commissioner for England found that the proportion of children who agree they feel safe online peaks at ages 11 and 12 (80%), increasing from 38% from the age of 5.1619
13-15 years: Early teens.
This age group is fully online with children using an increasing variety of services and apps. Parents’ involvement in their children’s online use starts to decline. Increased independence and decision-making, coupled with an increased vulnerability to mental health issues, means children can be exposed to, and actively seek out, harmful content.
Age-specific risks
17.45 A greater use of online services, more independent decision-making and the risk-taking tendencies common in this age group can together increase the risk of encountering harmful content.
17.46 Ofcom research estimates that a fifth (19%) of 13-15-year-olds have an adult-aged profile on at least one online service, potentially exposing them to inappropriately-aged content.1640 A falsely-aged profile will also mean a child can access and use functionalities on services that have a minimum age of 16 years old, such as direct messaging or livestreaming on some services.
17.47 Exposure to hate and bullying content increases from the age of 13. Sixty-eight per cent of 13-17-year-olds say they have seen images or videos that were ‘mean, or bully someone’, compared to 47% of 8-12-year-olds.1641 Encountering hate online is also quite common; three-quarters of children aged 13-15 report having seen online hate on social media.1642
17.48 Children in this age group are particularly vulnerable if they encounter content relating to self-harm and suicide.1643 Due to hormonal changes and mental health challenges, children in this age group may be at risk of the most severe impacts from encountering this type of content, particularly if seen in high volumes.1644 Five per cent of 13-17-year-olds had experienced/seen content encouraging or assisting serious self-harm, and 4% had experienced/seen content encouraging or assisting suicide over a four-week period.1645
16-17 years: Approaching adulthood.
At 16 children attain new legal rights for the first time, while parental supervision, and parental concern about their online safety, both decrease. But changes in their behaviour and decision-making ability at this age can lead to an increased risk of exposure to harmful content.
Age-specific risks
17.59 Our research also estimates that almost three in ten (28%) of 16-17-year-olds have a profile with an age of at least 18 on at least one online service (e.g., social media).1663 These children could receive age-inappropriate content suggestions as well as access restricted functionalities. For example, some services restrict the use of livestreaming to 18-year-olds.
17.60 Older children are also more likely to experience communication that potentially makes them feel uncomfortable; 64% of 16-17-year-olds reported experiencing at least one potentially uncomfortable communication, compared to 58% of 13-15-year-olds. These uncomfortable experiences included receiving abusive, nasty or rude messages/voice notes/comments, reported by one in five (20%) 16-17-year-olds.1664
Summary of user base
A core part of performing your illegal content, children’s access, and children’s risk assessments includes understanding your user base.
One key requirement from the legislation, which is reiterated by Ofcom, is the need for services to use highly effective age assurance (HEAA) as part of your children's access assessment and to perform your illegal content safety duties and children's safety duties.
To paraphrase: you can't state that children don't access your service, or determine the age bands of your users, without having HEAA implemented. Additionally, HEAA is required for you to abide by your illegal content and harmful content safety duties.
If you have priority content (PC), primary priority content (PPC), or non-designated content (NDC) which children either can access or may have access to, you will have to implement HEAA both to understand your user base and to protect children from this content. That said, you have the ability to proportionately adjust certain access based on age band.
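As an illustration of what proportionate, age-band-based adjustment could look like in practice, here is a minimal sketch. It assumes a hypothetical HEAA provider that returns one of the Ofcom-aligned age bands; the band names, feature flags, and policy values below are illustrative assumptions, not an Ofcom-mandated configuration.

```typescript
// Minimal sketch: proportionate feature gating by age band.
// The age bands mirror the Ofcom groupings above; the policy itself is an
// illustrative assumption and should come from your own risk assessment.

type AgeBand = "0-5" | "6-9" | "10-12" | "13-15" | "16-17" | "18+";

interface FeatureAccess {
  textChat: boolean;            // free-text messaging with other players
  voiceChat: boolean;           // live voice communication
  strangerMatchmaking: boolean; // matchmaking with players outside a friends list
}

// Illustrative policy: tighter restrictions for younger bands, relaxed for older ones.
const accessPolicy: Record<AgeBand, FeatureAccess> = {
  "0-5":   { textChat: false, voiceChat: false, strangerMatchmaking: false },
  "6-9":   { textChat: false, voiceChat: false, strangerMatchmaking: false },
  "10-12": { textChat: false, voiceChat: false, strangerMatchmaking: true },
  "13-15": { textChat: true,  voiceChat: false, strangerMatchmaking: true },
  "16-17": { textChat: true,  voiceChat: true,  strangerMatchmaking: true },
  "18+":   { textChat: true,  voiceChat: true,  strangerMatchmaking: true },
};

// Default to the most restrictive policy when no HEAA result is available,
// since you cannot assume a user is an adult without age assurance.
function featureAccessFor(band: AgeBand | undefined): FeatureAccess {
  return band ? accessPolicy[band] : accessPolicy["0-5"];
}
```

The exact defaults and thresholds are a judgement call for your own risk assessment; the key point of the sketch is that access falls back to the most restrictive policy when no age assurance result is available.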
Functionalities
There are two parts to functionalities: functionality within your existing game that contributes to risk factors, and functionality that Ofcom recommends to mitigate risk. This section focuses on the former (functionality already in your game that can contribute to the risk of children discovering or interacting with illegal content or harmful content).
There are two main themes throughout the guidance regarding functionality in the gaming industry: user messaging and anonymity.
For full context, please review Ofcom: Children’s Register of Risks:
- Section 3: Suicide and self-harm content, 3.42 - Suicide and self-harm content in gaming services (Primary priority content)
- Section 5: Abuse and hate content, 5.86 - Abuse and hate content in gaming services (Priority content)
- Section 6: Bullying content, 6.60 - Bullying content in gaming services (Priority content)
- Section 7: Violent content, 7.58 - Violent content in gaming services (Priority content)
Summary: Functionalities
It is absolutely clear that Ofcom has identified in-game messaging and communication as a key area of risk for children coming into contact with suicide and self-harm content, abuse and hate content, bullying content, and violent content.
Summary of risk factors
I don’t think it comes as a surprise to any of us, but Ofcom has evidenced and established that:
- Children are playing video games younger than ever before, and for longer periods,
- Players can communicate in games,
- Games have toxicity amongst players,
- That toxicity, whilst maybe considered merely unpleasant behaviour amongst adults, is treated as an extremely high-risk area under the Online Safety Act and by Ofcom in relation to children. This is especially true of suicide and self-harm content, which is classified as Primary Priority Content (the most extreme content, from which children must be protected most strongly).
- Ofcom highlight anonymity amongst players as a large risk factor. This is something that I don't think can change, for three reasons:
- I don’t believe players would be tolerant of sharing their real identity with games companies,
- I don’t believe players would be accepting or tolerant of their real identity being openly available, visible, or tied to their game accounts,
- I don’t believe that it would be legal under GDPR, ePrivacy directive, or CCPA to enforce this upon users.
With that said, the lack of accountability behind behaviour is clearly a risk factor, as Ofcom have established. The anonymity aspect doesn't have to be compromised to provide that accountability, though, which is something we have accomplished at PlaySafe ID.
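To make the idea concrete, below is a simplified, hypothetical sketch of how accountability can be attached to a pseudonym rather than a real identity. It is not a description of how PlaySafe ID is implemented; the verification service, attestation format, and HMAC scheme are illustrative assumptions only.

```typescript
// Hypothetical sketch of pseudonymous accountability (not PlaySafe ID's actual design).
// A separate verification service checks a player's identity once and issues a signed
// attestation containing only an opaque pseudonym. The game never receives the real
// identity, but bans applied to the pseudonym follow the person across game accounts.

import { createHmac, timingSafeEqual } from "node:crypto";

// Shared secret for this sketch; in practice an asymmetric signature would let the
// game verify attestations without being able to issue them.
const ATTESTATION_KEY = process.env.ATTESTATION_KEY ?? "demo-key";

interface Attestation {
  pseudonym: string;  // opaque ID, stable per person, revealing nothing about identity
  issuedAt: number;   // unix timestamp of issuance
  signature: string;  // HMAC over pseudonym + issuedAt
}

function sign(pseudonym: string, issuedAt: number): string {
  return createHmac("sha256", ATTESTATION_KEY)
    .update(`${pseudonym}:${issuedAt}`)
    .digest("hex");
}

// Game-side check: is this attestation genuine, and is the pseudonym banned?
const bannedPseudonyms = new Set<string>();

function canPlay(att: Attestation): boolean {
  const expected = Buffer.from(sign(att.pseudonym, att.issuedAt), "hex");
  const provided = Buffer.from(att.signature, "hex");
  const genuine =
    expected.length === provided.length && timingSafeEqual(expected, provided);
  return genuine && !bannedPseudonyms.has(att.pseudonym);
}
```

The principle the sketch is meant to show: the game only ever sees an opaque pseudonym, yet repeated bad behaviour still follows the person across accounts, so anonymity towards the game and accountability for behaviour are not mutually exclusive.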