ITIF - The Information Technology and Innovation Foundation

09/26/2024 | Press release

Comments Before the European Commission Regarding the Digital Services Act

Introduction and Summary

The Information Technology and Innovation Foundation (ITIF) is pleased to submit these comments in response to the European Commission's request for public comment concerning "Digital Services Act - guidelines to enforce the protection of minors online."[1] ITIF is a nonprofit, non-partisan public policy think tank based in Washington, D.C., committed to articulating and advancing pro-productivity, pro-innovation, and pro-technology public policy agendas around the world that spur growth, prosperity, and progress.

Debates over how best to protect children, and what potential harms society needs to protect children from, are much older than the Internet and encompass much more than online harms. Problems facing children in society have never been easy to solve, and solutions to those problems often raise the same concerns as many proposed solutions to online harms: free speech, privacy, and parents' rights.

No amount of regulation will completely eradicate all potential harms that children face in the digital and physical worlds. The issue at hand, then, is finding the balance of regulation that effectively addresses concrete harms without overly infringing on individuals' rights to privacy and free speech. Not only do digital safety regulations sometimes infringe on adults' rights to privacy and free speech, but these regulations also sometimes infringe on the rights of the very children they aim to protect. Indeed, these kinds of rules, if not designed appropriately, can trample on children's freedom to engage with their friends and access appropriate information online.

Similarly, lawmakers could intervene in every facet of children's lives to help ensure their safety, but not only is this infeasible, most parents would also object to the government dictating how they can raise their children. This is another area where lawmakers need to find a balance. Regulation can set guardrails that aim to prevent concrete harm and provide new tools for user safety, but some amount of parental control is necessary because every child is different. In most cases, parents will understand their own child's unique needs better than the government will. At the same time, not every child lives in a home with parents willing or able to look out for their best interests online. Even parents who have the time and skills to monitor their children's online activity need tools to make that easier or, in some cases, possible at all.

And finally, while technologies have provided new avenues for bad behavior, they can also encourage prosocial behavior, bringing young people together in positive ways. In fact, individuals from marginalized populations who are often victims of bullying can find community online in ways that were never possible before the Internet. Children whose friends move away can keep those friendships alive through free online messaging and video calls. Any regulation designed to protect children online needs to find a balance between minimizing the risks and maximizing the benefits of various online activities for children.

Major Risks and Concerns

2. Please describe any major risks and concerns related to ensuring a high level of privacy, safety, and security for minors online.

One of the most controversial topics within the broader children's online safety debate is age verification. There are several ways online services can verify users' ages, and each method comes with different strengths and weaknesses. Some are more accurate but more invasive, whereas others are less invasive but also less accurate. For example, asking users to check a box indicating they are over a certain age, or to input their date of birth to confirm they are over a certain age, is minimally invasive because it requires users to disclose, at most, one piece of personal information: their date of birth. Because many people share the same birthday, this piece of information cannot uniquely identify an individual. However, this method is also less reliable, as underage users can and often do lie about their age to gain access to certain online services.[2]
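To illustrate how little information this method collects, the entire check reduces to comparing a self-declared date of birth against a threshold. The following is a minimal sketch in Python; the 16-year threshold and function name are illustrative only, not drawn from the consultation:

```python
from datetime import date

def is_over_age(declared_dob: date, minimum_age: int, today: date | None = None) -> bool:
    """Check a self-declared date of birth against a minimum age.

    The service sees only the declared date of birth; nothing else identifies
    the user, but nothing verifies the declaration either.
    """
    today = today or date.today()
    # Count whole years, adjusting if the birthday has not yet occurred this year.
    had_birthday = (today.month, today.day) >= (declared_dob.month, declared_dob.day)
    age = today.year - declared_dob.year - (0 if had_birthday else 1)
    return age >= minimum_age

# An underage user can pass this check simply by declaring an earlier birth year.
print(is_over_age(date(2012, 3, 14), minimum_age=16, today=date(2025, 1, 1)))  # False
```

The weakness described above is visible in the sketch: the declared date is taken entirely on trust.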

At the other end of the spectrum is the ID check. This form of age verification is common in physical spaces, where customers must provide a valid government-issued ID to prove they are above the minimum age required to enter an age-gated space or purchase an age-gated product. It is also highly accurate, as a government-issued ID is far harder to falsify than a checked box or a typed date of birth. However, the comparison to physical spaces only goes so far. Because physical establishments do not store a copy of each customer's ID, in-person ID checks pose lower privacy risks than online ID checks, where a service may retain the information from users' IDs. And unlike providing an ID to purchase a product in a store, providing an ID to access certain content online raises free speech concerns.

In between self-declaration and ID checks, there is a third potential method of age verification: using AI to estimate users' ages from an image of their face. Combined with privacy protections requiring online services to delete users' images after the age estimation process is complete, this would minimize the amount of personal information users have to give up in order to verify their age. Of course, age estimation technology is not perfectly accurate and likely never will be (no form of age verification is), but it is constantly improving.
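In implementation terms, the privacy protection described above amounts to treating the face image as a transient input that is discarded as soon as an estimate is produced. A minimal sketch follows, assuming a hypothetical estimate_age_from_image function standing in for whatever age-estimation model a service actually uses:

```python
import os
import tempfile

def verify_age_by_estimation(image_bytes: bytes, minimum_age: int,
                             estimate_age_from_image) -> bool:
    """Estimate a user's age from a face image, then discard the image.

    estimate_age_from_image is a placeholder for an age-estimation model;
    it is assumed to take a file path and return an estimated age in years.
    """
    # Hold the image only in a temporary file for the model, never in the
    # user's account record or any long-lived store.
    fd, path = tempfile.mkstemp(suffix=".jpg")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(image_bytes)
        return estimate_age_from_image(path) >= minimum_age
    finally:
        # Delete the image as soon as the estimation step completes,
        # whatever the outcome.
        os.remove(path)
```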

Another controversial topic within the children's online safety debate is encryption. While the use of encryption, particularly end-to-end encryption, may make it more difficult to prevent and respond to instances of child exploitation on online services, it is nevertheless an important tool for safeguarding individuals' security and privacy. Creating new vulnerabilities in secure communication systems would be particularly undesirable because strong encryption has helped protect vulnerable individuals, including victims of abuse and LGBTQ minors, who use encryption to communicate privately and seek help without fear of reprimand, as well as journalists, who use encrypted messaging services to communicate with confidential sources. Limiting end-to-end encryption, disincentivizing its use, or building backdoors into it would expose these groups to potential threats. Meanwhile, bad actors would likely seek out foreign tools and services that do not have such vulnerabilities, leaving vulnerable populations with weaker security protections than criminals have.[3]

Good Practices and Recommendations

4. Please provide good practices or recommendations for addressing risks in the 5C typology for the factors listed above. Please refer to any existing documentation, research, or resources that could help endorse or validate the good practices proposed.

In order to increase parental control, and as an alternative to age verification, device operating systems could create a "trustworthy child flag" for user accounts that signals to apps and websites that a user is underage. Apps and websites that serve age-restricted content could then check for this signal and block flagged users from that content. Rather than relying on ID checks to determine whether to activate the flag, activation would be an opt-in process built into existing parental controls on devices. Parents could activate or disable the child flag depending on their own values and the maturity of their children. Additionally, devices could default to certain parental controls recommended for children.[4]
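To make the mechanism concrete, the check on the service side could be as simple as reading a signal the device attaches to requests from a flagged account. The sketch below uses an invented header name, X-Child-Account, purely for illustration; no such standard signal exists today, and how a device operating system would actually transmit the flag is an open design question.

```python
# Paths a hypothetical service has marked as age-restricted; everything else
# requires no action at all.
AGE_RESTRICTED_PATHS = {"/mature-content", "/gambling"}

def should_block(request_path: str, headers: dict[str, str]) -> bool:
    """Block age-restricted content when the device reports a child account."""
    if request_path not in AGE_RESTRICTED_PATHS:
        # General-audience pages ignore the flag entirely.
        return False
    # The flag is opt-in and carries no identity: absent the header, the
    # service treats the visitor like any other adult user.
    return headers.get("X-Child-Account", "0") == "1"

print(should_block("/mature-content", {"X-Child-Account": "1"}))  # True: flag set
print(should_block("/news", {"X-Child-Account": "1"}))            # False: not restricted
print(should_block("/mature-content", {}))                        # False: no flag present
```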

This approach has several advantages. First, because it does not require anyone to disclose or verify their identity, it does not create privacy risks by forcing users to share their government IDs or allowing online services to link their online activity to their offline identities. Second, it is a low-impact approach: adults could continue using the Internet as they do today, and the vast majority of websites and apps meant for the general public would not have to take any action. Third, it would be entirely voluntary for users. Parents who want to control what their children see on the Internet could choose to use this feature, and other parents could choose not to.

Additionally, digital forms of government-issued identification could solve some of the privacy concerns associated with ID checks for age verification, as well as make the process more efficient. Currently, online ID checks typically require users to upload a photo of their physical ID and sometimes to complete additional steps to prove the ID belongs to them, such as uploading a current image of their face to compare with the photograph on the ID. If designed well, digital IDs would streamline this process and allow users to share only the necessary information. For example, individuals trying to access an age-restricted online service could verify that they are over a certain age without providing their exact date of birth, let alone all the other information a physical ID would reveal.
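The selective-disclosure idea can be illustrated with a toy example: an issuer signs only the single attribute a website needs ("over 18"), and the website verifies that signature without ever seeing a date of birth. Real digital identity schemes, such as the EU Digital Identity Wallet, use richer credential formats and stronger unlinkability techniques; the JSON claim below is invented purely for illustration.

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: sign only the single age attribute, not the full identity record.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"attribute": "over_18", "value": True}).encode()
signature = issuer_key.sign(claim)

# Verifier side: the website checks the issuer's signature over the one claim
# it needs, learning nothing about the holder's name, address, or exact date of birth.
issuer_public_key = issuer_key.public_key()
try:
    issuer_public_key.verify(signature, claim)
    print("Age attribute verified:", json.loads(claim))
except InvalidSignature:
    print("Credential rejected")
```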

Conclusion

While increasing children's online safety and privacy is an important goal, it is essential not to infringe on others' rights, or on children's own rights, in the process. Age verification requirements, if not designed properly, threaten to cause more harm than they prevent. Encryption-breaking requirements pose a similar risk. It is important to consider the potential unintended consequences of these requirements and weigh those consequences against the potential benefits. An effective approach to children's online safety needs to strike a balance between protecting kids, protecting user privacy, and protecting free speech.

Thank you for your consideration.

Endnotes

[1]"Digital Services Act - guidelines to enforce the protection of minors online," European Commission, July 31, 2024, https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/14352-Protection-of-minors-guidelines_en.

[2]Yonder Consulting, "Children's Online User Ages Quantitative Research Study" (Ofcom, October 2022), https://www.ofcom.org.uk/__data/assets/pdf_file/0015/245004/children-user-ages-chart-pack.pdf.

[3]Michael McLaughlin, "Weakening Encryption Would Put Vulnerable Populations at Risk," ITIF, December 4, 2019, https://itif.org/publications/2019/12/04/weakening-encryption-would-put-vulnerable-populations-risk/.

[4]Ash Johnson, "How to Address Children's Online Safety in the United States" (ITIF, June 2024), https://itif.org/publications/2024/06/03/how-to-address-childrens-online-safety-in-united-states/.