Who Protects Users in the Digital World?

By Kadian Davis-Owusu · Published on April 2, 2026
#privacypolicy
#onlineharms
#AIpolicy

Online harms today extend far beyond simple cyber risks; they include misinformation and disinformation, privacy violations, algorithmic discrimination, gender-based online violence, cyberattacks on critical infrastructure, and new forms of surveillance through AI-driven technologies such as facial recognition drones. These harms disproportionately affect children, women, migrants, and communities across Global Majority countries, who often receive fewer protections and operate under weaker regulatory frameworks.

Despite efforts such as the EU Digital Services Act, the UK Online Safety Act, and initiatives in India (the Digital Personal Data Protection Act, 2023, and the Bharatiya Nyaya Sanhita, 2023), Singapore (the Online Safety Relief and Accountability Act 2025), and Australia (the Online Safety Act, 2021, and the Online Safety Amendment (Social Media Minimum Age) Act 2024), among others, governments worldwide still struggle to keep pace with technological developments. Business models that prioritize engagement over safety and profit over rights further deepen polarization, misinformation, disinformation, and risks to democratic life. Meanwhile, digital products remain under-regulated compared with physical goods, despite their profound psychological harms, social consequences, and political implications.

Notably, digital technologies have transformed global communication, economic participation, and civic engagement. Yet these same systems increasingly expose individuals and societies to complex and evolving harms. Evidence from the United Nations Office of the High Commissioner for Human Rights, Human Rights Watch, Amnesty International, and the World Health Organization illustrates how digital platforms have become infrastructures not only of connection but also of risk. Children, women, migrants, and marginalized communities, particularly in Global Majority countries, face a disproportionate burden of these harms. Addressing them therefore requires a multi-sector policy response: online harms must be treated simultaneously as human rights, education, and digital governance issues, demanding coordinated action across governments, civil society, educators, parents, social workers, and technology companies.

The Increasing Risks of Online Harms

As mentioned earlier, online harms can take many forms, including misinformation, disinformation, image-based abuse, privacy violations, aggressive data harvesting, cyberbullying, grooming, identity theft, and algorithmic discrimination. Misinformation and disinformation now constitute a global infodemic, a term introduced by the WHO to describe the rapid and widespread diffusion of false information during crises. Research from the Reuters Institute for the Study of Journalism shows that online news environments can contribute to political polarization, spread misinformation, and silence critical voices as algorithmically curated content influences what information audiences see. Moreover, in many cases, algorithmic amplification of extremist or militarized content fuels societal tension, erodes trust in institutions, and undermines democratic participation.

Privacy violations further deepen these harms. Users routinely accept intrusive data-collection practices, granting technology companies unprecedented access to behavioural, biometric, and location data. This data can be used for surveillance. For example, Amnesty International’s Ban the Scan campaign documents how facial recognition and biometric surveillance technologies have been used in multiple contexts to enable mass surveillance and monitor protesters, creating a chilling effect on the rights to privacy, freedom of expression, and peaceful assembly. Recognizing these risks, the OHCHR has emphasized the need for transparency, accountability, and human rights safeguards in the deployment of artificial intelligence and automated decision-making systems, noting potential risks to fundamental rights.

In addition to biometric violations, gender-based online violence remains a pervasive and under-addressed harm. Women and girls face disproportionate levels of cyberstalking, doxxing, non-consensual distribution of intimate images, and coordinated harassment campaigns, often referred to as technology-facilitated gender-based violence (see article). These harms can exclude women from digital public spaces, reinforcing structural inequalities. More specifically, women have faced extreme cases of criminal abuse and traumatic experiences as covered in these news items (1, 2).

Beyond this, children represent another vulnerable group who face additional risks as they encounter content ranging from self-harm imagery to sexualized or violent material. For example, AI was used to generate nude photos of teen girls in Spain, without their knowledge or consent. The impact was severe, with many of the victims reportedly terrified and suffering anxiety attacks after the images were circulated among classmates, highlighting the serious psychological harm caused by AI-generated deepfakes. In response, regulatory frameworks such as the UK Online Safety Act aim to address these challenges by requiring content moderation, age verification, and algorithmic risk assessments. Yet, privacy organizations such as the Electronic Frontier Foundation caution that age-verification tools may themselves create new privacy risks through unnecessary data collection. This highlights a central paradox of digital governance whereby measures introduced to increase online safety, such as age verification and content monitoring, can simultaneously expand surveillance and create new risks to privacy and civil liberties. These tensions are further complicated by global inequalities in digital governance, as protections and regulatory frameworks remain unevenly distributed across regions.

The Global Majority Countries and Unequal Protection

Digital safety policies and platform governance frameworks are largely developed in Western countries, while many Global Majority countries have fewer regulatory protections, fewer content moderation resources, and less influence over platform design decisions. Moreover, research by the Mozilla Foundation shows that platform interventions and safety measures in Global Majority countries are often limited or ineffective, contributing to the spread of misinformation and online harms. In addition, platforms frequently invest less in content moderation, language resources, and safety infrastructure in the Global Majority, resulting in harmful content remaining online longer and communities facing greater exposure to digital exploitation (Mozilla Foundation; Center for Democracy & Technology). At the same time, tools such as the Carnegie Endowment’s Global AI Surveillance Index show that biometric surveillance and AI monitoring technologies developed in wealthier nations are increasingly exported and deployed in countries with weaker regulatory frameworks, amplifying risks of repression, discrimination, and human rights violations. Fundamentally, without meaningful participation from Global Majority countries, global digital governance risks reinforcing existing inequalities rather than reducing them. These governance gaps are further complicated by the fact that regulation itself struggles to keep pace with rapid technological development.

Regulation and the Struggle to Keep Up

Accordingly, governments worldwide are attempting to regulate digital platforms, though with varying levels of success and alignment with human rights norms. The European Union’s Digital Services Act mandates transparency reporting, independent audits, algorithmic accountability, and harm-mitigation obligations for very large platforms. By contrast, while several Global Majority countries have introduced legal frameworks to address online harms, these measures are often limited in their effectiveness due to the global, cross-border nature of digital platforms. For example, Brazil’s Marco Civil da Internet, South Africa’s Cybercrimes Act, 2021, Nigeria’s Cybercrimes (Prohibition, Prevention, etc.) Act, 2015, and Kenya’s Computer Misuse and Cybercrimes Act, 2025, all attempt to regulate harmful online content, cyber harassment, and digital offences. However, online harms frequently cross jurisdictions, with content hosted in one country, platforms headquartered in another, and users located elsewhere. This makes enforcement difficult and often limits the effectiveness of national legislation. As a result, harmful content, online abuse, and digital exploitation can persist despite domestic regulation, underscoring the need for greater international cooperation and global governance frameworks to effectively address online harms.

The Importance of Digital Literacy

Digital literacy emerges as a critical protective measure. Countries like Canada have begun investing in tools and skills that help citizens critically assess online information and counter disinformation through national programs such as the Digital Citizen Initiative, which emphasizes fact-checking and critical consumption of information. Yet many regions, particularly Global Majority countries, lack long-term investment in digital literacy programmes, leaving citizens vulnerable to manipulation, exploitation, and surveillance. Ultimately, capacity-building programs should help people understand how algorithms influence their digital experiences, recognize grooming or manipulative behaviour, verify information, and avoid re-sharing false content. These programs must be understandable, accessible, suitable for low-resource settings, and culturally relevant, rather than overly technical or designed only for highly educated users. Expanding such initiatives globally, particularly in under-resourced regions, would help protect vulnerable communities and address systemic inequalities.

Also, as online harms increasingly affect mental health and well-being, social workers who support victims of cyberbullying, grooming, and online abuse require specialized training to recognize emerging digital risks and provide trauma-informed support. Governments and education authorities should integrate digital citizenship and online safety into national curricula, while civil society organizations can provide accessible community training and awareness programs.

However, placing the responsibility solely on users, educators, and social workers risks shifting attention away from the responsibilities of governments and technology companies. Preventing online harms must therefore also involve transparency, accountability, and a human rights–based approach to digital governance.

Addressing online harms therefore also requires examining the technological systems that enable and amplify them. From a computing perspective, many online harms are not accidental but are shaped by system design choices. Recommendation algorithms, engagement-based ranking systems, targeted advertising infrastructures, and automated content moderation systems determine what users see, how information spreads, and how harms scale across platforms. Safety-by-design approaches, algorithmic risk assessments, and human rights impact assessments should therefore become standard components of system development, in the same way that security testing and usability evaluation are standard in software engineering.
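To make the design-choice argument concrete, here is a deliberately simplified sketch (all post names, scores, and weights are invented for illustration; real ranking systems are vastly more complex) contrasting a purely engagement-based ranking objective with a safety-by-design variant that penalizes and filters high-risk content:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    engagement: float  # predicted engagement (clicks/shares), 0..1
    harm_risk: float   # hypothetical harm-risk score from a classifier, 0..1

def rank_by_engagement(posts):
    """Pure engagement objective: whatever keeps users interacting wins."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def rank_safety_by_design(posts, risk_penalty=2.0, risk_cap=0.8):
    """Safety-by-design variant: filter items above a risk cap,
    then down-weight remaining items by their estimated harm risk."""
    eligible = [p for p in posts if p.harm_risk < risk_cap]
    return sorted(
        eligible,
        key=lambda p: p.engagement - risk_penalty * p.harm_risk,
        reverse=True,
    )

feed = [
    Post("outrage bait", engagement=0.95, harm_risk=0.85),
    Post("borderline rumour", engagement=0.80, harm_risk=0.50),
    Post("local news report", engagement=0.60, harm_risk=0.05),
]

print([p.title for p in rank_by_engagement(feed)])
# Engagement-only ranking surfaces the riskiest item first.
print([p.title for p in rank_safety_by_design(feed)])
# The safety-weighted objective filters the riskiest item and
# promotes the low-risk post, despite its lower engagement.
```

The point of the sketch is that the harm profile of a feed is a direct consequence of the objective function the designers choose, which is why algorithmic risk assessments belong in the development process alongside security testing.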

Transparency, Accountability, and Human Rights

A rights-centered approach must be anchored in global standards such as the Universal Declaration of Human Rights and the Council of Europe human rights instruments. This includes ensuring that technology companies disclose the harms they identify, the mitigation measures they adopt, and evidence of those measures’ effectiveness. AI systems should undergo rigorous human rights impact assessments, and certain uses, such as mass biometric surveillance or discriminatory predictive policing, should be prohibited outright. Moreover, digital products must meet safety standards in the same way as physical products, with civil penalties and substantial financial fines imposed on technology companies that fail to protect users, mitigate risks, or address harmful content.

Responding to Harm and Supporting Victims

Robust digital safety ecosystems must address not only prevention but also response and recovery. Clear reporting mechanisms, trauma-informed support services, digital helplines, and coordination between governments, civil society, and technology companies are essential components of online safety infrastructure. Furthermore, supporting victims of online abuse and exploitation is critical not only for individual recovery but also for improving platform governance, as incident reporting and response systems help identify systemic risks and platform design failures. Victim support should therefore be considered a core component of digital safety governance, not an afterthought.

In sum, online safety is not solely a technological challenge; it is a societal, democratic, and human rights imperative. Protecting individuals and communities from digital harm requires coordinated action grounded in accountability, transparency, and equity. Governments, civil society, educators, social workers, parents, and tech companies all share responsibility for building safer digital futures. Without urgent action, online harms will continue to undermine public trust, weaken democratic institutions, and deepen global inequalities. With thoughtful regulation, robust digital literacy programs, and strong human rights safeguards, we can build a digital world that protects, empowers, and includes everyone.

 

Created by:
Kadian Davis-Owusu

Kadian has a background in Computer Science and pursued her PhD and post-doctoral studies in the fields of Design for Social Interaction and Design for Health. She has taught a number of interaction design courses at the university level including the University of the West Indies, the University of the Commonwealth Caribbean (UCC) in Jamaica, and the Delft University of Technology in The Netherlands. Kadian also serves as the Founder and Lead UX Designer for TeachSomebody and is the host of ...