Tech companies, the media and regulators must come together to prevent online harm

To protect internet users, particularly children, the media, tech companies and regulators must cooperate to develop an effective solution

By: EBR - Posted: Tuesday, January 10, 2023

by Noam Schwartz*

Recent years have seen a sharp increase in attention to trust and safety from individuals, governments and the media.

In fact, if Google Trends is any indicator, public interest in content moderation, the core function of trust and safety, has risen twenty-five-fold over the last 10 years.

But in this field, there are too many opinions and too little cooperation.

Big tech, media and regulators: a three-way stand-off

Until recently, the US’s Section 230 was the “law of the land” for online safety. Enacted in the mid-1990s, the law limited the liability of technology companies for the content hosted on their platforms.

More than 25 years later, technology platforms have been used to share child sexual abuse material (CSAM), make calls for violence and spread hate speech, disseminate disinformation that damages the fabric of our societies and live-broadcast terror attacks and beheadings. While not always legally liable, they have been handling complex societal issues with little guidance from legislative bodies — facing significant scrutiny as they do so.

The lack of legislation and cooperation has led to a growing perception that technology platforms, governments and the media sit on opposing ends of the harmful content debate. Platforms are accused of limiting free speech by some, and of profiting from the proliferation of online harm by others. Legislators are perceived as overbearing by some and overextending their reach by others, while the media is seen as stirring the pot and driving public scrutiny.

However, new laws that aim to provide specific guidelines on online safety have been introduced. The EU’s Digital Services Act and the UK’s Online Safety Bill aim to make online interactions safer, but they still fail to take into account the unique perspective and expertise that technology platforms have gained over the years. This may result in a missed opportunity for a holistic solution to online safety.

Collaboration is key to preventing online harm

A collaborative approach is possible and, in fact, essential. Take the UK’s Age Appropriate Design Code. Launched in September 2021, the code involved an iterative process, during which its enforcer, the Information Commissioner’s Office, issued guidance and clarifications based on direct communication with dozens of technology platforms.

Moreover, we have seen some constructive collaborations involving civil groups and both technology platforms and government bodies. Groups like the Family Online Safety Institute and the National Center for Missing and Exploited Children act as mediators between tech companies and the government on issues related to child safety. The 5Rights Foundation has supported British regulators in building out child safety codes such as the Age Appropriate Design Code.

This collaborative approach can be applied to content moderation. Take the apparently simple directive that when harmful content is detected on a platform, action should be taken. When these previously voluntary actions are made law, it is important to understand the details and limitations — something which can be achieved by tapping into the wealth of knowledge that technology platforms have acquired over the years.

What is harmful content?

How does one decide what counts as harmful? Is it only graphic CSAM, or do textual descriptions of harm against children also count? What about disinformation? Where does one draw the line between harmless lies and dangerous narratives that can harm public health?

What constitutes ethical detection?

When is it enough to wait for content to reach moderators via flagging, and when are more proactive measures needed? If links shared on a platform lead users to harmful content, should these be detected too, or should detection cover only direct, on-platform violations? What privacy and encryption constraints may limit proactive detection? Who teaches artificial intelligence algorithms what to detect, and how do those algorithms understand the context of a knife in a cooking show versus one in an attack? In human-based detection by content moderators, how does one balance the need for safe platforms with the need for moderator wellbeing?
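To make these trade-offs concrete, here is a minimal, hypothetical sketch of the two detection routes raised above: reactive review triggered by user flags, and proactive scanning by a classifier that weighs context. The names (Item, classify, needs_review) and the thresholds are invented for illustration and do not represent any platform's actual system.

```python
from dataclasses import dataclass

@dataclass
class Item:
    text: str
    context: str          # e.g. "cooking_tutorial", "news_report", "unknown"
    user_flags: int = 0   # how many users have reported this item

def classify(item: Item) -> float:
    """Toy stand-in for a trained model: returns a harm score in [0, 1].

    A real model would weigh the surrounding context, which is why a knife
    in a cooking tutorial should score lower than one elsewhere."""
    score = 0.0
    if "knife" in item.text.lower():
        score = 0.2 if item.context == "cooking_tutorial" else 0.8
    return score

def needs_review(item: Item, flag_threshold: int = 3, score_threshold: float = 0.7) -> bool:
    # Reactive route: enough user flags send the item to human moderators.
    if item.user_flags >= flag_threshold:
        return True
    # Proactive route: the classifier scans content before anyone reports it.
    return classify(item) >= score_threshold

# Same object, different context, different outcome.
print(needs_review(Item("best knife for dicing onions", "cooking_tutorial")))  # False
print(needs_review(Item("bringing a knife to the rally", "unknown")))          # True
```

In practice the classifier would be a trained model and the thresholds a matter of policy; the point is only that both routes, and the context signal, have to be designed deliberately.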

What actions should be taken?

How do platforms decide when to remove content, label it or remove the user entirely? How do they handle questions of freedom of speech versus freedom from harm?

Considerations should also include the cultural aspect of harmful content and the potential legal ramifications of cross-border content sharing. Content that is harmful in one culture may not be in another, and platforms need to weigh the consequences of taking action in one country versus inaction in another over a single piece of content.

Enforcement actions are a double-edged sword: platforms are criticised for taking action yet denigrated when they don’t.
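As a rough illustration of how these enforcement and jurisdiction questions interact, here is a hypothetical sketch that maps a content category and a country to an action ranging from labelling to removal. The categories, baseline actions and country rules are invented for illustration and are not drawn from any real platform's policy.

```python
from enum import Enum

class Action(Enum):
    NO_ACTION = "no_action"
    LABEL = "label"                # keep the content up, but add a warning or context label
    GEO_RESTRICT = "geo_restrict"  # withhold only where it breaks local law
    REMOVE = "remove"
    BAN_USER = "ban_user"

# Illustrative policy table: category -> (baseline action, jurisdictions where it is illegal)
POLICY = {
    "csam":           (Action.BAN_USER, {"*"}),        # removed everywhere, user banned
    "terror_content": (Action.REMOVE,   {"*"}),
    "hate_speech":    (Action.LABEL,    {"DE", "FR"}), # illegal in some countries only
    "misinformation": (Action.LABEL,    set()),
}

def decide(category: str, country: str) -> Action:
    baseline, illegal_in = POLICY.get(category, (Action.NO_ACTION, set()))
    if "*" in illegal_in:
        return baseline            # prohibited everywhere under platform policy
    if country in illegal_in:
        return Action.GEO_RESTRICT # withhold where local law requires it
    return baseline                # elsewhere, apply the softer baseline action

print(decide("hate_speech", "DE"))  # Action.GEO_RESTRICT
print(decide("hate_speech", "US"))  # Action.LABEL
```

The point of the sketch is the shape of the decision, not its contents: the same finding can justify different actions in different places, which is exactly the balance described above.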

Only by taking this complete picture into account can legislators make sound decisions about mandating the processes that some technology platforms have been facilitating voluntarily for years.

The content moderation balance

Content moderation requires a balance between individual freedoms, the needs of multiple stakeholders, technological constraints and desired outcomes. This can only be achieved through collaboration.

Governments, technology platforms and civil groups can and must work together to:

1. Understand the harms lurking in online spaces. Beyond the obvious threats, bad actors from all abuse areas take advantage of digital spaces to cause harm. Before taking action, a thorough understanding of their motivations, techniques and tools is critical.

2. Analyse the complex challenges. Detecting harmful content raises many technological and ethical challenges. It is essential to understand what technology companies can actually do against these online harms, and what this means for user privacy.

3. Design new, more comprehensive solutions. This problem extends far beyond the digital spaces we take part in. With that in mind, action against some of these online harms cannot be left to technology platforms alone.

It is only when technology platforms and regulators approach this problem with an openness to communicate, a desire to truly collaborate and a feeling of true partnership that an optimal solution can be reached, and the victims of online harm protected.

*CEO & Co-Founder, ActiveFence
**first published in: Weforum.org
