National approaches to online harms regulation
How will approaches to regulation work in practice?
The Ada Lovelace Institute, the Centre for Media, Technology and Democracy at McGill University and the Centre for International Governance Innovation (CIGI) co-hosted this online event to discuss the challenges and opportunities that online harms bills raise, and how the regulators charged with enforcing these powers might proceed.
From the UK’s Online Safety Bill to Germany’s NetzDG, an increasing number of national governments are drafting or implementing legislation to address ‘online harms’, with the goal of creating a safer online environment.
While the details of these national bills differ, they broadly seek to impose new duties of care on online platforms, requiring them to remove both illegal and ‘legal but harmful’ content from their services.
Proposed mechanisms to ensure these duties are met include requiring platforms to disclose their content-removal practices, to take down or disable certain kinds of content within a set period of time, and to adopt automated content-moderation technologies. Some bills also create new regulatory powers to audit and inspect the prevalence of harmful material on a platform and to assess a company’s moderation practices.
Some civil society organisations have raised concerns about how ‘harms’ will be defined under these bills and whether they conflict with commitments to free expression. This raises a challenging question: what does a ‘good’ online harms bill look like, and what should it set out to achieve?
As new bills emerge in Canada and the UK, the time is ripe for a discussion about what lessons we can draw from the objectives, effectiveness and approaches to online harms legislation in different jurisdictions.
Watch back the event:
Chair
- Sonja Solomun, Research Director, Centre for Media, Technology and Democracy at McGill University

Panellists
- Caio Machado, Vero Institute (speaking on Brazil)
- Daphne Keller, Director of Program on Platform Regulation, Cyber Policy Center (speaking on the USA)
- Lex Gill, Research Fellow, The Citizen Lab (speaking on Canada)
- Mark Bunting, Director, Online Safety Policy, Ofcom (speaking on the UK)
- Prabhat Agarwal, Head of Unit, European Commission
In this webinar, the speakers discuss:
- What are governments trying to achieve with online safety legislation? How are they defining the problem to solve, and what does success look like?
- What policy mechanisms and requirements are available to meet these goals? What kinds of powers (e.g. auditing, assessment) or requirements (e.g. transparency reports) are necessary?
- What role do civil society organisations and others have to play in these bills?
- How do we ensure international coordination and cooperation on these issues?
Related content
Accountability of algorithmic decision-making systems
Developing foundational tools to enable accountability of public administration algorithmic decision-making systems.
Algorithmic accountability for the public sector
Research with AI Now and the Open Government Partnership to learn from the first wave of algorithmic accountability policy.
Regulating for algorithm accountability: global trajectories, proposals and risks
Exploring how we can ensure that algorithmic systems and those deploying them are truly accountable