Satyavrat KK
In May 2021, the U.K. government published the draft ‘Online Safety Bill’, a 145-page document accompanied by 123 pages of explanatory notes and a 146-page impact assessment on the ramifications of the Bill. The government positions the Bill as one that ushers in “a new age of accountability for tech and brings fairness and accountability to the online world”.1 The Bill’s earlier avatar has had a contentious history: in April 2019, the U.K. government produced a White Paper setting out proposals for keeping U.K. internet users safe online and managing ‘online harms’. The White Paper’s proposals cover “online content or activity that harms individual users, particularly children, or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration”. The government received around 2,400 responses, many of them focusing on the proposals’ impact on freedom of expression and on the businesses that would have to perform the duty of care. The government soon released its response to the consultation, and finally introduced the draft Online Safety Bill in 2021.2
The government’s new regulatory framework will apply to companies whose services:
Host user-generated content which can be accessed by users in the UK. This means an internet service by means of which content that is generated by a user of the service, or uploaded to or shared on the service by a user, may be encountered by another user, or other users, of the service. This brings a wide range of services under the Bill’s purview, from small services such as TalkPunk, a site dedicated to U.K. punk rock, all the way to Facebook. This wide scope has been critiqued, as we shall see in the following passages.
Involve the provision of search engines.
There is also the non-trivial addition of ‘Category 1’ services, a designation reserved for the largest online user-generated content platforms. Broadly, the Bill will expect the above-mentioned categories of service to comply with a new set of duties, outlined further below.
The Office of Communications, commonly known as Ofcom, is the government-approved regulatory and competition authority for the broadcasting, telecommunications and postal industries of the United Kingdom. Officially, it has a statutory duty to represent the interests of citizens and consumers by promoting competition and protecting the public from harmful or offensive material. According to the new Bill, Ofcom will have powers to issue fines of £18 million or 10% of qualifying worldwide revenue (whichever is higher) for non-compliance. Ofcom also has a range of enforcement powers relating to business disruption measures and use of technology warning notices under the Draft Bill. In certain circumstances, criminal liability may also fall on the shoulders of senior members of non-compliant service providers where they have failed to take reasonable steps to prevent offences from being committed.3:1
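To make the ‘whichever is higher’ penalty ceiling concrete, here is a minimal, purely illustrative sketch in Python. The function name max_penalty and the single revenue figure passed to it are assumptions made only for illustration; the Draft Bill itself defines ‘qualifying worldwide revenue’ in considerably more detail.

```python
def max_penalty(qualifying_worldwide_revenue_gbp: float) -> float:
    """Illustrative only: the Draft Bill's stated ceiling is the higher of
    GBP 18 million or 10% of qualifying worldwide revenue."""
    fixed_cap_gbp = 18_000_000   # the GBP 18 million figure in the Draft Bill
    revenue_share = 0.10         # 10% of qualifying worldwide revenue
    return max(fixed_cap_gbp, revenue_share * qualifying_worldwide_revenue_gbp)


# A platform with GBP 1 billion in qualifying worldwide revenue faces a ceiling
# of GBP 100 million; for a smaller service, the fixed GBP 18 million cap dominates.
print(max_penalty(1_000_000_000))   # 100000000.0
print(max_penalty(50_000_000))      # 18000000
```

The point of the sketch is that the £18 million figure acts as a floor: for any service whose qualifying worldwide revenue exceeds £180 million, the 10% ceiling becomes the larger, and therefore operative, number.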
Many corners of the U.K. media praised the Bill, beginning with no less than a glowing preview by Oliver Dowden, the U.K.’s digital secretary, in the Telegraph: “Taken in full, this bill represents one of the most comprehensive and balanced responses to the digital revolution since technology began transforming our lives three decades or so ago. We’re entering a new age of accountability for tech - for the good of everyone who uses it.”4 Tim Barker, the CEO of Kooth, a digital platform for mental healthcare, said, “From a mental health perspective, we have a duty of care as a nation to ensure that online content is appropriate. This is not just for those with current or emerging mental health difficulties but for everyone – we are all at risk of developing poor mental health given the right set of circumstances and triggers. This is the reason that the online safety bill is so important to the industry.”5
However, others were more critical. The London-based digital rights organisation Global Partners Digital called it “the most comprehensive and demanding piece of online content regulation in the world.”6 Alex Hern, the tech editor of The Guardian, had a stern warning: “The message of the bill is simple: take down exactly the content the government wants taken down, and no more. Guess wrong and you could face swingeing fines. Keep guessing wrong and your senior managers could even go to jail. Content moderation is a hard job, and it’s about to get harder.”7
One of the more categorical and combative dismissals of the Bill came from Heather Burns of the UK’s Open Rights Group: “You’re going to read a lot today about the government’s plans for the Online Safety Bill on #onlineharms, a regulatory process which has eaten up much of the past two years of my professional work. I suppose if I had a hot take to offer after two years, it’s this: …”8
In reply to some of these concerns, the government put out a response that aimed to be a bit more precise about whom the Bill will not target.
Taking a look at the broadly Western perspective, where most of the debates around tech are born and subsequently evolve, is crucial to understanding where the compass needle points in global policy debates.
“The internet is the greatest free-market innovation in history. It’s allowed us to live, play, work, learn and speak in ways that were inconceivable a generation ago. But it didn’t have to be that way. Its success is due in part to regulatory restraint. Democrats and Republicans decided in the 1990s that this new digital world wouldn’t be centrally planned like a slow-moving utility.”
Ajit Pai, Chairman of the FCC (Federal Communications Commission)10
The Federal Communications Commission (FCC) is an independent agency of the United States government that regulates communications by radio, television, wire, satellite, and cable across the United States. We see a bit of the FCC’s guiding logic in these words by Thomas Hazlett, speaking of the revolutionary advent of the iPhone: “Regulatory structure had to fade away for the new wireless world to evolve. Now whole new sectors are created, and nary a thought is given to the fact that the platform it sits on is a liberalized, deregulated spectrum market that frees the competitive forces that were so recently thought not up to the task at hand. What Herbert Hoover asserted had to be done by the state, it turns out, can only be done by open markets.”11 On the surface, it would seem we have come a long way from the words of both Pai and Hazlett with the advent of the Online Safety Bill, a detailed document of rules and regulations expressing, at least superficially, the very opposite of ‘regulatory restraint’. We would also do well to remember that the FCC under Pai fought tooth and nail against net neutrality, the principle that internet service providers should treat all data on their networks equally and not favour particular sources or services. The predominant critique then was that this position favoured the megacorporations and tech giants of the world. Today, the U.K. government positions the Bill as a means to rein in big tech, but critics are quick to point out that this is a red herring, with the government’s primary interest lying in controlling the U.K. populace.
The parallels between the Online Safety Bill and the recent IT Rules 2021 released by the Government of India are too striking to miss. The primary messaging by both governments suggests that there is an urgency to ensure big social media sites are regulated, as well as a need to curb CSAM (Child Sexual Abuse Material) in the digital sphere. In his preview of the Bill in the Telegraph, Dowden makes this amply clear in his opening lines: “The internet is an amazing thing, and has totally redefined how we connect and communicate. But if platforms like Facebook and Twitter are the new town square, then that square is in serious need of a clean-up. In the last few weeks we’ve seen footballers leading a mass boycott of social media to protest abhorrent racist abuse. Women are trolled and threatened on a daily basis. Children are exposed to cyberbullying, sexual grooming and suicide content.”4:1 Likewise, Rakesh Maheshwari, Group Coordinator and Head of Cyber Law, Ministry of Electronics & Information Technology (MeitY), said that the Indian IT Rules seek “to curb the spread of problematic unlawful information on intermediary platforms like revenge porn and related content.”12
| The Draft Online Safety Bill 2021 | IT Rules 2021 |
|---|---|
| Separate liabilities are outlined for ‘Category 1’ services, i.e. the largest online user-generated content platforms (Facebook, Twitter, Instagram, etc.) | The Rules lay down additional due diligence requirements to be observed by “significant social media intermediaries”, defined as intermediaries that primarily or solely enable online interaction between two or more users, allow them to create, upload, share, disseminate, modify or access information using their services, and have more than 50 lakh (5 million) registered users. All popular social networking platforms such as WhatsApp, Facebook, Instagram and Twitter would thus be required to observe these additional due diligence requirements |
| Category 1 services should have appropriate reporting systems and complaints procedures on which the service provider can take appropriate action | Significant social media intermediaries are obligated to set up an entire grievance redressal team as well as appoint a Chief Compliance Officer. Further, a three-tier system of grievance redressal has been prescribed: it begins with an internal grievance officer attempting to resolve the complaint, but could potentially go all the way up to a Government body making the final call |
| Category 1 services must take proportionate steps to mitigate and manage the risks of harm to children, both those identified in the children’s risk assessment and those present on the service | Intermediaries are required to develop automated tools or other mechanisms to proactively identify information that depicts any act or simulation in any form depicting rape, child sexual abuse or related conduct |
| Category 1 services must have systems and processes in place that minimise the presence and dissemination of illegal content and the length of time such content is present on the site, and must swiftly take down illegal content when alerted to its presence | The Grievance Officer of an intermediary is responsible for acknowledging complaints within 24 hours and resolving them within a reduced timeline of 15 days |
| Category 1 services have a duty to make and keep a written record of any steps taken to comply with a relevant complaint | When an intermediary receives an order/notification from a court, the Government or its lawful agencies to remove or disable access to information, such information and associated records must be preserved by the intermediary for 180 days or for such period as may be required by the court, Government or lawful agencies. Separately, any information collected from users during the registration process must be retained for 180 days after cancellation of registration |
| In the event of non-compliance, Category 1 services could face fines of up to £18 million (US$25 million) or be blocked in the U.K. | If an intermediary fails to observe any of the rules laid down, it loses the protection afforded to it by Section 79 of the Information Technology Act, the ‘safe harbour’ provision which grants conditional immunity to intermediaries from liability for third-party acts |
In India, we are already hearing news of Google and Facebook seeking to comply with the Rules with a nimble-footedness that speaks to the sheer extent of their resources and capabilities. This lends weight to the aforementioned claim by Heather Burns that the rules impact “all services of all sizes, based in the UK or not.” The serious criminal liability and punitive damages in the event of non-compliance in both countries are another uniting factor: large platforms could face fines of up to £18 million (US$25 million) or be blocked in the U.K. if they fail to comply, while senior managers could face criminal action.3:2 Meanwhile in India, the Rules state that if an intermediary fails to observe any of the rules laid down, it loses the protection afforded to it by Section 79 of the Information Technology Act, the ‘safe harbour’ provision which grants conditional immunity to intermediaries from liability for third-party acts; additionally, the Chief Compliance Officer is subject to criminal liability.13 In 2018, the U.N. special rapporteur on freedom of expression said that tying heavy penalties to content regulation would chill freedom of expression, a concern that both the Indian and U.K. press have flagged.14 Where the Indian and U.K. contexts significantly diverge is in the consultative participation of policy experts, digital rights organisations, and civil society in the U.K. scenario (over 2,400 responses to the original White Paper were considered and debated before the draft Bill was introduced), and the lack of any such process in the Indian context. At the core of the trepidations around the Bill is the suspicion that it has a wider sphere of influence than purported, with the stated focus on significant social media intermediaries and CSAM serving as a red herring. On the CSAM front, both governments have pre-existing laws: CSAM distribution, possession, and production are criminalised in the U.K. through the Sexual Offences Act, 2003, while in India CSAM is covered under the Protection of Children Against Sexual Offences Act, 2012 (POCSO Act), the Information Technology Act, 2000 (IT Act), and the Information Technology (Procedure and Safeguards for Interception, Monitoring, and Decryption of Information) Rules, 2009.15 On the significant-intermediary front, with large social media companies seeking to comply with the Rules, and many of them having taken government-friendly stances in many parts of the Global South, we are yet to see how the entire picture unfolds in the times to come.
Kenyon, Tilly. “The Online Safety Bill: What Is It and What Does It Mean?: Digital Transformation.” Technology Magazine, 2021, technologymagazine.com/digital-transformation/online-safety-bill-what-it-and-what-does-it-mean. ↩︎
Department for Digital, Culture, Media & Sport. “Online Harms White Paper.” GOV.UK, GOV.UK, 15 Dec. 2020, www.gov.uk/government/consultations/online-harms-white-paper. ↩︎
Department for Digital, Culture, Media & Sport. “Draft Online Safety Bill.” GOV.UK, GOV.UK, 12 May 2021, www.gov.uk/government/publications/draft-online-safety-bill. ↩︎ ↩︎ ↩︎
Dowden, Oliver. “Oliver Dowden’s Opinion Piece for The Telegraph on the Online Safety Bill.” GOV.UK, GOV.UK, 11 May 2021, www.gov.uk/government/speeches/oliver-dowdens-opinion-piece-for-the-telegraph-on-the-online-safety-bill. ↩︎ ↩︎
Barker, Tim. “The Online Safety Bill: a Step towards Digitally Safe Mental Health Support.” Pharmaphorum, Pharmaphorum, 19 May 2021, pharmaphorum.com/views-and-analysis/the-online-safety-bill-a-step-towards-digitally-safe-mental-health-support/. ↩︎
Global Partners Digital. First Thoughts on the UK’s Draft Online Safety Bill. 2021, www.gp-digital.org/first-thoughts-on-the-uks-draft-online-safety-bill/. ↩︎
Hern, Alex. “Online Safety Bill: a Messy New Minefield in the Culture Wars.” The Guardian, Guardian News and Media, 12 May 2021, www.theguardian.com/technology/2021/may/12/online-safety-bill-why-is-it-more-of-a-minefield-in-the-culture-wars. ↩︎
Burns, Heather. “Why the Online Safety Bill Threatens Our Civil Liberties.” Politics.co.uk, 27 May 2021, www.politics.co.uk/comment/2021/05/26/why-the-online-safety-bill-threatens-our-civil-liberties/. ↩︎
Heywood, Debbie. “Out of Harm’s Way? Online Safety Bill Published.” Lexology, 24 May 2021, www.lexology.com/library/detail.aspx?g=d0bde289-2857-4098-85bb-562cc2f20867. ↩︎
AARP. “Ajit Pai’s Policing the Internet as FCC Chairman.” AARP, 4 June 2019, www.aarp.org/politics-society/government-elections/info-2019/ajit-pai-chairman-fcc.html. ↩︎
Pai, Ajit, and Thomas Hazlett. “The Untold History of FCC Regulation.” Cato Policy Report, May/June 2018, www.cato.org/policy-report/may/june-2018/untold-history-fcc-regulation. ↩︎
ET Telecom Editorial. “Meity Says Intermediary Guidelines Will Not Be Used to Break Encryption - ET Telecom.” ETTelecom.com, 7 May 2021, telecom.economictimes.indiatimes.com/news/meity-says-data-protection-rules-will-not-be-used-to-break-encryption-of-intermediaries/82455481. ↩︎
Kalra, Malavika Kapila. “Intermediary Liability under the Information Technology Act: Time for an Amendment?” Bar and Bench - Indian Legal News, 2021, www.barandbench.com/columns/intermediary-liability-under-the-information-technology-act-time-for-an-amendment. ↩︎
United Nations. “2018 Thematic Report to the Human Rights Council on Content Regulation.” OHCHR, 2018, www.ohchr.org/EN/Issues/FreedomOpinion/Pages/ContentRegulation.aspx. ↩︎
Human Rights Document. “Trends in Online Child Sexual Abuse Material.” Human Rights Documents Online, doi:10.1163/2210-7975_hrd-9926-20180017. ↩︎