Regulating Social Media Intermediaries: Part II of the IT Rules, 2021

The clarion call to regulate the conduct of big tech companies worldwide finally reached our shores when the Ministry of Electronics and Information Technology (“MEITY”) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“Rules”) on 25th February, 2021. In our previous article here, we focused on the origin of the code of ethics and the grievance redressal mechanism that publishers of online curated content are required to follow. In continuation, this article discusses Part II of the Rules: the due diligence measures to be followed by social media intermediaries (“intermediaries”), the additional measures applicable to certain significant social media intermediaries (“significant intermediaries”), the legal frameworks developed by other countries to regulate social media, and how the significant intermediaries are already dragging their feet by challenging the validity of the Rules in court.


India remains the largest market for all the major social media platforms barring Twitter. With mobile data prices plummeting after 2016, the major social media platforms have become the first choice for Indians to access digital content, including news in the form of articles and videos. This is corroborated by a study[1] published by the Reuters Institute in collaboration with the University of Oxford, which found that among respondents under 35 years of age, 56% relied on online news through social media and search engines, as against a paltry 16% who relied exclusively on print. However, given the ubiquitous reach of social media platforms, India has time and again witnessed their misuse to further criminal activities such as the spread of fake content, incitement of communal violence, circulation of obscene content, and financial fraud[2].

Social media platforms such as Facebook and Twitter have been at the forefront of introducing measures against such elements by verifying the validity of news shared on their platforms, far more proactively after the Capitol Hill siege earlier this year, which highlighted what unrestrained behaviour on social media can lead to. Nevertheless, that infamous event aside, the intermediaries, and the significant intermediaries in particular, are perceived as having neglected their duty of care and largely ignored the public outcry to strengthen their mechanisms to avoid such fiascos in the future.

The larger threat of circulation of fake news and obscene content on social media platforms affects society at a macro level. At the individual level, several instances of high-handedness by the platforms have also been observed, leaving users at the whims and fancies of moderators who have the ultimate authority to decide whether those users may continue to access the platform and exercise their freedom of expression as envisaged in the Constitution. The recent instance of Twitter flagging certain content as “manipulated media” is a prime example, albeit one where there is more than meets the eye. Interestingly, MEITY has been sympathetic to the cause of users of these platforms whose equal opportunity to access and communicate has been trampled upon after they invested time, energy, and money in developing their profiles[3].

To effectively secure the interests and ensure the safety of social media users, the Government notified the Rules with the intent of establishing a “harmonious, soft-touch oversight mechanism”[4]. First and foremost, MEITY has bifurcated social media platforms into intermediaries and significant intermediaries on the basis of the number of their users in India. Accordingly, the likes of WhatsApp, YouTube, Facebook, Instagram, and Twitter have been classified as “significant intermediaries”.[5] These significant intermediaries are required to undertake additional measures to fulfil the objectives of the Rules.

Due Diligence Measures

Under the Rules, every intermediary must:

(i) Publish and display rules and regulations, a privacy policy, and a user agreement that prohibit users from posting, publishing, or transmitting content which:

  • Is defamatory, obscene, invasive of another’s privacy, racially or ethnically objectionable, or relates to money laundering or gambling;
  • Is harmful to children;
  • Infringes intellectual property rights;
  • Deceives or misleads the addressee about the origin of the message, or knowingly and intentionally communicates information that is patently false or misleading in nature;
  • Impersonates another person;
  • Threatens the unity, sovereignty, and defence of India, incites the commission of a cognizable offence, or prevents the investigation of an offence;
  • Contains a software virus;
  • Is patently false and untrue, and is written or published in any form with the intent to mislead or harass a person, entity, or agency for financial gain or to cause injury to any person.

(ii) Warn users at least once a year that non-compliance with the aforementioned rules may lead to termination of their access to the intermediary, removal of the non-compliant information, or both;

(iii) In the event of a breach of the aforementioned rules and subsequent removal of the impugned information, preserve the information and associated records for 180 days, or for a longer period if required by courts or government agencies;

(iv) Retain information collected from a user at registration for a period of 180 days after cancellation or withdrawal of the user’s registration;

(v) Provide information to government agencies within 72 hours of receiving a written order issued for the prevention, detection, investigation, or prosecution of offences;

(vi) Establish a grievance redressal mechanism that acknowledges complaints from users regarding violations of the Rules within 24 hours and resolves them within 15 days of receipt. Further, for complaints relating to pornographic content, access to such content must be disabled within 24 hours.
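Since the timelines above are fixed offsets from the moment a complaint is received, a compliance workflow can compute its deadlines mechanically. A minimal sketch in Python (the function and field names are illustrative, not drawn from the Rules themselves):

```python
from datetime import datetime, timedelta

# Offsets prescribed by the due diligence measures (illustrative constants)
ACK_WINDOW = timedelta(hours=24)       # acknowledge every complaint within 24 hours
RESOLVE_WINDOW = timedelta(days=15)    # resolve complaints within 15 days
TAKEDOWN_WINDOW = timedelta(hours=24)  # disable access to pornographic content within 24 hours

def grievance_deadlines(received_at: datetime, is_pornographic: bool = False) -> dict:
    """Compute the acknowledgment and resolution deadlines for a complaint
    received at `received_at`."""
    deadlines = {
        "acknowledge_by": received_at + ACK_WINDOW,
        "resolve_by": received_at + RESOLVE_WINDOW,
    }
    if is_pornographic:
        # Stricter takedown timeline applies in addition to the above
        deadlines["disable_access_by"] = received_at + TAKEDOWN_WINDOW
    return deadlines

# Example: a complaint received on the Rules' implementation date
received = datetime(2021, 5, 26, 10, 0)
print(grievance_deadlines(received, is_pornographic=True))
```

This is only a sketch of the deadline arithmetic; a real compliance system would also need to record acknowledgments, track resolution status, and handle time zones.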

Additional Due Diligence Measures for Significant Intermediaries

(i) Appoint a Chief Compliance Officer, a nodal contact person, and a Resident Grievance Officer, responsible respectively for ensuring compliance with the Rules, for coordination with law enforcement agencies, and for the grievance redressal mechanism;

(ii) Publish compliance reports on a monthly basis;

(iii) Significant intermediaries involved in providing messaging services shall enable the identification of the originator of the impugned information pursuant to a court order;

(iv) Provide users with the option to verify their identity on the platform, along with a mechanism to track the status of their complaints through a unique ticket number issued for each complaint.

International Comparable Laws

India is neither the first nor the only country seeking to impose greater government regulation on social media platforms, nor the first whose posture has prompted such platforms to turn to Artificial Intelligence (AI) to censor and monitor content before the Government is compelled to get involved.

Australia passed the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act in 2019, introducing criminal penalties for tech companies and their executives for failure to prevent the circulation of such content on their platforms. It has also planned legislation mandating a 48-hour period within which social media companies must take down harassing, abusive, or revenge-porn posts if ordered to do so by the eSafety Commissioner’s office; failure to comply could invite fines of up to $555,000 for websites and social media companies and up to $111,000 for individuals. The conversation on digital content regulation peaked in the country in 2015, after the suicide of TV personality Charlotte Dawson, who had been targeted by a cyber-bullying campaign on Twitter. This led to the Enhancing Online Safety Act, mandating that social media companies take down harassing and abusive posts or face crippling fines.[6]

Since 1995, South Korea’s Information & Communication Ethics Office has had the power to order information providers to delete or restrict material that “encroaches on public morals”, causes a “loss of national sovereignty”, or constitutes “information that may harm youths’ character, emotions and sense of value”.

Germany has set a very high standard for regulating social media platforms. Given its history, it has long taken a hard line on hate speech, and its “NetzDG” law (also known as the Facebook Act), in force since January 2018, requires platforms to remove “manifestly unlawful” posts within 24 hours or incur fines of up to €50m. It is a classic “take down, ask later” approach that has been considered necessary and effective in the crackdown on online extremism.[7] Although some believe it has started a “draconian censorship regime”, the law is notable for delegating legal determinations to the tech firms themselves instead of placing them under government scrutiny: the companies choose what to keep and what to delete.

The UK has embarked on a concerted process to regulate online content through a series of white papers, government briefs, and legislative consultations, in a deliberative fashion characteristic of British lawmaking. The Government is close to introducing legislation requiring social media platforms that host user-generated content (Facebook, Instagram, TikTok, Twitter, etc.) to prevent the spread of content relating to sexual abuse, terrorism, suicide, cyber-bullying, pornography, and the like. The guiding motivation behind these measures is the protection of children and young audiences from online harms. As for streaming platforms, some countries, such as the UK (and Australia in the case of Netflix), let them self-regulate, while others directly determine the content regulations they must adhere to.[8]

The US, however, has charted a different route: Section 230 of the Communications Decency Act, 1996 provides immunity to online services, including intermediaries, from liability for the transmission of third-party content. This provision, dubbed the “26 words that created the Internet” by Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy, came under fire from former President Donald Trump, who sought its removal after accusing social media platforms of being biased against conservatives. Section 230 has left US social media companies largely self-regulating. That does not mean the US government has not tried to increase its ability to order content taken offline: laws such as the Child Online Protection Act attempted to make it illegal for websites to host, for commercial purposes, material deemed harmful to minors, but were struck down as unconstitutional. In general, laws placing liability on the platforms themselves have ended up mired in legal challenges revolving around the First Amendment and Section 230. It remains illegal, however, for users of such platforms to upload prohibited content.[9]

After the storming of the US Capitol Building in January 2021, the Canadian government moved to tighten social media regulation. Earlier, after intelligence reports surfaced in April 2019 regarding possible foreign interference in Canadian national elections, the country had passed the Elections Modernization Act to close loopholes open to manipulation by foreign entities and miscreants on social media platforms.

Significant Intermediaries approaching Courts

With the implementation date of the rules pertaining to significant intermediaries set for 25th May 2021, the inherent reluctance of these intermediaries to implement some of the contentious provisions has opened a Pandora’s box. The most prominent case[10] involves WhatsApp challenging the ‘first originator of the message’ clause before the Hon’ble High Court of Delhi, claiming that it is incompatible with its message-encryption standards and contravenes the fundamental rights enshrined under Articles 14, 19(1)(a), 19(1)(g), and 21 of the Constitution of India, as well as Sections 69A and 79 of the Information Technology Act, 2000. WhatsApp has relied heavily on the landmark judgment in Justice K.S. Puttaswamy vs. Union of India, asserting that Rule 4(2) fails the triple test of (i) legality, (ii) necessity, and (iii) proportionality set forth by the Supreme Court of India.

Twitter has also faced judicial scrutiny, with a user filing a writ petition before the Hon’ble High Court of Delhi alleging that Twitter India has not appointed a Resident Grievance Officer as required under the Rules, despite having been specifically designated a significant intermediary by the Central Government.[11]


The fake news menace has long plagued countries and requires an effective and practical legal framework to combat it. However, the “harmonious, soft-touch oversight mechanism” envisaged and brought into force by the Government of India may have far-reaching consequences, as the intermediaries will have a tough time enforcing some of the impractical and onerous provisions of the Rules, e.g. resolving users’ grievances within 15 days when thousands of users, whether in India or outside, are likely to raise strong objections to content posted by other users.

Additionally, the concerns expressed by mobile-based messaging applications such as WhatsApp regarding their “end-to-end encryption”, and the consequent challenge to Rule 4(2) on tracing the originator of posts, may fall flat, as WhatsApp’s own privacy policy has itself faced judicial scrutiny over allegations that it shows users advertisements based on their ‘interests’, which may be derived or extracted from the content of their messages.

Nevertheless, the ambiguity surrounding the enforcement of the Rules will only be dispelled with time, as enforcement practice develops on the basis of the enforcement authority’s intent and the intermediaries’ response to it.

[1] Reuters Institute India Digital News Report.

[2] Press Information Bureau’s press release dated 25th February, 2021.

[3] Ibid.

[4] Ibid.

[5] Ibid.

[6] Aroon Deep, 15 Indian Streaming Platforms, Including Netflix, Hotstar, Jio, Amazon Agree On Self-Regulation Code (September 4, 2020).

[7] Library of Congress – Social Media Disinformation in Germany.

[8] UK Government to set new regulations for social media companies.

[9] Aroon Deep, 15 Indian Streaming Platforms, Including Netflix, Hotstar, Jio, Amazon Agree On Self-Regulation Code (September 4, 2020).

[10] WhatsApp LLC vs. Union of India and others, Delhi High Court.

[11] Amit Acharya vs. Union of India and others, W.P. (C) 5626/2021, Delhi High Court.
