Internet Content Code: Defective by Design


It seems the consultation on the draft rules for intermediary liability (under Sec. 79), privacy (under Sec. 43A) and cyber cafes (Sec. 79) proposed under the Information Technology Act is over. The Ministry of Information Technology (MIT) has released a finalised version of the rules on its website (here). Further inquiry has revealed that these rules have already been sent to the official gazette for publication. Though the consultations seem to have created enough noise and outrage, a comparison of the draft with the finalised version of the intermediary rules (here) shows that the rules remain substantially the same (credit: Medianama).

In doing so, the intermediary rules retain the design flaws which were noticed in the draft rules. By “design flaws”, I mean the structural defects in the intermediary rules which impose costs and penalties on intermediaries for promoting free speech and, to the contrary, reward censorship. Another point which has been made prominently on Medianama is that the rules in effect create a content code for the internet. I also incorporate some points on this in the post.

To give this debate context, let's start from the pre-amendment days, when the infamous Avnish Bajaj case kickstarted this entire debate. The Avnish Bajaj case concerned a CD containing a pornographic clip of DPS students posted for sale on the eBay India website. Avnish Bajaj, who was the CEO of the company at the relevant time, was subsequently arrested under various sections of the Indian Penal Code and the Information Technology Act, 2000 (read more on Avnish Bajaj here). Many in the software industry said the law was vague and that definitive standards were required for an intermediary to know what its duties are and how it can gain exemption from liability for its users' acts. Subsequently, the Information Technology Act was amended in 2008, and it was hoped the amendment would prevent such instances. However, even after the IT Act was amended and the amendments were brought into force (towards the end of 2009), the government failed to make any rules under Sec. 79. In the absence of rules, intermediaries continued to be dragged to court and to the police station, including a recent incident in which an FIR was registered against Facebook.

Since the rules have been made under Sec. 79, it will first be useful to look at the legislative provision under which they have been formed. Sec. 79 of the IT Act provides a form of safe harbor protection for intermediaries from liability for the actions of their users. What this means is that, just like a telephone company, an internet service provider (ISP) is not liable for the content which passes through its network or facilities. However, this exemption from liability is not automatic or absolute. To avail of this exemption, an intermediary has to observe certain due diligence and also has to act expeditiously on receiving notice that its services are being used to facilitate an illegal act. Hence, if it fails to observe this due diligence or to remove the illegal content on receiving notice, it becomes a facilitator of the illegal act and becomes liable. The guidelines for this due diligence and notification system are what the intermediary rules seek to create. With regard to due diligence, the rules require the intermediary to compulsorily enter into a user agreement with each of its users, under which several forms/classes of content/activities are prohibited. With regard to the notification mechanism, an intermediary will have to “act” within 36 hours of receiving a complaint from any person (not only a victim), which will be sent to a “grievance officer”. The rules do not define what “act” means, though many will consider it a deadline to remove the content or to disable the activity complained against.

There are several major problems with the rules; I will focus only on some of the ways they will chill online speech.

Firstly, the categories under which content may be removed are overly broad. Just as an illustration, content which is “harming minors in any way” is prohibited under the Rules. Now it is important to note that existing civil and criminal law does not recognise a crime or a wrong such as “harming minors in any way”. This is in contrast to, let's say for instance, the substantive provisions of law with regard to “defamation” (for which Sec. 499 of the Indian Penal Code prescribes an offence and imprisonment, and for which the common law developed through cases provides civil remedies such as damages and mandatory injunctions). Hence, no legal standards exist for “harming minors”, and the rules seem to be creating a “content code” for the internet.

Secondly, in the absence of any definable legal standards, how is content to be gauged as “harming minors” or not? This becomes especially relevant since the rules contemplate a form of private enforcement of censorship. In India we allow censorship, but only in a clearly defined and narrow set of instances. This is usually done by a government body or a court order after balancing the interests of free speech against individual or societal harm. Hence, when one looks at the provisions for prohibiting a publication (aka banning a book) under the Criminal Procedure Code, it clearly lays down safeguards. These are: (a) a book will only be banned if specific offences under the Indian Penal Code are caused through it; (b) the banning order will be issued by the state government; (c) the banning order will be in writing and will contain reasons; (d) the banning order will be made public through a gazette notification. Moreover, there is a right to appeal against the banning order directly to the High Court. These safeguards clearly recognise the extraordinary nature of exercising censorship powers. Even in cases where persons approach courts complaining of illegality in content, courts finely balance these competing interests as well.

However, under the regulation, not only the victim but any person who is aggrieved directly approaches the grievance officer of an intermediary. When an intermediary receives such a notification, if it does not remove the content within 36 hours it risks becoming a party to the illegality. Here it should be stressed that intermediaries are corporate entities: though they may have a code of practice to promote free speech, they run a business and will try to avoid losing their protection or being dragged to court. Hence, being risk averse, they will try to remove the information even if, in their opinion, it may not be “harming minors in any way”.

Finally, in my view, if such a system was felt to be necessary, the correct way to go about it would have been, first, not to have an illustrative list of prohibited content/activities but simply to state, “anything prohibited by law”. Secondly, the notification system should require a person complaining of illegality to at the least “identify the illegality with reference to a provision of law”. Thirdly, a fully fleshed out notice-and-takedown system should have been incorporated, such as exists in the US under the Digital Millennium Copyright Act. Such a system would have given the author a procedure to defend the legality of the content/activity. It would also impose costs on the removal of content and promote free speech rather than censorship by private intermediaries.


Tata v. Greenpeace as an illustration

If one wants to see how this may play out in the future, kindly see this post, in which I describe the Tata v. Greenpeace litigation. The litigation asked for the removal of certain content on the grounds that it was defamatory and infringed the TATA trademark. Both defamation and trademark infringement are now grounds under the Intermediary Rules on which content may be removed on notification. Hence, a person may proceed under the rules rather than by filing a civil suit. Now let's see the difference between a court litigation and the process contemplated under the Intermediary Rules.


Case — In the litigation, Tata complained that the pac-man style game infringed its trademark.

Rules — Any person can complain that the pac-man style game infringes a third party's trademark.

Effect — The locus standi requirement is diluted.


Case — The Tatas approached the court.

Rules — Any person can approach the intermediary (such as the website host).

Effect 1 — A financial and time cost is imposed on a person who would approach a court, whereas under the rules a simple email with an electronic signature will achieve the objective. The costs of having content blocked are substantially decreased.

Effect 2 — The person approaching a court will employ a lawyer who will gauge the legality of the complaint before proceeding on it. At the least, the lawyer will need a substantive legal provision to back up the legal pleading which is filed in court. No such requirement of a “complaint” being a “legal claim” exists under the rules.


Case — the Court checks, on the basis of trademark registration certificates, whether the Tatas own the mark as they claim to.

Rules — the website host has only a complaint to proceed on. It has no power under the rules to verify ownership.

Effect — there is no verification procedure for complaints. The property rights of the person or the victim cannot be identified. Over and above cases of complained infringements, a court filing distils the factual accuracy of a complaint, as each case is sworn on affidavit. No such checks exist under the rules.


Case — the Court balances the trademark rights of the Tatas against the Defendant's right of parody and records its reasoning (“The Court is also of the opinion that the defendants argument that they can make reasonable comment, ridicule, and parody of the registered trademarks, is persuasive.” Justice Ravindra Bhatt in Tata v. Greenpeace).

Rules — A website host / intermediary will not ordinarily enter into this process. It is in its best interests simply to “comply” with the Rules. Even larger companies like Facebook and Google, which may have legal departments, may not devote lawyers to determining the legality of the content. They may also not go to court for declaratory rulings that the content is not illegal.

Effect — There is an aversion to risk, and anything which even hints at illegality is blocked. Hence, for instance, in cases of valid satire a person may complain that it is disparagement (yes, disparagement is a ground under the rules), and a private intermediary may oblige.


Result — The court refused the Tatas an interim injunction to block the Tata v. Turtle game. However, with the rules coming into place, persons will directly file complaints with private intermediaries. The interest of private intermediaries is to maintain compliance or risk losing their protection. It does not take a poker champ to figure that the odds are the intermediary will willingly oblige and act on the complaint.

In accordance with the provisions of the IT Act, these rules will be laid before the next session of Parliament (PRS explains how this happens here). This is done through the parliamentary committee on subordinate legislation. When this is done, any member can take notice and object to the rules or even call for their amendment. In practice this is rarely done, as there is a large volume of delegated legislation. Call me an optimist, but I hope that with more public opinion being mobilised against these rules, they are discussed, if not amended, by Parliament.
