Draft proposals to amend the intermediary liability law will not only remove anonymity by requiring traceability but also break end-to-end encryption. In addition, there is a dangerous new requirement for online platforms to pro-actively remove content using automated tools.
As reported in the Indian Express on Monday, the Union government and large online platforms are privately discussing how to censor your social media and messaging and break its encryption, so that your posts can be taken down “pro-actively” and users identified through a requirement of “traceability”. Why is this being done secretly? Why are you not being involved? We first explain what these rules do, then pose the top five concerns for users in a rule-by-rule analysis.
What are the intermediary rules?
Called the Intermediary Rules, 2011, they are drafted under the Information Technology Act, 2000, which provides immunity to online platforms and internet service providers for content transmitted and published by end users. This immunity allows these conduits of information to facilitate free expression and prevents them from throttling content or implementing overbroad censorship — which could lead to a “chilling effect”. In return, they have to comply with legal requests to take down content and to provide information on users — basically, comply with the law. This principle is recognised in Section 79 of the Information Technology Act, 2000 (as amended in 2008).
But a parent legislation such as Section 79 leaves the details to subordinate rules. This is exactly what the Intermediary Rules, 2011 are; they were made after a public consultation in March 2011. There was a dispute over how this consultation was carried out, but the draft rules were nonetheless published online and comments on them invited by the Union Ministry of Electronics and Information Technology. However, these rules were unclear and vague. For instance, they did not clearly state what constituted “actual knowledge” of content being illegal from the point of view of a platform (such as Facebook). Because of this, in the Shreya Singhal case (which struck down Section 66A of the Information Technology Act, a provision that placed broad curbs on online content, as unconstitutional), the Supreme Court held that “actual knowledge” arose only when platforms received a legal notice from the police or a court, not from private parties.
So what is being changed now? And what is at stake?
Five top concerns
- On process: First, let us start with how these draft rules are being made. This is a serious development, eerily reminiscent of the calls for “pre-censorship” made in December 2011. As reported by the Indian Express, the process is closed, being held between officials of the Union Ministry of Electronics and Information Technology and a handful of large social media and messaging companies, who have been allowed to submit comments by January 7. But the changes, as we go on to explain, will impact users like you and me; they will impact our right to privacy and our freedom of speech and expression. Why then is the public being kept out?
- Breaking encryption: Draft Rule 3(5) introduces a requirement of traceability which would break end-to-end encryption. Many platforms (WhatsApp, Signal, Telegram etc.) retain minimal user data and deploy end-to-end encryption to provide reliability, security and privacy to users. These protections are used by millions of Indians and help prevent identity theft, and encryption only becomes more important as more of our lives involve our personal data. Without involving technical experts in an open consultative process, and without any data protection law or surveillance reform in place, this is being tinkered with by introducing the requirement of “traceability”. Breaking encryption has important consequences for everyday users of online services. It should also be seen in the context of the Union Ministry of Home Affairs notification on Thursday which activated a 2008 rule that holds the power to enforce decryption of online content. We do not have any proper parliamentary oversight or judicial check on surveillance, and the latest draft rules, if they go through, would be a tremendous expansion of the power of the government over ordinary citizens, eerily reminiscent of China’s blocking and breaking of user encryption to surveil its citizens.
- Longer, even indefinite, data retention: Draft Rule 3(8) increases the data retention period from 90 to 180 days and provides for further retention at the discretion of “government agencies”. The phrase “government agencies” is not defined, and no specific conditions or outer time limit for data retention are provided. Hence, with a mere letter, a government department can require a private platform to store a user’s data indefinitely, without even letting the user know. It is important to remember that such retention can continue even if the user deletes their data from the servers of the intermediary platform.
- Pro-active censorship: Draft Rule 3(9) is the most dangerous provision and would be a sledgehammer to free speech online. Not just abuse, harassment or threats but even legitimate speech could be suppressed by requiring online platforms to become pro-active arbiters and judges of legality. Making such active sweeps of its platform a condition for an intermediary’s immunity from prosecution would result in widespread takedowns without any legal process or natural justice. This violates the reasoning of the Shreya Singhal judgement, which noted, “it would be very difficult for intermediaries like Google, Facebook etc. to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which are not.” It shifts the duty of the state to a private party. What is worse is that it will be done by “technology based automated tools or appropriate mechanisms”. Such tools have been shown to be faulty, to carry coding biases and to be prone to overbroad censorship. Should we subject our fundamental right to free speech to a not-yet-fully-developed technology? Artificial intelligence censorship is the Chinese model of censorship.
- The nanny requirement: Draft Rule 3(4) inserts a requirement to inform users at least monthly about legal requirements such as the terms and conditions and privacy policy. At first blush, this may seem a needed measure, given rampant online abuse and trolling. But consider the change in environment from a public park to a guarded schoolyard, in which you are constantly reminded that you are under watch and had better behave yourself. It will turn the internet in India into a policed environment, which is bad for users. Rather than letting market mechanisms work out norms for good conduct, which is in the best interests of platforms themselves, such a measure imposed by law will also require product-side changes from smaller startups and entrepreneurs.
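To see why “traceability” and end-to-end encryption are in tension, consider a toy sketch (this is NOT real cryptography — a real service like WhatsApp or Signal uses the far more sophisticated Signal Protocol — but the architecture is the point): when only the two users hold the key, the relaying platform sees just ciphertext and routing metadata, so it cannot inspect or vouch for message content without the scheme itself being weakened.

```python
# Toy illustration of end-to-end encryption (NOT real cryptography).
# Assumption: the two users have already shared a secret key out-of-band.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a deterministic pseudo-random stream from the shared key (toy KDF).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the message with the keystream; XOR is its own inverse.
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # symmetric: applying the same XOR again recovers plaintext

shared_key = b"key known only to the two users"
message = b"hello"
ciphertext = encrypt(shared_key, message)

# All the relaying server ever stores or sees: ciphertext plus routing metadata.
server_view = {"from": "alice", "to": "bob", "payload": ciphertext}
assert server_view["payload"] != message           # the platform cannot read it
assert decrypt(shared_key, ciphertext) == message  # the recipient can
```

Mandated “traceability” of content in such a design means either holding a copy of users’ keys or removing the encryption — there is no middle path in which the server both cannot read messages and can attest to their origin and content.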
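The weakness of “technology based automated tools” for takedown can be seen even in a minimal sketch. The blocklist term below is hypothetical; the point is that naive pattern matching, the simplest form of automated filtering, cannot distinguish a genuine threat from legitimate speech that merely mentions the same words:

```python
# Toy "automated takedown" filter: flags any post containing a blocked term.
BLOCKED_TERMS = {"attack"}  # hypothetical blocklist entry

def should_take_down(post: str) -> bool:
    # Case-insensitive substring match, as a crude automated tool might do.
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

# A genuine threat is caught...
assert should_take_down("I will attack you")
# ...but so is harmless health reporting (a false positive / overbroad takedown):
assert should_take_down("Report: heart attack rates fall after new policy")
```

More sophisticated machine-learning classifiers reduce, but do not eliminate, this problem: they still make errors at scale, and at the volume of posts on a large platform even a small error rate means enormous amounts of legitimate speech removed without legal process.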
We believe there are better ways to check misinformation and threats to Indian democracy, ways consistent with our fundamental rights guaranteed under the Constitution. These proposals, when seen alongside the recent Union Ministry of Home Affairs notification activating the 2009 interception rules, take India closer to a Chinese model of censorship. Yes, online platforms are problematic and do require fixes. But driving changes through a closed and secretive process, using measures that undermine fundamental rights, is a harmful approach for all of us.
To us, the path to battle disinformation is clear: pass a comprehensive privacy law. This will help bring accountability to large data controllers, from online companies which target us with advertising, to political parties.
*Update: The Ministry of Electronics and IT has announced a “public consultation” on the Draft Rules and has invited comments till Jan. 15, 2019. However, we remain somewhat sceptical, given that this seems to be an afterthought and given our past experience with the same ministry’s lack of transparency.
**This post was first published at the Internet Freedom Foundation and then mirrored by Scroll and the Quint.