In the emerging digital world, social media has become a main medium of communication, with 2.2 billion Facebook users, 330 million Twitter users, and a billion hours of video viewed daily on YouTube, creating a huge positive impact on freedom of expression. However, these social media companies have gained enormous power over what information each individual has access to and how they consume it.
When a user joins a social platform, the company requires acceptance of its Terms of Service, which lays down community standards for acceptable types of content and user behaviour.
Content posted against these community standards is subject to removal, either on reporting by users or through algorithms, and the user's account may be disabled. Social media platforms are therefore not control-free.
Governments constantly pressure these companies to remove content labelled as ‘hate speech’ and ‘fake news’, so online censorship is increasingly privatised.
Social media news feeds are curated by algorithms that serve targeted advertisements based on the perceived interests of users, as determined by machines collecting and analysing user behaviour on the platform. Freedom of speech online is therefore now regulated by the terms of companies mostly based in the United States (US).
As hundreds of millions of users joined their platforms, these companies had to address the human rights concerns of the various communities they serve, justifying protection of freedom of expression and the right to privacy online.
This raises the following questions:
• What are the free speech standards social media companies need to respect?
• How do social media services provided by private companies comply with international standards on freedom of expression?
• Does the electronic nature of the Internet require different policies and regulations?
• What are the procedural safeguards these companies should practice to respect the right to freedom of expression online?
Right to freedom of expression
is protected by Article 19 of the Universal Declaration of Human Rights (UDHR) and legally enforced through Article 19 of the International Covenant on Civil and Political Rights (ICCPR) and other regional treaties.
In order to comply with other human rights and democratic values, limitations on the right to freedom of expression can be made:
• By a law or regulation of conduct; examples are Contempt of Court and the Official Secrets Act, 1955.
• To respect the right to privacy of others; ICCPR Article 17 covers protection from defamation.
• For the protection of national security, public order, public health or morals; for example, a social media ban under emergency law.
Article 20(2) of the ICCPR provides that any expression of national, racial or religious hatred which constitutes incitement to discrimination or violence is prohibited by law.
These restrictions also apply to expressions made on the Internet.
Intermediary liability
In 2011 the UN Special Rapporteur recommended that intermediaries should not be liable for content produced by others when providing technical services, unless they have specifically intervened in that content, and stated that censorship should never be delegated to a private entity, in order to promote protection of the right to freedom of opinion and expression online.
However, a lack of transparency by intermediaries about online censorship carried out through discriminatory practices or under political pressure affects the right to freedom of expression on the Internet.
It is the responsibility of private companies to respect human rights, independent of State obligations.
The Manila Principles on Intermediary Liability state that companies should respect human rights, that their content restriction policies and practices must comply with human rights law, and that they should provide users a complaints mechanism to review decisions taken to restrict content.
The Joint Declaration on Freedom of Expression and Countering Violent Extremism, in 2016, recommended that States should not issue mandatory orders to intermediaries to remove or otherwise restrict content, unless the content is lawfully restricted under international standards.
The 2017 Joint Declaration on ‘Fake News’, Disinformation and Propaganda recommended that intermediaries adopt clear, pre-determined policies governing actions that restrict third-party content, based on justifiable criteria rather than ideological or political goals, and that intermediaries ensure easy access to and understanding of their Terms of Service, policies and practices, with detailed information on how and where they are enforced, along with clear, concise and easy-to-understand summaries.
It further recommended that intermediaries respect due process by notifying users promptly when their content is subject to restriction and giving them an opportunity to contest the restriction.
The Special Rapporteur on violence against women has urged States and companies to address online gender-based abuse, whilst warning against censorship, and has called for preventative measures to tackle the abuse often faced by women online.
Right to privacy and anonymity
The ability to communicate privately guarantees both the right to privacy and freedom of expression.
In May 2011, the Special Rapporteur on freedom of expression expressed concern that States and private actors use the Internet to monitor and collect information about individuals, their communications and their activities online, which can violate Internet users' right to privacy and ultimately prevent the free flow of information and ideas online. He therefore recommended that States ensure that individuals can express themselves anonymously online and refrain from adopting real-name registration systems.
Further, in his May 2015 report on encryption and anonymity in the digital age, he recommended that States refrain from making identification of users a pre-condition for access to digital communications and online services, such as requiring SIM card registration for mobile users, and that corporate actors reconsider their own policies that restrict encryption and anonymity.
Keyword- and image-based algorithms and filters used in practice are unreliable at content restriction.
Because Terms of Service are written in broad terms, it is generally impossible to know whether they are applied reasonably, arbitrarily or in a discriminatory manner.
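To illustrate why keyword-based restriction is uncertain, the hypothetical sketch below (not any platform's actual implementation, and with an invented two-word blocklist) shows how naive substring matching flags innocent posts, while even whole-word matching cannot distinguish condemnation of violence from incitement to it:

```python
import re

# Hypothetical blocklist for illustration only; real platforms use far
# larger, evolving lists combined with machine-learning classifiers.
BLOCKED = {"attack", "kill"}

def naive_filter(post: str) -> bool:
    """Flag a post if any blocked term appears as a substring."""
    text = post.lower()
    return any(word in text for word in BLOCKED)

def word_filter(post: str) -> bool:
    """Flag a post only if a blocked term appears as a whole word."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return bool(words & BLOCKED)

# Substring matching flags an innocent word that merely contains
# a blocked term ("kill" inside "skill"):
print(naive_filter("The skill of the painter is remarkable."))  # True

# Whole-word matching avoids that false positive:
print(word_filter("The skill of the painter is remarkable."))   # False

# But neither approach can tell a news report condemning violence
# from a post inciting it — both are flagged identically:
print(word_filter("The report documents an attack on civilians."))  # True
```

The point is not that better string matching solves the problem: no lexical rule captures context, intent or newsworthiness, which is why such filters remove lawful speech and miss unlawful speech at the same time.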
Social Media platforms
They allow users to report content thought to violate community guidelines. However, there is no obvious appeals mechanism for users to challenge a wrongful content removal decision, as the Terms of Service do not contain clear information about redress mechanisms.
Terms of Service containing unfair contract terms create an imbalance of power between the companies and individuals.
The Terms of Service specify a jurisdiction for disputes arising from the service, including those concerning the right to privacy, but this is a significant barrier to access to justice for those based outside that jurisdiction.
Google, Twitter and Facebook have agreed to make their jurisdiction clauses compliant with EU law.
The Communications Decency Act (CDA) safeguards companies for any action taken voluntarily and in good faith to restrict objectionable content based on their Terms of Service.
Critics note that the TOS apply lower standards for restrictions on freedom of expression than those permitted under international human rights law.
Companies say that they intend to create safer online environments to grow their user base, but these measures are not proportionate with free speech and privacy.
Due to political pressure, the companies are adapting their community standards to domestic legal requirements, falling below international standards on freedom of expression.
Twitter and Facebook both prohibited the “promotion” of violence or terrorism in 2015. Facebook Community Standards do not permit supporting or praising leaders of terrorist organisations, or condoning their violent activities.
Companies are now using artificial intelligence to remove extremist, pornographic, hate speech and fake news posts.
A challenge in prohibiting such content is that it is hard to define what is offensive under the TOS.
Hate speech is a major issue on social media, and companies are pressured by governments and
the public to make a safe online environment.
The Human Rights Committee, elaborating on freedom of expression, states that freedom of expression extends to controversial, disturbing or even shocking material; that material is disliked or thought to be incorrect does not justify preventing a person from expressing it.9
Facebook Community Standards define hate speech as a direct attack on people based on race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease, and do not allow it on the platform.
Twitter defines “hateful conduct” as prohibited behaviour
"including violent threats, wishes for the physical harm, death, or
disease of individuals or groups ... or other content that degrades someone."
YouTube Community Guidelines define "hate speech" as “content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups ...", and do not permit such content on the platform.
It is generally permitted to criticise a nation-state, but not to make hateful comments about a group based on their race.
YouTube further states that “not everything that’s mean or insulting is hate speech”, unless it amounts to incitement to violence, hostility or discrimination.
Facebook does not allow any organisations or individuals engaged in terrorism or organised hate on its platform, and provides that content that “expresses support or praise for groups, leaders or individuals involved in these activities” will be removed.
Facebook defines graphic violence as content that “glorifies
violence or celebrates the suffering or humiliation of others because it may create an
environment that discourages participation."
At the same time, Facebook states that “people value the ability to discuss important issues such as human rights abuse or acts of terrorism”, and therefore allows graphic content “to help people raise awareness about issues”, but adds a warning label to especially graphic or violent content and prevents access by users under 18.
Twitter does not have a specific policy to deal with ‘terrorist’ content, but prohibits the use of Twitter “for any unlawful purposes or in furtherance of illegal activities.” and “violent threats and glorification of violence."
Social media platforms prohibit content that violates the right to privacy or offends public morals, including:
1. Threats of violence
2. Nudity or pornography
3. Posting of private information.
Some such material may be offensive or insulting yet fall short of harassment, and may be justified in the public interest, although it may constitute an interference with the right to privacy.
Facebook restricts nudity, but makes an allowance for nudity for artistic, humorous and educational purposes.
Google bans “nude” or “sexually explicit” images in broad terms.
YouTube prohibits sexually explicit content, but allows sexual content for documentary or scientific purposes.
Fake news has become a key concern since the 2016 US Presidential election, and social media platforms are combating its spread.
Facebook's anti-spam policy does not allow inaccurate or misleading posts designed to gain likes or shares.
Facebook removes profiles that impersonate other people, and encourages users to report false information.
Regulation
State regulation gives disproportionate censorship powers to the State, including prison terms, fines and content-blocking powers. When the regulator is not an independent body, it chills free expression and tends to undermine minority viewpoints, and such regulation may fall below international standards on freedom of expression. Existing laws for radio, print and TV cannot simply be transferred to the Internet because of its open nature. States have also taken powers to block entire platforms and impose hefty fines for non-compliance with domestic legal requirements.
Co-regulation
This model establishes self-regulatory bodies set up by the State. However, they share the flaws of the State-regulated model and hamper innovation; it is also not possible to guarantee the independence of the body.
Self-regulation
Here members of self-regulatory bodies promote and develop respect for ethical standards within the membership, out of a desire for a healthy Internet. However, these are “voluntary” initiatives and are often used to circumvent the rule of law.
A major weakness of this model is the lack of transparency: such bodies fail to hold companies and governments accountable for wrongful removal of content.
Social media companies, as central enablers of freedom of expression, should respect international human rights standards, consistent with the Guiding Principles on Business and Human Rights.
Their Community
Standards should be in line with international standards on freedom of expression and they need to provide a remedy for violations of freedom of expression under their Community Standards.
States must provide for an effective remedy to protect Individuals’ right to freedom of expression as well as right to privacy under international human rights law.
1. Discuss freedom of expression and the right to privacy, giving examples, and how human rights-based policies should be formulated to balance the two rights of privacy and speech.
Niranjan Meegammana
HR Researcher & Technologist
Digital Human Rights Institute