NETWORKED AND MOBILE SECURITY
Networked technologies (mobile phones, social networks, cameras, media-sharing sites) are ever simpler to operate, but not to control. This is both a strength and a vulnerability of online platforms, mobile devices, hardware and apps. New technologies have made it simpler for human rights defenders (HRDs) and others to record and report violations, but harder for them to do so securely. The ease of copying, tagging and circulating video, while helpful in some human rights situations, adds a layer of risk beyond an individual user’s control. All content and communications, including visual media, leave personally identifiable digital traces that third parties can harvest, link and exploit, whether for commercial use or to target and repress citizens. This creates new kinds of risks for the safety and security of frontline HRDs and for those they film or work with. HRDs routinely use these platforms and tools for advocacy or in crisis situations, but neither the HRDs nor the technology providers are always aware of, or prepared for, the risks inherent in using these technologies for human rights work.
Mobile phones, while perhaps the most power-shifting device available to activists, are widely regarded as introducing significant new human rights risks. In general, users of mobile devices are easier to locate and identify, and their communications easier to intercept, than users of the internet. Although some responsibility clearly rests with HRDs and other users to protect themselves, the platforms and services they use bear significant responsibility to provide adequate warnings, guidance and tools related to safety and security. To help guard against these risks, technology providers face calls to integrate privacy thinking throughout their design and development processes (the principle of privacy by design) so that privacy controls are easier to find and manage, and to ensure that their products, suppliers and services protect users’ privacy and data by default.
TECHNOLOGY PROVIDERS AS HUMAN RIGHTS FACILITATORS
Recent efforts by governments across the Middle East and North Africa to block or track social media services, the withdrawal of technical and financial services from WikiLeaks under apparent pressure from U.S. authorities, and the growing use of social networks such as Facebook for organizing have pushed technology providers to the forefront of human rights debates. The responsibility of technology providers as intermediaries for activist and human rights-focused users has entered mainstream media discussion. Activists and citizens have long used privately owned websites and networks in the public interest, yet almost none of these sites or services mention human rights in their terms of use or content policies. Strict policies can restrict freedom of expression: on some leading social networks and social media platforms (notably Facebook and Google+), activists have faced content, campaign or even account takedowns for using pseudonyms to protect their identities. No mass platform or provider has a human rights content category, whether for user contributions or for curators and editors, and providers do not publish editorial policies or standards specifically focused on human rights content. Though one could argue that this offers a useful degree of flexibility and makes such content less conspicuous, the systems that protect public interest content on social media networks are ad hoc and haphazard rather than systematic.
Furthermore, the personalization of web services, such as social search, where search results or suggestions of related content are tailored to what your social network is viewing, could increase the fragmentation of human rights content online, reduce the reach of controversial content and adversely affect freedom of information and expression.