Cameras Everywhere

Key Challenges

KEY CHALLENGE: PRIVACY AND SAFETY

VISUAL PRIVACY AND ANONYMITY

With cameras now so widespread, and image-sharing so routine, it is alarming how little public discussion there is about visual privacy and anonymity. Everyone is discussing and designing for privacy of personal data, but almost no-one is considering the right to control one’s personal image or the right to be anonymous in a video-mediated world. Imagine a landscape where companies are able to commercially harvest and trade images of a person’s face as easily as they share email addresses and phone numbers. While it is technically illegal in some jurisdictions (such as the EU) to hold databases of such images and data, it is highly likely that without proactive policymaking, legislative or regulatory loopholes will be exploited where they exist. So far, policy discussions around visual privacy have largely centered on public concerns about surveillance cameras and individual liberties. But with automatic face-detection and recognition software being incorporated into consumer cameras, applications (apps) and social media platforms, the risk of automated or inadvertent identification of activists and others — including victims, witnesses and survivors of human rights abuses — is growing. No video-sharing site or hardware manufacturer currently offers users the option to blur faces or protect identity. As video becomes more prevalent as a form of communication and free expression, the human rights community’s long-standing focus on the importance of anonymity as an enabler of free expression needs to develop a visual dimension — the right to visual anonymity.
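The report notes that no video-sharing site or hardware manufacturer currently offers a face-blurring option. Purely as an illustrative sketch of what such a feature could involve technically (an assumption, not any platform's actual implementation), the snippet below uses the open-source OpenCV library to detect faces in an image and blur them; the file names are hypothetical.

```python
# Illustrative sketch only: blur detected faces in a frame using OpenCV,
# to suggest what a "blur faces" option on a camera or platform could do.
import cv2

def blur_faces(frame):
    """Return a copy of the frame with detected faces Gaussian-blurred."""
    # Haar-cascade face detector shipped with OpenCV (a simple, imperfect detector).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    out = frame.copy()
    for (x, y, w, h) in faces:
        # Heavy blur over each face region; the kernel size must be odd.
        out[y:y+h, x:x+w] = cv2.GaussianBlur(out[y:y+h, x:x+w], (51, 51), 0)
    return out

if __name__ == "__main__":
    img = cv2.imread("protest_frame.jpg")            # hypothetical input file
    cv2.imwrite("protest_frame_blurred.jpg", blur_faces(img))
```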

CASE STUDY: FACIAL RECOGNITION, CROWD-SOURCING, PROTESTORS AND RIOTERS >>

Facial identification based on videos taken at protests is a growing concern for human rights defenders (HRDs). Images of crowds at protests and riots can be fed into automated facial recognition systems that can be used to identify individuals. Meanwhile, crowd-sourced identification adds to the networked risks that HRDs face.

During Iran’s 2009 Green Movement protests, the Islamic Revolutionary Guard Corps (IRGC) posted visuals of opposition activists, taken mainly from videos and photos on public video-sharing and social media sites, to a website created for the purpose, and asked people to contact the IRGC if they could identify the individuals shown. Since then, citizen surveillance has moved to North America and Europe. The Vancouver Police Department solicited witness footage of alleged rioters following the Stanley Cup riots in June 2011, and London’s Metropolitan Police uploaded photos of alleged rioters in August 2011 to its Flickr account and asked for help identifying them. In both cases, the public responded enthusiastically and continued the work of identification outside of official channels.

KEY CHALLENGE: NETWORK VULNERABILITIES

NETWORKED AND MOBILE SECURITY

Networked technologies — mobiles, social networks, cameras, media-sharing sites — are ever simpler to operate, but not to control. This is both a strength and a vulnerability of online platforms, mobile devices, hardware and apps. New technologies have made it simpler for human rights defenders (HRDs) and others to record and report violations, but harder for them to do so securely. The ease of copying, tagging and circulating video, while helpful in some human rights situations, adds a layer of risk beyond an individual user’s control. All content and communications, including visual media, leave personally identifiable digital traces that third parties can harvest, link and exploit, whether for commercial use or to target and repress citizens. This creates new kinds of risks for the safety and security of frontline HRDs and for those they film or work with. HRDs routinely use these platforms and tools for advocacy, or in crisis situations, but neither the HRDs nor the respective technology providers are always aware of or prepared for the risks inherent in using these technologies for human rights work.

Mobile phones, while perhaps the most power-shifting device for activists, are widely regarded as introducing significant new human rights risks. In general, it is easier to be located and identified, and to have your communications intercepted, on mobile devices than on the internet. Although some responsibility clearly rests with HRDs and other users to protect themselves, the platforms and services they use bear significant responsibility to provide users with adequate warnings, guidance and tools related to safety and security. To help guard against these risks, technology providers are facing calls to integrate privacy thinking throughout their design and development processes (e.g. the principle of privacy by design) in order to make privacy controls easier to find and manage. They must ensure that their products, suppliers and services protect users’ privacy and data by default.

TECHNOLOGY PROVIDERS AS HUMAN RIGHTS FACILITATORS

Recent efforts by governments across the Middle East and North Africa to block or track social media services, the withdrawal of technical and financial services from WikiLeaks under apparent pressure from U.S. authorities, as well as the increasing use of social networks like Facebook for organizing, have pushed technology providers to the forefront of human rights debates. The responsibility of technology providers as intermediaries for activist and human rights-focused users has become a part of mainstream media discussion. Activists and citizens have long been using privately-owned websites and networks in the public interest, yet almost none of these sites or services mention human rights in their terms-of-use or content policies. Strict policies can restrict freedom of expression. On some leading social networks and social media platforms (notably Facebook and Google+), activists have faced content, campaign or even account takedown for using pseudonyms to protect their identities. No mass platform or provider has a human rights content category, whether for user contributions or for curators or editors. Providers do not have publicly available editorial policies or standards specifically focused on human rights content. Though one could argue that this offers a useful degree of flexibility and makes content less conspicuous, the systems that protect public-interest content on social media networks are overall ad hoc and haphazard rather than systematic.

Furthermore, personalization of web services, such as social search — where search results or suggestions of related content are personalized according to what your social network is viewing — could increase the fragmentation of human rights content online, reduce the reach of controversial content and adversely impact freedom of information and expression.

CASE STUDY: AMAZON AND WIKILEAKS >>

Days after it published a trove of classified U.S. diplomatic cables in November 2010, the whistle-blowing website WikiLeaks came under massive distributed denial-of-service attacks attempting to bring the site down. WikiLeaks tried moving to an Amazon cloud server, but within days was removed from Amazon’s servers. Amazon representatives stated that WikiLeaks had violated its Terms of Service (ToS). At the same time, U.S. Senator Joe Lieberman claimed that it was he who had requested the WikiLeaks shutdown.

If Amazon’s version of events is true, then the public’s right to know was determined by a private company’s individual ToS. However, if Lieberman’s statement is correct, then this shows how vulnerable human rights activists are to government pressure, even in democracies.

VIDEO CENSORSHIP AND FREEDOM OF EXPRESSION

Video content is vulnerable to interception, takedown and censorship, and needs active protection. Because of their large file size and easily identifiable file suffixes (.avi, .mp4, etc.), video files are becoming increasingly easy to monitor and intercept. Although video is less simple to censor using existing filtering technology, mechanisms are evolving to make automatic censorship of video content more widely possible. At the same time, videos showing rights violations involving graphic violence or killing can also be vulnerable to takedown or user-flagging. Encouragingly, platforms like YouTube are becoming increasingly sensitive to politically-motivated takedowns.

Much video-based political and human rights commentary actually parodies or remixes existing copyrighted images or music. This leaves such videos vulnerable to automatic takedowns on the basis of copyright infringement. Copyright policy, with its focus on anti-piracy messaging and powerful music and film industry lobbies, is often used to target political or human rights content. Copyright laws are coming under increased scrutiny, but policy recommendations rarely include proper consideration of public-interest and human rights use cases, or of the impact on freedom of expression and information. Alternative content-licensing or intention-signalling systems (such as Creative Commons) have yet to be adapted specifically for human rights purposes.

CASE STUDY: YOUTUBE AND HUMAN RIGHTS VIDEO TAKEDOWNS >>

The trajectory of YouTube’s policies on human rights videos, from removing videos by Egyptian activist Wael Abbas in 2007 to keeping videos of protests in Iran on the site in 2009, demonstrates both the growing role of video in human rights movements and a rising awareness of human rights activists’ use of YouTube. YouTube took down Abbas’ video evidence of police brutality because it featured graphic violence that violated its ToS. This meant not only that these videos instantly disappeared everywhere they had been embedded across the internet, but also that the original URLs and comments associated with them disappeared, erasing the viral phenomenon his videos had created. Since then, YouTube has reconsidered its policy on graphic violence in videos and decided to allow Iranians to upload videos of state violence against protesters in 2009 and 2010, saying it considers the videos to be educational content.

DUAL-USE TECHNOLOGY AND FREEDOM OF EXPRESSION

The capability to observe, filter and censor audio-visual media, as well as text-based content, is growing. Surveillance technologies that have a legitimate law enforcement use, such as tracking child exploitation online, can also be used by governments to block or censor political or human rights content or to covertly monitor their citizens. Such technologies are known as dual-use technologies. Online filtering, censorship and surveillance software employed and shared by governments threatens the overall environment for freedom of expression. Western companies selling communications-monitoring technologies to governments such as those of Egypt, Iran or China have only recently come under scrutiny for complicity or collusion in censorship and repression. Similarly, training those governments to use these technologies risks making American and European companies complicit in human rights abuses and repression of free speech. China is thought to be sharing censorship technology and expertise with other states concerned about burgeoning online freedom of expression. International standards for scrutiny and export control of dual-use technology do exist, but they need revision and strengthening to meet the new and evolving challenges posed by new media.

CASE STUDY: ANONYMITY AND GOOGLE >>

In February 2011, as part of a wider conversation about privacy and the use of ICTs in support of activism in the Middle East and North Africa, Alma Whitten, Google’s Director of Privacy, posted a clarification of Google’s position on anonymous usage of its services. Whitten explained that users could be unidentified, pseudonymous or identified, and that different Google products had different types of privacy controls that might be better suited to users in each of these situations.

Several months later, when Google+ launched, it was unclear which category the service fell into. After Google began issuing warnings and shutting down Google+ accounts that used pseudonyms, it faced a barrage of criticism from Google users opposed to this “real name policy”. But even on anonymous and pseudonymous services, Google can identify its users and their contacts, not least through services like Social Search — and it is not clear under what circumstances this information is shared with governments, other companies or other users.

As this extends into users’ visual identity — via YouTube, Picasa, Image Search and Streetview, for example — it is becoming increasingly apparent that remaining anonymous on Google, and more broadly online, is extremely difficult for individuals.

VULNERABILITY IN THE CLOUD

Services increasingly store users’ personal and other data in the digital cloud. Cloud data is processed and handled across multiple jurisdictions, creating potential inconsistencies and conflicts in how users and their data are protected. More worryingly, cloud storage renders data vulnerable to attack and theft by any number of malicious hackers. Repressive governments, in particular, can use photo and video data — particularly when linked with social networking data — to identify, track and target activists within their countries. Legislative and ethical responses to these vulnerabilities currently range from too restrictive to completely absent.

NETWORK CAPACITY AND ACCESS

All of us have a vested interest in keeping the Internet and other communication platforms open and free. But when mass communications are shut down or excessively filtered, activists, HRDs and other relevant stakeholders need fallback options. While it is proving harder to shut down communications networks entirely, lessons and tactics being learned in current crises need to be systematically documented and shared to enable effective ways to work around connectivity shutdowns. As video, mobiles, and other ICTs become increasingly part of the infrastructure of the human rights movement, we must increase the resilience, reach and accountability of communications networks, public and private. At a policy level, attacks on net neutrality, both on the internet and on mobile networks, pose a threat to freedom of information and expression and to the ability to access coverage of human rights abuses. The human rights community must also invest in alternative means of communication, preservation and distribution of human rights content. While extending connectivity (through greater access to technologies) is important, relying on connectivity alone will not provide sufficient resilience for the human rights community, especially in crisis situations.

KEY CHALLENGE: INFORMATION OVERLOAD, AUTHENTICATION & PRESERVATION

AUTHENTICATION OF CONTENT

With more video material coming directly from a wider range of sources, often live or nearly in real-time, and often without context, it is increasingly urgent to find ways to rapidly verify or trust such information. Civil society organizations may need to develop common standards or shared protocols — or adapt one from journalism — to explain how they ensure that their information is accurate and reliable. Adoption of such a shared standard could warrant new kinds of statutory protections not just for journalists, but also for other kinds of information providers.

Major journalism organisations like the BBC are learning as they go along, and are sharing emerging practices in how to sift, verify and curate social media content about human rights and humanitarian crises. Alongside more manual, forensic techniques of verification, more technology-driven initiatives are underway to provide technical verification and digital chain-of-custody of footage, to help underpin the use of video in evidentiary, legal, media and archival contexts. However, significant questions remain over how to vouch for authenticity, protect safety, and communicate the original intention of human rights footage. Initiatives to embed human rights concepts — like do no harm — within metadata need the involvement and backing of major video-sharing platforms, where the majority of this kind of video is seen and held, and of mobile manufacturers and networks, who supply the greatest number of cameras worldwide. Without this, adoption of such standards will be niche at best. As live video streaming from mobile devices grows in prevalence, new questions will continue to arise: for example, how should expectations of total transparency and immediacy be reconciled with the frequent need to edit footage to protect people’s safety? Although there is near-universal rejection of any further statutory regulation of content, self-regulation of content may soon emerge in the internet and social media industries in the U.S. and EU.
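As a purely hypothetical illustration of the digital chain-of-custody idea mentioned above (not any existing initiative's protocol), the sketch below pairs a SHA-256 hash of a footage file with a timestamp and a stated "do no harm" intention, so later copies can be checked against the original. The file name and the intent field are assumptions for illustration.

```python
# Minimal chain-of-custody sketch: hash a footage file and record when and
# with what stated intention it was logged. Matching the hash later shows the
# file has not been altered since this record was made.
import hashlib
import json
import datetime

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in chunks so large video files need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def custody_record(path, intent="do-no-harm: blur faces before public release"):
    # "intent" is a hypothetical field for the uploader's stated intention.
    return {
        "file": path,
        "sha256": sha256_of_file(path),
        "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "intent": intent,
    }

if __name__ == "__main__":
    # Hypothetical footage file name.
    print(json.dumps(custody_record("incident_footage.mp4"), indent=2))
```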

CURATION AND AGGREGATION

Everyone is struggling with how to present and help people make sense of the growing store of human rights content. The most widespread techniques for curating and aggregating video are still quite linear and rudimentary, such as chronological live-blogs or video playlists. Despite these limitations, news outlets, both large and small, are engaging more with eyewitness human rights material. They are providing readers and viewers with crucial context and triangulation for what they see and are ensuring that human rights issues raised by such material are debated by broader publics.

PRESERVATION OF HUMAN RIGHTS VIDEO

Ensuring that human rights footage and imagery is persistently available, whether publicly or in restricted archives, is important for awareness, advocacy and justice in the near- to mid-term, as well as for longer-term historical and research needs. The closure of the Google Video hosting service, and with it the loss of a trove of human rights video, brought the risks of relying on mass commercial platforms to the fore. At the moment, there is no systematic effort to gather and preserve online and offline human rights video and it is not easy for individual users of commercial platforms and technology to understand how to do so for themselves, especially when under time constraints in crisis situations.

KEY CHALLENGE: ETHICS

NEW ETHICAL CHALLENGES

The place of ethics in social media content and conduct is increasingly under the spotlight, primarily around usage by young people and other potentially vulnerable groups. Ethical frameworks and guidelines for online content are in their infancy, and although they are partly influenced by journalism standards, they do not yet explicitly reflect or incorporate human rights standards. Human rights needs, such as understanding how consent is secured from video participants, can come into conflict with the assumption among engineers and user-experience specialists in social media companies that content and identity must spread with as little “friction” as possible. There is still significant debate about how ethics for remixed, nonlinear media might differ from those for earlier types of media, specifically in how it is produced, stored, consumed and shared.

A culture of remixing (cutting existing pieces of content together into something new) presents challenges for human rights. Appropriating existing content (music/images/videos) and mixing it with fresh content in new ways is a cheap, effective and popular form of political expression. However, remixing often relies on de-contextualizing footage that has a specific human rights purpose. More needs to be done to tie together ethics in digital spaces with ethics in the physical world, which might prove helpful both for those “born digital” and those who are not.

CASE STUDY: HUMAN RIGHTS PERPETRATORS ON FLICKR >>

In the aftermath of Egypt’s January 25th movement, human rights activist @3arabawy uploaded to Flickr a cache of videos and photos of State Security Police (SSP) officers that he had found at SSP headquarters. @3arabawy claimed that he had ensured that only SSP officers (many of whom are accused of committing torture) were visible in the images.

Subsequently, Flickr removed the images from its servers. It stated that @3arabawy had posted them in violation of Flickr’s Community Guidelines, which require that images be created and owned by those who upload them. Activists pointed out that non-original images are common on Flickr. Yahoo!, Flickr’s parent company, wrote on its blog that it relies on users’ reports to enforce its Community Guidelines.

Flickr took down the image set after a flurry of reports triggered a review. Yahoo! argued that Flickr had the right to enforce rules that support the community of content creators it seeks to create. Yahoo! also claimed that creating a human rights category for images was overly restrictive and might endanger activists more than content moderation does.

KEY CHALLENGE: POLICY

POLICY IS SLOW, TECH IS FAST

Technology, and the internet in particular, evolves much more quickly than legislative and policy responses to it, often leaving the law out of step with practice. Policies that address technology are inconsistent both within and between particular policy domains. For example, trade, security and human rights policies each treat technologies differently and sometimes contradictorily. Laws and policies targeting content piracy under trade frameworks facilitate surveillance and the erosion of privacy for citizens and activists, and constrain the space for free expression. These laws are often developed behind closed doors, beyond public debate and scrutiny. This can lead to repressive and aggressive, rather than protective and progressive, uses of technology. The Anti-Counterfeiting Trade Agreement (ACTA), for example, uses a trade framework to target copyright ‘offences’ such as the remixing common in human rights video.

INCONSISTENT INTERNATIONAL STANDARDS

The internet is not borderless. It is increasingly governed and shaped at a national or regional level. However, U.S. and EU policy towards the internet and mobile communications strongly influences similar policies in other parts of the world — in both progressive and regressive ways. Yet neither the U.S. nor the EU routinely applies human rights standards when forming internet policies. Some governments, notably that of China, are shaping their domestic internet, openly and tacitly, while at the same time seeking to shape the broader environment of internet and technology standards by influencing international standards bodies. Governments, democratically elected or otherwise, argue that protecting national security entails sacrificing elements of individual privacy, and that this justifies the measures they take to control or monitor the internet and mobile communications, or to act against transparency activists like WikiLeaks. Until the Tunisian and Egyptian uprisings of early 2011, online debate and dissent was seen as something of a safety valve by governments such as China’s, but this too is now being constrained.

Intergovernmental organizations such as the UN are not yet agile players in internet policymaking. Select individual agencies (for example UNICEF) have placed a premium on innovation, and some Special Rapporteurs (individuals at the United Nations and other intergovernmental bodies tasked with oversight of particular human rights issues) have undertaken to understand the new landscape. They have developed new, widely consulted frameworks for how networked communication interacts with freedom of expression, as well as with business and human rights. Unfortunately, national human rights institutions in particular are ill-equipped to participate in and influence such debates. Additionally, national-level civil society, legal communities and judiciaries around the world lack the capacity to absorb, analyze and advocate around all these issues and need systematic strengthening.
