“Because these speech platforms are so important, the decisions they take become jurisprudence,” said Andrew McLaughlin, who has worked for both Google and the White House. Most vexing among those decisions are ones that involve whether a form of expression is hate speech. Hate speech has no universally accepted definition, legal experts say. And countries, including democratic ones, have widely divergent legal approaches to regulating speech they consider to be offensive or inflammatory. Europe bans neo-Nazi speech, for instance, but courts there have also banned material that offends the religious sensibilities of one group or another. Indian law frowns on speech that could threaten public order. Turkey can shut down a Web site that insults its founding president, Kemal Ataturk. Like the countries, the Internet companies have their own positions, which give them wide latitude on how to interpret expression in different countries.
– We need to divorce the idea of innovation from startups. Innovation is as much about existing, large institutions as it is about smaller, new ones. Instead of talking about startups, we should focus on R&D policy – and make sure that it is size-agnostic.
– We need to be able to collect, share and analyze data across institutions for the purposes of innovation. This means creating open data standards, especially in the public sector. Proposals like the EU Open Data strategy and work done on Data.gov are encouraging.
– Some of the data we will need to analyze is going to be personal data, so we need mechanisms to support consent in the innovation process. This is why projects like the one John Wilbanks is leading, Consent to Research, are so important.
– Analyzing the large sets of data that will drive a lot of this innovation will mean using the cloud. It’s just not cost-effective to expect everyone to run their own data centers for this type of computation. We need to reduce barriers to access these cloud services, such as restrictions on cross-border data flow. The APEC Pathfinder project is one encouraging effort to achieve this goal.
The tech world has finally woken up to the safety and privacy risks in the app economy. Path’s silent address book mining sparked widespread outrage, and has had ripples for the many web services and mobile apps that had, until now, treated the long-term retention and mining of customers’ contacts as “industry best practice” (UPDATE: more on Big Tech’s repeated privacy stumbles – including privacy-trashing kids’ apps – from the New York Times’ Nick Bilton). In the report I led for WITNESS last year, Cameras Everywhere, we pinpointed these practices as a massive potential vulnerability for human rights activists, and for citizens and consumers more broadly (see p.27, Recommendation 2). We specifically suggested that technology companies should take this stewardship seriously, and:
Follow the principle of privacy by design, and for products already in circulation, privacy by default. This is particularly important for products, apps and services that share this data with third parties that may not exercise the same diligence.
This might include revising the terms under which app developers can participate in App Stores for different platforms – for example, by issuing new guidelines requiring that third-party developers follow industry-class privacy practices – or it could even involve providing app developers with off-the-shelf privacy solutions directly. (WITNESS itself is a partner in ObscuraCam and InformaCam, Android apps that demonstrate privacy and human rights-sensitive ways to handle data, particularly visual data, generated by mobile phones.) Many app developers creating iOS, Android or other apps are small shops that have few staff, and no legal or privacy counsel to help them navigate tricky waters. What’s more, they are scattered in many jurisdictions that have extremely varied data protection laws and requirements. Frankly, it’s a no-brainer that they need help and guidance. (Update: I want to thank publicly Jules Polonetsky of the Future of Privacy Forum for pointing us along this path of inquiry during a research interview for Cameras Everywhere. Very exciting to see that he is involved in driving forward better industry-wide practices with the Application Privacy Summit in April 2012.)
We have made the argument for greater privacy protections in the app economy publicly and in private to the major technology companies, as well as to app developers, VCs and policy-makers – we feel it is a central and intimate issue not just for activists, but for any and all users. We didn’t get much traction – we’re not technologists, and maybe the solutions we outline are inelegant or technically problematic – but that doesn’t mean the problem is a phantom one.
I hope that this recent upsurge in attention and scrutiny provides a window for companies like Apple, Google, Amazon, Twitter and Blackberry to realise that the concern is a real one (just as Apple did with mobile tracking data, for example), and to re-examine how their app ecosystems work. Ultimately, they need to take more responsibility for their app users’ privacy and safety, even if those apps are designed and built by third parties elsewhere in the world – after all, only they really have the leverage, authority and know-how to make the app economy a safer place for us all.
[Cross-posted from the WITNESS Hub Blog.]
Google has received brickbats aplenty for its stance in China, where, in order to be permitted to operate by the Chinese government, the search company agreed to censor particular “sensitive” search results – Tiananmen, Dalai Lama, democracy, human rights, and so on. Last night, Google announced, via a blog post from its senior vice-president of corporate development and chief legal officer, David Drummond, that in mid-December it had been the target of a sophisticated online attack originating in China, aimed at perhaps as many as 20 companies, which resulted both in the theft of intellectual property and in a largely unsuccessful attempt to compromise the Gmail accounts of Chinese human rights activists (though it also appears that others working on human rights in China have had their accounts compromised through other means).
“These attacks and the surveillance they have uncovered–combined with the attempts over the past year to further limit free speech on the web [NOTE – YouTube is also blocked in China] –have led us to conclude that we should review the feasibility of our business operations in China. We have decided we are no longer willing to continue censoring our results on Google.cn, and so over the next few weeks we will be discussing with the Chinese government the basis on which we could operate an unfiltered search engine within the law, if at all. We recognize that this may well mean having to shut down Google.cn, and potentially our offices in China.” David Drummond, Google
Aside from showing that no one is invulnerable, the news has set media, Twitter and blogs abuzz, with many asking whether this is really about human rights and censorship, a graceful exit from a difficult market, or a strategic move in geopolitical terms.
“Difficult Problems In Cyberlaw” (a Harvard blog) rounds up some of the major early coverage here, but analysis continues to pour out… One blogger at Amnesty UK (which has campaigned against internet censorship for many years, releasing this key report in 2006) chooses to see this as a positive move, and one that brings nearer the day when Chinese netizens can read and debate Amnesty reports online freely. He, like Nart Villeneuve, hopes that this will influence other companies, notably Microsoft and Yahoo, to take a stand too. Evan Osnos of the New Yorker interviews China specialist James Mulvenon, director of the Center for Intelligence Research and Analysis, who thinks that Google has done this to “reclaim some of its soul and corporate culture.” Siva Vaidhyanathan rejects the idea that this is about human rights and censorship, suggesting that Google is reacting to the attacks as a threat to its future strategy, which relies on the security and integrity of cloud-based systems. Techcrunch’s Sarah Lacy agrees, describing it as “a scorched earth move”. Evgeny Morozov gives his “crude and cynical (Eastern European) reading of the situation”, suggesting that it’s not about cybersecurity, rather that Google.cn is a sacrificial “goat” to secure Google some positive PR at a time when it’s under attack over its privacy practices in Europe. Seasoned China-watcher Rebecca MacKinnon lauds the move, for not dissimilar reasons… Giving users’ perspectives, The Guardian and Global Voices spotlight the voices of analysts, bloggers and other netizens in China (the NYT mischievously interviewed a woman called Bing). One comment really stood out for me:
90后：今天我翻墙，看到一个国外网站叫Google的，妈的全是抄袭百度的。00后：翻墙是什么？ 10后：网站是什么？ 20后：国外是什么？
People born in 90s: Today I stepped out of the Great Firewall and saw a foreign website named Google. Shit, it is all but a copy of Baidu. Born in 00s: What do you mean by stepping out of Great Firewall? Born in 10s: What do you mean by website? Born in 20s: What is ‘foreign’?
Here’s what I think, for what it’s worth (leaving aside the cybersecurity angle, which others have covered in depth already, as noted above). First and foremost, Google probably underestimated the criticism it would receive for its perceived double standards in agreeing to the censorship, which it likely saw at the time as a necessary market constraint rather than a reinforcement of China’s architecture of censorship and repression. That market is one in which, many analysts are suggesting today, it is too difficult to win significant market share against a government-supported Baidu. Second, many individuals within Google itself are not just strong proponents of a culture of openness, but also strongly supportive of human rights, as encoded within the company’s own DNA – “Don’t be evil”. I don’t doubt that there has been considerable pressure inside Google to live up to that motto. Third, this announcement builds on a number of recent moves by Google – from an increasing rhetorical and practical focus on openness and its participation in the Global Network Initiative, to the growing citizen journalism, non-profit and activist sections on YouTube, and this week’s announcement of anti-censorship awards – using its reach, technology and influence to advance the cause of human rights across all of its practices, not just within specific products or technologies, or restricted to its philanthropic or non-profit activities. This doesn’t mean that there aren’t challenges, but in our experience there is willingness to listen, learn and debate within both Google and other technology companies of similar stature. “Google’s Gatekeepers”, Jeffrey Rosen’s piece on Google and censorship in the New York Times Magazine at the end of 2008, marked a milestone in this overall shift, and I strongly recommend it – the section that deals with trust in Google is particularly apposite to the China news.
Finally, there has been a significant, modernising shift in the relationship between the technology sector and the political leadership of the US – from President Obama’s openness agenda, and the “21st Century Statecraft” of Hillary Clinton’s State Department, to the appointments of Aneesh Chopra and Vivek Kundra as CTO and CIO of the USA respectively (with Google’s former Director of Global Public Policy, Andrew McLaughlin, appointed as Deputy to Chopra). Perhaps this has influenced and emboldened Google too…
Whatever the motivation, whatever the means, this move has to be welcomed, both for the return to core principles that it signals and for the shockwaves it will send throughout the tech world. I and other WITNESS colleagues – and many other organisations – have spoken about how the technology landscape has shifted, and how this impacts on human rights; most recently, I was invited to give a Google Talk at Google Europe in London on this exact topic. The technology companies wield enormous power over how people see, experience and understand the world, and consequently how empowered they feel to work and network to change it – a power with special impact on a fragile area like human rights. We and others have advocated that the technology companies protect users, and human rights defenders in particular, more actively, and protect the growing amount of human rights content online, through both technological solutions and better policies. We look forward to a new and energised dialogue with all relevant parties towards that goal in the wake of this important announcement.
Here’s fellow #nuevodad Ethan Zuckerman’s take on it all (I am gratified to see that we agree on a lot…):
And here’s a fragment from Wikileaks:
Amnesty USA – http://www.amnestyusa.org/document.php?id=ENGUSA20100113001&lang=e
Human Rights Watch – http://www.hrw.org/node/87654
Human Rights First – http://www.humanrightsfirst.org/media/usls/2010/alert/563/index.htm
Evgeny offers further perspectives on #googlecn, expanding on what he sees as chess moves by Google on the national security and geopolitical fronts:
Charlie Beckett broadly agrees with Evgeny, with some qualifications: http://www.charliebeckett.org/?p=2404
I’m not sure that it’s either entirely a cynical ploy or a principled stand – it seems to me that of course the presentation and timing are extremely skillful, but that there are ethical, business and political motives that intersected very usefully here that connect fears about national and commercial cybersecurity with a human rights agenda. In the words of China’s policy on Africa, it’s a “win-win”…