I rather like this roadmap for emerging technologies. One day I fully intend to find out what “arcologies” are.
As citizens continue to play a critical role in supplying news and human rights footage from around the world, YouTube is committed to creating even better tools to help them. According to the international human rights organization WITNESS’ Cameras Everywhere report, “No video-sharing site or hardware manufacturer currently offers users the option to blur faces or protect identity.”
YouTube is excited to be among the first.
Today we’re launching face blurring – a new tool that allows you to obscure faces within videos with the click of a button.
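YouTube hasn’t said how its tool is implemented, but the basic operation – finding face rectangles in each frame and replacing them with an unrecognisable version of themselves – can be sketched in a few lines. The sketch below is purely illustrative, not YouTube’s method: the bounding boxes are assumed to come from some upstream face detector (not shown here), and each region is obscured by simple block-averaging (pixelation):

```python
import numpy as np

def obscure_regions(frame, boxes, block=16):
    """Pixelate each (x, y, w, h) box in an H x W x 3 frame by block averaging.

    `boxes` is assumed to come from a face detector; this function only
    handles the obscuring step.
    """
    out = frame.copy()
    for x, y, w, h in boxes:
        region = out[y:y + h, x:x + w].astype(float)
        for by in range(0, h, block):
            for bx in range(0, w, block):
                tile = region[by:by + block, bx:bx + block]
                # Flatten the tile to its mean colour, destroying fine detail.
                tile[:] = tile.mean(axis=(0, 1))
        out[y:y + h, x:x + w] = region.astype(frame.dtype)
    return out
```

A production tool would run this (or a Gaussian blur) on every frame, tracking each face across frames so the obscured region follows the person as they move.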
Advocacy in any arena generally takes a long, long time. In this context, we’re talking about pressuring key Silicon Valley companies that have gone, in under a decade, from being simple technology providers to being an integral part of everyday human activity across much of the planet.
That one line quoted above was something we’d been talking to YouTube/Google about for 4 years (and that’s more than half of YouTube’s own existence). Those who can make seemingly simple changes like this happen are busy people operating within multiple sets of interlocking wheels of law and policy, and myriad competing internal demands. The conversations with these people started before I got to WITNESS, and they continued after I left in mid-2010 (and continue to this day) – and as the Cameras Everywhere report shows, there’s still plenty to discuss in the future.
Here are my personal recollections and reflections on how the conversations with YouTube that I was involved in developed – with the accent strongly on “personal”. Since I left WITNESS 2 years ago, I’m not party to the latest conversations between YouTube and WITNESS – but I do know where the seeds came from and how they took root. Over at the WITNESS blog Sam Gregory explains the human rights dimension of this move by YouTube.
I am therefore sharing this partial account in the hope that reading a little about our experience will give succour to other activists and researchers running into what seem like brick walls right now. Keep talking, keep trusting, and keep pushing… and embrace serendipity.
[Thurs 19 July - I've slightly clarified some of the written-at-1.30am language...]
[Sun 22 July - further clarification, including of when I left WITNESS.]
Head on over to the WITNESS blog, where you’ll find my new post on the ethics of facial recognition.
I’ll post a slightly different version over here after the weekend, with a bit more detail in a couple of areas.
UPDATE (July 2012):
I’m not sure when time will permit, as I’ve been fairly consumed with completing my freelance work, and then moving to my new job at OSF, but I’ll endeavour to post all the resources I collected related to face recognition and human rights, as I hope they’ll be of use to other researchers and advocates in the field. In the meantime, quite a few of the resources I found are linked from these two posts:
The Ethics of Face Recognition Technology (March 7th, 2012)
Tactical and Technological Defences for Face Recognition Technology (May 18th, 2012) – and this was also posted in a slightly amended form by PBS MediaShift (18th June 2012).
As part of its UK Public Opinion Monitor research, which aims to track the UK public’s attitudes towards development, the Institute of Development Studies at Sussex recently released this 10-minute film pleading for better coverage by UK television of the developing world, and of issues related to poverty:
The film revisits arguments advanced over many years by the International Broadcasting Trust (IBT), One World Media (formerly the One World Broadcasting Trust), POLIS, and other civil society groups. [Five years ago, I wrote and researched IBT's report, Reflecting the Real World 2, on how new media were affecting UK TV's coverage of the developing world.] These groups have consistently put forward the argument – based on research they conduct and commission, and on interviews with senior decision-makers in the UK media – that coverage of the developing world by UK broadcast television is weak, and tends to focus on crisis, corruption, and conflict, in both news and other TV genres. They argue that this has serious implications both for how genuinely informed the UK public can be about large swathes of the wider world, and therefore for how constructive domestic public debate and opinion can be about why we give aid, to whom, and on what basis.
It’s encouraging that a serious institution like IDS is interested in addressing these issues. So why does the film itself leave me so disappointed – and what might they have done differently?
The tech world has finally woken up to the safety and privacy risks in the app economy. Path’s silent address-book mining sparked widespread outrage, and has had ripples for the many web services and mobile apps that had, until now, treated the long-term retention and mining of customers’ contacts as “industry best practice” (UPDATE: more on Big Tech’s repeated privacy stumbles – including privacy-trashing kids’ apps – from the New York Times’ Nick Bilton). In the report I led for WITNESS last year, Cameras Everywhere, we pinpointed these practices as a massive potential vulnerability for human rights activists, and for citizens and consumers more broadly (see p.27, Recommendation 2). We specifically suggested that technology companies should take this stewardship seriously, and:
Follow the principle of privacy by design, and for products already in circulation, privacy by default. This is particularly important for products, apps and services that share this data with third parties that may not exercise the same diligence.
This might include revising the terms under which app developers can participate in App Stores for different platforms – for example, by issuing new guidelines requiring that third-party developers follow industry-standard privacy practices – or it could even involve providing app developers with off-the-shelf privacy solutions directly. (WITNESS itself is a partner in ObscuraCam and InformaCam, Android apps that demonstrate privacy- and human rights-sensitive ways to handle data, particularly visual data, generated by mobile phones.) Many app developers creating iOS, Android or other apps are small shops that have few staff, and no legal or privacy counsel to help them navigate tricky waters. What’s more, they are scattered across many jurisdictions that have extremely varied data protection laws and requirements. Frankly, it’s a no-brainer that they need help and guidance. (Update: I want to thank publicly Jules Polonetsky of the Future of Privacy Forum for pointing us along this path of inquiry during a research interview for Cameras Everywhere. Very exciting to see that he is involved in driving forward better industry-wide practices with the Application Privacy Summit in April 2012.)
We have made the argument for greater privacy protections in the app economy, both publicly and in private, to the major technology companies, as well as to app developers, VCs and policy-makers – we felt it was a central and intimate issue not just for activists, but for any and all users. We didn’t get much traction – we’re not technologists, and maybe the solutions we outlined are inelegant or technically problematic – but that doesn’t mean the problem is a phantom one.
I hope that this recent upsurge in attention and scrutiny provides a window for companies like Apple, Google, Amazon, Twitter and Blackberry to realise that the concern is a real one (just as Apple did with mobile tracking data, for example), and to re-examine how their app ecosystems work. Ultimately, they need to take more responsibility for their app users’ privacy and safety, even if those apps are designed and built by third parties elsewhere in the world – after all, only they really have the leverage, authority and know-how to make the app economy a safer place for us all.