Part 1: The Digital Wild West

The following article is the first piece in a two-part series. The second article can be found here.

Modern life is steeped in technology that advances so quickly it renders its predecessors obsolete within a few years. Yet while technology has evolved rapidly, regulation has developed at a much slower pace. In the past few years, as more tech companies have begun to quietly resist the demands of state actors, personal data has emerged as a key battleground and legal frontier for privacy rights in the digital era.

Until recently, the federal government of the United States faced little resistance in obtaining information on American citizens from companies like Google, Facebook, and Microsoft. But with a growing number of law enforcement requests for user data, some companies have started to push back. After the shooting in San Bernardino, California in 2015, the FBI asked Apple to unlock the shooter’s iPhone; Apple refused, despite repeated demands from the FBI that culminated in a court order. Eventually, the FBI turned to a third-party security firm to hack the phone, while Apple gained widespread praise and support for standing its ground in defence of privacy. Other companies have been more subtle in their resistance.

Supporters of Apple’s decision to refuse the FBI’s request for access to private iPhone data. “¿Por qué el FBI ha retirado la demanda contra Apple?” by iphonedigital is licensed under CC BY-SA 2.0

These tensions illustrate the divide between the companies that collect, store, and sell personal data, which they do not own but have been given consent to use, and the state actors seeking to use citizens’ data for various policy goals. As a result of this cleavage, state actors are often compelled to approach private companies for users’ personal data, primarily for law enforcement purposes. These companies, which can use the data either to improve their own software or to sell to advertisers, have no general legal obligation to hand it over to the government. Law enforcement normally relies on warrants to seize data, but as Apple’s refusal to comply after the 2015 shooting shows, such orders hold little weight for major tech companies. However, private companies themselves cannot always be considered trustworthy stewards of user data. Facebook recently came under fire after user data it collected was harvested by Cambridge Analytica, which used it to produce targeted political advertising and influence the 2016 U.S. Presidential election. Many other companies, like Yahoo! and Uber, have similarly jeopardised their customers’ privacy and safety through massive data breaches enabled by insufficient security measures.

Despite the mounting controversies and concerns, there are few legal avenues for claims of privacy infringement in the U.S., where data privacy regulation falls short of the standard set by many other countries. The U.S. has no comprehensive federal data protection law, only sector-specific statutes covering health and financial information, so the responsibility often falls to states to enact their own data protection laws, which may be as lenient or stringent as they like. Further, because most online data crosses both state and national borders, it often goes largely unprotected. As an international technological hub, the U.S., by lacking strict data regulation, fails to act responsibly and set a global precedent, putting other countries’ data at risk as well as that of its own citizens.

10 privacy tips for businesses to comply with PIPEDA. Graphic provided by the Office of the Privacy Commissioner of Canada

Canada, on the other hand, established a federal mandate in 2000 when it passed the Personal Information Protection and Electronic Documents Act (PIPEDA), a sweeping piece of legislation that applies to all private organisations in Canada that “collect, use, or disclose personal information in the course of commercial activity.” PIPEDA outlines ten fair information principles that must be adhered to in the collection of personal data, requiring businesses to be transparent and forthcoming about how they collect, use, and secure it. While PIPEDA also applies to federally regulated organisations, like banks, it does not apply to federal, provincial, or municipal governments themselves.

The Canadian government’s ability to use its citizens’ data is constrained by the Privacy Act of 1983, which was enacted in response to the rise of networked computers and growing concern for individual privacy. However, despite the subsequent arrival of the World Wide Web in 1989 and the evolution of digital technologies, the Privacy Act has not undergone any substantive changes since its inception, and PIPEDA has not been revised since 2007. As the meaning of personal privacy has changed dramatically over the last decade, the lack of revision to both acts puts the data of millions of Canadians at risk of exploitation and misuse.

Interestingly, not all countries see their private sector and government agencies in opposition over data. China, for example, operates an estimated 200 million facial recognition cameras across the country, which can identify faces on the street and match them to information in government databases. While other countries use facial recognition, the instant identification of random people in everyday situations, without any individual suspicion, and the scale at which the Chinese government deploys such cameras are unprecedented. Currently, as one of its many projects, the Chinese government uses this technology to profile, track, and detain Uighur Muslims, an ethno-religious minority persecuted and interned in “re-education” camps in Xinjiang province. However, China is not alone. State-sponsored facial recognition technology is on the rise worldwide, notably in countries such as the United Kingdom and India, while a handful of U.S. cities, like San Francisco, Oakland, and Somerville, have banned its use by government agencies.

Reliable facial recognition requires a lot of data. “File:Face Recognition 3252983.png” by teguhjatipras is licensed under CC0 1.0

While intelligence agencies in the U.S. have used various forms of facial recognition since the 1960s, Facebook became one of the first private companies to employ it at scale in 2014. Since then, the facial recognition industry has grown and is now estimated to be worth $3.2 billion. Recently, the start-up Clearview AI sparked controversy with its ground-breaking facial recognition application, which is used by over 600 law enforcement agencies and can match a person’s image with all of their associated photos and information available online. Among its many uses, the application has helped identify suspected criminals and, because it is not restricted to government databases, has far eclipsed the facial recognition tools typically used by police departments, whose databases often contain only government-issued photo ID or mugshots of arrested adults. The extent of the application’s accuracy is unknown, however, as Clearview refuses to release any reports on the frequency of false matches.

It is becoming increasingly clear that Clearview is charting new territory: for the rest of the digital world, the line between personal privacy and security is drawn at facial recognition built on non-government databases. In fact, most social media platforms, like Facebook and Twitter, prohibit third parties from collecting publicly available data from their sites for use in facial recognition, although they do use the technology internally to help users organise photos and tag friends. Evidently, the practice is so controversial that, although these companies would profit from offering their photos to other facial recognition databases, they refrain from doing so, seemingly following an unwritten rule rooted in its potentially far-reaching consequences. While Clearview is in violation of Facebook’s terms of service, there has been no indication of action taken against the company beyond cease-and-desist letters.

Although the lack of stringent regulation has allowed innovation to run rampant, the unregulated use of advanced technology could have serious long-term consequences. The evolution of digital technology has outpaced legislators, leaving user data unprotected against the whims of state and private actors alike. Indeed, state actors in North America have done little to create and enforce strong data protection, setting a dangerous precedent that is only now starting to be questioned. In the digital world, the user is therefore left with nobody to trust: neither the corporations whose profits rely on the collection of personal information, nor the state actors who have sought to override rather than protect their citizens’ right to privacy. While we live in an era of extraordinary technological advancement, such progress is being realised at the cost of our right to privacy.

Edited by Chanel MacDiarmid.