
iMessedUp – Why Apple’s iOS 16 Update Is a Mistake in the Eyes of Litigators.

Carlisle Ghirardini, MJLST Staffer

Have you ever wished you could unsend a text message? Has autocorrect ever created a typo you would give anything to edit? Apple’s recent iOS 16 update makes these dreams come true. The new software allows you to edit a text message a maximum of five times for up to 15 minutes after delivery and to fully unsend a text for up to two minutes after delivery.[1] While this update might be a dream for a sloppy texter, it may become a nightmare for a victim hoping to use text messages as legal evidence. 

But I Thought My Texts Were Private?

Regardless of the passcode on your phone or other security measures you may use to keep your correspondence private, text messages can be used as relevant evidence in litigation so long as they can be authenticated.[2] Under Federal Rule of Evidence 901(a), authentication only requires proof sufficient to support a finding that the evidence at issue is what you claim it is.[3] Absent access to the defendant’s phone, one key way to authenticate texts is to demonstrate the personal nature of the messages and their consistency with earlier communications between the parties.[4] However, for texts to be admitted over hearsay and other objections, preserving the messages through screenshots, printouts, or other tangible records is vital.[5]

A perpetrator may easily abuse the iOS 16 features by crafting harmful messages and then editing or unsending them. This has several negative effects. First, the mere availability of this capability may increase perpetrators’ use of text, since disappearing harassment is easier to get away with. Further, victims will be less likely to capture the evidence in the short window before the proof is rescinded but after the damage has already been done. Attorney Michelle Simpson Tuegal, who spoke out against this software, shared that “victims of trauma cannot be relied upon, in that moment, to screenshot these messages to retain them for any future legal proceedings.”[6] Finally, when a victim is left without proof and the perpetrator denies ever sending the messages, such “gaslighting” and undermining of the victim’s experience may inflict further psychological pain.[7]

Why Are Text Messages So Important?

Text messages have been critical evidence in proving a defendant’s guilt in many types of cases. One highly publicized example is the trial of Michelle Carter, who sent manipulative text messages encouraging her then 18-year-old boyfriend to commit suicide.[8] Not only were these texts valuable in proving reckless conduct; they also supported Carter’s conviction for involuntary manslaughter, as her words were shown to be the cause of the victim’s death. Without evidence of this communication, the case may have turned out very differently. Who is to say that Carter would not have succeeded in her abuse by sending and then unsending or editing her messages?

Text messaging is also a popular tool for perpetrators of sexual harassment and assault. In a Rhode Island Supreme Court case, communication via iMessage was central to a finding of first-degree sexual assault, as the 17-year-old victim had felt too afraid to undergo a hospital examination after her attack.[9] Fortunately, she had saved photos of inappropriate messages the perpetrator sent after the incident, among other records of their texting history, which properly authenticated the texts and connected him to the crime. It is important to note, however, that the incriminating screenshots were not taken until the morning after, and with the help of a family member. This demonstrates that immediately memorializing evidence is often not a victim’s first instinct, especially when the content may be associated with shame or trauma. The new iOS feature may take away this opportunity to support one’s case with messages that can paint a picture of the incident or of the relationship between the parties.

Apple Recognized That They Messed Up

The current iOS 16 update, offering two minutes to unsend messages and 15 minutes to edit them, is actually an amendment to Apple’s original timeframe of 15 minutes to unsend. The change came in light of efforts by an advocate for survivors of sexual harassment and assault, who wrote a letter to Apple’s CEO warning of the dangers of the new unsending capability.[10] While the shortened timeframe leaves less room for abuse of the feature, editing is just as dangerous as unsending. With no limit on how much text can be edited, one could send full sentences of verbal abuse only to later replace them with a one-word message. Furthermore, if someone is reading the harmful messages in real time, the shorter window simply gives them less time to react and to save the messages as evidence. We can hope that the narrower window makes perpetrators think harder before sending a text they may not be able to delete, but this is wishful thinking. The fact that almost half of young people report having been victims of cyberbullying even when there was no option to rescind or edit one’s messages suggests that the length of the window likely does not matter.[11] The new Apple features should be disabled; the “fix” to the update is not enough. The costs of what such a feature will do to victims and their chances of success in litigation outweigh the benefits to the careless texter.

Notes

[1] Sofia Pitt, Apple Now Lets You Edit and Unsend iMessages on Your iPhone. Here’s How to Do It, CNBC (Sep. 12, 2022, 1:12 PM), https://www.cnbc.com/2022/09/12/how-to-unsend-imessages-in-ios-16.html.

[2] FED. R. EVID. 901(a).

[3] Id.

[4] United States v. Teran, 496 Fed. Appx. 287 (4th Cir. 2012).

[5] State v. Mulcahey, 219 A.3d 735 (R.I. 2019).

[6] Jess Hollington, Latest iOS 16 Beta Addresses Rising Safety Concerns for Message Editing, DIGITAL TRENDS (Jul. 27, 2022), https://www.digitaltrends.com/mobile/ios-16-beta-4-message-editing-unsend-safety-concerns-fix/.

[7] Id.

[8] Commonwealth v. Carter, 115 N.E.3d 559 (Mass. 2019).

[9] Mulcahey, 219 A.3d at 740.

[10] Hollington, supra note 6.

[11] 45 Cyberbullying Statistics and Facts to Make Texting Safer, SLICKTEXT (Jan. 4, 2022), https://www.slicktext.com/blog/2020/05/cyberbullying-statistics-facts/.




Would Autonomous Vehicles (AVs) Interfere With Our Fourth Amendment Rights?

Thao Nguyen, MJLST Staffer

Traffic accidents are a major issue in the U.S. and around the world. Although car safety features are continuously enhanced and improved, traffic crashes remain the leading cause of non-natural death for U.S. citizens. Most of the time, the primary causes are human errors rather than instrument failures. Autonomous vehicles (“AVs”), which promise to be automobiles that operate themselves without a human driver, are therefore an exciting up-and-coming technology, studied and developed in both academia and industry.[1]

To drive themselves, AVs must be able to perform two key tasks: sensing the surrounding environment and “driving,” essentially replacing the eyes and hands of the human driver.[2] The standard AV design today includes a sensing system that collects information from the outside world to assist the “driving” function. The sensing system is composed of a variety of sensors,[3] most commonly Light Detection and Ranging (LiDAR) units and cameras.[4] A LiDAR is a device that emits laser pulses and applies echo-ranging principles, analogous to sonar, to estimate the depth of its surroundings: an emitted pulse travels forward, hits an object, and bounces back to the receiver; the round-trip travel time is measured, and the distance is computed. From this distance and depth information, a 3D point-cloud map of the surrounding environment is generated. In addition to precise 3D coordinates, most LiDAR systems also record “intensity,” a measure of the return strength of the laser pulse that is based, in part, on the reflectivity of the surface struck by the pulse. LiDAR intensity data thus reveal helpful information about the surface characteristics of the surroundings. The two sensors complement each other: the camera conveys rich appearance data with more detail on the objects, whereas the LiDAR captures 3D measurements.[5] Fusing the information acquired by each allows the sensing system to gain a reliable perception of the environment.[6]
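To make the ranging arithmetic concrete, here is a minimal sketch of the time-of-flight computation. It is illustrative only: the function and variable names are ours, and real LiDAR drivers expose calibrated point clouds rather than raw pulse timings.

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
# The division by two reflects that the pulse travels out and back.
C = 299_792_458.0  # speed of light, meters per second

def pulse_distance(round_trip_seconds: float) -> float:
    """Distance in meters to the surface that reflected the pulse."""
    return C * round_trip_seconds / 2.0

# A return received 200 nanoseconds after emission implies a surface
# roughly 30 meters away.
print(pulse_distance(200e-9))  # ~29.98
```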

LiDAR sensing technology is usually combined with artificial intelligence, as its goal is to imitate and eventually replace human perception in driving. Today, most artificial intelligence systems use “machine learning,” a method that gives computers the ability to learn without being explicitly programmed. With machine learning, computers train themselves to perform new tasks much as humans do: by exploring data, identifying patterns, and improving upon past experience. Applied machine learning is data-driven: the greater the breadth and depth of the data supplied to the computer, the greater the variety and complexity of the tasks the computer can teach itself to perform. Since “driving” combines multiple high-complexity tasks, such as object detection, path planning, localization, and lane detection, an AV that drives itself requires voluminous data in order to operate properly and effectively.
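As a toy illustration of what “data-driven” means (this is not an actual AV perception pipeline; the feature names and labels below are invented for the example), a classifier is never handed an explicit rule. It infers one from labeled examples, and richer data yields better behavior:

```python
# A classifier learns a decision rule from labeled examples rather than
# from hand-written logic; more and better data improves the rule.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [object width in meters, object height in meters]
X = [[0.5, 1.7], [0.6, 1.8], [1.9, 1.5], [2.1, 1.4]]
y = ["pedestrian", "pedestrian", "car", "car"]

model = LogisticRegression().fit(X, y)
print(model.predict([[0.55, 1.75]]))  # -> ['pedestrian']
```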

“Big data” is already considered a valuable commodity in the modern world. In the case of AVs, however, the data collected is data about public streets and road users, and its large-scale collection is further empowered by technologies that detect and identify, track and trace, and mine and profile it. Once profiles of a person’s traffic movements and behaviors exist in a database somewhere, there is a great temptation to use the information for purposes other than those for which it was originally collected, as has happened with much other “big data.” Law enforcement officers with access to AV data could track and monitor people’s whereabouts, pinpointing individuals whose trajectories touch suspicious locations at high frequency. Trajectories could be matched to identified individuals by way of car models and license plates. The police could then identify crime suspects by viewing the trajectories of everyone in the same town, rather than taking the trouble to identify and physically track each suspect. Can this use of data by law enforcement be sufficiently justified?

As we know, use of “helpful” police tools may be restricted by the Fourth Amendment, and for good reason. Although surveillance helps police officers detect criminals,[7] excessive surveillance has social costs: restricted privacy and a sense of being “watched” by the government inhibit citizens’ productivity, creativity, and spontaneity, among other psychological effects.[8] Case law offers guidance for interpreting and applying the Fourth Amendment’s standards of “trespass” and “unreasonable searches and seizures.” Three principal cases, Olmstead v. United States, 277 U.S. 438 (1928), Goldman v. United States, 316 U.S. 129 (1942), and the modern case United States v. Jones, 565 U.S. 400 (2012), limit Fourth Amendment protection to guarding against physical intrusion into private homes and property. Such protection would be of little help against LiDAR, which operates on public streets as a remote sensing technology. Nonetheless, despite Jones, the broader “reasonable expectation of privacy” test established by Katz v. United States, 389 U.S. 347 (1967), is more widely accepted. Instead of tracing the physical boundaries of “persons, houses, papers, and effects,” the Katz test asks whether there is an expectation of privacy that society recognizes as “reasonable.” The Fourth Amendment “protects people, not places,” wrote the Katz Court.[9]

United States v. Knotts, 460 U.S. 276 (1983), was a public street surveillance case that invoked the Katz test. In Knotts, the police installed a beeper in the defendant’s vehicle to track it. The Court found that such tracking on public streets was not prohibited by the Fourth Amendment: “A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[10] The Knotts Court thus applied the Katz test, asking whether there was an expectation of privacy that society recognized as “reasonable.”[11] Its answer was in the negative: unlike a person in his dwelling place, a person traveling on public streets “voluntarily conveyed to anyone who wanted to look the fact that he was traveling over particular roads in a particular direction.”[12]

United States v. Maynard, 615 F.3d 544 (D.C. Cir. 2010), another public street surveillance case, this one from the twenty-first century, reconsidered the Knotts holding regarding the “reasonable expectation of privacy” on public streets. The Maynard defendant argued that the district court erred in admitting evidence acquired through the police’s warrantless use of a Global Positioning System (GPS) device to track his movements continuously for a month.[13] The Government invoked United States v. Knotts and its holding that “[a] person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”[14] The D.C. Circuit, however, distinguished Knotts, pointing out that the Government in Knotts used a beeper to track a single journey, whereas the GPS monitoring in Maynard was sustained 24 hours a day for a month.[15] The use of the GPS device over the course of that month did more than simply track the defendant’s “movements from one place to another”; it revealed the “totality and pattern” of his movements.[16] The court was willing to distinguish between “one path” and “the totality of one’s movements”: because the totality of one’s movements is far less exposed to public view and far more revealing of one’s personal life, it is constitutional for the police to track an individual along one path, but not to capture that individual’s totality of movement.

Thus, with time the courts appear to be recognizing that, when it comes to modern surveillance technology, the sheer quantity and revealing nature of data collected on the movements of public street users ought to raise concerns. A straightforward application of these holdings to AV sensing data would be that data concerning a person’s “one path” can be obtained and used, but not the totality of a person’s movements. It is unclear where to draw the line between “one path” and “the totality of movement.” The surveillance in Knotts was intermittent over the course of three days,[17] whereas the defendant in Maynard was tracked for over a month. The limit presumably falls somewhere in between.

Furthermore, this straightforward application is complicated by the fact that the sensors AVs utilize pick up more than mere locational information. As discussed above, an AV sensing system, composed of multiple sensors, captures both camera images and information about the speed, texture, and depth of objects. In other words, AVs do not merely track a vehicle’s location like a beeper or GPS device; they “see” the vehicle through their cameras, LiDAR, and radar, gaining a wealth of information. Even if only data about “one path” of a person’s movement is extracted, that “one path” data as processed by an AV sensing system is far richer than what a beeper or cell-site location information can communicate. Adding to this, current developers are proposing AV networks that share data among many vehicles, so that data on “one path” could be combined with other data on the same vehicle’s movements, or multiple views of the same path from different perspectives could be combined. The extensiveness of these data goes far beyond the precedents in Knotts and Maynard. It is thus foreseeable that warrantless subpoenas of AV sensing data would fall firmly within the courts’ understanding of an unreasonable search.

[1] Tri Nguyen, Fusing LIDAR sensor and RGB camera for object detection in autonomous vehicle with fuzzy logic approach, 2021 International Conference on Information Networking (ICOIN) 788, 788 (2021).

[2] Id. (“An autonomous vehicle or self-driving car is a vehicle having the ability to sense the surrounding environment and capable of operation on its own without any human interference. The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounting on it.”)

[3] Id. (“The key to the perception system holding responsibility to collect the information in the outside world and determine the safety of the vehicle is a variety of sensors mounted on it.”).

[4] Heng Wang & Xiaodong Zhang, Real-time Vehicle Detection and Tracking Using 3D LiDAR, Asian Journal of Control 1, 1 (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”).

[5] Id. (“Light Detection and Ranging (LiDAR) and cameras [6,8] are two kinds of commonly used sensors for obstacle detection.”) (“Conversely, LiDARs are able to produce 3D measurements and are not affected by the illumination of the environment [9,10].”).

[6] Nguyen, supra note 1, at 788 (“Due to the complementary of two sensors, it is necessary to gain a more reliable environment perception by fusing the information acquired from these two sensors.”).

[7] Raymond P. Siljander & Darin D. Fredrickson, Fundamentals of Physical Surveillance: A Guide for Uniformed and Plainclothes Personnel, Second Edition (2002) (abstract).

[8] Tamara Dinev et al., Internet Privacy Concerns and Beliefs About Government Surveillance – An Empirical Investigation, 17 Journal of Strategic Information Systems 214, 221 (2008) (“Surveillance has social costs (Rosen, 2000) and inhibiting effects on spontaneity, creativity, productivity, and other psychological effects.”).

[9] Katz v. United States, 389 U.S. 347, 351 (1967).

[10] United States v. Knotts, 460 U.S. 276, 281 (1983) (“A person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another.”).

[11] Id. at 282.

[12] Id.

[13] United States v. Maynard, 615 F.3d 544, 549 (D.C. Cir. 2010).

[14] Id. at 557.

[15] Id. at 556.

[16] Id. at 558 (“[O]ne’s movements 24 hours a day for 28 days as he moved among scores of places, thereby discovering the totality and pattern of his movements.”).

[17] Knotts, 460 U.S. at 276.


With Lull in Deepfake Legislation, Questions Loom Large as Ever

Alex O’Connor, MJLST Staffer

In 2019 and 2020, remarkably realistic, politically motivated forged content went viral on social media. The content, known as “deepfakes,” included photorealistic images of political figures such as Kim Jong Un, Vladimir Putin, Matt Gaetz, and Barack Obama. Also in 2019, a woman was conned out of nearly $300,000 by a scammer posing as a U.S. Navy admiral using deepfake technology. These stories, and others, catapulted online forgeries to the front pages of newspapers, as observers were both intrigued and frightened by the novel technology.

While the potential for deepfake technology to deceive political leaders and provoke conflict helped bring deepfakes into the public consciousness, individuals, and particularly women, have been victimized by deepfakes since as early as 2017. Even today, research suggests that 96% of deepfake content available online is nonconsensual pornography. While early targets of deepfakes were mostly celebrity women, nonpublic figures have been victimized as well. Indeed, deepfake technology is becoming increasingly sophisticated and user-friendly, giving anyone so inclined the ability to forge pornography by transposing a woman’s photograph onto explicit content in order to harass, blackmail, or embarrass her. For example, one deepfake app allowed users to strip a subject’s clothing from photos, creating a photorealistic nude image. After widespread outcry, the developers shut the app down only hours after its launch.

The political implications of deepfakes alarmed lawmakers as well, and Congress leapt into action. Beginning in 2020, the National Defense Authorization Act (NDAA) included a requirement that the Department of Homeland Security (DHS) issue an annual report on the threats deepfake technology poses to national security. The following year, the NDAA broadened the DHS report to include threats to individuals as well. Another piece of legislation, the Identifying Outputs of Generative Adversarial Networks Act, directed the National Institute of Standards and Technology to support research into standards for deepfake content.

A much more controversial bill went beyond mere research and committees. The DEEP FAKES Accountability Act would require any producer of deepfake content to include a watermark over the image notifying viewers that it is a forgery. If the content contains “sexual content of a visual nature,” producers of unwatermarked content would be subject to criminal penalties. Meanwhile, anyone who merely violates the watermark requirement would be subject to civil penalties of $150,000 per image.

While many have celebrated the bill for its potential to protect individuals and the political process, others have criticized it as an overbroad and ineffective infringement on free speech. Producers of political satire in particular may find the watermark requirement a joke killer. Further, some worry that the pace of deepfake development could expose websites to interminable litigation, as the proliferation of deepfake content makes enforcement of the act on platforms impossible. Originally introduced in June 2019 by Representative Yvette Clarke (D-NY-9), the bill languished in committee. Representative Clarke reintroduced it in April of this year before the 117th Congress, and it is currently being considered by three committees: Energy and Commerce, Judiciary, and Homeland Security.

The flurry of legislative activity at the federal level was mirrored by engagement by states as well. Five states have enacted deepfake legislation to combat political interference, nonconsensual pornography, or both, while another four states have introduced similar legislation. As with the federal legislation, opposition to the state deepfake laws is grounded in First Amendment concerns, with defenders of civil liberties such as the ACLU sending a letter to the California governor asking him to veto the legislation. He declined.

Deepfake-related legislative activity has stalled during the coronavirus pandemic, but the questions of how to craft legislation that strikes the right balance between privacy and dignity on the one hand, and free expression and satire on the other, loom as large as ever. These questions will only become more pressing with the rapid growth of deepfake technology and growing concerns about governmental overreach in good-faith efforts to protect citizens’ privacy and the democratic process.


Censorship Remains Viable in China – but for How Long?

by Greg Singer, UMN Law Student, MJLST Managing Editor

In the West, perhaps no right is held in higher regard than the freedom of speech. It is almost universally agreed that a person has the inherent right to speak his or her mind, without fear of censorship or reprisal by the state. Yet for the more than 1.3 billion people residing in what is one of the oldest civilizations on the planet, such a concept is either unknown or wholly unreflective of the reality they live in.

Despite the exploding number of Internet users in China (from 200 million in 2007 to over 530 million by the end of the first half of 2012, more than the entire population of North America), the Chinese government has remained implausibly effective at banishing almost all traces of dissenting thought from the wires. A recent New York Times article detailing the fabulous wealth of Chinese Premier Wen Jiabao and his family members (at least $2.7 billion) resulted in the almost immediate censorship of the newspaper’s English and Chinese web presence in China. Not stopping there, the censorship apparatus went on to scrub almost all links, reproductions, and blog posts based on the article, leaving little trace of its existence for the average Chinese citizen. Earlier this year, Bloomberg News suffered a similar fate after it published an unacceptable report on the unusual wealth of Xi Jinping, the Chinese Vice President and expected successor to current President Hu Jintao.

In “Forbidden City Enclosed by the Great Firewall: The Law and Power of Internet Filtering in China,” published in the Winter 2012 issue of the Minnesota Journal of Law, Science & Technology, Jyh-An Lee and Ching-Yi Liu explain that it is not mere tenacity that permits such effective censorship: the structure of the Chinese Internet itself has been designed to allow a centralized authority to control and filter the flow of all communications over the network. Even despite the decentralizing force of content creation on the web, it appears that censorship will remain technically possible in China for the foreseeable future.

Yet still, technical capability is not synonymous with political permissibility. A powerful middle class is emerging in the country, with particular strength in the large urban areas, where ideas and sentiments are prone to spread quickly, even in the face of government censorship. At the same time, GDP growth is steadily declining from its tremendous peak in the mid-2000s. These two factors may combine to produce a population that has the time, education, and wherewithal to challenge a status quo that may look somewhat less like marvelous prosperity in the coming years. If China wishes to enter the developed world as a peer to the West (with an economy based on skilled and educated individuals, rather than mass labor), addressing its ongoing civil rights issues seems an almost unavoidable prerequisite.


Google Glass: Augmented Reality or ADmented Reality?

by Sarvesh Desai, UMN Law Student, MJLST Staff

Google glasses . . . like a wearable smartphone, but “weighing a few ounces, the sleek electronic device has a tiny embedded camera. The glasses also deploy what’s known as a ‘heads-up display,’ in which data are projected into the user’s field of vision on a small screen above the right eye.”

The glasses are designed to provide an augmented reality experience in which (hopefully useful) information can be displayed to the wearer based on what the wearer is observing in the world at that particular moment. The result could be a stunning and useful achievement, but as one commentator pointed out, Google is an advertising company. The result of Google glasses, or as Google prefers to call them, “Google Glass” (since they actually have no lenses), may be that advertisements follow you around and continuously update as you move through the world.

In the ever-expanding digital age, more of our movements, preferences, and lives are incessantly tracked. A large portion of the American population carries a mobile phone at all times, and as iPhone users learned in 2011, a smartphone is not only a handy way to keep Facebook up to date; it is also a potential GPS tracking device.

With technologies like smartphones, movement data is combined with location data to create a detailed profile of each person. Google Glass extends this personal profile even further by recording not only where you are but what you are looking at. This technology makes the targeted advertising displayed in the hit movie Minority Report a reality, while also creating privacy issues that previously could not be conceptualized outside science fiction.

Wondering what it might look like to wander the world as context-sensitive advertisements flood your field of vision? Jonathan McIntosh, a pop culture hacker, has the answer. He released a video titled ADmented Reality, in which he placed ads onto Google’s Project Glass promotional video to demonstrate what the combination of the technology, tracking, and advertising might yield. McIntosh discussed the potential implications of the technology on the ABC News Technology Blog. “Google’s an ad company. I think it’s something people should be mindful of and critical of, especially in the frame of these awesome new glasses,” he said.

As this technology continues to improve and become a more integrated part of our lives, the issue of tracking becomes ever more important. For a thorough analysis of these important issues, take a look at Omer Tene and Jules Polonetsky’s article in the Minnesota Journal of Law, Science & Technology, “To Track or ‘Do Not Track’: Advancing Transparency and Individual Control in Online Behavioral Advertising.” The article covers current online tracking devices, the uses of tracking, and recent developments in the regulation of online tracking. The issues are not simple, and there are many competing interests involved: efficiency vs. privacy, law enforcement vs. individual rights, and reputation vs. freedom of speech, to name a few. As this technology inexorably marches on, it is worth considering whether legislation is needed and, if so, how it will balance those competing interests. In addition, what values do we consider to be of greatest importance and worth preserving at the risk of hindering “awesome new” technology?


FBI Face Recognition Concerns Privacy Advocates

by Rebecca Boxhorn, Consortium Research Associate, Former MJLST Staff & Editor

Helen of Troy’s face launched a thousand ships, but yours might provide probable cause. The FBI is developing a nationwide facial recognition database that has privacy experts fretting about the definition of privacy in a technologically advanced society. The $1 billion Next Generation Identification initiative seeks to harness the power of biometric data in the fight against crime. Part of the initiative is the creation of a facial photograph database that will allow officials to match pictures to mug shots, electronically identify suspects in crowds, or even find fugitives on Facebook. The use of biometrics in law enforcement is nothing new, of course; fingerprint and DNA evidence have led to the successful incarceration of thousands. What privacy gurus worry about is the power of facial recognition technology and the potential destruction of anonymity.

Most facial recognition technology relies on matching “face prints” to reference photographs. Your face print is composed of as many as 80 measurements, including nose width, eye socket depth, and cheekbone shape. Sophisticated computer software then matches images or video to a stored face print and any data accompanying that face print. The accuracy of facial recognition programs varies, with estimates ranging from as low as 61% to as high as 95%.
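In rough outline, the matching step reduces to comparing feature vectors. The sketch below is purely illustrative: the measurement names, numbers, and threshold are invented for the example, and real systems typically use learned embeddings rather than hand-picked measurements.

```python
import math

# A "face print" here is just a vector of facial measurements.
def is_match(print_a: list[float], print_b: list[float],
             threshold: float = 0.3) -> bool:
    """Declare a match when the prints are close in Euclidean distance.
    The threshold is an assumed tolerance; tuning it trades false
    matches against missed ones."""
    return math.dist(print_a, print_b) < threshold

# Hypothetical measurements: [nose width, eye socket depth, cheekbone shape]
reference = [3.1, 2.4, 5.0]   # stored mug-shot print
candidate = [3.0, 2.5, 5.1]   # print extracted from new footage

print(is_match(reference, candidate))  # True -> flagged as the same face
```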

While facial recognition technology may prove useful for suspect identification, your face print could reveal much more than your identity to someone with a cell phone camera and a Wi-Fi connection. Researchers at Carnegie Mellon University were able to link face print data to deeply personal information using the Internet: Facebook pages, dating profiles, even Social Security numbers! Although the FBI has assured the public that it only intends to include criminals in its nationwide database, this has not quieted concerns in the privacy community. The presumption of innocence does not apply to the collection of biometrics. Police commonly collect fingerprints from arrestees, and California’s Proposition 69 allows police to collect DNA samples from everyone they arrest, no matter the charge, the circumstances, or eventual guilt or innocence. With the legality of pre-conviction DNA collection largely unsettled, the legal implications of new facial recognition technology are anything but certain.

It is not difficult to understand, then, why facial recognition has captured the attention of the federal government, including Senator Al Franken of Minnesota. During a Judiciary Committee hearing in July, Senator Franken underscored the free speech and privacy implications of the national face print database. From cataloging political demonstration attendees to misidentifications, the specter of facial recognition technology has privacy organizations and Senator Franken concerned.

But is new facial recognition technology worth all the fuss? Instead of tin foil hats, should we don ski masks? The Internet is inundated with deeply private information voluntarily shared by individuals. Thousands of people log on to Patientslikeme.com to describe their diagnoses and symptoms; 23andme.com allows users to connect to previously unknown relatives based on shared genetic information. Advances in technology seem to be chipping away at traditional notions of privacy. Despite all of this sharing, however, many users find solace and protection in the anonymity of the Internet. The ability to hide your identity and, indeed, your face is a defining feature of the Internet and the utility and chaos it provides. But as Omer Tene and Jules Polonetsky identify in their article “To Track or ‘Do Not Track’: Advancing Transparency and Individual Control in Online Behavioral Advertising,” online advertising “fuels the majority of free content and services online” while amassing enormous amounts of data on users. Facial recognition technology only exacerbates concerns about Internet privacy by providing the opportunity to harvest user-generated data, provided under the guise of anonymity, to give faces to usernames.

Facial recognition technology undoubtedly provides law enforcement officers with a powerful crime-fighting tool. As with all new technology, it is easy to overstate the danger of governmental abuse. Despite FBI assurances to use facial recognition technology only to catch criminals, concerns regarding privacy and domestic spying persist. Need the average American fear the FBI’s facial recognition initiative? Likely not. To be safe, however, it might be time to invest in those oversized sunglasses you have been pining after.