
United States v. Microsoft Corp.: A Chance for SCOTUS to Address the Scope of the Stored Communications Act

Maya Digre, MJLST Staffer

 

On October 16, 2017, the United States Supreme Court granted the federal government’s petition for certiorari in United States v. Microsoft Corp. The case concerns a warrant issued to Microsoft ordering it to seize and produce the contents of a customer’s e-mail account that the government believed was being used in furtherance of narcotics trafficking. Microsoft produced the non-content information stored in the U.S., but moved to quash the warrant with respect to the information stored abroad in Ireland. Microsoft claimed that the only way to access the information was through its Dublin data center, even though that data center could also be reached through Microsoft’s database management program from some of its U.S. locations.

 

The U.S. District Court for the Southern District of New York held Microsoft in civil contempt for not complying with the warrant. The 2nd Circuit reversed, stating that “Neither explicitly nor implicitly does the statute envision the application of its warrant provision overseas” and that “the application of the Act that the government proposes – interpreting ‘warrant’ to require a service provider to retrieve material from beyond the borders of the United States – would require us to disregard the presumption against extraterritoriality.” In its opinion, the court used traditional tools of statutory interpretation, including plain meaning, the presumption against extraterritoriality, and legislative history.

 

The issue in the case, according to SCOTUSblog, is “whether a United States provider of email services must comply with a probable-cause-based warrant issued under 18 U.S.C. § 2703 by making disclosure in the United States of electronic communications within that provider’s control, even if the provider has decided to store that material abroad.” Essentially, the dispute centers on the scope of the Stored Communications Act (“SCA”) with respect to information that is stored abroad. The larger issue is the tension between international privacy laws and the absolute nature of warrants issued in the United States. According to the New York Times, “the case is part of a broader clash between the technology industry and the federal government in the digital age.”

 

I think the broader issue is one the Supreme Court should address. However, I am not certain that this is the best case for the Court. The fact that Microsoft can access the information from data centers in the United States through its database management program seems to weaken its claim. The case may be stronger for companies that cannot access information they store abroad from within the United States. Regardless of this weakness, the Supreme Court should rule in favor of the government to preserve the force of warrants of this nature. It was Microsoft’s choice to store the information abroad, and the choices of companies should not impede the legitimate crime-fighting goals of the government. Additionally, if the Court ruled that the warrant does not reach information stored abroad, companies might be incentivized to keep information out of the reach of a U.S. warrant by storing it abroad. That is not a favorable policy choice for the Supreme Court to make; the justices should rule in favor of the government.

 

Unfortunately, the Court will not get to rule on this case, as Microsoft decided to drop it after the DOJ agreed to change its policy.


Microsoft Triumphs in Fight to Notify Users of Government Data Requests

Brandy Hough, MJLST Staffer

 

This week, Microsoft announced it will drop its secrecy-order lawsuit against the U.S. government after the Deputy U.S. Attorney General issued a binding policy limiting the use and duration of protective orders issued pursuant to 18 U.S.C. § 2705(b) of the Stored Communications Act (“SCA”), enacted as Title II of the Electronic Communications Privacy Act of 1986 (“ECPA”).

 

The ECPA governs requests to obtain user records and information from electronic service providers. “Under the SCA, the government may compel the disclosure of . . . information via subpoena, a court order under 18 U.S.C. § 2703(d), or a search warrant.” Pursuant to 18 U.S.C. § 2705(b), a government entity may apply for an order preventing a provider from notifying its user of the existence of the warrant, subpoena, or court order. Such an order is to be granted only if “there is reason to believe” that such notification will result in (1) endangering an individual’s life or physical safety; (2) flight from prosecution; (3) destruction of or tampering with evidence; (4) intimidation of witnesses; or (5) seriously jeopardizing an investigation or delaying a trial.

 

Microsoft’s April 2016 lawsuit stemmed from what it viewed as routine overuse of protective orders accompanying government requests for user data under the ECPA, often without fixed end dates. Microsoft alleged both First and Fourth Amendment violations, arguing that “its customers have a right to know when the government obtains a warrant to read their emails, and . . . Microsoft has a right to tell them.” Many technology leaders, including Apple, Amazon, and Twitter, signed amicus briefs in support of Microsoft’s efforts.

 

The Deputy Attorney General’s October 19th memo states that “[e]ach § 2705(b) order should have an appropriate factual basis and each order should extend only as long as necessary to satisfy the government’s interest.” It further outlines steps that prosecutors applying for § 2705(b) orders must follow, including one that states “[b]arring exceptional circumstances, prosecutors filing § 2705(b) applications may only seek to delay notice for one year or less.” The guidelines apply prospectively to applications seeking protective orders filed on or after November 18, 2017.

 

Microsoft isn’t sitting back to celebrate its success; instead, it is continuing its efforts outside the courtroom, pushing for Congress to amend the ECPA to address secrecy orders.

 

Had the case progressed without these changes, the court should have ruled in favor of Microsoft. Because of the way § 2705(b) of the SCA was written, it allowed the government to exploit the “vague legal standards . . . to get indefinite secrecy orders routinely, regardless of whether they were even based on the specifics of the investigation at hand.” This behavior violated both the First Amendment – by restraining Microsoft’s speech based on “purely subjective criteria” rather than requiring the government to “establish that the continuing restraint on speech is narrowly tailored to promote a compelling interest” – and the Fourth Amendment – by not allowing users to know when the government searches and seizes their cloud-based property, in contrast to the way Fourth Amendment rights are afforded to information stored in a person’s home or business. The court therefore should have declared, as Microsoft urged, that § 2705(b) was “unconstitutional on its face.”

 


6th Circuit Aligns With 7th Circuit on Data Breach Standing Issue

John Biglow, MJLST Managing Editor

To bring a suit in any judicial court in the United States, an individual or group of individuals must satisfy Article III’s standing requirement. As recently clarified by the Supreme Court in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), to meet this requirement, a “plaintiff must have (1) suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision.” Id. at 1547. As cases involving data breaches have entered the federal courts of appeals, there has been some disagreement as to whether the risk of future harm from data breaches, and the costs spent to prevent that harm, qualify as “injuries in fact” under Article III’s first prong.

Last spring, I wrote a note concerning Article III standing in data breach litigation in which I highlighted the circuit split on the issue and argued that the reasoning of the 7th Circuit in Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015) was superior to that of its sister courts and made for better law. In Remijas, the plaintiffs were a class of individuals whose credit and debit card information had been stolen when Neiman Marcus Group, LLC experienced a data breach. A portion of the class had not yet experienced any fraudulent charges on their accounts and asserted Article III standing based upon the risk of future harm and the time and money spent mitigating that risk. In holding that these plaintiffs had satisfied Article III’s injury in fact requirement, the court made a critical inference that when a hacker steals a consumer’s private information, “[p]resumably, the purpose of the hack is, sooner or later, to make fraudulent charges or assume [the] consumers’ identit[y].” Id. at 693.

This inference stands in stark contrast to the line of reasoning engaged in by the 3rd Circuit in Reilly v. Ceridian Corp., 664 F.3d 38 (3rd Cir. 2011). The facts of Reilly were similar to Remijas, except that in Reilly, Ceridian Corp., the company that had experienced the data breach, stated only that its firewall had been breached and that its customers’ information may have been stolen. In my note, mentioned supra, I argued that this difference in facts was not enough to wholly distinguish the two cases and overcome a circuit split, in part due to the Reilly court’s characterization of the risk of future harm. The Reilly court found that the risk of misuse of information was highly attenuated, reasoning that whether the plaintiffs experience an injury depends on a series of “if’s,” including “if the hacker read, copied, and understood the hacked information, and if the hacker attempts to use the information, and if he does so successfully.” Id. at 43 (emphasis in original).

Often in the law, we are faced with an imperfect or incomplete set of facts; any time an individual’s intent is at issue in a case, this is a certainty. When faced with these situations, lawyers have long utilized inferences to differentiate between more likely and less likely scenarios for what the missing facts are. In the case of a data breach, both parties will almost always have little to no knowledge of the intent, capabilities, or plans of the hacker. However, it seems to me that there is room for reasonable inferences to be made about these facts. When a hacker is sophisticated enough to breach a company’s defenses and access data, it makes sense to assume they are sophisticated enough to utilize that data. Further, because there is risk involved in executing a data breach (it is, after all, illegal), it makes sense to assume that the hacker seeks to gain from the act. Thus, as between the Reilly and Remijas courts’ characterizations of the likelihood of misuse of data, the better rule seems to be to assume that the hacker is able to utilize the data and plans to do so in the future. And if there are facts tending to show that this inference is wrong, it is much more likely at the pleading stage that the defendant corporation, rather than the plaintiff(s), will be in possession of that information.

Since Remijas, two data breach cases have reached the federal courts of appeals on the issue of Article III standing. In Lewert v. P.F. Chang’s China Bistro, Inc., 819 F.3d 963, 965 (7th Cir. 2016), the court unsurprisingly followed the precedent set forth in its recent decision in Remijas in finding that Article III standing was properly alleged. In Galaria v. Nationwide Mut. Ins. Co., a recent 6th Circuit case, the court had to make an Article III ruling without the constraint of an earlier ruling in its circuit, leaving it open to choose what rule and reasoning to apply. Galaria v. Nationwide Mut. Ins. Co., No. 15-3386, 2016 WL 4728027 (6th Cir. Sept. 12, 2016). In the case, the plaintiffs alleged, among other claims, negligence and bailment; these claims were dismissed by the district court for lack of Article III standing. In alleging that they had suffered an injury in fact, the plaintiffs alleged “a substantial risk of harm, coupled with reasonably incurred mitigation costs.” Id. at *3. In holding that this was sufficient to establish Article III standing at the pleading stage, the Galaria court found the inference made by the Remijas court persuasive, stating that “[w]here a data breach targets personal information, a reasonable inference can be drawn that the hackers will use the victims’ data for the fraudulent purposes alleged in Plaintiffs’ complaints.” Moving forward, it will be intriguing to watch how circuits that have not yet faced this issue, like the 6th Circuit before Galaria, rule on it, and whether, if the 3rd Circuit maintains its current reasoning, the issue will eventually make its way to the Supreme Court of the United States.


Solar Climate Engineering and Intellectual Property

Jesse L. Reynolds 

Postdoctoral Researcher and Research Funding Coordinator, Sustainability and Climate
Department of European and International Public Law, Tilburg Law School

Climate change has been the focus of much legal and policy activity in the last year: the Paris Agreement, the Urgenda ruling in the Netherlands, aggressive climate targets in China’s latest five-year plan, the release of the final US Clean Power Plan, and the legal challenge to it. Not surprisingly, each of these concerns controlling greenhouse gas emissions, the approach that has long dominated efforts to reduce climate change risks.

Yet last week, an alternative approach received a major—but little noticed—boost. For the first time, a federal budget bill included an allocation specifically for so-called “solar climate engineering.” This set of radical proposed technologies would address climate change by reducing the amount of incoming solar radiation. These would globally cool the planet, counteracting global warming. For example, humans might be able to mimic the well-known cooling caused by large volcanic eruptions by injecting a reflective aerosol into the upper atmosphere. Research thus far – which has been limited to modeling – indicates that solar climate engineering (SCE) would be effective at reducing climate change, rapidly felt, reversible in its direct climatic effects, and remarkably inexpensive. It would also pose risks that are both environmental – such as difficult-to-predict changes to rainfall patterns – and social – such as the potential for international disagreement regarding its implementation.

The potential role of private actors in SCE is unclear. On the one hand, decisions regarding whether and how to intentionally alter the planet’s climate should be made through legitimate state-based processes. On the other hand, the private sector has long been the site of great innovation, which SCE technology development requires. Such private innovation is both stimulated and governed through governmental intellectual property (IP) policies. Notably, SCE is not a typical emerging technology and might warrant novel IP policies. For example, some observers have argued that SCE should be a patent-free endeavor.

In order to clarify the potential role of IP in SCE (focusing on patents, trade secrets, and research data), Jorge Contreras of the University of Utah, Joshua Sarnoff of DePaul University, and I wrote an article that was recently accepted and scheduled for publication by the Minnesota Journal of Law, Science & Technology. The article explains the need for coordinated and open licensing and data sharing policies in the SCE technology space.

SCE research today is occurring primarily at universities and other traditional research institutions, largely through public funding. However, we predict that private actors are likely to play a growing role in developing products and services for large-scale SCE research and implementation, most likely through public procurement arrangements. The prospect of such future innovation should not be stifled through restrictive IP policies. At the same time, we identify several potential challenges for SCE technology research, development, and deployment that are related to rights in IP and data for such technologies. Some of these challenges have been seen with other emerging technologies, such as the risk that excessive early patenting will lead to a patent thicket with attendant anti-commons effects. Others are more particular to SCE, such as oft-expressed concerns that holders of valuable patents might unduly attempt to influence public policy regarding SCE implementation. Fortunately, a review of existing patents, policies, and practices reveals a current opportunity that may soon be lost: there are presently only a handful of SCE-specific patents; research is being undertaken transparently and at traditional institutions; and SCE researchers are generally sharing their data.

After reviewing various options and proposals, we make tentative suggestions to manage SCE IP and data. First, an open technical framework for SCE data sharing should be established. Second, SCE researchers and their institutions should develop and join an IP pledge community. They would pledge, among other things, to not assert SCE patents to block legitimate SCE research and development activities, to share their data, to publish in peer reviewed scientific journals, and to not retain valuable technical information as trade secrets. Third, an international panel—ideally with representatives from relevant national and regional patent offices—should monitor and assess SCE patenting activity and make policy recommendations. We believe that such policies could head off potential problems regarding SCE IP rights and data sharing, yet could feasibly be implemented within a relatively short time span.

Our article, “Solar Climate Engineering and Intellectual Property: Toward a Research Commons,” is available online as a preliminary version. We welcome comments, especially in the next couple months as we revise it for publication later this year.


A Comment on the Note “Best Practices for Establishing Georgia’s Alzheimer’s Disease Registry” from Volume 17, Issue 1

Jing Han, MJLST Staffer

Alzheimer’s disease (AD), also known simply as Alzheimer’s, accounts for 60% to 70% of cases of dementia. It is a chronic neurodegenerative disease that usually starts slowly and worsens over time. The cause of Alzheimer’s disease is poorly understood. No treatment can stop or reverse its progression, though some may temporarily improve symptoms. Affected people increasingly rely on others for assistance, often placing a burden on the caregiver; the pressures can include social, psychological, physical, and economic elements. The disease was first described by, and later named after, German psychiatrist and pathologist Alois Alzheimer in 1906. In 2015, there were approximately 48 million people worldwide with AD, and in developed countries it is one of the most financially costly diseases. Before many states, including Georgia and South Carolina, passed legislation establishing registries, many private institutions across the country had already made tremendous efforts to establish their own Alzheimer’s disease registries. The country has experienced an exponential increase in the number of people diagnosed with Alzheimer’s disease, and more and more states have begun to establish their own registries.

As the Note explains, the Georgia Registry has emphasized from the outset the importance of protecting the confidentiality of patient data from secondary uses. The Note explores many legal and ethical issues raised by the Registry. An Alzheimer’s disease patient’s diagnosis history, medication history, and personal lifestyle are generally confidential information, known only to the physician and the patient himself. Reporting such information to the Registry, however, may lead to wider disclosure of what was previously private information and consequently may raise constitutional concerns. While the vast majority of public health registries have historically focused on collecting infectious disease data, registries for non-infectious diseases, such as Alzheimer’s disease, diabetes, and cancer, are more recent creations. There is a delicate balance to strike between the public interest and personal privacy. Registration is not mandatory because Alzheimer’s is not infectious. After all, people suffering from Alzheimer’s often face violations of their human rights, abuse and neglect, and widespread discrimination. When a patient is diagnosed with AD, the healthcare provider should encourage, rather than compel, the patient to join the registry. Keeping all patient information confidential, enacting procedural rules governing use of the information, and providing some incentives are good approaches to encourage more patients to join the registry.

Given the privacy concerns under federal and state law, the Note recommends slightly broader data sharing with the Georgia Registry, such as with a physician or other health care provider for the purpose of a medical evaluation or treatment of the individual, or with any individual or entity that provides the Registry with an order from a court of competent jurisdiction ordering the disclosure of confidential information. The Note also describes the procedural rules designed to administer the Georgia Registry. These rules address who the end users of the registry are; what types of information should be collected; how, and from whom, the information should be collected; how the information should be shared or disclosed for policy planning and research purposes; and how legal representatives obtain authority from patients.

This Note gives us a deeper understanding of Alzheimer’s disease registries in the country through one state’s experience. The registry process has raised many legal and moral issues. The Note compares Georgia’s registry with those of other states and points out the importance of protecting the confidentiality of patient data. Emphasizing the protection of personal privacy could encourage more people, and more states, to get involved in this plan.


The Federal Government Wants Your iPhone Passcode: What Does the Law Say?

Tim Joyce, MJLST Staffer

Three months ago, when MJLST Editor Steven Groschen laid out the arguments for and against a proposed New York State law that would require “manufacturers and operating system designers to create backdoors into encrypted cellphones,” the government hadn’t even filed its motion to compel against Apple. Now, just a few weeks after the government quietly stopped pressing the issue, it almost seems as if nothing at all has changed. But, while the dispute at bar may have been rendered moot, it’s obvious that the fight over the proper extent of data privacy rights continues to simmer just below the surface.

For those unfamiliar with the controversy, what follows are the high-level bullet points. Armed attackers opened fire on a group of government employees in San Bernardino, CA on the morning of December 2, 2015. The attackers fled the scene, but were killed in a shootout with police later that afternoon. Investigators opened a terrorism investigation, which eventually led to a locked iPhone 5c. When investigators failed to unlock the phone, they sought Apple’s help, first politely, and then more forcefully via California and Federal courts.

The request was for Apple to create an authenticated version of its iOS operating system that would enable the FBI to access the data stored on the phone. In essence, the government asked Apple to create a universal hack for any iPhone running that particular version of iOS. As might be predicted, Apple was less than inclined to help crack its own encryption software. CEO Tim Cook ran up the banner of digital privacy rights and re-ignited a heated debate over the proper scope of the government’s ability to regulate encryption practices.

Legal chest-pounding ensued.

That was the situation until March 28, when the government quietly stopped pursuing this part of the investigation. In its own words, the government informed the court that it “…ha[d] now successfully accessed the data stored on [the gunman]’s iPhone and therefore no longer require[d] the assistance from Apple Inc…”. Apparently, some independent governmental contractor (read: legalized hacker) had done in just a few days what the government had been claiming from the start was impossible without Apple’s help. Mission accomplished – so, the end?

Hardly.

While this one incident, involving this one iPhone (the iOS version at issue runs only on the iPhone 5c, not on other models like the iPhone 6), may be history, many more of the same or substantially similar disputes are still trickling through the courts nationwide. In fact, more than ten other federal iPhone cases have been filed since September 2015, all of them based on a 227-year-old act of last resort. States like New York are also getting into the mix, even absent fully ratified legislation. Furthermore, legislatures are clearly taking this issue seriously (see NYS’s proposed bill, recently returned to committee).

Although he is only ⅔ of a lawyer at this point, it seems to this author that there are at least three ways a court could handle a demand like this, if the case were allowed to go to the merits.

  1. Never OK to demand a hack – In this situation, the courts could find that our collective societal interests in privacy would always preclude enforcement of an order like this. Seems unlikely, especially given the demonstrated willingness in this case of a court to make the order in the first place.
  2. Always OK to demand a hack – Similar to option 1, this option seems unlikely as well, especially given the First and Fourth Amendments. Here, the courts would have to find some rationale to justify hacking in every circumstance. Clearly, the United States has not yet transitioned to an Orwellian dystopia.
  3. Sometimes OK to demand a hack, but with scrutiny – Here, in the middle, is where it seems likely we’ll find courts in the coming years. Obviously, convincing arguments exist on each side, and it seems possible to reconcile infringing personal privacy and upholding national security with burdening a tech company’s policy of privacy protection, given the right set of facts. The San Bernardino shooting is not that case, though. The alleged terrorist threat was not characterized as sufficiently imminent, and the FBI even admitted that cracking the cell phone was not integral to the case (and it did not find anything useful anyway). It will take a (probably) much scarier scenario for this option to snap into focus as a workable compromise.

We’re left then with a nagging feeling that this isn’t the last public skirmish we’ll see between Apple and the “man.” As digital technology becomes ever more integrated into daily life, our legal landscape will have to evolve as well.
Interested in continuing the conversation? Leave a comment below. Just remember – if you do so on an iPhone 5c, draft at your own risk.


Requiring Backdoors into Encrypted Cellphones

Steven Groschen, MJLST Managing Editor

The New York State Senate is considering a bill that requires manufacturers and operating system designers to create backdoors into encrypted cellphones. Under the current draft, failure to comply with the law would result in a $2,500 fine, per offending device. This bill highlights the larger national debate concerning privacy rights and encryption.

In November of 2015, the Manhattan District Attorney’s Office (MDAO) published a report advocating for a federal statute requiring backdoors into encrypted devices. One of MDAO’s primary reasons in support of the statute is the lack of alternatives available to law enforcement for accessing encrypted devices. The MDAO notes that traditional investigative techniques have largely been ineffective. Additionally, the MDAO argues that certain types of data residing on encrypted devices often cannot be found elsewhere, such as on a cloud service. Naturally, the inaccessibility of this data is a significant hindrance to law enforcement. The report offers an excellent summary of the law enforcement perspective; however, as with all debates, there is another perspective.

The American Civil Liberties Union (ACLU) has stated it opposes using warrants to force device manufacturers to unlock their customers’ encrypted devices. A recent ACLU blog post presented arguments against this practice. First, the ACLU argued that the government should not require “extraordinary assistance from a third party that does not actually possess the information.” The ACLU perceives these warrants as conscripting Apple (and other manufacturers) to conduct surveillance on behalf of the government. Second, the ACLU argued using search warrants bypasses a “vigorous public debate” regarding the appropriateness of the government having backdoors into cellphones. Presumably, the ACLU is less opposed to laws such as that proposed in the New York Senate, because that process involves an open public debate rather than warrants.

Irrespective of whether the New York Senate bill passes, the debate over government access to its citizens’ encrypted devices is sure to continue. Citizens will have to balance public safety considerations against individual privacy rights—a tradeoff as old as government itself.


Circumventing EPA Regulations Through Computer Programs

Ted Harrington, MJLST Staffer

In September of 2015, it was Volkswagen Group (VW). This December, it was the General Electric Company (GE) finalizing a settlement in the United States District Court in Albany. The use of computer programs or other technology to override, or “cheat,” some type of Environmental Protection Agency (EPA) regulation has become seemingly commonplace.

GE uses silicone as part of its manufacturing process, which results in volatile organic compounds and chlorinated hydrocarbons, both hazardous byproducts. The disposal of hazardous materials is closely regulated by the Resource Conservation and Recovery Act (RCRA). Under this act, the EPA has delegated permitting authority to the New York State Department of Environmental Conservation (DEC). This permitting authority allows the DEC to grant permits for the disposal of hazardous wastes in the form of an NYS Part 373 Permit.

The permit allowed GE to store hazardous waste, operate a landfill, and use two incinerators on-site at its Waterford, NY plant. The permit was originally issued in 1989, and was renewed in 1999. The two incinerators included an “automatic waste feed cutoff system” designed to keep the GE facility in compliance with RCRA and the NYS Part 373 Permit. If the incinerator reached a certain limit, the cutoff system would simply stop feeding more waste.
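To make concrete how software can defeat a cutoff of this kind, here is a deliberately simplified sketch. Everything in it (the class name, the limit value, the override flag) is invented for illustration and is not drawn from GE’s actual control system:

```python
class WasteFeedController:
    """Toy model of an automatic waste feed cutoff (illustrative only;
    the names and limits here are invented, not GE's real system)."""

    def __init__(self, emission_limit):
        self.emission_limit = emission_limit
        self.override = False  # the kind of flag at the heart of the settlement

    def may_feed(self, current_emissions):
        # With the override engaged, the safeguard is bypassed and waste
        # keeps feeding regardless of measured emissions.
        if self.override:
            return True
        return current_emissions < self.emission_limit


controller = WasteFeedController(emission_limit=100)
print(controller.may_feed(150))   # → False: cutoff engages as designed
controller.override = True
print(controller.may_feed(150))   # → True: safeguard circumvented
```

The point of the sketch is how small the change is: a single flag, flipped in software or by an operator, is enough to turn a compliance mechanism into a formality, which is why such overrides are hard to detect from outside the plant.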

Between September 2006 and February 2007, the cutoff system was overridden by computer technology, or manually by GE employees, on nearly 2,000 occasions. This resulted in hazardous waste being disposed of in amounts grossly above the limits of the issued permits. In early December, GE quickly settled the claim by paying $2.25 million in civil penalties.

Beyond the extra pollution caused by GE, a broader problem is emerging—in an increasingly technological world, what can be done to prevent companies from skirting regulations using savvy computer programs? With more opportunities than ever to get around regulation using technology, is it even feasible to monitor these companies? It is virtually certain that similar instances will continue to surface, and agencies such as the EPA must be on the forefront of developing preventative technology to slow this trend.


Warrant Now Required For One Type of Federal Surveillance, and May Soon Follow for State Law Enforcement

Steven Graziano, MJLST Staffer

As technology has advanced over recent decades, law enforcement agencies have expanded their enforcement techniques. One example of these tools is the cell-site simulator, otherwise known as a sting ray. Put simply, a sting ray acts as a mock cell tower, detects the use of a specific phone number within a given range, and then uses triangulation to locate the phone. However, the recent, heightened awareness of and criticism directed toward government and law enforcement surveillance has affected their potential use. Specifically, many federal law enforcement agencies have been barred from using sting rays without a warrant, and there is federal legislation pending that would require state and local law enforcement agents to also obtain a warrant before using one.

Federal law enforcement agencies — specifically Immigration, Secret Service, and Homeland Security agents — must obtain search warrants before using sting rays, as announced by the Department of Homeland Security. Homeland Security’s shift in policy comes after the Department of Justice made a similar statement. The DOJ has affirmed that although it had previously used cell-site simulators without a warrant, going forward it will require that law enforcement agencies obtain a search warrant supported by probable cause. DOJ agencies covered by this policy include the FBI and the Drug Enforcement Administration. This shift in federal policy was largely a response to pressure put upon Washington by civil liberties groups, as well as the shift in the American public’s attitude toward surveillance generally.

Although these policies affect only federal law enforcement agencies, steps have also been taken to extend the warrant requirement for sting rays to state and local governments. Federal lawmakers have introduced the Cell-Site Simulator Act of 2015, also known as the Stingray Privacy Act, to hold state and local law enforcement to the same Fourth Amendment standards as the federal government. The bill was introduced in the House of Representatives by Rep. Jason Chaffetz (R-Utah) and was referred to a congressional committee on November 2, 2015, which will consider it before sending it to the full House or Senate. In addition to requiring a warrant, the act requires prosecutors and investigators to disclose to judges that the technology they intend to use in executing the warrant is specifically a sting ray. The proposed law was partly a response to a critique of the federal warrant requirement, namely that it did not compel state or local law enforcement to also obtain a search warrant.

The use of advanced surveillance programs by federal, state, and local law enforcement has been a controversial subject recently. Although law enforcement has a duty to fully enforce the law, which includes using the entirety of its resources to detect possible crimes, it must still adhere to the constitutional protections laid out in the Fourth Amendment when doing so. Technology changes and advances rapidly, and sometimes it takes the law time to adapt. However, the shift in policy at all levels of government shows that the law may be beginning to catch up to law enforcement’s use of technology.


Digital Millennium Copyright Act Exemptions Announced

Zach Berger, MJLST Staffer

The Digital Millennium Copyright Act (DMCA), first enacted in 1998, prevents owners of digital devices from making use of those devices in any way that the copyright holder does not explicitly permit. Codified in part in 17 U.S.C. § 1201, the DMCA makes it illegal to circumvent digital security measures that prevent unauthorized access to copyrighted works such as movies, video games, and computer programs. The law bars users from breaking what are known as access controls, even if the purpose would fall under lawful fair use. According to Kit Walsh, a staff attorney at the Electronic Frontier Foundation (a nonprofit digital rights organization), “This ‘access control’ rule is supposed to protect against unlawful copying. But as we’ve seen in the recent Volkswagen scandal . . . it can be used instead to hide wrongdoing hidden in computer code.” Essentially, everything not explicitly permitted is forbidden.

However, these restrictions are not ironclad. Every three years, users may request exemptions to the law for lawful fair uses from the Library of Congress (LOC), but these exemptions are not easy to obtain. To receive an exemption, activists must not only propose new exemptions but also plead for already-granted ones to be renewed. The system is flawed, as users often need a way to circumvent access controls on their devices to make full use of the products. Nonetheless, the LOC has recently released its new list of exemptions, and this expanded list represents a small victory for digital rights activists.

The exemptions granted will go into effect in 2016, and cover 22 types of uses affecting movies, e-books, smart phones, tablets, video games and even cars. Some of the highlights of the exemptions are as follows:

  • Movies, where circumvention is used in order to make use of short portions of the motion pictures:
    • For educational uses by university and grade-school instructors and students
    • For e-books offering film analysis
    • For use in noncommercial videos
  • Smart devices
    • Users can “jailbreak” these devices to allow them to interoperate with or remove software applications, and to unlock phones from their carriers
    • Such devices include smart phones, televisions, and tablets or other mobile computing devices
      • In 2012, jailbreaking smartphones was allowed, but not tablets; this distinction has been removed.
  • Video games
    • Fan-operated online servers are now allowed to support video games once the publishers shut down official servers.
      • However, this applies only to games that would be rendered nearly unplayable without those servers.
    • Museums, libraries, and archives can go a step further, jailbreaking games as needed to get them functioning properly again.
  • Computer programs that operate devices primarily designed for use by individual consumers, for purposes of diagnosis, repair, and modification
    • This includes voting machines, automobiles, and implanted medical devices.
  • Computer programs that control automobiles, for purposes of diagnosis, repair, and modification of the vehicle

These new exemptions are a small but significant victory for consumers under the DMCA. The ability to analyze your automobile’s software is especially relevant in the wake of the aforementioned Volkswagen emissions scandal. However, the exemptions are subject to some important caveats. For example, only video games that are rendered almost completely unplayable may have user-made servers; for games where only an online multiplayer feature is lost, such servers are not allowed. A better long-term solution is clearly needed, as this burdensome process is flawed and has led to what the EFF has called “unintended consequences.” Regardless, as long as we still have this draconian law, exemptions will be welcome. To read the final rule, the Register’s recommendation, and the introduction (which provides a general overview), click here.