
AI: Legal Issues Arising From the Development of Autonomous Vehicle Technology

Sooji Lee, MJLST Staffer

Have you ever heard of the “Google DeepMind Challenge Match”? In 2016, AlphaGo, the artificial intelligence (hereinafter “AI”) program created by Google’s DeepMind, played a five-game Go match against Lee Sedol, an 18-time world champion. Go is arguably the most complicated game mankind has ever devised, with a game tree that dwarfs that of chess by hundreds of orders of magnitude. People who understood Go’s complexity did not believe an AI could work through that many possibilities and defeat a world champion who relied on intuition and decades of experience. AlphaGo, however, defeated Mr. Lee four games to one, leaving the whole world amazed.
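To give a rough sense of that scale, here is a standard back-of-envelope comparison (the branching factors and game lengths below are commonly cited approximations, not figures from the match itself): a game offering roughly $b$ legal moves per turn and lasting $d$ plies has a game tree of about $b^d$ nodes.

\[
\text{Chess: } b \approx 35,\; d \approx 80 \;\Rightarrow\; 35^{80} \approx 10^{123}, \qquad \text{Go: } b \approx 250,\; d \approx 150 \;\Rightarrow\; 250^{150} \approx 10^{359}.
\]

On these estimates, Go’s game tree exceeds chess’s by a factor of roughly $10^{236}$, which is why exhaustive calculation was never an option and AlphaGo instead combined deep neural networks with tree search.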

Another use of AI is in autonomous vehicles (hereinafter “AV”), which promise to realize mankind’s long-time dream: riding in a car without driving it. Nearly every major automobile manufacturer with the capital to reinvest in new technology, including GM, Toyota, and Tesla, now invests aggressively in AV development. As a natural consequence of this growing interest, manufacturers have conducted numerous road tests of AVs, and many legal issues have arisen from those trials. During my summer in Korea, I had a chance to research the legal issues in an intellectual property infringement lawsuit over AV technology between two automobile manufacturers.

For a normal vehicle, a natural person is responsible when there is an accident. But who should be liable when an AV malfunctions: the owner of the vehicle, the manufacturer of the vehicle, or the entity that developed the vehicle’s software? This is one of the hardest questions raised by the commercialization of AVs. I personally think liability could be imposed on any of these entities, depending on the scenario. If an accident happens because the vehicle’s AI system malfunctions, the software provider should be liable. If it occurs because the vehicle itself malfunctions, the manufacturer should be held liable. But if it occurs because the owner poorly maintained the car, the owner should be held liable. In sum, there is no one-size-fits-all answer to who should be held liable; courts should consider the causal sequence of the accident when determining liability.

The legislative body must also take data privacy into consideration when enacting statutes governing AVs. There are countless cars on the road, and drivers must interact with one another to reach their destinations safely. AVs, likewise, need to share their locations and current conditions to interact well with other AVs. This suggests that a single entity should collect each AV’s information and process it to prevent accidents and manage traffic effectively. Nowadays, almost every driver uses navigation, which means people already provide their location to a service provider such as Google Maps. Some may argue that providers like Google Maps already serve as collectors of vehicle information, but there are many competing navigation services. Since all AVs must interact with each other, centralizing the data with one service provider makes sense. Yet while centralizing the data and limiting consumer choice to one provider may be advisable, the danger of a data breach would be heightened should a single provider be selected. This is an important and pressing concern for legislatures considering legislation that would centralize AV data with one service provider.

Therefore, enacting an effective, smart, and forward-looking statute is important to prevent potential problems. Notwithstanding the complexity, many U.S. states take a positive stance toward the commercialization of AVs, since the industry could become profitable. According to the National Conference of State Legislatures, 33 states have introduced legislation and 10 states have issued executive orders related to AV technology. For example, Florida’s 2016 legislation expanded the permitted operation of autonomous vehicles on public roads, and Arizona’s governor issued an executive order encouraging the development of relevant technologies. With these steps, a sound legal framework may someday develop.


The Unfair Advantage of Web Television

Richard Yo, MJLST Staffer


Up to a certain point, ISPs like Comcast, Verizon, and AT&T enjoy healthy, mutually beneficial relationships with web content companies such as Netflix, YouTube, and Amazon. That relationship holds even as ordinary internet usage moves beyond email and webpage browsing to VoIP and video streaming. To consume data-heavy content, users seek the wider bandwidth of broadband service, and ISPs are more than happy to provide it at a premium. However, once one side encroaches on the other’s territory, the relationship becomes less tenable unless it is restructured or improved upon. The problem worsens when each side attempts to mimic the other.


Such a tension had clearly arisen by the time Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014), was decided. The D.C. Circuit vacated two of the three rules constituting the FCC’s 2010 Open Internet Order, clarifying that the transparency rule applied to all providers but that the anti-blocking and anti-discrimination rules could apply only to common carriers. The FCC had previously classified ISPs under Title I of the Communications Act; common carriers are classified under Title II. The 2014 decision thus confirmed that broadband companies, not being common carriers, could set the speeds of websites and web services at their discretion so long as they were transparent about it. So, to say that the internet’s astounding growth and development is due to light-touch regulation is disingenuous. The statement is true as far as it goes, but discriminatory and blocking behavior was simply not in broadband providers’ interest during the early days of the internet, thanks to the aforementioned relationship.


Once web content began taking on the familiar forms of broadcast television, signs of throttling became evident. Netflix began producing original programming in 2013 and saw its streaming speeds drop dramatically that year on both Verizon and Comcast networks. In 2014, Netflix made separate peering-interconnection agreements with both companies to secure reliably fast speeds for itself. Soon after, public outcry led to the FCC’s 2015 Open Internet Order, which reclassified broadband internet service as a “telecommunications service” subject to Title II. ISPs were now common carriers, and net neutrality was in play, at least briefly (2015-2018).


Due to the FCC’s 2018 Restoring Internet Freedom Order, most features of the 2015 order have been reversed. Some now fear that ISPs will again attempt to control the traffic on their networks in all sorts of insidious ways. This is a legitimate concern, but not one that necessarily spans the entire spectrum of the internet.


The internet has largely gone unregulated thanks to legislation and policies meant to encourage innovation and discourse. Under this incubatory setting, numerous advancements and developments have indeed been made. One quasi-advancement is the streaming of voice and video. The internet has gone from cat videos to award-winning dramas. What began as a supplement to mainstream entertainment has become the dominant force. Instead of Holly Hunter rushing across a busy TV station, we have Philip DeFranco booting up his iMac. Our tastes have changed, and with them, the production involved.


There is an imbalance here. Broadcast television has always borne the brunt of FCC oversight, even more than its cable brethren. The pragmatic reason has always been broadcast television’s availability, or rather its unavoidability: censors saw to it that obscenities would never cross a child’s view, even inadvertently. But it cannot be denied that the internet is now far more ubiquitous. Laptop, tablet, and smartphone sales outnumber those of televisions. Even TVs are now ‘smart,’ serving not only their first master but a second web master as well (no pun intended). Shows like Community and Arrested Development were network television programs (on NBC and FOX, respectively) one minute and web content (on Yahoo! and Netflix, respectively) the next. The form and function of these programs had not substantially changed, yet they were suddenly free of the FCC’s reign. Virtually identical productions on different platforms are regulated differently, all due to arguments anchored in fears of stagnation.


Car Wreck: Data Breach at Uber Underscores Legal Dangers of Cybersecurity Failures

Matthew McCord, MJLST Staffer


This past week, Uber’s annus horribilis converged with the ever-increasing reminders of corporate cybersecurity’s persistent relevance. Uber, once praised as a transformative savior of the economy by technology-minded businesses and government officials for its effective service-delivery model and its capitalization on an exponentially expanding internet, has found itself impaled on the sword that spurred its meteoric rise. Uber recently disclosed that hackers accessed the personal information of 57 million riders and drivers last year. It paid the hackers $100,000 to destroy the compromised data and failed to inform its users or sector regulators of the breach at the time. The hackers apparently compromised a trove of personally identifiable information, including names, telephone numbers, email addresses, and driver’s license numbers of users and drivers, through a flaw in the company’s GitHub security.

Uber, a Delaware corporation, is required under Delaware statute to give notice of a data breach to affected customers in the “most expedient time possible and without unreasonable delay.” Most other states have adopted similar legislation covering companies doing business within their borders, which could allow those states’ regulators and customers to bring actions against the company. By allegedly failing to provide timely notification, Uber opened itself to a parade of announced investigations into the breach: the United Kingdom’s Information Commissioner, for instance, has threatened fines following an inquiry, and U.S. state regulators are similarly considering investigations and regulatory action.

Though regulatory action is not a certainty, the possibility of legal action and the dangers of lost reputation are all too real. Anthem, a health insurer subject to far stricter federal regulation under HIPAA and its various amendments, paid $115 million to settle a class action suit over its infamous data breach. Short-term reputational damage rattles companies (especially those that respond less vigorously): Target saw its profits fall by almost 50% in the fourth quarter of 2013 after its data breach. The cost of correcting poor data security on a technical level also weighs on companies.

This latest breach underscores key problems facing businesses in a continuing era of rapid digital innovation. The first, most practical problem is the seriousness with which companies approach information security governance. An increasing number of data sources and applications, and the increasing complexity of systems and attack vectors, multiply the potential avenues of exposure. A decade ago, most companies used relatively isolated, internal systems to handle a comparatively small amount of data and operations. Now, risk assessments must reflect the sheer quantity of internal and external devices touching networks, the innumerable ways services interact with one another (and thus expose each service and its data to possible breaches), and the increasing competence of organized actors in breaching digital defenses. Information security and information governance are no longer niches relegated to one silo of a company; they necessarily permeate nearly every business area of an enterprise. Skimping on investment in adequate infrastructure greatly widens the regulatory and civil liability of even the most traditional companies for data breaches, as Uber very likely will find.

Paying off data hostage-takers and thieves is a particularly concerning practice, especially for a large corporation. It creates a perverse incentive for malicious actors to keep trying to siphon off and extort data from businesses and individuals alike. These actors have grown from small, disorganized groups and individuals into organized criminal enterprises and rogue states allegedly seeking to circumvent sanctions to fund their regimes. Acquiescing to their demands invites the conga line of serious breaches to continue and intensify into the future.

A new federal legislative scheme is a much-discussed and little-acted-upon solution to the disparate and uncoordinated regulation of business data practices. Though 18 U.S.C. § 1030 provides criminal penalties for the bad actors themselves, there is little federal regulation or legislation addressing liability or minimum standards for breached PII-handling companies generally. The federal government has left the bulk of this work to the states, as it does much of business regulation. Yet internet services are recognized as critical infrastructure by the Department of Homeland Security under Presidential Policy Directive 21. Data breaches and other cyber attacks cost the global economy hundreds of billions of dollars annually in stolen data and intellectual property, and widespread attacks could disrupt government and critical private-sector operations, such as the provision of utilities, food, and essential services. Cybersecurity is thus a critical national risk requiring a coordinated response. Carefully crafted legislation authorizing federal coordination of cybersecurity best practices, and adequately punitive federal action for negligent information governance, would incentivize the private and public sectors to take better care of sensitive information, reducing the substantial potential for serious attacks to compromise the nation’s infrastructure and the economic well-being of its citizens and industries.


United States v. Microsoft Corp.: A Chance for SCOTUS to Address the Scope of the Stored Communications Act

Maya Digre, MJLST Staffer


On October 16, 2017, the United States Supreme Court granted the federal government’s petition for certiorari in United States v. Microsoft Corp. The case concerns a warrant issued to Microsoft ordering it to seize and produce the contents of a customer’s e-mail account that the government believed was being used in furtherance of narcotics trafficking. Microsoft produced the non-content information stored in the U.S. but moved to quash the warrant with respect to the information stored abroad in Ireland. Microsoft claimed that the only way to access the information was through the Dublin data center, even though that data center could also be accessed by its database management program located at some of its U.S. facilities.


The district court in New York held Microsoft in civil contempt for not complying with the warrant. The 2nd Circuit reversed, stating that “[n]either explicitly nor implicitly does the statute envision the application of its warrant provisions overseas” and that “the application of the Act that the government proposes – interpreting ‘warrant’ to require a service provider to retrieve material from beyond the borders of the United States – would require us to disregard the presumption against extraterritoriality.” The court used traditional tools of statutory interpretation in the opinion, including plain meaning, the presumption against extraterritoriality, and legislative history.


The issue in the case, according to SCOTUSblog, is “whether a United States provider of email services must comply with a probable-cause-based warrant issued under 18 U.S.C. § 2703 by making disclosure in the United States of electronic communications within that provider’s control, even if the provider has decided to store that material abroad.” Essentially, the dispute centers on the scope of the Stored Communications Act (“SCA”) with respect to information stored abroad. The larger issue is the tension between international privacy laws and the absolute nature of warrants issued in the United States. According to the New York Times, “the case is part of a broader clash between the technology industry and the federal government in the digital age.”


I think the broader issue is something the Supreme Court should address. However, I am not certain this is the best case for the Court. The fact that Microsoft can access the information from data centers in the United States with its database management program seems to weaken its claim. The case may be stronger for companies that cannot access information stored abroad from within the United States. Regardless of this weakness, the Supreme Court should rule in favor of the government to preserve the force of warrants of this nature. It was Microsoft’s choice to store the information abroad, and I don’t think the choices of companies should impede the legitimate crime-fighting goals of the government. Additionally, a ruling that the warrant does not reach information stored abroad may incentivize companies to keep their information out of the reach of U.S. warrants by storing it overseas. That is not a favorable policy choice for the Supreme Court to make; the justices should rule in favor of the government.


Unfortunately, the Court will not get to rule on this case: Microsoft decided to drop it after the DOJ agreed to change its policy.


Microsoft Triumphs in Fight to Notify Users of Government Data Requests

Brandy Hough, MJLST Staffer


This week, Microsoft announced it will drop its secrecy-order lawsuit against the U.S. government after the Deputy U.S. Attorney General issued a binding policy limiting the use and duration of protective orders issued pursuant to 18 U.S.C. § 2705(b), part of the Stored Communications Act (“SCA”) within the Electronic Communications Privacy Act of 1986 (“ECPA”).


The ECPA governs requests to obtain user records and information from electronic service providers. “Under the SCA, the government may compel the disclosure of . . . information via subpoena, a court order under 18 U.S.C. § 2703(d), or a search warrant.” Pursuant to 18 U.S.C. § 2705(b), a government entity may apply for an order preventing a provider from notifying its user of the existence of the warrant, subpoena, or court order. Such an order is to be granted only if “there is reason to believe” that such notification will result in (1) endangering an individual’s life or physical safety; (2) flight from prosecution; (3) destruction of or tampering with evidence; (4) intimidation of witnesses; or (5) seriously jeopardizing an investigation or delaying a trial.


Microsoft’s April 2016 lawsuit stemmed from what it viewed as routine overuse of protective orders accompanying government requests for user data under the ECPA, often without fixed end dates. Microsoft alleged both First and Fourth Amendment violations, arguing that “its customers have a right to know when the government obtains a warrant to read their emails, and . . . Microsoft has a right to tell them.” Many technology leaders, including Apple, Amazon, and Twitter, signed amicus briefs in support of Microsoft’s efforts.


The Deputy Attorney General’s October 19th memo states that “[e]ach §2705(b) order should have an appropriate factual basis and each order should extend only as long as necessary to satisfy the government’s interest.” It further outlines steps that prosecutors applying for §2705(b) orders must follow, including one that states “[b]arring exceptional circumstances, prosecutors filing § 2705(b) applications may only seek to delay notice for one year or less.” The guidelines apply prospectively to applications seeking protective orders filed on or after November 18, 2017.


Microsoft isn’t sitting back to celebrate its success; instead, it is continuing its efforts outside the courtroom, pushing for Congress to amend the ECPA to address secrecy orders.


Had the case progressed without these changes, the court should have ruled in favor of Microsoft. As written, § 2705(b) of the SCA allowed the government to exploit “vague legal standards . . . to get indefinite secrecy orders routinely, regardless of whether they were even based on the specifics of the investigation at hand.” This behavior violated both the First Amendment, by restraining Microsoft’s speech based on “purely subjective criteria” rather than requiring the government to “establish that the continuing restraint on speech is narrowly tailored to promote a compelling interest,” and the Fourth Amendment, by keeping users from knowing when the government searches and seizes their cloud-based property, in contrast to the way Fourth Amendment rights are afforded to information stored in a person’s home or business. The court therefore should have declared, as Microsoft urged, that § 2705(b) is “unconstitutional on its face.”



6th Circuit Aligns With 7th Circuit on Data Breach Standing Issue

John Biglow, MJLST Managing Editor

To bring a suit in any federal court in the United States, an individual or group of individuals must satisfy Article III’s standing requirement. As recently clarified by the Supreme Court in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), to meet this requirement a “plaintiff must have (1) suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision.” Id. at 1547. As data breach cases have reached the federal courts of appeals, there has been some disagreement as to whether the risk of future harm from a breach, and the costs spent to mitigate that harm, qualify as “injuries in fact” under Article III’s first prong.

Last spring, I wrote a note concerning Article III standing in data breach litigation, in which I highlighted the circuit split on the issue and argued that the reasoning of the 7th Circuit in Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015), was superior to that of its sister circuits and made for better law. In Remijas, the plaintiffs were a class of individuals whose credit and debit card information had been stolen when Neiman Marcus Group, LLC experienced a data breach. A portion of the class had not yet experienced any fraudulent charges on their accounts and asserted Article III standing based upon the risk of future harm and the time and money spent mitigating that risk. In holding that these plaintiffs had satisfied Article III’s injury-in-fact requirement, the court made a critical inference: when a hacker steals a consumer’s private information, “[p]resumably, the purpose of the hack is, sooner or later, to make fraudulent charges or assume [the] consumers’ identit[y].” Id. at 693.

This inference stands in stark contrast to the reasoning of the 3rd Circuit in Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011). The facts of Reilly were similar to those of Remijas, except that in Reilly, Ceridian Corp., the company that had experienced the data breach, stated only that its firewall had been breached and that its customers’ information may have been stolen. In my note, mentioned supra, I argued that this difference in facts was not enough to wholly distinguish the two cases and overcome a circuit split, in part due to the Reilly court’s characterization of the risk of future harm. The Reilly court found the risk of misuse of information highly attenuated, reasoning that whether the plaintiffs experience an injury depends on a series of “if’s,” including “if the hacker read, copied, and understood the hacked information, and if the hacker attempts to use the information, and if he does so successfully.” Id. at 43 (emphasis in original).

Often in the law, we are faced with an imperfect or incomplete set of facts; any time an individual’s intent is at issue in a case, this is a certainty. When faced with such situations, lawyers have long utilized inferences to differentiate between more and less likely accounts of the missing facts. In a data breach case, it is almost always true that both parties will have little to no knowledge of the intent, capabilities, or plans of the hacker. However, there is room for reasonable inferences about these facts. When a hacker is sophisticated enough to breach a company’s defenses and access data, it makes sense to assume they are sophisticated enough to utilize that data. Further, because executing a data breach carries risk, as it is illegal, it makes sense to assume that the hacker seeks to gain from the act. Thus, as between the Reilly and Remijas courts’ characterizations of the likelihood of misuse of data, the better rule is to assume that the hacker is able to utilize the data and plans to do so in the future. Moreover, if there are facts tending to show that this inference is wrong, the defendant corporation is far more likely than the plaintiff(s) to possess that information at the pleading stage.

Since Remijas, two data breach cases have reached the federal courts of appeals on the issue of Article III standing. In Lewert v. P.F. Chang’s China Bistro, Inc., 819 F.3d 963, 965 (7th Cir. 2016), the court unsurprisingly followed its recent precedent in Remijas and found that Article III standing was properly alleged. In Galaria v. Nationwide Mut. Ins. Co., No. 15-3386, 2016 WL 4728027 (6th Cir. Sept. 12, 2016), the 6th Circuit had to make an Article III ruling without the constraint of an earlier ruling in its circuit, leaving it free to choose what rule and reasoning to apply. The plaintiffs alleged, among other claims, negligence and bailment; the district court dismissed these claims for lack of Article III standing. In alleging that they had suffered an injury in fact, the plaintiffs asserted “a substantial risk of harm, coupled with reasonably incurred mitigation costs.” Id. at *3. In holding that this was sufficient to establish Article III standing at the pleading stage, the Galaria court found the Remijas inference persuasive, stating that “[w]here a data breach targets personal information, a reasonable inference can be drawn that the hackers will use the victims’ data for the fraudulent purposes alleged in Plaintiffs’ complaints.” Moving forward, it will be intriguing to watch how circuits that have not yet faced this issue, as the 6th Circuit had not before deciding Galaria, rule on it, and whether, if the 3rd Circuit keeps its current reasoning, the issue will eventually make its way to the Supreme Court of the United States.


Solar Climate Engineering and Intellectual Property

Jesse L. Reynolds 

Postdoctoral Researcher and Research Funding Coordinator, Sustainability and Climate
Department of European and International Public Law, Tilburg Law School

Climate change has been the focus of much legal and policy activity in the last year: the Paris Agreement, the Urgenda ruling in the Netherlands, aggressive climate targets in China’s latest five-year plan, the release of the final US Clean Power Plan, and the legal challenge to it. Not surprisingly, each of these concerns controlling greenhouse gas emissions, the approach that has long dominated efforts to reduce climate change risks.

Yet last week, an alternative approach received a major—but little noticed—boost. For the first time, a federal budget bill included an allocation specifically for so-called “solar climate engineering.” This set of radical proposed technologies would address climate change by reducing the amount of incoming solar radiation, cooling the planet globally and counteracting global warming. For example, humans might be able to mimic the well-known cooling caused by large volcanoes by injecting a reflective aerosol into the upper atmosphere. Research thus far – which has been limited to modeling – indicates that solar climate engineering (SCE) would be effective at reducing climate change, rapidly felt, reversible in its direct climatic effects, and remarkably inexpensive. It would also pose risks that are both environmental – such as difficult-to-predict changes to rainfall patterns – and social – such as the potential for international disagreement regarding its implementation.
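A textbook zero-dimensional energy balance gives rough intuition for why a small reduction in sunlight could offset greenhouse warming (the numbers below are standard approximations, not figures from the budget bill or from any particular SCE study). With solar constant $S \approx 1361\ \mathrm{W\,m^{-2}}$ and planetary albedo $\alpha \approx 0.3$, offsetting a greenhouse forcing of $\Delta F \approx 3.7\ \mathrm{W\,m^{-2}}$, roughly that of doubled CO$_2$, requires reducing incoming sunlight by

\[
\frac{\Delta S\,(1-\alpha)}{4} = \Delta F
\quad\Longrightarrow\quad
\Delta S = \frac{4\,\Delta F}{1-\alpha} \approx \frac{4 \times 3.7}{0.7} \approx 21\ \mathrm{W\,m^{-2}},
\]

only about 1.5% of the solar constant. Magnitudes of this order are one reason modeling studies find SCE both effective and remarkably inexpensive relative to emissions abatement.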

The potential role of private actors in SCE is unclear. On the one hand, decisions regarding whether and how to intentionally alter the planet’s climate should be made through legitimate state-based processes. On the other hand, the private sector has long been the site of great innovation, which SCE technology development requires. Such private innovation is both stimulated and governed through governmental intellectual property (IP) policies. Notably, SCE is not a typical emerging technology and might warrant novel IP policies. For example, some observers have argued that SCE should be a patent-free endeavor.

In order to clarify the potential role of IP in SCE (focusing on patents, trade secrets, and research data), Jorge Contreras of the University of Utah, Joshua Sarnoff of DePaul University, and I wrote an article that was recently accepted and scheduled for publication by the Minnesota Journal of Law, Science & Technology. The article explains the need for coordinated and open licensing and data sharing policies in the SCE technology space.

SCE research today is occurring primarily at universities and other traditional research institutions, largely through public funding. However, we predict that private actors are likely to play a growing role in developing products and services for large-scale SCE research and implementation, most likely through public procurement arrangements. The prospect of such future innovation should not be stifled through restrictive IP policies. At the same time, we identify several potential challenges for SCE technology research, development, and deployment that are related to rights in IP and data for such technologies. Some of these challenges have been seen with other emerging technologies, such as the risk that excessive early patenting would lead to a patent thicket with attendant anti-commons effects. Others are more particular to SCE, such as the oft-expressed concern that holders of valuable patents might unduly attempt to influence public policy regarding SCE implementation. Fortunately, a review of existing patents, policies, and practices reveals a current opportunity that may soon be lost: there are presently only a handful of SCE-specific patents; research is being undertaken transparently and at traditional institutions; and SCE researchers are generally sharing their data.

After reviewing various options and proposals, we make tentative suggestions to manage SCE IP and data. First, an open technical framework for SCE data sharing should be established. Second, SCE researchers and their institutions should develop and join an IP pledge community. They would pledge, among other things, to not assert SCE patents to block legitimate SCE research and development activities, to share their data, to publish in peer reviewed scientific journals, and to not retain valuable technical information as trade secrets. Third, an international panel—ideally with representatives from relevant national and regional patent offices—should monitor and assess SCE patenting activity and make policy recommendations. We believe that such policies could head off potential problems regarding SCE IP rights and data sharing, yet could feasibly be implemented within a relatively short time span.

Our article, “Solar Climate Engineering and Intellectual Property: Toward a Research Commons,” is available online as a preliminary version. We welcome comments, especially in the next couple of months as we revise it for publication later this year.


A Comment on the Note “Best Practices for Establishing Georgia’s Alzheimer’s Disease Registry” from Volume 17, Issue 1

Jing Han, MJLST Staffer

Alzheimer’s disease (AD), also known simply as Alzheimer’s, accounts for 60% to 70% of cases of dementia. It is a chronic neurodegenerative disease that usually starts slowly and worsens over time. The cause of Alzheimer’s disease is poorly understood, and no treatment stops or reverses its progression, though some may temporarily improve symptoms. Affected people increasingly rely on others for assistance, often placing a burden on the caregiver; the pressures can include social, psychological, physical, and economic elements. The disease was first described by, and later named after, German psychiatrist and pathologist Alois Alzheimer in 1906. In 2015, there were approximately 48 million people worldwide with AD, and in developed countries AD is one of the most financially costly diseases. Before many states, including Georgia and South Carolina, passed legislation establishing registries, private institutions across the country had already made tremendous efforts to establish their own Alzheimer’s disease registries. The country has experienced an exponential increase in the number of people diagnosed with Alzheimer’s disease, and more and more states have begun to establish their own registries.

As the Note explains, the Registry in Georgia has emphasized from the outset the importance of protecting the confidentiality of patient data from secondary uses. The Note explores many legal and ethical issues raised by the Registry. An Alzheimer’s patient’s diagnosis history, medication history, and personal lifestyle are generally confidential information, known only to the physician and the patient. Reporting such information to the Registry, however, may lead to wider disclosure of what was previously private information and consequently may raise constitutional concerns. While the vast majority of public health registries have historically focused on collecting infectious disease data, registries for non-infectious diseases, such as Alzheimer’s, diabetes, and cancer, are more recent creations. There is a delicate balance between the public interest and personal privacy: because Alzheimer’s is not infectious, registration is not mandatory. After all, people suffering from Alzheimer’s often face violations of their human rights, abuse, and neglect, as well as widespread discrimination. When a patient is diagnosed with AD, the health care provider should encourage, rather than compel, the patient to register. Keeping patients’ information confidential, enacting procedural rules governing use of the information, and providing incentives are good approaches to encourage more patients to join the registry.

Attentive to the privacy concerns under federal and state law, the Note recommends slightly broader data sharing by the Georgia Registry, such as with a physician or other health care provider for purposes of medical evaluation or treatment of the individual, or with any individual or entity that provides the Registry with an order from a court of competent jurisdiction compelling disclosure of confidential information. The Note also describes the procedural rules designed to administer the registry in Georgia, which address who the end users of the registry are; what types of information should be collected; how and from whom the information should be collected; how the information should be shared or disclosed for policy planning and research purposes; and how legal representatives obtain authority from patients.

This Note provides a deeper understanding of Alzheimer’s disease registries across the country through one state’s experience. The registry process raises many legal and moral issues. The Note compares Georgia’s registry with those of other states and points out the importance of protecting the confidentiality of patient data. Emphasizing the protection of personal privacy could encourage more people, and more states, to get involved.


The Federal Government Wants Your iPhone Passcode: What Does the Law Say?

Tim Joyce, MJLST Staffer

Three months ago, when MJLST Editor Steven Groschen laid out the arguments for and against a proposed New York State law that would require “manufacturers and operating system designers to create backdoors into encrypted cellphones,” the government hadn’t even filed its motion to compel against Apple. Now, just a few weeks after the government quietly stopped pressing the issue, it almost seems as if nothing at all has changed. But, while the dispute at bar may have been rendered moot, it’s obvious that the fight over the proper extent of data privacy rights continues to simmer just below the surface.

For those unfamiliar with the controversy, what follows are the high-level bullet points. Armed attackers opened fire on a group of government employees in San Bernardino, California, on the morning of December 2, 2015. The attackers fled the scene but were killed in a shootout with police later that afternoon. Investigators opened a terrorism investigation, which eventually led to a locked iPhone 5c. When investigators failed to unlock the phone, they sought Apple’s help, first politely, and then more forcefully via California and federal courts.

The request was for Apple to create an authenticated version of its iOS operating system that would enable the FBI to access the data stored on the phone. In essence, the government asked Apple to create a universal hack for any iPhone running that particular version of iOS. As might be predicted, Apple was less than inclined to help crack its own encryption software. CEO Tim Cook ran up the banner of digital privacy rights and re-ignited a heated debate over the proper scope of the government’s ability to regulate encryption practices.

Legal chest-pounding ensued.

That was the situation until March 28, when the government quietly stopped pursuing this part of the investigation. In its own words, the government informed the court that it “…ha[d] now successfully accessed the data stored on [the gunman]’s iPhone and therefore no longer require[d] the assistance from Apple Inc…”. Apparently, some independent government contractor (read: legalized hacker) had done in just a few days what the government had been claiming from the start was impossible without Apple’s help. Mission accomplished – so, the end?

Hardly.

While this one incident, for this one iPhone (the compromised iOS version applies only to the iPhone 5c, not to other models like the iPhone 6), may be history, many more of the same or substantially similar disputes are still trickling through the courts nationwide. In fact, more than ten other federal iPhone cases have been filed since September 2015, all of them based on a 227-year-old act of last resort. States like New York are also getting into the mix, even absent fully ratified legislation. Furthermore, it’s obvious that legislatures are taking this issue seriously (see NYS’s proposed bill, recently returned to committee).

Although he is only ⅔ of a lawyer at this point, it seems to this author that there are at least three ways a court could handle a demand like this, if the case were allowed to go to the merits.

  1. Never OK to demand a hack – In this situation, the courts could find that our collective societal interests in privacy would always preclude enforcement of an order like this. Seems unlikely, especially given the demonstrated willingness in this case of a court to make the order in the first place.
  2. Always OK to demand a hack – Similar to option 1, this option seems unlikely, especially given the First and Fourth Amendments. Here, the courts would have to find some rationale justifying hacking in every circumstance. Clearly, the United States has not yet transitioned into Orwellian dystopia.
  3. Sometimes OK to demand a hack, subject to scrutiny – Here, in the middle, is where it seems likely we’ll find courts in the coming years. Obviously, convincing arguments exist on each side, and given the right set of facts it seems possible to reconcile infringing personal privacy and upholding national security with burdening a tech company’s policy of privacy protection. The San Bernardino shooting is not that case, though. The alleged terrorist threat was not characterized as sufficiently imminent, and the FBI even admitted that cracking the cell phone was not integral to the case; it didn’t find anything on the phone anyway. It will take a (probably) much scarier scenario for this option to snap into focus as a workable compromise.

We’re left then with a nagging feeling that this isn’t the last public skirmish we’ll see between Apple and the “man.” As digital technology becomes ever more integrated into daily life, our legal landscape will have to evolve as well.
Interested in continuing the conversation? Leave a comment below. Just remember – if you do so on an iPhone 5c, draft at your own risk.


Requiring Backdoors Into Encrypted Cellphones

Steven Groschen, MJLST Managing Editor

The New York State Senate is considering a bill that requires manufacturers and operating system designers to create backdoors into encrypted cellphones. Under the current draft, failure to comply with the law would result in a $2,500 fine, per offending device. This bill highlights the larger national debate concerning privacy rights and encryption.

In November of 2015, the Manhattan District Attorney’s Office (MDAO) published a report advocating for a federal statute requiring backdoors into encrypted devices. One of MDAO’s primary reasons in support of the statute is the lack of alternatives available to law enforcement for accessing encrypted devices. The MDAO notes that traditional investigative techniques have largely been ineffective. Additionally, the MDAO argues that certain types of data residing on encrypted devices often cannot be found elsewhere, such as on a cloud service. Naturally, the inaccessibility of this data is a significant hindrance to law enforcement. The report offers an excellent summary of the law enforcement perspective; however, as with all debates, there is another perspective.

The American Civil Liberties Union (ACLU) has stated it opposes using warrants to force device manufacturers to unlock their customers’ encrypted devices. A recent ACLU blog post presented arguments against this practice. First, the ACLU argued that the government should not require “extraordinary assistance from a third party that does not actually possess the information.” The ACLU perceives these warrants as conscripting Apple (and other manufacturers) to conduct surveillance on behalf of the government. Second, the ACLU argued using search warrants bypasses a “vigorous public debate” regarding the appropriateness of the government having backdoors into cellphones. Presumably, the ACLU is less opposed to laws such as that proposed in the New York Senate, because that process involves an open public debate rather than warrants.

Irrespective of whether the New York Senate bill passes, the debate over government access to its citizens’ encrypted devices is sure to continue. Citizens will have to balance public safety considerations against individual privacy rights—a tradeoff as old as government itself.