
Hacking the Circuit Split: Case asks Supreme Court to clarify the CFAA

Kate Averwater, MJLST Staffer

How far would you go to make sure your friend’s love interest isn’t an undercover cop? Would you run an easy search on your work computer? Unfortunately for Nathan Van Buren, his friend was part of an FBI sting operation and his conduct earned him a felony conviction under the Computer Fraud and Abuse Act (CFAA), 18 USC § 1030.

Van Buren, formerly a police sergeant in Georgia, knew Andrew Albo from Albo's previous brushes with law enforcement. Unbeknownst to Van Buren, Albo had turned informant for the FBI and was recording their interactions. Albo asked Van Buren to run the license plate number of a dancer, claiming he was interested in her and wanted to make sure she wasn't an undercover cop. Trying to better his financial situation, Van Buren told Albo he needed money, and Albo gave him a fake license plate number and $6,000. Van Buren then ran the fake number in the Georgia Crime Information Center (GCIC) database. On the strength of the recordings, the trial court convicted Van Buren of honest-services wire fraud (18 USC §§ 1343, 1346) and felony computer fraud under the CFAA.

Van Buren appealed and the Eleventh Circuit vacated and remanded the honest-services wire fraud conviction but upheld the felony computer fraud conviction. His case is currently on petition for review before the Supreme Court.

The relevant portion of the CFAA criminalizes obtaining “information from any protected computer” by “intentionally access[ing] a computer without authorization or exceed[ing] authorized access.” Van Buren’s defense was that he had authorized access to the information. However, he admitted that he used it for an improper purpose. This disagreement over access restrictions versus use restrictions is the crux of the circuit split. Van Buren’s petition emphasizes the need for the Supreme Court to resolve these discrepancies.

Most favorable to Van Buren is the Ninth Circuit’s reading of the CFAA. The court previously held that the CFAA did not criminalize abusing authorized access for impermissible purposes. Recently, the Ninth Circuit reaffirmed this interpretation. The Second and Fourth Circuits align with the Ninth in interpreting the CFAA narrowly, declining to criminalize conduct similar to Van Buren’s.

In affirming his conviction, the Eleventh Circuit rested on its previous decision in Rodriguez, which adopted a much broader reading of the CFAA. The First, Fifth, and Seventh Circuits join the Eleventh in interpreting the CFAA to reach inappropriate use of authorized access.

Van Buren’s case has sparked a bit of controversy and prompted multiple organizations to file amicus briefs. They are pushing the Supreme Court to interpret the CFAA in a narrow way that does not criminalize common activities. Broad readings of the CFAA lead to criticism of the law as “a tool ripe for abuse.”

Whether or not the Supreme Court agrees to hear the case, next time someone offers you $6,000 to do a quick search on your work computer, say no.


Forget About Quantum Computers Cracking Your Encrypted Data, Many Believe End-to-End Encryption Will Lose Out as a Matter of Policy

Ian Sannes, MJLST Staffer

As reported in Nature, Google recently announced that it has finally achieved quantum supremacy: the point at which computers that work based on the spin of qubits, rather than the binary logic of conventional computers, are finally able to solve problems faster than conventional computers can. However, according to John Preskill, who coined the term “quantum supremacy,” quantum computers are not a threat to encryption any time soon; such theorized uses remain many years out. Furthermore, the question remains whether quantum computers are a threat to encryption at all. IBM recently showcased one way to encrypt data that is immune to the theoretical cracking ability of future quantum computers. It seems that while one method of encryption is theoretically vulnerable to quantum attack, the industry will simply adopt methods that are not when it needs to.
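The method of encryption most often cited as theoretically vulnerable is public-key cryptography built on the hardness of factoring, with RSA as the canonical example (my elaboration, not Nature's). A toy sketch makes the dependence visible; the numbers below are the standard textbook example, far smaller than real keys, and the scheme omits padding:

```python
# Toy RSA sketch (illustrative only -- real keys use ~2048-bit primes and padding).
# All security rests on the hardness of factoring n = p * q, which is exactly
# the problem Shor's algorithm could solve efficiently on a large quantum computer.

p, q = 61, 53              # two small primes (kept secret)
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
plaintext = pow(ciphertext, d, n)  # decrypt with the private key (d, n)

print(plaintext)  # 42 -- round-trips correctly
```

Anyone who can factor n back into p and q can recompute d and read every message, which is why quantum-resistant schemes are built on different mathematical problems.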

Does this mean that end-to-end encryption methods will always protect me?

Not necessarily. Stewart Baker opines that there are many threats to encryption, such as homeland security policy, foreign privacy laws, and content moderation, which he believes will win out over the right to have encrypted private data.

The highly publicized efforts of the FBI in 2016 to force Apple to unlock encryption on an iPhone for national security reasons ended when the FBI dropped the case after hiring a third party that was able to crack the encryption. This may seem like a win for Silicon Valley’s historically pro-encryption stance, but foreign laws, such as the UK’s Investigatory Powers Act, are opening the door for governments to obtain users’ digital data.

In October of 2019, Attorney General Bill Barr requested that Facebook halt its plans to implement end-to-end encryption on its messaging services because it would prevent the investigation of serious crimes. Mark Zuckerberg, Facebook’s CEO, admitted it would be more difficult to identify and remove harmful content if such encryption were implemented, but Facebook has yet to roll out the encryption.

Some believe legislators may simply force software developers to create back doors to users’ data. Kalev Leetaru believes content moderation policy concerns will allow governments to bypass encryption completely by forcing device manufacturers or software companies to install client-side content-monitoring software that is capable of flagging suspicious content and sending decrypted versions to law enforcement automatically.
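Leetaru's client-side scenario can be sketched in a few lines. This is a minimal illustration under assumptions of my own: the function name, the blocklist contents, and the flagging step are invented, not any vendor's actual design. The key point is that the check happens before encryption, so end-to-end protection is never engaged for flagged content:

```python
import hashlib

# Hypothetical client-side scanner: compare content against a blocklist of
# known-bad hashes *before* encryption, sidestepping end-to-end protection.
# The blocklist entry here is invented for illustration.
BLOCKED_HASHES = {
    hashlib.sha256(b"known-bad content").hexdigest(),
}

def scan_before_encrypt(message: bytes) -> bool:
    """Return True if the message is flagged (and, in Leetaru's scenario,
    would be sent to law enforcement in decrypted form); False means it
    proceeds to normal end-to-end encryption."""
    return hashlib.sha256(message).hexdigest() in BLOCKED_HASHES

print(scan_before_encrypt(b"known-bad content"))  # True  -- flagged
print(scan_before_encrypt(b"dinner at 7?"))       # False -- encrypted normally
```

Exact hashing like this only matches identical files; real proposals tend to rely on perceptual hashing to catch near-duplicates, which raises its own accuracy and abuse concerns.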

The trend seems to be headed in the direction of some governmental bypass of conventional encryption. However, just like IBM’s quantum-proof encryption was created to solve a weakness in encryption, consumers will likely find another way to encrypt their data if they feel there is a need.


Pacemakers, ICDs, and ICMs – oh my! Implantable heart detection devices

Janae Aune, MJLST Staffer

Heart attacks and heart disease kill hundreds of thousands of people in the United States every year. Heart disease affects every person differently based on their genetic and ethnic background, lifestyle, and family history. While some people are aware of their risk of heart problems, over 45 percent of sudden cardiac deaths occur outside of the hospital. With a condition as sudden as a heart attack, accurate information tracking and reporting is vital to effective treatment and prevention. As in any market, the market for heart monitoring devices is diverse, with new equipment arriving every year. The newest device in a long line of technology is the LINQ monitoring device. LINQ builds on and works with already established devices that have been used by the medical community.

Pacemakers were first used effectively in 1969, when lithium batteries were invented. These devices are surgically implanted under the skin of a patient’s chest and help control the heartbeat. They can be implanted for temporary or permanent use and are usually targeted at patients who experience bradycardia, a slow heart rate. Pacemakers require consistent check-ins with a doctor, usually every three to six months, and must be replaced every 5 to 15 years depending on battery life. These devices revolutionized heart monitoring, but they carry significant risks from the surgery and potential device malfunction.

Implantable cardioverter defibrillators (ICDs) are also surgically implanted devices, but they differ from pacemakers in that they deliver a single shock when needed rather than continuous electrode pulses. ICDs are similar to the paddles doctors use when trying to stimulate a heart in the hospital – think yelling “charge.” These devices are used mostly in patients with tachycardia, a heartbeat that is too fast. Implantation of an ICD requires feeding wires through the blood vessels of the heart. A subcutaneous ICD (S-ICD) has been newly developed, giving patients who have structural defects in their heart blood vessels another option. Similar to a pacemaker, an ICD monitors activity constantly, but its data is read only at follow-up appointments with the doctor. ICDs last an average of seven years before the battery needs to be replaced.

The Reveal LINQ system is a newly developed heart monitoring device that records and transmits continuous information to a patient’s doctor at all times. The system requires surgical implantation of a small device known as the insertable cardiac monitor (ICM). The ICM works with another component called the patient monitor, which is a bedside monitor that transmits the continuous information collected by the ICM to a doctor instantly. A patient assistant control is also available which allows the patient to manually mark and record particular heart activities and transmit those in more detail. The LINQ system allows a doctor to track a patient’s heart activity remotely rather than requiring the patient to come in for the history to be examined. Continuous tracking and transmitting allow a patient’s doctor to more accurately examine heart activity and therefore create a more effective treatment approach.

With the development of wearable technology meant to track health information and transmit it to the wearer, devices such as the LINQ system provide new opportunities for technologies to work together to promote better health practices. The Apple Watch Series 4 includes electrocardiogram monitoring that records heart activity and checks the reading for atrial fibrillation (AFib). This is the same heart activity that pacemakers, ICDs, and the LINQ system are meant to monitor. The future of heart attack and heart disease detection and treatment could be massively impacted by the ability to monitor heart behavior in multiple different ways. Between devices that can shock your heart, continuously monitor and transmit information about it, and warn you from your wrist when your heart rate may be abnormal, a future of decreased heart problems seems within reach.

With all of these newly developed methods of continuous tracking, the question arises: how is all of that information protected? Health and heart behavior, which is internal and out of your control, is as personal as information gets. Electronic monitoring and transmission of this data open it up to cybersecurity attacks. Cybersecurity and data privacy issues with these devices have started to be addressed more fully; however, the concerns differ depending on which implantable device a patient has. Vulnerabilities have been identified in ICD devices that would allow an unauthorized individual to access and potentially manipulate the device. Scholars have argued that efforts to decrease vulnerabilities should focus on protecting the confidentiality, integrity, and availability of information transmitted by implantable devices. The FDA has indicated that the use of a home monitoring system could decrease these vulnerabilities. As the benefits from heart monitors and heart data continue to grow, we need to be sure that our privacy protections grow with them.
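One of the three protections scholars identify, the integrity of transmitted readings, can be illustrated with a message authentication code. This is a generic sketch under invented assumptions (the shared key, field names, and readings are mine), not how any actual monitor works:

```python
import hashlib
import hmac
import json

# Illustrative integrity check for transmitted heart-monitor readings.
# A shared secret key (hypothetically provisioned when the device is set up)
# lets the receiving server detect any tampering with data in transit.
SECRET_KEY = b"device-specific-secret"  # invented for illustration

def sign_reading(reading: dict) -> dict:
    """Serialize a reading and attach an HMAC tag over the payload."""
    payload = json.dumps(reading, sort_keys=True)
    tag = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(SECRET_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading({"bpm": 72, "timestamp": "2019-11-01T09:30:00Z"})
print(verify_reading(msg))   # True  -- untampered reading verifies

msg["payload"] = msg["payload"].replace("72", "190")  # attacker alters the data
print(verify_reading(msg))   # False -- tampering is detected
```

A scheme like this addresses integrity only; confidentiality would additionally require encrypting the payload, and availability is a separate engineering problem entirely.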


Wearable, Shareable, Terrible? Wearable Technology and Data Protection

Alex Wolf, MJLST Staffer

You might consider the Sony Walkman, which celebrates its 40th anniversary this year, to be the first wearable technology of the modern day. After the release of the Bluetooth 1.0 specification in 1999, commercial competitors began to realize the vast promise that this emergent technology afforded. Today, over 265 million wearable tech devices are sold annually. It looks to be a safe bet that this trend will continue.

A popular subset of wearable technology is the fitness tracker. The user attaches the device to themselves, usually on their wrist, and it records their movements. Lower-end trackers record basics like steps taken, distance walked or run, and calories burned, while the more sophisticated ones can track heart rate and sleep statistics (sometimes also featuring fun extras like Alexa support and entertainment app playback). And although this data could not replace the care and advice of a healthcare professional, there have been positive health results. Some people have learned of serious health problems only once they started wearing a fitness tracker. Other studies have found a correlation between wearing a FitBit and increased physical activity.

Wearable tech is not all good news, however; legal commentators and policymakers are worried about privacy compromises that result from personal data leaving the owner’s control. The Health Insurance Portability and Accountability Act (HIPAA) was passed by Congress with the aim of providing legal protections for individuals’ health records and data when they are disclosed to third parties. But, generally speaking, wearable tech companies are not bound by HIPAA’s reach. The companies claim that no one else sees the data recorded on your device (with a few exceptions, like the user’s express written consent). But is this true?

A look at the modern American workplace can provide an answer. Employers are attempting to find new ways to manage health insurance costs as survey data shows that employees are frequently concerned with the healthcare plan that comes with their job. Some have responded by purchasing FitBits and other like devices for their employees’ use. Jawbone, a fitness device company on its way out, formed an “Up for Groups” plan specifically marketed towards employers who were seeking cheaper insurance rates for their employee coverage plans. The plan allows executives to access aggregate health data from wearable devices to help make cost-benefit determinations for which plan is the best choice.

Hearing the commentators’ and state elected representatives’ complaints, members of Congress have responded; Senators Amy Klobuchar and Lisa Murkowski introduced the “Protecting Personal Health Data Act” in June 2019. It would create a National Task Force on Health Data Protection, which would work to advise the Secretary of Health and Human Services (HHS) on creating practical minimum standards for biometric and health data. The bill is a recognition that HIPAA has serious shortcomings for digital health data privacy. As a 2018 HHS Committee Report noted, “A class of health records that can be subject to HIPAA or not subject to HIPAA is personal health records (PHRs) . . . PHRs not subject to HIPAA . . . [have] no other privacy rules.”  Dena Mendolsohn, a lawyer for Consumer Reports, remarked favorably that the bill is needed because the current framework is “out of date and incomplete.”

The Supreme Court has recognized privacy rights in cell-site location data, and a federal court recognized standing to sue for a group of plaintiffs whose personally identifiable information (PII) was hacked and uploaded onto the Dark Web. Many in the legal community are pushing for the High Court to offer clearer guidance to both tech consumers and corporations on the state of protection of health and other personal data, including private rights of action. Once there is a resolution on these procedural hurdles, we may see firmer judicial directives on an issue that compromises the protected interests of more and more people.



Google Fined for GDPR Non-Compliance, Consumers May Not Like the Price

Julia Lisi, MJLST Staffer

On January 14th, 2019, France’s Data Protection Authority (“DPA”) fined Google 50 million euros in one of the first enforcement actions taken under the EU’s General Data Protection Regulation (“GDPR”). The GDPR, which took effect in May of 2018, sent many U.S. companies scrambling to update their privacy policies. You, as a consumer, probably had to re-accept updated privacy policies from your social media accounts, phones, and many other data-based products. Google’s fine makes it the first U.S. tech giant to face GDPR enforcement. While a 50 million euro (roughly 57 million dollar) fine may sound hefty, it is actually relatively small compared to the maximum fine allowed under the GDPR, which, for Google, would be roughly five billion dollars.

The French fine clarifies a small portion of the uncertainty surrounding GDPR enforcement. In particular, the French DPA rejected Google’s methods for obtaining consumers’ consent to its Privacy Policy and Terms of Service. The French DPA took issue with (1) the numerous steps users faced before they could opt out of Google’s data collection, (2) the pre-checked box indicating users’ consent, and (3) users’ inability to consent to individual data processes, which instead required whole-cloth acceptance of both Google’s Privacy Policy and Terms of Service.

The three practices rejected by the French DPA are commonplace in the lives of many consumers. Imagine turning on your new phone for the first time and scrolling through seemingly endless provisions detailing exactly how your daily phone use is tracked and processed by both the phone manufacturer and your cell provider. Imagine if you had to then scroll through the same thing for each major app on your phone. You would have much more control over your digital footprint, but would you spend hours reading each provision of the numerous privacy policies?

Google’s fine could mark the beginning of sweeping changes to the data privacy landscape. What once took a matter of seconds—e.g., checking one box consenting to Terms of Service—could now take hours. If Google’s fine sets a precedent, consumers could face another wave of re-consenting to data use policies, as other companies fall in line with the GDPR’s standards. While data privacy advocates may applaud the fine as the dawn of a new day, it is unclear how the average consumer will react when faced with an in-depth consent process.


A Data Privacy Snapshot: Big Changes, Uncertain Future

Holm Belsheim, MJLST Staffer

When Minnesota Senator Amy Klobuchar announced her candidacy for the Presidency, she stressed the need for new and improved digital data regulation in the United States. It is perhaps telling that Klobuchar, no stranger to internet legislation, labelled data privacy and net neutrality as cornerstones of her campaign. While data bills have been frequently proposed in Washington, D.C., few members of Congress have been as consistently engaged in this area as Klobuchar. Beyond expressing her longtime commitment to the idea, the announcement may also be a savvy method to tap into recent sentiments. Over the past several years citizens have experienced increasingly intrusive breaches of their information. Target, Experian and other major breaches exposed the information of hundreds of millions of people, including a shocking 773 million records in a recent report. See if you were among them. (Disclaimer: neither I nor MJLST are affiliated with these sites, nor can we guarantee accuracy.)

Data privacy has been big news in recent years. Internationally, Brazil, India, and China have recently put forth new legislation, but the big story was the European Union’s General Data Protection Regulation, or GDPR, which began enforcement last year. This massive regulatory scheme codifies the European presumption that an individual’s data is not available for business purposes without the individual’s explicit consent, and even then only in certain circumstances. While the scheme has been criticized as both vague and overly broad, one crystal clear element is the seriousness of its enforcement capabilities. Facebook and Google each received large fines soon after the GDPR’s official commencement, and other companies have partially withdrawn from the EU in the face of compliance requirements. No clear challenge has emerged, and it looks like the GDPR is here to stay.

Domestically, the United States has nothing like the GDPR. The existing patchwork of federal and state laws leaves much to be desired. Members of Congress propose new laws regularly, most of which then die in committee or are shelved. California has perhaps taken the boldest step in recent years, with its expansive California Consumer Privacy Act (CCPA) scheduled to begin enforcement in 2020. While different from the GDPR, the CCPA similarly imposes heightened standards for companies to comply with, more remedies and transparency for consumers, and specific enforcement regimes to ensure requirements are met.

The consumer-friendly CCPA has drawn enormous scrutiny and criticism. While evincing modest support, or perhaps just lip service, tech titans like Facebook and Google are none too pleased with the Act’s potential infringement upon their access to Americans’ data. Since 2018, affected companies have lobbied Washington, D.C. for expansive and modernized federal data privacy laws. One common, though less publicized, element in these proposals is an explicit federal preemption provision, which would nullify the CCPA and other state privacy policies. While nothing has yet emerged, this issue isn’t going anywhere soon.


AI: Legal Issues Arising from the Development of Autonomous Vehicle Technology

Sooji Lee, MJLST Staffer

Have you ever heard of the “Google DeepMind Challenge Match”? In 2016, AlphaGo, an artificial intelligence (hereinafter “AI”) created by Google, played a Go match against Lee Sedol, an 18-time world champion. Go is among the most complicated games mankind has ever devised, with vastly more possible moves than chess. People who understood the complexity of Go did not believe it was possible for an AI to calculate all of these variables and defeat the world champion, who relied on instinct and experience. AlphaGo, however, defeated Mr. Lee four games to one, leaving the whole world amazed.

Another use of AI is in autonomous vehicles (hereinafter “AV”), which promise to achieve mankind’s long-time dream: a car that drives itself. Today, almost every automobile manufacturer with enough capital to reinvest in new technology, including GM, Toyota, and Tesla, is aggressively investing in AV technology. As a natural consequence of this increasing interest, vehicle manufacturers have performed several driving tests of AVs, and many legal issues have arisen from those trials. During my summer in Korea, I had a chance to research legal issues in an intellectual property infringement lawsuit between two automobile manufacturers regarding AV technology.

For a normal vehicle, a natural person is responsible if there is an accident. But who should be liable when an AV malfunctions? The owner of the vehicle, the manufacturer of the vehicle, or the entity that developed the vehicle’s software? This is one of the hardest questions arising from the commercialization of AVs. I personally think that liability could be imposed on any of these entities depending on the scenario. If the accident happened because of a malfunction in the vehicle’s AI system, the software provider should be liable. If the accident occurred because the vehicle itself malfunctioned, the manufacturer should be held liable. But if the accident occurred because the owner of the vehicle poorly maintained the car, the owner should be held liable. To sum up, there is no one-size-fits-all answer to who should be held liable; courts should consider the causal sequence of the accident when determining liability.

Legislative bodies must also take data privacy into consideration when enacting statutes governing AVs. There are an enormous number of cars on the road, and drivers must interact with other drivers to reach their destinations safely. AVs therefore need to share their locations and current situations to interact well with one another, which means some entity must collect each AV’s information and process it to prevent accidents and manage traffic effectively. Nowadays, almost every driver uses navigation, which already means providing one’s location to a service provider such as Google Maps. Some may argue that such providers already serve as collectors of vehicle information, but there are many competing navigation services. Since all AVs must interact with each other, centralizing the data with one service provider may be wise; at the same time, concentrating the data in a single provider would heighten the danger of a data breach. This is an important and pressing concern for legislatures considering legislation that would centralize AV data with one service provider.

Therefore, enacting effective, smart, and predictive statutes is important to prevent potential problems. Notwithstanding this complexity, many U.S. states take a positive stance toward the commercialization of AVs, since the industry could become profitable. According to statistics from the National Conference of State Legislatures, 33 states have introduced legislation and 10 states have issued executive orders related to AV technology. For example, Florida’s 2016 legislation expanded the permitted operation of autonomous vehicles on public roads, and Arizona’s governor issued an executive order encouraging the development of relevant technologies. With steps like these, the development of a legal framework is possible someday.


The Unfair Advantage of Web Television

Richard Yo, MJLST Staffer


Up to a certain point, ISPs like Comcast, Verizon, and AT&T enjoy healthy, mutually beneficial relationships with web content companies such as Netflix, YouTube, and Amazon. That relationship holds even as regular internet usage moves beyond emails and webpage browsing to VoIP and video streaming: to consume data-heavy content, users seek the wider bandwidth of broadband service, and ISPs are more than happy to provide it at a premium. However, once one side forays into the other’s territory, the relationship becomes less tenable unless it is restructured or improved upon. The problem is worse when both sides attempt to mimic the other.


Such a tension had clearly arisen by the time Verizon v. FCC, 740 F.3d 623 (D.C. Cir. 2014), was decided. The D.C. Circuit vacated two of the three rules that constituted the FCC’s 2010 Open Internet Order, clarifying that the transparency rule applied to all providers but that the anti-blocking and anti-discrimination rules could apply only to common carriers. The FCC had previously classified ISPs under Title I of the Communications Act; common carriers are classified under Title II. The 2014 decision thus confirmed that broadband companies, not being common carriers, could set the internet speed of websites and web services at their discretion so long as they were transparent about it. So crediting the internet’s astounding growth and development to light-touch regulation is disingenuous: the statement is true in a narrow sense, but discriminatory and blocking behavior was simply not in broadband providers’ interest during the early days of the internet, thanks to the mutually beneficial relationship described above.


Once web content began taking on the familiar forms of broadcast television, signs of throttling were evident. Netflix began original programming in 2013 and saw its streaming speeds drop dramatically that year on both Verizon and Comcast networks. In 2014, Netflix made separate peering-interconnection agreements with both companies to secure reliably fast speeds for itself. Soon, public outcry led to the FCC’s 2015 Open Internet Order reclassifying broadband internet service as a “telecommunications service” subject to Title II. ISPs were now common carriers and net neutrality was in play, at least briefly (2015-2018).


Due to the FCC’s 2018 Restoring Internet Freedom Order, many of the features of the 2015 order have been reversed. Some now fear that ISPs will again attempt to control the traffic on their networks in all sorts of insidious ways. This is a legitimate concern, but not one that necessarily spans the entire spectrum of the internet.


The internet has largely gone unregulated thanks to legislation and policies meant to encourage innovation and discourse. Under this incubatory setting, numerous such advancements and developments have indeed been made. One quasi-advancement is the streaming of voice and video. The internet has gone from cat videos to award-winning dramas. What began as a supplement to mainstream entertainment has now become the dominant force. Instead of Holly Hunter rushing across a busy TV station, we have Philip DeFranco booting up his iMac. Our tastes have changed, and with it, the production involved.


There is an imbalance here. Broadcast television has always borne the brunt of FCC oversight, even more than its cable brethren. The pragmatic reason has always been broadcast television’s availability, or rather its unavoidability: censors saw to it that obscenities would never cross a child’s view, even inadvertently. But it cannot be denied that the internet is vastly more ubiquitous. Laptop, tablet, and smartphone sales outnumber those of televisions. Even TVs are now ‘smart,’ serving not only their first master but a second web master as well (no pun intended). Shows like Community and Arrested Development were network television shows (on NBC and FOX, respectively) one minute and web content (on Yahoo! and Netflix, respectively) the next. The form and function of these programs had not substantially changed, yet they were suddenly free of the FCC’s reach. Virtually identical productions on different platforms are regulated differently, all due to arguments anchored by fears of stagnation.


Car Wreck: Data Breach at Uber Underscores Legal Dangers of Cybersecurity Failures

Matthew McCord, MJLST Staffer


This past week, Uber’s annus horribilis and the ever-increasing reminders of corporate cybersecurity’s persistent relevance reached a singularity. Uber, once praised as a transformative savior of the economy by technology-minded businesses and government officials for its effective service delivery model and capitalization on an exponentially expanding internet, has found itself impaled on the sword that spurred its meteoric rise. Uber recently disclosed that hackers were able to access the personal information of 57 million riders and drivers last year. It then paid the hackers $100,000 to destroy the compromised data and failed to inform its users or sector regulators of the breach at the time. The hackers apparently compromised a trove of personally identifiable information, including names, telephone numbers, email addresses, and driver’s licenses of users and drivers, through a flaw in the company’s GitHub security.

Uber, a Delaware corporation, is required to present notice of a data breach in the “most expedient time possible and without unreasonable delay” to affected customers per Delaware statutes. Most other states have adopted similar legislation which affects companies doing business in those states, which could allow those regulators and customers to bring actions against the company. By allegedly failing to provide timely notification, Uber opened itself to the parade of announced investigations from regulators into the breach: the United Kingdom’s Information Commissioner, for instance, has threatened fines following an inquiry, and U.S. state regulators are similarly considering investigations and regulatory action.

Though regulatory action is not a certainty, the possibility of legal action and the danger of lost reputation are all too real. Anthem, a health insurer subject to far stricter federal regulation under HIPAA and its various amendments, lost $115 million to the settlement of a class action suit over its infamous data breach. Short-term reputational damage rattles companies (especially those that respond less vigorously): Target saw its profits fall by almost 50% in Q4 2013 after its data breach. The cost of correcting poor data security on a technical level also weighs on companies.

This latest breach underscores key problems facing businesses in a continuing era of exponential digital innovation. The first and most practical is the seriousness with which companies approach information security governance. A growing number of data sources and applications, and the growing complexity of systems and attack vectors, multiply the potential avenues of exposure. A decade ago, most companies used somewhat isolated internal systems to handle a comparatively small amount of data and operations. Now, risk assessments must reflect the sheer quantity of internal and external devices touching networks, the innumerable ways services interact with one another (and thus expose each service and its data to possible breach), and the increasing competence of organized actors at breaching digital defenses. Information security and information governance are no longer niches relegated to one silo of a company; they necessarily permeate nearly every business area of an enterprise. Skimping on investment in adequate infrastructure greatly widens the regulatory and civil liability of even the most traditional companies for data breaches, as Uber very likely will find.

Paying off data hostage-takers and thieves is a particularly concerning practice, especially for a large corporation. It creates a perverse incentive for malicious actors to keep trying to siphon off and extort data from businesses and individuals alike. These actors have grown from small, disorganized groups and individuals into organized criminal enterprises and rogue states allegedly seeking to circumvent sanctions to fund their regimes. Acquiescing to their demands invites the conga line of serious breaches to continue and intensify into the future.

Enacting a new federal legislative scheme is a much-discussed and little-acted-upon solution to the disparate, uncoordinated regulation of business data practices. Though 18 U.S.C. § 1030 provides criminal penalties for the bad actors, there is little federal regulation or legislation setting liability or minimum standards for breached PII-handling companies generally; the federal government has left the bulk of this work to the states, as it does much of business regulation. Yet internet services are recognized as critical infrastructure by the Department of Homeland Security under Presidential Policy Directive 21. Data breaches and other cyber attacks cost the global economy hundreds of billions of dollars annually in stolen data and intellectual property, and they can disrupt government and critical private-sector operations, like the provision of utilities, food, and essential services, making cybersecurity a critical national risk requiring a coordinated response. Carefully crafted legislation authorizing federal coordination of cybersecurity best practices, and adequately punitive federal action for negligent information governance, would incentivize the private and public sectors to take better care of sensitive information, reducing the substantial potential for serious attacks to compromise the nation’s infrastructure and the economic well-being of its citizens and industries.


United States v. Microsoft Corp.: A Chance for SCOTUS to Address the Scope of the Stored Communications Act

Maya Digre, MJLST Staffer

On October 16, 2017, the United States Supreme Court granted the federal government’s petition for certiorari in United States v. Microsoft Corp. The case concerns a warrant ordering Microsoft to seize and produce the contents of a customer’s e-mail account that the government believed was being used in furtherance of narcotics trafficking. Microsoft produced the non-content information stored in the U.S. but moved to quash the warrant with respect to the information stored abroad in Ireland. Microsoft claimed that the only way to access that information was through its Dublin data center, even though the data center could also be reached through the company’s database management program at some of its U.S. locations.

The district court in the Southern District of New York held Microsoft in civil contempt for not complying with the warrant. The Second Circuit reversed, stating that “[n]either explicitly nor implicitly does the statute envision the application of its warrant provisions overseas” and that “the application of the Act that the government proposes – interpreting ‘warrant’ to require a service provider to retrieve material from beyond the borders of the United States – would require us to disregard the presumption against extraterritoriality.” The court relied on traditional tools of statutory interpretation, including plain meaning, the presumption against extraterritoriality, and legislative history.

The issue in the case, according to SCOTUSblog, is “whether a United States provider of email services must comply with a probable-cause-based warrant issued under 18 U.S.C. § 2703 by making disclosure in the United States of electronic communications within that provider’s control, even if the provider has decided to store that material abroad.” Essentially, the dispute centers on the scope of the Stored Communications Act (“SCA”) with respect to information stored abroad. The larger issue is the tension between international privacy laws and the absolute nature of warrants issued in the United States. According to the New York Times, “the case is part of a broader clash between the technology industry and the federal government in the digital age.”

I think the broader issue is one the Supreme Court should address, though I am not certain this is the best case for it. The fact that Microsoft can access the information from data centers in the United States with its database management program seems to weaken its claim; the case would be stronger for a company that cannot reach information stored abroad from within the United States. Regardless of this weakness, the Supreme Court should rule in favor of the government to preserve the force of warrants of this nature. It was Microsoft’s choice to store the information abroad, and the choices of companies should not impede the government’s legitimate crime-fighting goals. Moreover, a ruling that the warrant does not reach information stored abroad would incentivize companies to keep information out of the reach of a U.S. warrant by storing it overseas. That is not a favorable policy choice for the Supreme Court to make; the justices should rule in favor of the government.

Unfortunately, the Court will not get to rule on the case, as it was dropped following the DOJ’s agreement to change its policy.