Cyber Security

Car Wreck: Data Breach at Uber Underscores Legal Dangers of Cybersecurity Failures

Matthew McCord, MJLST Staffer

 

This past week, Uber’s annus horribilis and the ever-increasing reminders of corporate cybersecurity’s persistent relevance reached a singularity. Uber, once praised as a transformative savior of the economy by technology-minded businesses and government officials for its effective service delivery model and capitalization on an exponentially expanding internet, has found itself impaled on the sword that spurred its meteoric rise. Uber recently disclosed that hackers were able to access the personal information of 57 million riders and drivers last year. It then paid the hackers $100,000 to destroy the compromised data, and failed to inform its users or sector regulators of the breach at the time. The hackers apparently compromised a trove of personally identifiable information, including names, telephone numbers, email addresses, and driver’s license numbers of users and drivers, through a flaw in the company’s GitHub security.

Uber, a Delaware corporation, is required to present notice of a data breach in the “most expedient time possible and without unreasonable delay” to affected customers per Delaware statutes. Most other states have adopted similar legislation which affects companies doing business in those states, which could allow those regulators and customers to bring actions against the company. By allegedly failing to provide timely notification, Uber opened itself to the parade of announced investigations from regulators into the breach: the United Kingdom’s Information Commissioner, for instance, has threatened fines following an inquiry, and U.S. state regulators are similarly considering investigations and regulatory action.

Though regulatory action is not a certainty, the possibility of legal action and the dangers of reputational loss are all too real. Anthem, a health insurer subject to far stricter federal regulation under HIPAA and its various amendments, paid $115 million to settle a class action suit over its infamous data breach. Short-term reputational impacts rattle companies (especially those that respond less vigorously): Target saw its profits fall by almost 50% in Q4 2013 after its data breach. The cost of correcting poor data security on a technical level also weighs on companies.

This latest breach underscores key problems facing businesses in the continuing era of exponential digital innovation. The first and most practical is the seriousness with which companies approach information security governance. A growing number of data sources and applications, and the increasing complexity of systems and attack vectors, multiply the potential avenues of exposure. A decade ago, most companies used at least somewhat isolated, internal systems to handle a comparatively small amount of data and operations. Now, risk assessments must reflect the sheer quantity of internal and external devices touching networks, the innumerable ways services interact with one another (and thus expose each service and its data to possible breaches), and the increasing competence of organized actors in breaching digital defenses. Information security and information governance are no longer niches relegated to one silo of a company; they necessarily permeate nearly every business area of an enterprise. Skimping on investment in adequate infrastructure greatly widens the regulatory and civil liability of even the most traditional companies for data breaches, as Uber very likely will find.

Paying off data hostage-takers and thieves is a particularly concerning practice, especially for a large corporation. It creates a perverse incentive for malicious actors to keep trying to siphon off and extort data from businesses and individuals alike. These actors have grown from small, disorganized groups and individuals into organized criminal enterprises and rogue states allegedly seeking to circumvent sanctions to fund their regimes. Acquiescing to their demands invites the conga line of serious breaches to continue and intensify into the future.

Enacting a new federal legislative scheme is a much-discussed and little-acted-upon solution to the disparate and uncoordinated regulation of business data practices. Though 18 U.S.C. § 1030 provides criminal penalties for the bad actors, there is little federal regulation or legislation addressing liability or minimum standards for breached PII-handling companies generally. The federal government has left the bulk of this work to the states, as it does with much of business regulation. Yet internet services are recognized as critical infrastructure by the Department of Homeland Security under Presidential Policy Directive 21. Data breaches and other cyber attacks result in data and intellectual property theft costing the global economy hundreds of billions of dollars annually, and widespread attacks could disrupt government and critical private sector operations, such as the provision of utilities, food, and essential services, making cybersecurity a critical national risk requiring a coordinated response. Carefully crafted legislation authorizing federal coordination of cybersecurity best practices, backed by adequately punitive federal action for negligent information governance, would incentivize the private and public sectors to take better care of sensitive information, reducing the substantial potential for serious attacks to compromise the nation’s infrastructure and the economic well-being of its citizens and industries.


6th Circuit Aligns With 7th Circuit on Data Breach Standing Issue

John Biglow, MJLST Managing Editor

To bring a suit in any federal court in the United States, an individual or group of individuals must satisfy Article III’s standing requirement. As recently clarified by the Supreme Court in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), to meet this requirement a “plaintiff must have (1) suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision.” Id. at 1547. As data breach cases have reached the federal courts of appeals, the circuits have disagreed over whether the risk of future harm from a data breach, and the costs spent to mitigate that harm, qualify as “injuries in fact” under Article III’s first prong.

Last spring, I wrote a note concerning Article III standing in data breach litigation in which I highlighted the circuit split on the issue and argued that the reasoning of the 7th Circuit in Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015) was superior to that of its sister circuits and made for better law. In Remijas, the plaintiffs were a class of individuals whose credit and debit card information had been stolen when Neiman Marcus Group, LLC experienced a data breach. A portion of the class had not yet experienced any fraudulent charges on their accounts and asserted Article III standing based upon the risk of future harm and the time and money spent mitigating that risk. In holding that these plaintiffs had satisfied Article III’s injury in fact requirement, the court made a critical inference that when a hacker steals a consumer’s private information, “[p]resumably, the purpose of the hack is, sooner or later, to make fraudulent charges or assume [the] consumers’ identit[y].” Id. at 693.

This inference stands in stark contrast to the reasoning of the 3rd Circuit in Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011). The facts of Reilly were similar to those of Remijas, except that in Reilly, Ceridian Corp., the company that experienced the data breach, stated only that its firewall had been breached and that its customers’ information may have been stolen. In my note, mentioned supra, I argued that this factual difference was not enough to wholly distinguish the two cases and overcome the circuit split, in part because of the Reilly court’s characterization of the risk of future harm. The Reilly court found the risk of misuse of information highly attenuated, reasoning that whether the plaintiffs would experience an injury depended on a series of “if’s,” including “if the hacker read, copied, and understood the hacked information, and if the hacker attempts to use the information, and if he does so successfully.” Id. at 43 (emphasis in original).

Often in the law, we are faced with an imperfect or incomplete set of facts. Any time an individual’s intent is at issue in a case, this is a certainty. When faced with these situations, lawyers have long utilized inferences to differentiate between more and less likely scenarios for what the missing facts are. In a data breach case, both parties will almost always have little to no knowledge of the intent, capabilities, or plans of the hacker. However, it seems to me that there is room for reasonable inferences about these facts. When a hacker is sophisticated enough to breach a company’s defenses and access data, it makes sense to assume they are sophisticated enough to utilize that data. Further, because executing a data breach is illegal and therefore carries risk, it makes sense to assume that the hacker seeks to gain from the act. Thus, as between the Reilly and Remijas courts’ characterizations of the likelihood of misuse of data, the better rule seemed to me to be to assume that the hacker is able to utilize the data and plans to do so in the future. Moreover, if there are facts tending to show that this inference is wrong, at the pleading stage the defendant corporation is far more likely to possess that information than the plaintiffs.

Since Remijas, two data breach cases have reached the federal courts of appeals on the issue of Article III standing. In Lewert v. P.F. Chang’s China Bistro, Inc., 819 F.3d 963, 965 (7th Cir. 2016), the court unsurprisingly followed its recent precedent in Remijas, finding that Article III standing was properly alleged. In Galaria v. Nationwide Mut. Ins. Co., a recent 6th Circuit case, the court had to make an Article III ruling without the constraint of an earlier ruling in its circuit, leaving it free to choose what rule and reasoning to apply. Galaria v. Nationwide Mut. Ins. Co., No. 15-3386, 2016 WL 4728027 (6th Cir. Sept. 12, 2016). In that case, the plaintiffs alleged, among other claims, negligence and bailment; these claims were dismissed by the district court for lack of Article III standing. In alleging that they had suffered an injury in fact, the plaintiffs alleged “a substantial risk of harm, coupled with reasonably incurred mitigation costs.” Id. at *3. In holding that this was sufficient to establish Article III standing at the pleading stage, the Galaria court found the Remijas court’s inference persuasive, stating that “[w]here a data breach targets personal information, a reasonable inference can be drawn that the hackers will use the victims’ data for the fraudulent purposes alleged in Plaintiffs’ complaints.” Moving forward, it will be intriguing to watch how circuits that have not yet faced this issue, as the 6th Circuit had not before deciding Galaria, rule on it, and whether, if the 3rd Circuit keeps its current reasoning, the issue eventually makes its way to the Supreme Court of the United States.


The Federal Government Wants Your iPhone Passcode: What Does the Law Say?

Tim Joyce, MJLST Staffer

Three months ago, when MJLST Editor Steven Groschen laid out the arguments for and against a proposed New York State law that would require “manufacturers and operating system designers to create backdoors into encrypted cellphones,” the government hadn’t even filed its motion to compel against Apple. Now, just a few weeks after the government quietly stopped pressing the issue, it almost seems as if nothing at all has changed. But, while the dispute at bar may have been rendered moot, it’s obvious that the fight over the proper extent of data privacy rights continues to simmer just below the surface.

For those unfamiliar with the controversy, what follows are the high-level bullet points. Armed attackers opened fire on a group of government employees in San Bernardino, CA on the morning of December 2, 2015. The attackers fled the scene, but were killed in a shootout with police later that afternoon. Investigators opened a terrorism investigation, which eventually led to a locked iPhone 5c. When investigators failed to unlock the phone, they sought Apple’s help, first politely, and then more forcefully via California and Federal courts.

The request was for Apple to create an authenticated version of its iOS operating system that would enable the FBI to access the stored data on the phone. In essence, the government asked Apple to create a universal hack for any iPhone running that particular version of iOS. As might be predicted, Apple was less than inclined to help crack its own encryption software. CEO Tim Cook ran up the banner of digital privacy rights and re-ignited a heated debate over the proper scope of the government’s ability to regulate encryption practices.

Legal chest-pounding ensued.

That was the situation until March 28, when the government quietly stopped pursuing this part of the investigation. In its own words, the government informed the court that it “…ha[d] now successfully accessed the data stored on [the gunman]’s iPhone and therefore no longer require[d] the assistance from Apple Inc…”. Apparently, some independent governmental contractor (read: legalized hacker) had done in just a few days what the government had been claiming from the start was impossible without Apple’s help. Mission accomplished – so, the end?

Hardly.

While this one incident, for this one iPhone (the iOS version at issue runs only on the iPhone 5c, not on other models like the iPhone 6), may be history, many more of the same or substantially similar disputes are still trickling through courts nationwide. In fact, more than ten other federal iPhone cases have been filed since September 2015, all based on a 227-year-old act of last resort. States like New York are also getting into the mix, even absent fully ratified legislation. Furthermore, legislatures are clearly taking this issue seriously (see New York State’s proposed bill, recently returned to committee).

Although he is only ⅔ a lawyer at this point, it seems to this author that there are at least three ways a court could handle a demand like this, if the case were allowed to go to the merits.

  1. Never OK to demand a hack – In this situation, the courts could find that our collective societal interests in privacy would always preclude enforcement of an order like this. Seems unlikely, especially given the demonstrated willingness in this case of a court to make the order in the first place.
  2. Always OK to demand a hack – Similar to option 1, this option seems unlikely as well, especially given the First and Fourth Amendments. Here, the courts would have to find some rationale to justify hacking in every circumstance. Clearly, the United States has not yet transitioned to an Orwellian dystopia.
  3. Sometimes OK to demand a hack, subject to scrutiny – Here, in the middle, is where courts seem likely to land in the coming years. Convincing arguments exist on each side, and given the right set of facts it seems possible to reconcile the government’s interest in national security with the infringement of personal privacy and the burden on a tech company’s policy of privacy protection. The San Bernardino case is not that case, though. The alleged terrorist threat was never characterized as sufficiently imminent, and the FBI even admitted that cracking the phone was not integral to the case, and that it found nothing of value anyway. It will take a (probably) much scarier scenario for this option to snap into focus as a workable compromise.

We’re left then with a nagging feeling that this isn’t the last public skirmish we’ll see between Apple and the “man.” As digital technology becomes ever more integrated into daily life, our legal landscape will have to evolve as well.
Interested in continuing the conversation? Leave a comment below. Just remember – if you do so on an iPhone 5c, draft at your own risk.


Requiring Backdoors into Encrypted Cellphones

Steven Groschen, MJLST Managing Editor

The New York State Senate is considering a bill that requires manufacturers and operating system designers to create backdoors into encrypted cellphones. Under the current draft, failure to comply with the law would result in a $2,500 fine, per offending device. This bill highlights the larger national debate concerning privacy rights and encryption.

In November of 2015, the Manhattan District Attorney’s Office (MDAO) published a report advocating for a federal statute requiring backdoors into encrypted devices. One of the MDAO’s primary reasons in support of the statute is the lack of alternatives available to law enforcement for accessing encrypted devices. The MDAO notes that traditional investigative techniques have largely been ineffective. Additionally, the MDAO argues that certain types of data residing on encrypted devices often cannot be found elsewhere, such as on a cloud service. Naturally, the inaccessibility of this data is a significant hindrance to law enforcement. The report offers an excellent summary of the law enforcement perspective; however, as with all debates, there is another side.

The American Civil Liberties Union (ACLU) has stated that it opposes using warrants to force device manufacturers to unlock their customers’ encrypted devices. A recent ACLU blog post presented arguments against this practice. First, the ACLU argued that the government should not require “extraordinary assistance from a third party that does not actually possess the information.” The ACLU perceives these warrants as conscripting Apple (and other manufacturers) to conduct surveillance on behalf of the government. Second, the ACLU argued that using search warrants bypasses a “vigorous public debate” regarding the appropriateness of the government having backdoors into cellphones. Presumably, the ACLU is less opposed to laws such as the one proposed in the New York Senate, because that process involves open public debate rather than warrants.

Irrespective of whether the New York Senate bill passes, the debate over government access to its citizens’ encrypted devices is sure to continue. Citizens will have to balance public safety considerations against individual privacy rights—a tradeoff as old as government itself.


Warrant Now Required For One Type of Federal Surveillance, and May Soon Follow for State Law Enforcement

Steven Graziano, MJLST Staffer

As technology has advanced over recent decades, law enforcement agencies have expanded their enforcement techniques. One example is the cell-site simulator, otherwise known as a stingray. Put simply, a stingray acts as a mock cell tower, detects the use of a specific phone number within a given range, and then uses triangulation to locate the phone. However, recent heightened awareness of, and criticism directed toward, government and law enforcement surveillance has affected their potential use. Specifically, many federal law enforcement agencies are now barred from using stingrays without a warrant, and pending federal legislation would require state and local law enforcement agents to obtain a warrant before using one as well.

Federal law enforcement agencies, specifically Immigration, Secret Service, and Homeland Security agents, must now obtain search warrants before using stingrays, as announced by the Department of Homeland Security. Homeland Security’s shift in policy comes after the Department of Justice made a similar statement. The DOJ has affirmed that although it had previously used cell-site simulators without a warrant, going forward its law enforcement agencies must obtain a search warrant supported by probable cause. DOJ agencies covered by this policy include the FBI and the Drug Enforcement Administration. This shift in federal policy was largely a response to pressure put on Washington by civil liberties groups, as well as the shift in the American public’s attitude toward surveillance generally.

Although these policies affect only federal law enforcement agencies, steps have also been taken to extend the warrant requirement for stingrays to state and local governments. Federal lawmakers have introduced the Cell-Site Simulator Act of 2015, also known as the Stingray Privacy Act, to hold state and local law enforcement to the same Fourth Amendment standards as the federal government. The bill was introduced in the House of Representatives by Rep. Jason Chaffetz (R-Utah) and was referred to a congressional committee on November 2, 2015, which will consider it before sending it to the full House or Senate. In addition to requiring a warrant, the act requires prosecutors and investigators to disclose to judges that the technology they intend to use in executing the warrant is specifically a stingray. The proposed law was partly a response to a critique of the federal warrant requirement, namely that it did not compel state or local law enforcement to also obtain a search warrant.

The use of advanced surveillance programs by federal, state, and local law enforcement has been a controversial subject recently. Although law enforcement has a duty to fully enforce the law, which includes using the entirety of its resources to detect possible crimes, it must still adhere to the constitutional protections of the Fourth Amendment in doing so. Technology changes and advances rapidly, and sometimes the law takes time to adapt. However, the shift in policy at all levels of government shows that the law may be beginning to catch up to law enforcement’s use of technology.


Data Breach and Business Judgment

Quang Trang, MJLST Staffer

Data breaches are a threat to major corporations. Corporations such as Target Corp. and Wyndham Worldwide Corp. have been victims of mass data breaches. The damage caused by such breaches has led shareholders to file derivative lawsuits seeking to hold boards of directors responsible.

In Palkon v. Holmes, 2014 WL 5341880 (D.N.J. 2014), Wyndham Worldwide Corp. shareholder Dennis Palkon filed a lawsuit against the company’s board of directors. The judge granted the board’s motion to dismiss partly on the basis of the business judgment rule, which governs situations in which a board refuses a shareholder demand. The principle of the business judgment rule is that “courts presume that the board refused the demand on an informed basis, in good faith and in honest belief that the action taken was in the best interest of the company.” Id. The shareholder who brings the derivative suit bears the burden of rebutting the presumption that the board acted in good faith, or of showing that the board did not base its decision on a reasonable investigation.

Cyber security is a developing area. People are still unsure how prevalent and how damaging the problem is, and it is difficult to determine what a board must do with such ambiguous information. At a time when there are no settled corporate cyber security standards, it is difficult for a shareholder to show bad faith or a lack of reasonable investigation. Until clear standards and procedures for cyber security are widely adopted, derivative suits over data breaches will likely be dismissed, as in Palkon.


E.C.J. Leaves U.S. Organizations to Search for Alternative Data Transfer Channels

J. Adam Sorenson, MJLST Staffer

The Court of Justice of the European Union (E.C.J.), Europe’s top court, invalidated the 15-year-old U.S.-EU Safe Harbor Program with immediate effect on October 6 (Schrems v. Data Prot. Comm’r, E.C.J., No. C-362/14, 10/6/15). This left the thousands of businesses that use the program without a reliable and lawful way to transfer personal data from the European Economic Area to the United States.

The Safe Harbor Program was developed by the U.S. Department of Commerce in consultation with the European Commission. It was designed to provide a streamlined and cost-effective means for U.S. organizations to comply with the European Commission’s Directive on Data Protection (Data Protection Directive), which went into effect in October 1998. The program allowed U.S. organizations to join voluntarily and freely transfer personal data out of all 28 member states, provided they self-certified and complied with the program’s seven Safe Harbor Privacy Principles. The program was enforced by the U.S. Federal Trade Commission. Schrems v. Data Prot. Comm’r, however, brought a swift halt to the program.

This case revolves around Mr. Schrems, an Austrian Facebook user since 2008. Some or all of the data collected by the social networking site Facebook is transferred to servers in the United States, where it undergoes processing. Mr. Schrems brought suit against the Data Protection Commissioner after the Commissioner declined to exercise his statutory authority to prohibit the transfer. The case turned on a 2000 decision by the European Commission, which had found that the program provided adequate privacy protection and was in line with the Data Protection Directive. The directive prohibits “transfers of personal data to a third country not ensuring an adequate level of protection” (Schrems), though it permits such transfers where the third country is found to ensure an adequate level of protection.

The E.C.J. found that the Safe Harbor Program did not ensure an adequate level of protection, and it therefore held the 2000 decision, and with it the program, invalid. This means that U.S. organizations relying on the program to transfer personal data out of the EEA are now doing so in violation of the Data Protection Directive. The decision requires U.S. organizations to find alternative, approved methods of data transfer, which generally means seeking the approval of data protection authorities in the EU, a potentially long process.

Although the EU national data protection authorities may allow some time before cracking down on these U.S. organizations, this decision signals a massive shift in the way personal data is transferred between the U.S. and Europe, and it will most likely have ripple effects throughout the data privacy and data transfer worlds.


Cyber Intrusions

Hana Kidaka, MJLST Staffer

On November 24, 2014, hackers stole confidential information from the servers of Sony Pictures Entertainment. The hackers claimed to have stolen 100 terabytes of confidential information, including employee Social Security numbers, e-mail conversations between executives, and unreleased films. This Sony hack and “[t]he dramatic increase in cyber intrusions” led the Obama Administration to issue legislative proposals on January 13, 2015 in hopes of strengthening cybersecurity. The Administration’s proposals attempt to: “(1) enhance cybersecurity threat information sharing within the private sector and with the Federal Government; (2) establish a single standard to protect individuals by requiring businesses to notify them if their personal information is compromised; and (3) strengthen the ability of law enforcement to investigate and prosecute cybercrimes.”

Following the legislative proposals, President Obama signed executive orders that encourage companies to share cybersecurity information with each other and the government, and that allow the government to impose penalties on foreign “individuals or entities that engage in significant malicious cyber-enabled activities.” The President has also been in talks with foreign governments to strengthen cybersecurity. For example, on September 25, 2015, President Obama announced that the U.S. and China have agreed to work together to prevent cybercrimes by providing “timely responses . . . to requests for information and assistance concerning malicious cyber activities” and by “identify[ing] and promot[ing] appropriate norms of state behavior in cyberspace within the international community.” While this is a small step in the right direction, it is important that our federal government establish a comprehensive cybersecurity legal framework that will effectively combat cyber threats while also taking into account the privacy concerns of many individuals and companies. It will be interesting to see if and how Congress addresses these conflicting interests in the near future.


The Shift Toward Data Privacy: Workplace, Evidence, and Death

Ryan Pesch, MJLST Staff Member

I’m sure I am not alone in remembering the constant urgings to be careful about what I post online. I was told not to write anything in an email I wouldn’t want made public, and I suppose it made some sense that the internet was commonly viewed as a sort of public forum. It was the place teens went to relieve their angst, to post pictures, and to exchange messages. But the demographic of people who use the internet is constantly growing. My mom and sister share their garden interests on Pinterest (despite the fact that my mom needs help downloading her new podcasts), and as yesterday’s teens become today’s adults, what people are comfortable putting online continues to expand. The advent of online finances, for example, illustrates that the online world is about far more than frivolity. The truth of the matter is that the internet shapes the way we think about ourselves. And as Lisa Durham Taylor observed in her article for MJLST in the spring of 2014, the courts are taking notice.

The article concerns the role of internet privacy in the employment context, noting that where once a company could monitor its employees’ computer activity with impunity (after all, the activity occurred on company time and with company resources), courts have recently realized that the internet stands for more than dalliance. Taylor notes that the connectedness of employees brings both advantages and disadvantages to the corporation. It both helps and hinders productivity, offering a more efficient way of accomplishing a task with one hand while providing the material for procrastination with the other. When the line blurs and people start using company time for personal acts, the line-drawing can get tricky. Companies have an important interest in preserving the confidentiality of their work, but courts have recently been drawing the lines to favor the employee over the employer. This is in stark contrast to early decisions, which gave companies a broad right to discharge an “at-will” employee and found no expectation of privacy in the workplace. Luckily, courts are beginning to recognize that the nature of a person’s online interactions makes a company’s snooping more analogous to going through an employee’s personal possessions than to monitoring an employee’s efficiency.

I would add to the picture the recently decided Supreme Court case of Riley v. California, in which the Court held that police need a warrant to search an arrestee’s phone. The Court reasoned that a warrantless search of a cell phone could not be justified, because the nature of the technology means that police would intrude on far more than necessary to conduct normal business. It likened the situation to earlier restrictions preventing police from searching locked possessions incident to arrest, and wryly observed that cell phones have become “such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy.” The “vast quantities of personal information” on a phone, and the fact that the phone itself is not a weapon, make its search unjustified in the course of a normal arrest.

This respect for the data of individuals seems to signal a new and incredibly complicated age of law. When does a person have the right to protect their data? When can that protection be broken? As discussed in a recent post on this blog, there is an ongoing debate about what to do with the data of decedents. To me, a conservative approach makes the most sense, especially in the context of the cases discussed by Lisa Taylor and the decision in Riley v. California. However, courts have sided with those seeking access because a will grants the property of the deceased to the heirs, a principle that has been extended to online "property." What Rebecca Cummings points out, to help swing the balance back in favor of privacy, is that it is not just the property of the deceased to which access is being granted: the nature of email means that a person's inbox holds copies of letters from others that may never have been intended for anyone else's eyes.

I can only imagine the number of people who, had they the presence of mind to consider this eventuality, would act differently either in the writing of their will or in the management of their communications. I am sure this is already something lawyers advise their clients about when discussing estate plans, but for many, death comes before they have the chance to fully consider these things. As generations who have grown up on the internet start to encounter the issue in earnest, I have no doubt that the message will spread, but I can't help feeling it should be spreading already. So: what would your heirs find tucked away in the back of your online closet? And if the answer is something you'd rather not think about, perhaps we should support the shift toward privacy in more aspects of the digital world.


I’m Not a Doctor, But…: E-Health Records Issues for Attorneys

Ke Huang, MJLST Lead Articles Editor

The Health Information Technology for Economic and Clinical Health Act of 2009 (HITECH Act) generally provides that, by 2015, healthcare providers must comply with the Act's electronic health record (EHR) benchmarks or the government will reduce those providers' Medicare payments by one percent.

These provisions of the HITECH Act are more than a health policy footnote. For attorneys especially, the growing use of EHRs raises several legal issues. Indeed, in Volume 10, Issue 1 of the Minnesota Journal of Law, Science & Technology, published six years ago, Kari Bomash analyzed the consequences of EHRs in three legal respects. In Privacy and Public Health in the Information Age, Bomash discusses how an amendment to the Minnesota Health Records Act relates to: (1) privacy, especially patient consent; (2) data security (Bomash was almost prescient, given the growing security concerns); and (3) data-use regulations that affect medical doctors.

Bomash's discussion is not exhaustive. EHRs also raise legal issues running the gamut from intellectual property and e-discovery to malpractice. Given that software runs EHRs, the IP industry is very much implicated, so much so that some proponents of EHRs even support open source. (Another MJLST article explains the concept of open source.)

E-discovery may be more straightforward. Like other parties maintaining electronically stored information, health entities storing EHRs must comply with the rules governing discovery.

And malpractice? One doctor suggested in a recent Wall Street Journal op-ed that EHRs interfere with a doctor's quality of care. Since quality of care, or the lack thereof, is correlated with malpractice actions, commentators have raised the concern that EHRs could increase malpractice actions. A 2010 New England Journal of Medicine study addressed this topic but could not provide a conclusive answer.

My personal experience with EHRs is, in fact, one of the reasons that led me to want to become an attorney. As a child growing up in an immigrant community, I often accompanied adult immigrants to interpret at contract closings, small-business transactions, and even clinic visits. Helping in those matters sparked my interest in law. At one of those clinic visits, I noticed that an EHR print-out for my female cousin stated that she was male. I explained the error to her.

“I suppose you have to ask them to change it, then,” she said.

I did. From talking to the clinic administrator, I learned that the EHR software was programmed to recognize female names and that, for ambiguous names like my cousin's, it automatically categorized the patient as male, even when the visit was for an ob-gyn check-up.
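For readers curious how such an error creeps in, the administrator's description suggests logic along these lines. This is purely a hypothetical sketch, not the clinic's actual code: the name list and function names are invented for illustration, and the point is that any name absent from the "recognized female" list silently falls through to a male default.

```python
# Hypothetical reconstruction of the flawed name-based categorization the
# clinic's EHR software apparently used. Names not found on a hard-coded
# "recognized female" list default to male, so every ambiguous name is
# silently mis-categorized. All names and identifiers here are illustrative.
KNOWN_FEMALE_NAMES = {"mary", "susan", "elena"}  # invented example list

def infer_sex(first_name: str) -> str:
    """Return 'F' only for names on the list; everything else defaults to 'M'."""
    return "F" if first_name.lower() in KNOWN_FEMALE_NAMES else "M"

print(infer_sex("Mary"))    # on the list, so categorized as female
print(infer_sex("Jordan"))  # ambiguous name falls through to the male default
```

The obvious fix, and the lesson for anyone drafting requirements for clinical software, is to make sex an explicit, required intake field rather than something inferred from a name.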