
6th Circuit Aligns With 7th Circuit on Data Breach Standing Issue

John Biglow, MJLST Managing Editor

To bring a suit in federal court, an individual or group of individuals must satisfy Article III's standing requirement. As recently clarified by the Supreme Court in Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), to meet this requirement a "plaintiff must have (1) suffered an injury in fact, (2) that is fairly traceable to the challenged conduct of the defendant, and (3) that is likely to be redressed by a favorable judicial decision." Id. at 1547. When data breach cases have reached the federal courts of appeals, there has been some disagreement as to whether the risk of future harm from a breach, and the costs spent to guard against that harm, qualify as "injuries in fact" under Article III's first prong.

Last spring, I wrote a note on Article III standing in data breach litigation in which I highlighted the circuit split on the issue and argued that the reasoning of the 7th Circuit in Remijas v. Neiman Marcus Group, LLC, 794 F.3d 688 (7th Cir. 2015) was superior to that of its sister circuits and made for better law. In Remijas, the plaintiffs were a class of individuals whose credit and debit card information had been stolen when Neiman Marcus Group, LLC experienced a data breach. A portion of the class had not yet experienced any fraudulent charges on their accounts and asserted Article III standing based on the risk of future harm and the time and money spent mitigating that risk. In holding that these plaintiffs had satisfied Article III's injury in fact requirement, the court drew a critical inference: when a hacker steals a consumer's private information, "[p]resumably, the purpose of the hack is, sooner or later, to make fraudulent charges or assume [the] consumers' identit[y]." Id. at 693.

This inference stands in stark contrast to the reasoning of the 3rd Circuit in Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011). The facts of Reilly were similar to those of Remijas, except that in Reilly, Ceridian Corp., the company that had experienced the data breach, stated only that its firewall had been breached and that its customers' information may have been stolen. In my note, mentioned supra, I argued that this difference in facts was not enough to wholly distinguish the two cases and overcome a circuit split, in part because of the Reilly court's characterization of the risk of future harm. The Reilly court found that the risk of misuse of the information was highly attenuated, reasoning that whether the plaintiffs would experience an injury depended on a series of "if's," including "if the hacker read, copied, and understood the hacked information, and if the hacker attempts to use the information, and if he does so successfully." Id. at 43 (emphasis in original).

Often in the law, we are faced with an imperfect or incomplete set of facts; any time an individual's intent is at issue in a case, this is a certainty. When faced with these situations, lawyers have long used inferences to differentiate between more and less likely accounts of the missing facts. In a data breach case, both parties will almost always have little to no knowledge of the intent, capabilities, or plans of the hacker. Still, there is room for reasonable inferences about these facts. When a hacker is sophisticated enough to breach a company's defenses and access data, it makes sense to assume the hacker is sophisticated enough to use that data. Further, because executing a data breach is illegal and therefore risky, it makes sense to assume that the hacker seeks to gain from the act. Thus, as between the Reilly and Remijas courts' characterizations of the likelihood of misuse of data, the better rule seemed to me to be to assume that the hacker is able to use the data and plans to do so in the future. And if there are facts tending to show that this inference is wrong, it is far more likely at the pleading stage that the defendant corporation, rather than the plaintiffs, will be in possession of that information.

Since Remijas, two data breach cases have reached the federal courts of appeals on the issue of Article III standing. In Lewert v. P.F. Chang's China Bistro, Inc., 819 F.3d 963, 965 (7th Cir. 2016), the court unsurprisingly followed its recent precedent in Remijas and found that Article III standing was properly alleged. In Galaria v. Nationwide Mut. Ins. Co., a recent 6th Circuit case, the court had to make an Article III ruling without the constraint of an earlier ruling in its circuit, leaving it free to choose what rule and reasoning to apply. Galaria v. Nationwide Mut. Ins. Co., No. 15-3386, 2016 WL 4728027 (6th Cir. Sept. 12, 2016). In that case, the plaintiffs alleged, among other claims, negligence and bailment; these claims were dismissed by the district court for lack of Article III standing. In alleging that they had suffered an injury in fact, the plaintiffs alleged "a substantial risk of harm, coupled with reasonably incurred mitigation costs." Id. at *3. In holding that this was sufficient to establish Article III standing at the pleading stage, the Galaria court found the Remijas court's inference persuasive, stating that "[w]here a data breach targets personal information, a reasonable inference can be drawn that the hackers will use the victims' data for the fraudulent purposes alleged in Plaintiffs' complaints." Moving forward, it will be intriguing to watch how circuits that have not yet faced this issue rule on it and whether, if the 3rd Circuit keeps its current reasoning, the question eventually makes its way to the Supreme Court of the United States.


Navigating the Future of Self-Driving Car Insurance Coverage

Nathan Vanderlaan, MJLST Staffer

Autonomous vehicle technology is not new to the automotive industry. For the most part, however, these technologies have been incorporated as back-up measures for when human error leads to poor driving. For instance, car manufacturers have offered packages that incorporate features such as blind-spot monitoring, forward-collision warnings with automatic braking, and lane-departure warnings and prevention. But the recent push by companies like Google, Uber, Tesla, Ford, and Volvo is making the possibility of fully autonomous vehicles a near-future reality.

Autonomous vehicles will arguably be the next great technology responsible for saving countless lives. According to alertdriving.com, over 90 percent of accidents are the result of human error. By taking human error out of the driving equation, The Atlantic estimates that the full implementation of automated cars could save up to 300,000 lives a decade in the United States alone. In a show of federal support, U.S. Transportation Secretary Anthony Foxx released an update in January 2016 to the National Highway Traffic Safety Administration's (NHTSA) stance on autonomous vehicles, promulgating a set of 15 standards to be followed by car manufacturers in developing such technologies. Further, in March 2016, the NHTSA promised $3.9 billion in funding over 10 years to "support the development and adoption of safe vehicle automation." As the world makes the push for fully autonomous vehicles, the insurance industry will have to respond to the changing nature of vehicular transportation.
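That estimate is easy to sanity-check with rough arithmetic (the annual fatality figure below is an assumed round number of my own, not one drawn from the article): if the U.S. sees on the order of 35,000 traffic deaths per year and roughly 90 percent of crashes stem from human error, then eliminating that error entirely would imply

\[
35{,}000 \ \tfrac{\text{deaths}}{\text{year}} \times 0.90 \times 10 \ \text{years} \approx 315{,}000 \ \text{deaths per decade,}
\]

which lands in the neighborhood of The Atlantic's 300,000 figure, albeit under the optimistic assumption that full automation prevents every human-error crash.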

One of the companies leading the innovative charge is Tesla. New Tesla models may now come equipped with an "autopilot" feature. This feature incorporates multiple external sensors that relay real-time data to a computer that navigates the vehicle in most highway situations, allowing the car to slow down when it encounters obstacles and to change lanes when necessary. Elon Musk, Tesla's CEO, estimates that the autopilot feature can reduce Tesla driver accidents by as much as 50 percent. Still, the system is not without issues. This past June, a user of the autopilot system was killed when his car collided with a tractor trailer that the car's sensors failed to detect. Tesla quickly distributed a software update that Musk claims would have been able to detect the trailer. The accident has prompted discussion of how insurance claims and coverage will adapt to accidents that vehicle owners no longer cause.

Auto insurance is a state-regulated industry. Currently, there are two significant insurance models: no-fault systems and the tort system. While each state's system has many differences, each model has the same over-arching structure. No-fault insurance models require the insurer to pay parties injured in an accident regardless of fault; under the tort system, the insurer of the party who is responsible for the accident foots the bill. Under both systems, however, the majority of insurance premium costs are derived from personal liability coverage. In other words, a significant portion of the insurance coverage structure is premised on the notion that drivers cause accidents. But when the driver is taken out of the equation, the basic concept behind automotive insurance changes.

 

What seems to be the most logical response to the implementation of fully autonomous vehicles is to hold the manufacturer liable. Whenever a car engaged in a self-driving feature crashes, it can be presumed that the crash was caused by a manufacturing defect, and the injured party would then bring a products-liability action to recover for damages suffered in the accident. Yet this system ignores some important realities. One such reality is that manufacturers will likely pass the new cost on to the consumer in the purchase price of the car. These costs could put a car outside the average consumer's price range and could hinder the wide-spread adoption of a safer alternative to human-driven cars. Even if manufacturers don't rely on consumers to cover the bill, the new system will likely require new forms of regulation to protect car manufacturers from going under due to overwhelming judgments in the courts.

Perhaps a more effective method of insurance coverage has been proposed by RAND, a research organization that specializes in evaluating new technologies and suggesting how best to utilize them. RAND has suggested that a universal no-fault system be implemented for autonomous vehicle owners. Under such a system, autonomous car drivers would still pay premiums, but those premiums would be significantly lower as accident rates decrease. For this system to work, regulation would likely have to come from the federal level to ensure the policy is followed uniformly across the United States. One insurer that has begun a program mirroring this philosophy is Adrian Flux in Britain, which offers a plan for drivers of semi-autonomous vehicles that is lower in price than traditional insurance plans. Adrian Flux has also announced that it will update its policies as both the liability debate and driverless technology evolve.

No matter the route chosen by regulators or insurance companies, the issue of autonomous car insurance likely won't come to a head until 2020, when Volvo plans to place commercial, fully autonomous vehicles on the market. Even then, it could be decades before a majority of vehicles on the street have such capabilities. That interval will give regulators, insurers, and manufacturers alike adequate time to develop a system that will best propel our nation toward a safer, autonomous automotive society.


Drinking the Kool-Aid? Why We Might Want to Talk About Our Road Salt

Nick Redmond, MJLST Staffer

Winter is coming. Or at least, according to the 2017 Farmer's Almanac, "winter is back" after an exceptionally mild 2015–2016 season, and with it comes all of the shoveling, the snow-blowing, and the white walkers (er, de-icing) of slippery roads that we missed last year. So what does the most overused Game of Thrones quote and everyone's least favorite season have to do with Kool-Aid (actually, Flavor-Aid)? Just like the origins of the phrase "drinking the Kool-Aid," this post has to do with cyanide. More specifically, the ferrocyanide compounds that we use to coat our road salt and that are potentially contaminating our groundwater.

De-icing chemicals are commonly regarded as the most efficient and effective means of keeping our roads safe and free from ice in the winter. De-icing compounds come in many forms, from solids to slurries to sticky beet juice- or cheese brine-based liquids. The most common de-icing chemical is salt, with cities like Minneapolis spending millions of dollars to purchase upwards of 15,000 tons of treated and untreated salt to spray on their roads. To keep the solid salt from clumping or "caking" and becoming unusable as it sits in storage, it is usually treated with chemicals to ensure that it can be spread evenly on roads. Ferrocyanide (a/k/a hexacyanoferrate(II)) and the compounds sodium ferrocyanide and potassium ferrocyanide are yellow chemicals commonly used as anti-caking additives for road salt in Minnesota and other parts of the country, and they can be found in varying concentrations depending on the product, from 0.0003 ppm to 0.33 ppm. To put those numbers in perspective, the CDC warns that cyanide starts to produce harmful effects on humans at 0.05 mg/dL, or 0.5 ppm.
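For readers who want to see how those units line up, the mg/dL figure converts to ppm straightforwardly (this is a routine unit conversion supplied here, using the common rule of thumb that 1 mg/L of a dilute aqueous solution is roughly 1 ppm):

\[
0.05 \ \tfrac{\text{mg}}{\text{dL}} \times \tfrac{10 \ \text{dL}}{1 \ \text{L}} = 0.5 \ \tfrac{\text{mg}}{\text{L}} \approx 0.5 \ \text{ppm},
\]

which is why the 0.0003 to 0.33 ppm range quoted for treated salt products sits at or below the CDC's threshold rather than above it.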

But why are chemicals on our road salt troubling? Road salt keeps ice from forming a bond with the pavement by lowering the freezing point of snow as it falls on the ground. As the salt gets wet it dissolves, and it and the chemicals that may be attached to it have to go somewhere, whether that is our surface and ground waters or the air if the liquids evaporate. The introduction of these chemicals into groundwater is of particular concern for the 75% of Minnesotans, and people like them, who rely on groundwater sources for drinking water. The potential for harm arises when ferrocyanide compounds are exposed to light and rapidly decompose, yielding free cyanide (CN− and HCN). Further, as waters contaminated with cyanide are chlorinated and introduced to acids, they may produce cyanogen chloride, a highly toxic gas that was once considered for use in chemical warfare. Taking into account the enormous amount of salt used and stored each year, even small concentrations may add up over time. And although the EPA has placed cyanide on the Clean Water Act's list of toxic substances, the fact that road salt is a non-point source means that it's entirely up to states and municipalities to decide how they want to regulate it.

The good news is that ferrocyanides are among the least toxic cyanide salts and tend not to release toxic free cyanide. What's more, the concentrations of ferrocyanide on road salt are generally quite low, are spread out over large areas, and are further diluted by precipitation, evaporation, and existing ground and surface water. To really affect drinking water, the ferrocyanide has to (1) not evaporate into the air, (2) make its way through soil and into aquifers, and (3) arrive in large enough concentrations to actually harm humans, something that can be difficult for a large molecule. Despite all of this, however, the fact that Minneapolis alone is dumping more than 15,000 tons of road salt each year, some of it laced with ferrocyanide, should give us pause. That's the same weight as 15,000 polar bears being released in the city streets every year! Most importantly, these compounds seep into our garden soil, stick to our car tires and our boots, and soak the fur of our pets and wild animals. While cyanide on road salt certainly isn't a significant public health risk right now, being a part of local conversations to explore and encourage alternatives (and there are a number of alternatives) to prevent future harm might be something to consider.

At the very least think twice about eating snow off the ground (if you weren’t already). Especially the yellow stuff.


Digital Health and Legal Aid: The Lawyer Will Skype You Now

Angela Fralish, MJLST Invited Blogger

According to Dr. Shirley Musich's research article, Homebound Older Adults: Prevalence, Characteristics, Health Care Utilization and Quality of Care, homebound patients are among the top 5% of medical service users, with persistently high expenses. About 3.6 million homebound Americans are in need of continuous medical care, but with the cost of healthcare rising, the number of elderly people retiring, hospitals closing in increasing numbers, and physician shortages anticipated, caring for the homebound is becoming expensive and impractical. In an article titled Care of the Chronically Ill at Home: An Unresolved Dilemma in Health Policy for the United States, author Karen Buhler-Wilkerson notes that even after two centuries of experiments in delivering and financing home health care, too many issues remain unresolved.

One potential solution lies at the crossroads of technology, medicine, and law. Telemedicine is a well-known medical technology that provides cost-effective care for the homebound. Becker's reports that telemedicine visits are often more affordable, and that access is a crucial component, both in enabling patients to communicate through a smartphone and in allowing clinicians to reach patients at a distance, particularly those for whom weekly travel to a hospital for necessary follow-ups or check-ins would be costly or simply not feasible. In short, telemedicine is an affordable technology for reaching homebound patients.

Legal aid organizations are also beginning to integrate virtual services for the homebound. For example, at Illinois Legal Aid Online, clients are able to have a live consultation with a legal professional, and in Maryland, a virtual courthouse is used for alternative dispute resolution proceedings. Some states, such as Alaska and New York, have advocated for virtual consults and hearings as part of a best practices model. On September 22nd of this year, the ABA launched a free virtual legal advice clinic to operate as an online version of a walk-in clinic. However, despite these responsive measures, virtual technology for legal aid is expensive and burdensome.

But what about the cancer patient who can’t get out of bed to come in for a legal aid appointment, but needs help with a disability claim to pay their medical bills? Could diversifying telehealth user interfaces help cure the accessibility gap for both medicine and law?

Some organizations have already begun collaborations to address these issues. Medical-legal partnerships provide comprehensive care through cost-effective resource pooling of business funds and federal and corporate grant money. These partnerships address the sociolegal determinants affecting a patient's health: lawyers resolve the economic and legal factors perpetuating a health condition while physicians treat it biologically. One classic example is the homebound patient with aggravated asthma living in a house with mold spores. A lawyer works to get the housing up to code, which reduces the asthma and, consequently, future medical costs. These partnerships are being implemented nationwide because of their proven results in decreasing the cost of care. With telehealth, the homebound asthmatic patient could log on to a computer, or work through an app on a phone, to show the attorney the living conditions in high resolution, in addition to receiving medical treatment.

The government seems favorable to these solutions. The Health Resources and Services Administration allocated $18 million to health center collaborations seeking to improve quality of care through health information technology, and the FDA has created the Digital Health program to encourage and foster collaborations in technologies that promote public health. Last year alone, Congress awarded $4 million to the Legal Services Corporation, which then disbursed that money among 15 legal aid organizations, many of which "will use technology to connect low-income populations to resources and services." Telehealth innovation is a cornerstone for medical and legal professionals committed to improving low-cost, quality patient care, especially for the homebound.

Medical facilities could even extend this same technology profitably by offering patients an in-house "attorney consult" service to improve quality of care. Much like the once-novel cordless phone, a telehealth device could be used in-house or with outpatients, giving a health organization a leading market edge in addition to decreasing costs. Technology has yet to fully explore the number of ways that telehealth can be used to deliver legal services that improve healthcare.

So if there is a multidisciplinary call for digital aid, why aren't we seeing more of it on a daily basis? For one, the regulatory landscape may cause confusion. The FDA governs medical devices, the FTC regulates PHI data breaches, and the FCC governs devices using broadcast services or electromagnetic spectrum. Telehealth touches on all of these, resulting in jurisdictional overlap among regulatory agencies. Other reasons include resistance to new technology and ever-evolving legislation and policies. In Teladoc, Inc. v. Texas Medical Board, a standard of care issue was raised when the medical board sought to bar physicians from prescribing medicine to patients they had not yet seen in person. One physician in the case stated that without telehealth, his homebound patients would receive no treatment. Transitioning from traditional in-person consultations to virtual assistance can greatly improve a patient's health, but it has brought an entourage of notable concerns.

Allegedly, telehealth was first used by Alexander Graham Bell in 1876, when he made a phone call to his doctor. Over 140 years later, the technology is used by NASA for outer-space health consults. While it is still relatively new, especially for collaborative patient treatment by doctors and lawyers, used wisely it could spark an interdisciplinary renaissance in using technology to improve healthcare systems and patients' lives.

From all perspectives, virtual aid is a well-funded future component of both the medical and legal fields. It can be used in the legal sense to help people in need, in the business sense as an ancillary convenience service generating profits, or in the medical sense to provide care for the homebound. The trick will be finding engineers who can secure multiuse interfaces while meeting federal regulations and public demand. Only time will tell whether such a tool can be efficiently developed.


Haiti, Hurricanes and Holes in Disaster Law

Amy Johns, MJLST Staffer

The state of national disaster relief depends greatly on the country and that country's funds. Ryan S. Keller's article, "Keeping Disaster Human: Empathy, Systematization, and the Law," argues that proposed legal changes to natural disaster laws (both national and international) could have negative consequences for the donative funding of disaster relief. In essence, he describes a potential trade-off: do we want to risk losing the money that makes disaster relief possible for the sake of more effectively designating and defining disasters? These calculations are particularly critical for countries that rely heavily on foreign aid to recover after national disasters.

In light of recent tragedies, I would point to a related difficulty: what happens when the money is provided but, because of a lack of accountability or governing laws, the funds never actually reach their intended purposes? Drumming up financial support is all well and good, but what if no impact is ever made because there are no legal and institutional supports in place?

Keller brings up a common reason to improve disaster relief law: "efforts to better systematize disaster may also better coordinate communication procedures and guidelines." There is a fundamental difficulty in disaster work when organizations don't know exactly what they are supposed to be doing. A prime example of this lack of communication and guidelines is Haiti, where disaster relief efforts are largely dependent on foreign aid. The fallout from Hurricane Matthew has resurrected critiques of the 2010 earthquake response, most prominently the report that the Red Cross claimed to have provided homes for 130,000 people when it had in fact built only six. Though the Red Cross has since disputed these claims, the fiasco pointed to an extreme example of NGOs' lack of accountability to donors. Even when such efforts go as planned and are successful, the concern among many is that they build short-term solutions without helping to restructure institutions that will last beyond the presence of these organizations.

Could legal regulations fix problems of accountability in disaster relief? If so, the need for those considerations is imminent: climate change means that similar disasters are likely to occur with greater frequency, so the need for effective long-term solutions will only become more pressing.


Permissionless Innovation or Precautionary Principle: the Policy Menu of the Future

Ethan Konschuh, MJLST Staffer

In their recent paper, Guns, Limbs, and Toys: What Future for 3D Printing?, published in the Minnesota Journal of Law, Science, and Technology Volume 17, Issue 2, Adam Thierer and Adam Marcus discussed the potential regulatory frameworks for technological innovations that could spur what they call “the next great industrial revolution.”  They believe that 3D printing, one such innovation, could offer such great benefits that it could significantly enhance global welfare.  However, they worry that preemptive regulations on the technology could undermine these benefits before giving them a chance to be realized.  The paper advocates for a method of regulation called “permissionless innovation,” as opposed to regulations following the “precautionary principle.”  While there are many pros to the former, it could leave unchecked the risks curtailed by the latter.

“Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default.”  It follows from the idea that unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, should they arise, can be addressed later.  The authors point to numerous benefits of this approach with respect to emerging technologies.  One of the most obvious is that this type of regulatory framework does not prematurely inhibit potential benefits.  “Regulatory systems based on precautionary thinking focus on preemptive remedies that aim to predict the future and its hypothetical problems. But if public policy is rooted in fear of hypothetical worst-case scenarios, it means that best-case scenarios will never come about.”  It would also preserve the modern startup culture in which “just about anyone can afford to launch a business.”  Implementing a framework based on the precautionary principle would create barriers to entry and raise the cost of innovation.  It would also reduce the ability to maximize competitive advantage through trial and error, which refines both the technology and the efficient allocation of development resources.  As an example of the potential detriments of preemptive regulation to competitive advantage, the authors point to the differing policies of Europe and the U.S. during the mid-nineties internet explosion: the former preemptively regulated while the latter allowed permissionless innovation, and the result was that the U.S. became a global leader in information technologies while Europe lagged far behind.

An alternative regulatory approach discussed in the article is based on the precautionary principle, which generally refers to the belief that new innovations should be curtailed or disallowed until it can be proven that they will not cause harm.  This approach, while posing problems of its own, discussed above, would solve some of the problems arising under permissionless innovation.  While there are many economic and social benefits to permissionless innovation as the bedrock on which policy rests, it inherently allows for the “error” half of “trial and error.”  The whole concept is rooted in ex post regulation: creating policy to correct for problems that have already occurred.  While traditionally, as the difference in internet regulation between Europe and the U.S. shows, the risk of error has not outweighed the resulting benefits, new technologies pose new risks.

For example, in the realm of 3D printing, one of the hot topics is 3D printed firearms.  Current laws would not make 3D printed guns illegal, as most regulations focus on the sale and distribution of firearms, not creation for personal use.  The reasons why it might be more prudent to adopt a precautionary principle approach to regulating this technology are obvious.  Adopting an ex post approach to something with such dire potential consequences could be disastrous, especially considering the amount of time required to adopt policy and implement regulations.  Permissionless innovation could thus become self-defeating: major tragedies resulting from 3D printing could bring about exactly what advocates of permissionless innovation seek to prevent in the first place, strict regulation that undermines the development of the technology.

The debate will likely heat up as technology continues to develop.  In the era of self-driving cars, private drones, big data, and other technologies that continue to change the way humans interact with the world around them, 3D printing is not the only area in which this discussion will arise.  The policy decisions made in the next few years will have far-reaching consequences that are difficult to predict.  Do the economic and social benefits of being able to manufacture goods at home outweigh the risks of legal, discreet self-armament and its consequences?  The proverbial pill may be too large for some to swallow.


Industry Giants Praise FDA Draft Guidance on Companion Diagnostics

Na An, MJLST Article Editor

In July 2016, the U.S. Food and Drug Administration (FDA) published a draft guidance document titled “Principles for Codevelopment of an In Vitro Companion Diagnostic Device with a Therapeutic Product.”  The new draft guidance aims to serve as a “practical guide” to assist sponsors of drugs and in vitro diagnostics (IVDs) in developing the two products simultaneously.  So far, FDA has received six public comments on the draft guidance, mostly positive, with Illumina calling the document “worth the wait” and Genentech calling it “crucial for the advancement of personalized medicine.”

A companion diagnostic is a medical device, in this case an in vitro device, that provides information essential for the safe and effective use of a corresponding drug or biological product.  It is a critical component of precision medicine, the cornerstone of which is the ability to identify and measure biomarkers indicative of the patient’s response to a particular therapy.  Approximately a quarter of new drug approvals over the past two years involved a drug-IVD companion pairing.  However, the codevelopment process is complicated by the fact that the two products may be developed on different schedules, subject to different regulatory requirements, and reviewed by different centers at the FDA.  The long-awaited draft guidance was in the works for more than a decade and is intended to help sponsors and FDA reviewers navigate these challenges.

In this draft guidance, FDA reiterates its general policy that IVD devices should receive marketing approval contemporaneously with the authorization of the corresponding therapeutic product.  FDA states that “the availability of an IVD with ‘market-ready’ analytical performance characteristics . . . is highly recommended at the time of initiation of clinical trials intended to support approval of the therapeutic product.”  FDA also recommends: “Using an analytically validated test is important to protect clinical trial subjects, to be able to interpret trial results when a prototype test is used, and to help to define acceptable performance characteristics for the development of the candidate IVD companion diagnostic.”  The new draft guidance provides much more information about the technical and scientific aspects of the development process.  For example, the draft guidance details the use of IVD prototype tests for the purpose of testing the drug early in the development, considerations for planning and executing a therapeutic product clinical trial that also includes the investigation of an IVD companion diagnostic, the use of a prospective-retrospective study approach, the use of training and validation sample sets, and the use of a master file for the therapeutic product to provide data in support of the IVD companion diagnostic marketing application.

The draft guidance has received high marks from industry giants. Illumina said the draft “has been a long time coming, eagerly anticipated, but worth the wait.”  Yet the gene-sequencing giant also seeks more clarity from FDA on risk assessments and expectations for analytical validation prior to investigational IVD use in trials.  “There is an opportunity here for FDA to add clarity on this important decision making process. We suggest this discussion on significant risk versus nonsignificant risk determinations be expanded and put into an appendix with examples. This is a unique opportunity for FDA to help sponsors get this process right,” Illumina says.  On a similarly positive note, Genentech called the draft “crucial for the advancement of personalized medicine” and supplementary to two previous guidance documents on next-generation sequencing.  Genentech also notes that the scope of this IVD and drug codevelopment draft guidance “is limited, and therefore it does not address the requirements for development of complementary diagnostics or the challenges of co-development using high-throughput technologies such as Next-Generation Sequencing (NGS) based test panels, which are an increasingly attractive tool for both developers and providers.”  AstraZeneca, for its part, seeks more guidance on complementary diagnostics, clarification of the distinction between “patient enrichment” and “patient selection,” and the resulting considerations in determining significant risk uses of investigational devices.

We eagerly await FDA’s response to these comments and the guidance’s impact on the codevelopment of drug-IVD companions.


Dastardly Dementor Dad Dupes Daughter

Tim Joyce, Editor-in-Chief, Volume 18

Between last week’s midterm exams and next week’s Halloween shenanigans, the Forum proudly presents some light-hearted and seasonally appropriate issue spotting. In this week’s issue: Drone Dementors!

This story has been going around recently: a Wisconsin man pranked his daughter hard by retrofitting his unmanned aircraft (aka, a “drone”) with fishing line and some well-positioned strips of black cloth (video here). Apparently she’s a big fan of the Harry Potter books and films, so we’re sure she recognized it immediately as a real-life incarnation of the soul-sucking guardians of the wizard prison known as Dementors. Undoubtedly, the fact that it was hovering around her backyard elicited some kind of hilarious reaction. Twitter (*strong language alert*) is still guffawing, but we’re pretty sure not everyone would instantly have recognized this as a joke.

While it’s at least arguable that this flight complies with many/most of the FAA’s recently finalized drone regs, let’s take a moment to examine some more creative theories of potential liability behind this prankster parent’s aerial antics:

  1. Negligent Infliction of Emotional Distress. First, let’s admit that there are probably much more effective ways to punish your dad for scaring you than a lawsuit. But, for an exceedingly litigious daughter of average sensitivities, the argument could be made that dad should have known better. In other words, Cardozo’s proximate cause “foreseeability” analysis from Palsgraf rears its ugly complicated head once again! We’ll admit that the eggshell plaintiff argument might provide a decent defense for pops, but it sure seems risky to wait until after the claim to invoke it.
  2. Intentional Infliction of Emotional Distress. See above, except now dad knows what he’s doing. To the extent that it’s sometimes harder to prove intent than negligence, a plaintiff might want to avoid this particular type of claim. On the other hand, an intentional actor is definitely a less sympathetic defendant, particularly if the jury is full of those darned Harry Potter-loving Millennials. Of course they’ll have time to serve jury duty, what with all that free time gained from “choosing to” live at home, and their generally lax work ethic.
  3. Copyright Infringement. Trademark infringement doesn’t apply when you’re not using the good in commerce. But Section 106 of the Copyright Act gives an exclusive right to control derivative works. This drone-decoration does look an awful lot like the movies, and it’s pretty clear that no one affirmatively granted permission to use the characters in real life. Given the rightsholders’ propensity to vigorously protect the brand / expression, dad would be wise to cool it on the backyard joyrides. On the other hand, he could probably make some kind of fair use argument, arguing something along the lines of “transformative use” — again, much simpler to avoid the issue altogether.
  4. Some crazy-attenuated products liability claim. You can imagine a situation where the drone clips the tree, or the puppet gets tangled in the branches, and then spirals out of control, injuring a bystander. Should the drone manufacturer then be liable? Extending the chain of causation out to the drone assembly factory seems a bit tenuous, but should strict liability apply nonetheless? Does this situation violate the FAA’s prohibition on “careless or reckless operations” or “carriage of hazardous materials?” Should that fact make a difference?

Everyone with half a brain seems to agree that this is just a really great one-time use scenario. But there are real issues to consider with this, as with any, new technology. Plaintiffs may have to get creative when arguing for liability, at least until courts take judicial notice of the power of a Patronus Charm. From those of us here at MJLST, have a fun and safe Halloween!


EmDrives: The End of Newtonian Physics?

Peter Selness, MJLST Staffer

The EmDrive has been the center of much controversy over the past decade, and rightfully so.  But what exactly is an EmDrive, and why does it have the scientific community at odds over the underlying science?  The EmDrive is a type of propulsion system first designed by Roger Shawyer in 2001.  Essentially, it is an RF resonant cavity thruster that relies on electromagnetic radiation projected into the cavity of a cone to produce thrust.

The EmDrive was met with no small amount of criticism when first proposed because it is what is known as a propellantless propulsion system: it consumes no fuel when producing thrust.  Not only does it consume no fuel, it also appears to produce force in only one direction, thus contradicting Newton’s third law that “for every action there is an equal and opposite reaction.”  Such a proposition has been compared to standing on the deck of a sailboat and pushing on the mast to propel it across a lake, or the old adage of “pulling yourself up by your bootstraps.”  The implication of such a device is that our understanding of physics as it relates to Newton’s third law (which has been relied upon for centuries) is either incomplete or completely wrong, which is largely why the EmDrive has received such criticism from the scientific community.
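To make the objection concrete, here is the standard conservation-of-momentum argument in one line (a textbook statement, not anything specific to Shawyer’s device): for an isolated system with no external force and nothing expelled,

\[
\frac{d\vec{p}_{\text{total}}}{dt} = \vec{F}_{\text{ext}} = 0 \quad\Longrightarrow\quad \vec{p}_{\text{total}} = \text{constant}.
\]

A conventional rocket satisfies this because the momentum the craft gains is balanced by the momentum of the exhaust it throws backward; a sealed cavity that accelerates while expelling nothing would change the total momentum of the isolated system on its own, which is precisely what the third law and momentum conservation forbid.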

And yet, there are multiple confirmed reports of EmDrive testing resulting in this unexplainable thrust, arising independently of Roger Shawyer.  Even NASA conducted testing on EmDrives in 2014 and reported measuring a thrust produced by the device.  A similar experiment was carried out by NASA again in 2015 to correct for some reported errors from the first test, but thrust was surprisingly recorded again despite the corrections.  Also, an EmDrive paper has finally been accepted, after peer review, by the American Institute of Aeronautics and Astronautics, lending the technology more authority in the eyes of critics.

Interestingly enough, legal developments have also granted significant legitimacy to the EmDrive.  Roger Shawyer currently has three patents granted on the EmDrive, while two more are still going through the patent process.  Being granted three patents from the UK IP Office means that the physics behind the EmDrive has been thoroughly examined and was found not to violate the laws of physics, as such a violation would inevitably have led to the patent applications being denied.  Furthermore, Shawyer’s most recent patent was, as of October 12th, filed more than 18 months ago, allowing the patent office to disclose the information it contains to the public.  Such a public disclosure should in turn allow for greater scrutiny of Shawyer’s more recent efforts in developing the EmDrive.

The implications of the EmDrive being accepted as a legitimate technology are immense.  First of all, a working propellantless propulsion system would allow future spacecraft to be much lighter and cheaper, without requiring large amounts of rocket fuel for each takeoff.  It would also allow for much faster space travel, possibly allowing humans to reach the outer limits of our solar system in a matter of years and Mars within only a few months.  Furthermore, outside of space propulsion, there is really no limit to the applications it might find.

Despite passing several hurdles in recent years, however, the EmDrive is still a long way from leading us to interstellar travel.  The testing conducted by NASA, while showing positive results, recorded thrust only slightly greater than the experiment’s margin of error.  And while this positive result allowed the work to pass peer review, that does not necessarily mean the technology is sound and will not later be found to have flaws.  In all likelihood, the chance that a new technology has been discovered that, for the first time, violates the laws of physics as we have known them for hundreds of years is far smaller than the chance of finding some sort of experimental error.  But maybe, just maybe, this could be the end of Newtonian physics as we know it.


A New Option for Investors Wary of High Frequency Trading

Spencer Caldwell-McMillan, MJLST Staffer

In his recent paper, The Law and Ethics of High Frequency Trading, published in Volume 17, Issue 1 of the Minnesota Journal of Law, Science, and Technology, Steven McNamara examined the costs and benefits of high frequency trading (HFT) on stock exchanges. He observed that problematic practices such as flash orders and colocation can give HFT firms asymmetrical information advantages over retail and even sophisticated institutional investors.

In June, a new type of exchange was approved by the Securities and Exchange Commission (SEC): IEX Group Inc. was granted exchange status. Before this designation the firm was handling less than 2% of all equity trades; with it, the exchange is likely to see volume increase as more orders are routed its way. IEX uses 38 miles of looped fiber optic cable to combat some of the information asymmetry that HFT firms exploit, slowing incoming orders down by about 350 microseconds, roughly half the time a baseball is in contact with a bat. While this may seem like an insignificant amount of time, the proposal proved extremely controversial: the SEC asked for five revisions to IEX’s application and released its decision at 8 PM on a Friday.
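That delay is roughly what simple optics would predict (the refractive index below is a typical value for optical fiber assumed here, not a figure published by IEX): light in glass travels at about c/n, so a 38-mile coil imposes

\[
t \;=\; \frac{d}{c/n} \;\approx\; \frac{38 \times 1609\ \text{m}}{(3.0\times 10^{8}\ \text{m/s})/1.47} \;\approx\; \frac{6.1\times 10^{4}\ \text{m}}{2.0\times 10^{8}\ \text{m/s}} \;\approx\; 3\times 10^{-4}\ \text{s},
\]

on the order of 300 microseconds, consistent with the roughly 350-microsecond speed bump IEX advertises.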

This speed bump serves two purposes: to stop HFT firms from taking advantage of stale prices on IEX orders, and to prevent them from removing liquidity on other exchanges before IEX’s customers can fill their orders. Critics of this system claim that the speed bump violates rules that require exchanges to fill orders at the best price. IEX, however, points to arrangements like colocation, which let firms pay for faster access to markets by buying space on the stock exchanges’ servers; those policies allow HFT firms to get information faster than even the most sophisticated investors because of their proximity to the data. IEX began operations as an exchange in August, and time will tell whether it can generate profits without compromising its pro-investor stance.

This debate is likely to continue long after public attention has faded from HFT. Institutional investors are the most likely beneficiaries of these changes; in fact, in a letter to the SEC, the Teacher Retirement System of Texas claimed that using IEX to process trades could save the fund millions of dollars a year. More recently, the Chicago Stock Exchange has submitted a proposal to include a similar speed bump on its exchange. Taken together, these two exchanges represent a small fraction of the order volume processed by U.S. exchanges, but the changes could have a lasting impact if they drive institutional investors to change their trading behavior.