Uncategorised

Who Owns Your Tattoos?

Zachary Berger, MJLST Executive Editor

Tattoos are more common today than they have ever been. More than 20 percent of Americans have at least one tattoo, and among millennials the figure is nearly 40 percent. Most people probably assume that they own their tattoos. After all, tattoos are part of your body; how could you not own them? The answer, however, is a bit more complicated than that. A number of lawsuits relating to tattoo ownership have been filed over the years, and a recent one has brought the issue back into the news.

Take-Two Interactive Software is the parent company of 2K Sports and developer Visual Concepts, who create the popular NBA 2K video game series. Solid Oak Sketches is a company claiming ownership of a number of tattoos that appear in the game on several prominent players, including LeBron James and Kobe Bryant. Take-Two has licenses to the players’ likenesses, but Solid Oak claims it alone has the right to license the tattoos that appear on the players in the game. Tattoos are very prominent in the NBA, and if the players had to appear without them, it would break the sense of realism the game attempts to convey. Suits such as this one have occurred before, most notably in 2011 when Mike Tyson’s facial tattoo appeared on Ed Helms in the movie The Hangover Part II. However, all of those cases settled, leaving tattoo ownership in a murky place.

In order to be eligible for copyright protection, a creation must meet three requirements: it must be a work of authorship, it must be fixed in a tangible medium of expression, and it must be original. Whether tattoos meet all of these requirements is up for debate, but many believe they do.

The more interesting question, I think, is who owns the tattoo once it is finished: the artist or the patron? As is frequently the case in the legal field, the answer is likely “it depends.” The default rule in copyright is that ownership belongs to the artist. The University of Minnesota Law School’s own Professor Thomas Cotter is coauthor of one of the earliest and most extensive forays into the question of tattoo ownership. He concluded that there are three possibilities: 1) the artist owns the tattoo in the same way a painter owns what he or she paints on a canvas, 2) the work is a joint work, meaning that “it is prepared by two or more authors with the intention that their contributions be merged into inseparable or interdependent copyrightable expression,” or 3) the work is a work for hire. As explained by Timothy C. Bradley of the Coats & Bennett law firm, a “work for hire” arises when the work is 1) prepared by an employee within the scope of his or her employment or 2) specially ordered or commissioned for use in certain circumstances. A “work for hire” is owned by the party that commissions the work.

Take-Two recently won the dismissal of a potentially large damages claim when Judge Laura Taylor Swain dismissed Solid Oak’s statutory damages claim: the tattoo designs Solid Oak claims ownership of were not registered with the U.S. Copyright Office until 2015, while Take-Two first used them in 2013 with the release of NBA 2K14. To be eligible for statutory damages, a plaintiff must have registered its copyright before the alleged infringement. Solid Oak argued that each new 2K game is a separate instance of infringement, but the judge disagreed.

However, Solid Oak may still seek actual damages, so the case will continue. In its answer and counterclaim, Take-Two relies in particular on the defenses of de minimis use and fair use. De minimis is Latin for “minimal things” and essentially means that the infringement was insignificant and not worthy of judicial scrutiny. Fair use is an affirmative defense that allows limited copying without the copyright owner’s permission for purposes such as criticism, comment, news reporting, teaching, and scholarship. The following factors (from 17 U.S.C. § 107) are used to analyze fair use:

(1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

(2) the nature of the copyrighted work;

(3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

(4) the effect of the use upon the potential market for or value of the copyrighted work.

Solid Oak v. 2K Games may be the first tattoo copyright infringement suit to be decided on the merits rather than settled out of court, and it will thus be a very interesting case to watch going forward. The case is Solid Oak Sketches v. 2K Games and Take-Two Interactive Software, No. 16-CV-724-LTS (S.D.N.Y. filed Feb. 1, 2016).

 


Popular Perceptions May Hold Automated Vehicles Back

Alex Vilisides, MJLST Symposium Editor

Last October, when the Minnesota Journal of Law, Science and Technology (MJLST) co-sponsored a symposium entitled “Automated Vehicles: The Legal and Policy Road Ahead,” experts from a broad range of areas came together to discuss the challenges of self-driving cars. There was excitement that the technology was closer than ever, but also sober discussion of the many legal, ethical, and practical challenges that lay ahead. Recent media reaction to a new academic study highlights a very basic hurdle: autonomous cars must overcome the challenge of being weird.

On April 9, 2015, the University of Michigan announced in a press release the findings of a new study concluding that the use of automated vehicles could increase the incidence of motion sickness. The study asked 3,200 adults what activities they would do in a car that drove itself. From the responses, such as reading, texting, or watching television, the study concluded that “6-12 percent of American adults riding in fully self-driving vehicles” could expect to “suffer moderate or severe motion sickness.” Of course, the “frequency and severity of motion sickness is influenced” not by the inherent nature of an automated vehicle, but “by the activity that one would be involved in instead of driving.”

Outlets from NBC News to Popular Mechanics to The Guardian picked up the story. “Self-Driving Cars Might Make You Vomit,” declared a headline about the study on the Huffington Post. The cost-benefit imbalance is staggering: a technology that may make travel safer, cheaper, more accessible, and less destructive to our environment may come at a cost. That cost, these articles point out, is that if people choose to read or watch television in a self-driving car, some fraction of the population may be at greater risk of motion sickness. Despite this stark contrast, the dominant media narrative focuses on the weird, uncomfortable experience of riding in a self-driving car.

It is unlikely that any of the extraordinary articles published in MJLST’s upcoming symposium issue, focusing on automated vehicles, will receive this type of media attention. This is because an article about these vehicles making people vomit fits the dominant narrative: automated vehicles are weird. They are a strange new technology and a harbinger of a robot-controlled dystopian future. Information that fits this narrative is far more affecting for an average reader than a rational cost-benefit analysis. And this weirdness has consequences: if the benefits are as great as advocates claim, any delay in adoption caused by social pressures and unpopularity carries real costs. The adoption of mass technologies is not pre-ordained. The weirdness battle is one that advocates of automated cars must fight if society is going to adopt this potentially transcendent technology.


The Development of Autonomous Vehicles

Kirsten Johanson, MJLST Staff Member

On a glorious Halloween day last fall, the Minnesota Journal of Law, Science and Technology (MJLST) co-sponsored its annual symposium entitled “Automated Vehicles: The Legal and Policy Road Ahead.” As the title implies, this symposium focused on one of the more exciting and innovative areas of the automotive industry–self-driving cars. Numerous academics, researchers, and innovators in the industry presented on the major issues that impact this industry’s future, specifically in terms of safety and other regulatory standards. Ensuring that proper standards are in place before these vehicles take to the roads is necessary to protect the public against everything from the cars’ operation in bad weather conditions to irresponsible human operators. Many of these standards require legislative involvement to craft state and national policy that strikes the delicate balance between safety and the integration of new technology.

These issues are hugely important in the future development and integration of autonomous vehicle technology. In recognition of that importance, MJLST’s upcoming publication (Volume 16.2) is expected to include four articles written by experts and scholars in the autonomous vehicle field that address a number of major issues arising in conjunction with the development of autonomous vehicle technology. Each of these articles expands on a particular idea presented at the symposium, but, despite the differences in article topics, one recurring theme is the radical change that will follow once a workable regulatory scheme is in place and the vehicles are publicly available.

While numerous regulatory concerns require further development before these so-called self-driving cars are ready to hit the road with the general public at the wheel (or, in Google’s case, lack thereof . . .), none of the major car companies is missing the opportunity to participate in the development of autonomous technology. Every major car manufacturer in the world, from Volkswagen to Audi, as well as the ever technologically savvy Google, is trying its hand at autonomous development (see depictions below). Not only does this competitive push for technological creativity result in a dynamic array of vehicle characteristics and (most importantly) awesome-looking cars, it also indicates that, given the proper regulatory structure, the autonomous vehicle industry is poised to explode.

That being said, this is an area of the law that must continue to adapt with the changing technology. Regulators need to understand the impacts autonomous cars will have on the public from both convenience and safety perspectives and draft legislation accordingly. The symposium articles published in MJLST Volume 16.2 recognize this regulatory need and address specific issues that provide helpful insight to any interested party.

Self-driving cars might be one of the more futuristic ideas of the next generation, and all of the decisions made today will impact the ultimate success of such vehicles. A basic Internet search for “autonomous vehicles” shows the potential is there and growing every day–even companies like Uber are getting involved! The wave of the future is sneaking up on us, and what that wave looks like is up to the initial responsive decisions made in the next five years. Check out MJLST Volume 16.2 for an in-depth analysis of what those decisions might look like and how they will impact various aspects of your life.

In the meantime, take a look at the following examples of autonomous vehicles: the Volkswagen Autonomous Car, the Autonomous Audi, and Google’s Self-Driving Vehicle Prototype.


Tesla’s Autonomous Vehicle Update is the First Step Toward Bright and Automated Future

Ian Blodger, MJLST Staff Member

Tesla Motors recently announced a software update to its Model S that will allow the vehicle to drive autonomously on highways. This development may be the first step toward an almost entirely autonomous vehicle fleet. This change to transportation could have profound implications on everything from city density to traffic safety.

Autonomous vehicles may eventually increase urban density by reducing the requirement for parking spaces in cities. Tesla’s update will give its vehicles the ability to drive completely autonomously from a parking spot to pick up the vehicle owner. While Tesla states that this feature should currently be used only on private property, the programming opens up the possibility of radical changes to the current cityscape. Under the current transportation model, people who commute to work in larger cities must also find a place to park their car during the day. Additionally, this location must be relatively close to the person’s place of work. With autonomous vehicles, however, commuters could be dropped off by their vehicle, which would then find parking outside the city. At the end of the day, the commuter would call their car to pick them up and drive home. This would allow developers to maximize the function of valuable real estate inside cities, currently being used only to park cars. Additionally, parking outside the city center could reduce costs for vehicle owners, since parking structures would have less financial overhead to account for in pricing. Essentially, the currently available technology will eventually allow for increased efficiency of valuable city real estate.

Moreover, autonomous vehicles could provide improved efficiency for commuters on their way to work. Since commuters will not need to concentrate on driving, they could pay attention to other tasks, like preparing for the workday. While Tesla’s updates may not quite allow the vehicle owner to divert their full attention from the road, it gets close. As the quality of autonomous vehicle programming improves and the number of autonomous vehicles increases, commuters will be able to invest their full attention in things other than driving.

Besides these general efficiency improvements, autonomous vehicles may have the added benefit of decreasing motorist deaths. According to the Association for Safe International Road Travel, the United States has more than 37,000 traffic deaths each year. Since many of these deaths are caused by driver error, allowing vehicles to take the wheel could save thousands of lives each year. Tesla’s updates are just the first step toward improving the safety and efficiency of our roadways.

The Minnesota Journal of Law, Science and Technology recently held a symposium on the legal and social implications of autonomous vehicle technology, and will be publishing a number of articles adapted from speakers’ presentations in the upcoming Spring 2015 issue of Volume 16. The articles vary widely in their analyses of the social and legal implications of autonomous vehicles, and will be a great resource for anyone interested in learning more about the subject.


Admission Of Scientific Evidence In Criminal Case Under The Daubert Standard

Sen “Alex” Wang, MJLST Staff Member

In Crawford v. Washington, the Supreme Court, in a unanimous decision, overruled its earlier decision in Ohio v. Roberts by rejecting the admission of out-of-court testimony due to its nature as “testimonial” evidence. However, it was not clear whether the constitutional right of confrontation applied only to traditional witnesses (like the statement in Crawford) or also to scientific evidence and experts. Subsequently, the Court clarified this point in Melendez-Diaz v. Massachusetts and Bullcoming v. New Mexico, where it upheld the confrontation right of defendants to cross-examine the analysts who performed the scientific tests. However, compared with traditional testimony from eyewitnesses, scientific evidence (e.g., blood alcohol measurement, field breathalyzer results, genetic testing) is a relatively new development in criminal law. The advancement of modern technologies creates a new question, namely whether this evidence is sufficiently reliable to avoid triggering the Confrontation Clause.

This question is discussed in a student note & comment titled The Admission of Scientific Evidence in a Post-Crawford World in Volume 14, Issue 2 of the Minnesota Journal of Law, Science & Technology. The author, Eric Nielson, pointed out that the ongoing dispute in the Court about requiring analysts to testify before admitting scientific findings missed the mark. Specifically, scientific evidence, especially the result of an analytical test, is an objective, not subjective, determination. In the courtroom, the testimony of a scientific witness is based mainly on review of the content of the witness’s report, not on the witness’s memories. Thus, according to the author, though Justice Scalia’s bold statement in Crawford that “reliability is an amorphous, if not entirely subjective, concept[,]” may be right in the context of traditional witnesses, it is clearly wrong in the realm of science, where reliability is a measurable quantity. In particular, the author suggested that scientific evidence should be admitted under the standard articulated by the Court in Daubert v. Merrell Dow Pharmaceuticals.

As emphasized by the author, a well-drafted technical report should answer all of the questions that would be asked of the analyst. Given that there is currently no national or widely accepted set of standards for forensic science written reports or testimony, the author proposed that a scientific report conforming to the Daubert standard include the following key components:

1) sample identifier, including any identifier(s) assigned to the sample during analysis;

2) documentation of sample receipt and chain of custody;

3) analyst’s name;

4) analyst’s credentials;

5) evidence of the analyst’s certification or qualification to perform the specific test;

6) the laboratory’s certification;

7) the testing method, either referencing an established standard (e.g., ASTM E2224-10, Standard Guide for Forensic Analysis of Fibers by Infrared Spectroscopy) or a copy of the method if it is not publicly available;

8) evidence of the effectiveness and reliability of the method, whether from peer-reviewed journals, method certification, or internal validation testing;

9) results of testing, including the results of all standards or controls run as part of the testing;

10) copies of all results, figures, graphs, etc.;

11) a copy of the calibration log or certificate for any equipment used;

12) any observations, deviations, and variances, or an affirmative statement that none were observed;

13) the analyst’s statement that all this information is true, correct, and complete to the best of their knowledge;

14) the analyst’s statement that the information is consistent with various hearsay exceptions;

15) evidence of second-party review, generally by a supervisor or qualified peer;

16) posting of a copy to a publicly maintained database; and

17) notification of the authorizing entity via email of the completion of the work and the location of the posting.

Per the author, because scientific evidence is especially probative, the current refusal to demand evidence of reliability, method validation, and scientific consensus has allowed shoddy work and practices to impersonate dependable science in the courts. This is an injustice to the innocent and the guilty alike.


Mechanics or Manipulation: Regulation of High Frequency Trading Since the “Flash Crash” and a Proposal for a Preventative Approach

Dan Keith, MJLST Staff Member

In May of 2010, the Dow Jones Industrial Average plummeted to Depression-era levels and recovered within half an hour. The disturbing part? No one knew why.

An investigation by the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) determined that, in complicated terms, “a rapid automated sale of 75,000 E-mini S&P 500 June 2010 stock index futures contracts (worth about $4.1 billion) over an extremely short time period created a large order imbalance that overwhelmed the small risk-bearing capacity of financial intermediaries–that is, the high-frequency traders and market makers.” After about 10 minutes of purchasing the E-mini, High Frequency Traders (HFTs) began rapidly selling the same instrument to deplete their own overflowing inventories. This unloading came at a time when liquidity was already low, meaning the rapid and aggressive selling accelerated the downward spiral. As a result of this volatility and the overflowing inventory of the E-mini, HFTs were passing contracts back and forth in a game of financial “hot potato.”

In simpler terms, on that day in May of 2010, a number of HFT algorithms “glitched,” generating a feedback loop that sent some stock prices spiraling downward and others skyrocketing.

This event put High Frequency Trading on the map, for both the public and regulators. The SEC and the CFTC have responded with significant regulation meant to curb the mechanistic risks that left the stock market vulnerable in the spring of 2010. Those regulations include new reporting systems like the Consolidated Audit Trail (CAT), which is supposed to allow regulators to track HFT activity through the data it produces as it comes in. Furthermore, Regulation Systems Compliance and Integrity (Reg SCI), a regulation still being negotiated into its final form, would require that HFTs and other eligible financial groups “carefully design, develop, test, maintain, and surveil systems that are integral to their operations. Such market participants would be required to ensure their core technology meets certain standards, conduct business continuity testing, and provide certain notifications in the event of systems disruptions and other events.”

While these regulations are appropriate for the mechanistic failures of HFT activity, regulators have largely overlooked an aspect of High Frequency Trading that deserves more attention–nefarious, manipulative HFT practices. These come in the form of either human decisions or manipulative mechanisms built into the algorithms that animate High Frequency Trading. “Spoofing,” “smoking,” or “stuffing”–the names differ, with small variations, but each of these activities involves placing large orders for stock and quickly cancelling or withdrawing those orders in order to create false market data.

Regulators have responded with “deterrent”-style legislation that outlaws this type of activity. Regulators and lawmakers have yet, however, to introduce regulations that would truly “prevent” as opposed to simply “deter” these types of activities. Plans for truly preventative regulations can be modeled on current practices and existing regulations. A regulation of this kind only requires the right framework to make it truly effective as a preventative measure, stopping “Flash Crash” type events before they can occur.


Commercial Drones: What’s a Business to do?

Neal Rasmussen, MJLST Staff Member

Since the March 2014 decision by administrative law judge Patrick Geraghty, the legality of using a drone for commercial purposes has been up for debate. Geraghty held that the Federal Aviation Administration (FAA) could not regulate the use of drones for commercial purposes under the current regulatory regime because a drone could not be considered an “aircraft” under 14 C.F.R. § 91.13(a) and therefore could not be in violation of the Federal Aviation Regulations.

The FAA’s ability to regulate commercial drones came to the forefront when Raphael Pirker, a professional photographer, was paid by the University of Virginia to provide aerial photographs and video, which he accomplished using a small drone. The FAA claimed the drone was operated in “a careless or reckless manner so as to endanger the life or property of another” in violation of 14 C.F.R. § 91.13(a) and assessed a $10,000 penalty. Pirker promptly challenged this penalty, arguing his drone was not an “aircraft” and could not be in violation of the Federal Aviation Regulations. Geraghty agreed, finding that the definitions of “aircraft” in 49 U.S.C. § 40102(a)(6) (“any contrivance invented, used or designed to navigate or fly in, the air”) and 14 C.F.R. § 1.1 (“a device that is used or intended to be used for flight in the air”) did not include model aircraft or drones.

This decision left a gaping hole in the FAA’s enforcement power and was welcomed by businesses using commercial drones, which could now fly without fear of penalties. Understandably, the FAA immediately appealed. On appeal, the National Transportation Safety Board (NTSB) reversed, finding that drones did meet the definition of “aircraft” under 49 U.S.C. § 40102(a)(6) and 14 C.F.R. § 1.1, and thus that Pirker could be subject to penalties for violating 14 C.F.R. § 91.13(a). The NTSB remanded the case to determine whether Pirker’s operation was conducted in a careless or reckless manner warranting the $10,000 penalty.

In an effort to legally integrate drones into the National Airspace System (NAS), the FAA has since allowed businesses to file for exemptions under Section 333 of the FAA Modernization and Reform Act of 2012. These exemptions act as a gap filler until the FAA releases its proposed regulations for small drones, expected later this year. To date, thirteen Section 333 exemptions have been granted by the FAA. The industry most frequently granted exemptions is the film industry, accounting for seven of the thirteen. Other industries include construction, real estate, agriculture, and surveying. The number of exemptions is expected to grow, as the FAA has received over 200 applications. However, the number of drones in the sky is not expected to skyrocket anytime soon, because the time and expense needed to obtain a Section 333 exemption limit the number of companies that can apply for and be granted one.

Although not ideal, the exemption process is a major step in the right direction for the FAA as it finally begins to work with, not against, businesses to fully integrate drones into the NAS. Full integration into the NAS, however, will not occur until final regulations are released later this year. Even after regulations are released it could take a few years to work out all of the logistics of using drones for commercial purposes. In any event, don’t expect your Amazon package to be delivered by drones anytime soon. Stay tuned!


Biosimilar Drugs Gaining Traction with the FDA

Ethan Mobley, MJLST Staff Member

Recently, an FDA-commissioned panel recommended the Administration approve a cancer-fighting drug developed by Novartis called EP2006. The recommendation is significant because if the FDA follows the panel’s advice and approves the drug, it will be the first time the FDA has approved a “biosimilar” drug under the Biologics Price Competition and Innovation Act (BPCI Act). A biosimilar drug is a drug that is “interchangeable” with or “highly similar” to a biological drug already licensed by the FDA. In the words of the FDA, “[a] biological product may be demonstrated to be ‘biosimilar’ if data show that the product is ‘highly similar’ to the reference product notwithstanding minor differences in clinically inactive components and there are no clinically meaningful differences between the biological product and the reference product in terms of safety, purity and potency.” Interestingly, a biosimilar drug is not considered to be a generic version of its already-approved counterpart; only bioequivalent drugs can be generics. Nonetheless, biosimilar drugs are still desirable for consumers because they are subject to an expedited approval process compared to their already-approved counterparts. Drugs found to be “interchangeable” would also be readily substitutable by a pharmacist without requiring the prescriber’s permission.

In this case, the panel advised the FDA that EP2006 is biosimilar to Amgen’s medication, Neupogen (filgrastim), which is used to boost white-blood-cell production in the body. Unfortunately, Neupogen is predictably expensive. But introduction of EP2006 into the market would make cancer-fighting medication more price-accessible for many patients. Such a decrease in price would follow from increased competition for a white-blood-cell-producing drug and the reduced development costs of EP2006 attributable to the expedited approval process under the BPCI Act. Ideally, a groundbreaking approval of EP2006 under the BPCI Act would also pave the way for other price-accessible medication meant to treat all sorts of ailments.


Is it illegal to test websites for security flaws? Heartbleed & the CFAA

Erin Fleury, MJLST Managing Editor

Earlier this year, the general public became acutely aware of the Heartbleed security bug, which exposed vast amounts of encrypted data from websites using OpenSSL technology (estimated to affect at least 66% of active websites). Software companies are still fixing these vulnerabilities, many servers remain vulnerable, and victims may continue to suffer from these data breaches long after they occurred. While Heartbleed, and the fact that it went undetected for nearly two years, is troubling by itself, it also raises concerns about the scope of the Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030, and white-hat hackers.

The CFAA prohibits “intentionally accessing a computer without authorization or exceed[ing] authorized access” and thereby “obtain[ing] information from a protected computer.” See § 1030(a)(2). It would appear that the Heartbleed bug operates by doing exactly that. In very simplistic terms, OpenSSL authorizes limited requests for information but Heartbleed exploits a flaw to cause systems to send back far more than what is intended. Of course, the CFAA is meant to target people who use exploits such as this to gain unauthorized access to computer systems, so it would seem that using Heartbleed is clearly within the scope and purpose of the CFAA.
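To make the mechanism concrete, here is a deliberately simplified sketch of how a Heartbleed-style request over-reads memory. This is a hypothetical Python model, not OpenSSL’s actual C code; the buffer contents, sizes, and function names are invented for illustration. The core flaw is the same, though: the handler trusts the length the requester claims rather than checking it against the real payload size.

```python
# Hypothetical, simplified model of a Heartbleed-style flaw (CVE-2014-0160).
# All names, sizes, and data are illustrative; real OpenSSL code differs.

# Server memory: an 8-byte heartbeat payload followed by adjacent private data.
MEMORY = b"PAYLOAD!" + b"SECRET_KEY_MATERIAL"
ACTUAL_PAYLOAD_LEN = 8

def heartbeat_response(claimed_len: int) -> bytes:
    """Buggy handler: echoes back `claimed_len` bytes, trusting the
    sender's claimed length with no bounds check against the payload."""
    return MEMORY[:claimed_len]  # can read past the payload into secrets

def heartbeat_response_patched(claimed_len: int) -> bytes:
    """Patched handler: reject requests larger than the actual payload."""
    if claimed_len > ACTUAL_PAYLOAD_LEN:
        raise ValueError("heartbeat length exceeds actual payload")
    return MEMORY[:claimed_len]

# An honest request returns only the payload:
print(heartbeat_response(8))   # b'PAYLOAD!'
# An inflated request leaks the adjacent "secret" bytes:
print(heartbeat_response(27))
```

Testing whether a server remains vulnerable works the same way in miniature: send a request with an inflated length and see whether extra bytes come back, which is precisely the kind of unauthorized “access” the CFAA arguably prohibits.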

The real problem arises, however, for people interested in independently (i.e. without authorization) testing a system to determine if it is still susceptible to Heartbleed or other vulnerabilities. With Heartbleed, the most efficient way to test for the bug is to send an exploitive request and see if the system sends back extra information. This too would seem to fall squarely within the ambit of the CFAA and could potentially be a violation of federal law. Even testing a website which has been updated so that it is no longer vulnerable could potentially be a violation under §1030(b)(“attempting to commit a violation under subsection (a)”).

At first glance it might seem logical that no one should attempt to access systems they do not own, but there are a number of non-nefarious reasons someone might do so. Customers may simply wish to determine whether a website is secure before entering their personal information. More importantly, independent hackers can play a significant role in finding system weaknesses (and thereby helping the owner make the system more secure), as evidenced by the fact that many major companies now offer bounty programs to independent hackers. Yet those who do not follow the parameters of a bounty program, or who discover flaws in systems without such a program, may be liable under the CFAA because of their lack of authorization. Furthermore, the CFAA has been widely criticized as overly broad because, among other reasons, it does not fully distinguish between the reasons one might “exceed authorization.” Relatively minor infractions (such as violating the Terms of Service on MySpace) may be sufficient to violate federal law, and the penalties for fairly benevolent violations (such as exploiting security flaws but only reporting them to the media rather than using the obtained information for personal gain) can seem wildly disproportionate to the offense.

These security concerns are not limited to websites or the theft of data either. Other types of systems could pose far greater safety risks. The CFAA’s definition of a “protected computer” in § 1030(e)(1-2) applies to a wide range of electronics and this definition will only expand as computers are integrated into more and more of the items we use on a daily basis. In efforts to find security weaknesses, researchers have successfully hacked and taken control of implantable medical devices or even automobiles. Merely checking a website to see if it is still susceptible to Heartbleed is unlikely to draw the attention of the FBI, so in many ways these concerns can be dismissed for the simple reason that broad enforcement is unlikely and, of course, many of the examples cited above involved researchers who had authorization. Yet, the CFAA’s scope is still concerning because of the chilling effect it could have on research and overall security by dissuading entities from testing systems for weaknesses without permission or, perhaps more likely, by discouraging individuals from disclosing these weaknesses when they find them.

Without question, our laws should punish those who use exploits (such as Heartbleed) to steal valuable information or otherwise harm people. But the CFAA also seems to apply with great force to unauthorized access that ultimately serves a tremendous societal good and should be somewhat excusable, if not encouraged. The majority of the CFAA was written decades ago and, while there have been recent efforts to amend it, it remains a highly controversial law. Issues surrounding cybersecurity are unlikely to disappear anytime soon, and it will be interesting to see how courts and lawmakers respond to these challenging issues in an evolving landscape.


Somnophilia, The “Sleeping Beauty” Disorder

Becky Huting, MJLST Editor

To date, at least 19 women have come forward accusing Bill Cosby of some type of sexual abuse. Most of the women have told similar stories involving some variant of being drugged, sexually assaulted, or both. The New York Times recently published a piece entitled “When a Rapist’s Weapon is a Drug” that discusses a particular kind of paraphilia some hypothesize is present in Cosby: a sexual deviation that involves drugging and raping unconscious partners. While it is important to note that there has been no formal diagnosis of Cosby (nor any criminal charges), this narrative has opened a dialogue about the contours of sexual disorder diagnosis and what it might mean in our legal regime.

The DSM, or Diagnostic and Statistical Manual of Mental Disorders, is authored by the American Psychiatric Association (APA) and offers a standardized classification of mental disorders. According to the APA, the DSM is “intended to be applicable in a wide array of contexts and used by clinicians and researchers of many different orientations (e.g., biological, psychodynamic, cognitive, behavioral, interpersonal, family/systems).” The DSM’s 5th Edition (DSM 5) is the 2013 update to the APA tool, superseding the previous edition (DSM-IV-TR), which was published in 2000.

Paraphilic disorders are defined by an unusual sexual preference that becomes compulsive. The DSM 5 contains eight distinct paraphilic disorders: exhibitionistic disorder, fetishistic disorder, frotteuristic disorder (arousal from touching or rubbing against a nonconsenting person), pedophilic disorder, sexual masochism disorder, sexual sadism disorder, transvestic disorder, and voyeuristic disorder.

Now returning to Cosby: date rape incidents in which victims are dosed with drugs are very common. Alcohol is the drug most commonly used in sexual assaults, but some perpetrators use so-called “knock-out” drugs. Experts view the motive for the former as simple opportunism, but some in the latter category of druggers have a different motive in mind: they prefer unresponsive partners. This preference for unconscious partners, in which erotic arousal depends on intruding upon an unresponsive person (and sometimes waking that person), is being labeled “sleeping beauty syndrome” or “somnophilia.” Somnophilia is a less common compulsion, but it falls under the more common umbrella of coercion-driven motives in which the perpetrator is aroused by dominating a drugged partner.

According to Dr. Michael First, a psychiatrist and editorial consultant on the new DSM 5, the kind of coercion and domination achieved by drugging a partner is common enough that the APA actually contemplated adding it as a distinct paraphilic disorder, but the idea was shelved in part over concerns that doing so would give rapists added recourse in legal cases. This should be of interest to legal practitioners, for it raises the question: should doctors be thinking about legal implications when they classify disorders? If they are indeed guided by what might serve as a legal defense, one could imagine the whole composition of the DSM changing tomorrow. A couple of examples come quickly to mind. Schizophrenia is a widely accepted mental disorder included in the DSM, and yet it is not infrequently used to bolster a legal defense for very horrific crimes. Consider also sleep-walking disorders. These too appear in the DSM 5, and yet criminal defendants have been known to use sleep-walking as a legal defense for equally ghastly crimes. It seems incongruous to say that the policy is to leave these kinds of “excusing” mental disorders out: they are already in the DSM, and criminal defendants have relied on them for quite some time. If the APA is willing to forgo classifying valid mental disorders in the name of some sense of legal responsibility, it must also consider the consequences for the field of psychiatry and for treatment.

Clearly the American Psychiatric Association’s concern is that giving disorders like somnophilia a name legitimizes them: those ostensibly like Bill Cosby would have a diagnosis to stand behind in court. They could say, “it wasn’t my fault, it’s my disposition. I have a disorder.” (It is also unclear whether a jury would give any sympathetic weight or credence to such a claim.) But the clear question is whether lawyers want doctors doing the legal work for them behind the scenes. Will psychiatry and its patients actually benefit from this kind of legal policy gut-checking, or should we just politely ask doctors to do what they do best: classify, diagnose, and treat?