Social Media

Controversial Anti-Sex Trafficking Bill Eliminates Safe-Harbor for Tech Companies

Maya Digre, MJLST Staffer

 

Last week the U.S. Senate voted to approve the Stop Enabling Sex Traffickers Act. The U.S. House of Representatives also passed a similar bill earlier this year. The bill creates an exception to Section 230 of the Communications Decency Act that allows victims of sex trafficking to sue websites that enabled their abuse. The bill was overwhelmingly approved in both the U.S. House and Senate, receiving 388-25 and 97-2 votes respectively. President Trump has indicated that he is likely to sign the bill.

 

Section 230 of the Communications Decency Act shields websites from liability stemming from content posted by third parties on their sites. Many tech companies argue that this provision has allowed them to become successful without a constant threat of liability. However, websites like Facebook, Google, and Twitter have recently received criticism for the role they unwittingly played in meddling in the 2016 presidential election. Seemingly, the “hands off” approach of many websites has become a problem that Congress now seeks to address, at least with respect to sex trafficking.

 

The proposed exception would expose websites to liability if they “knowingly” assist, support, or facilitate sex trafficking. The bill seeks to make websites more accountable for posts on their site, discouraging a “hands off” approach.

 

While the proposed legislation has received bipartisan support in Congress, it has been quite controversial in many communities. Tech companies, free-speech advocates, and consensual sex workers all argue that the bill will have unintended adverse consequences. The tech companies and free-speech advocates argue that the bill will stifle speech on the internet and force smaller tech companies out of business for fear of liability. Consensual sex workers argue that the bill will shut down their online presence, forcing them to engage in high-risk street work. Other debates center on how the “knowingly” standard will affect how websites are run. Critics argue that, in response to this standard, “[s]ites will either censor more content to lower risk of knowing about sex trafficking, or they will dial down moderation in an effort not to know.” At least one website has already altered its behavior in the wake of this bill: in response to the legislation, Craigslist has removed the “personal ad” platform from its website.

 


Judicial Interpretation of Emojis and Emoticons

Kirk Johnson, MJLST Staffer

 

In 2016, the original 176 emojis created by Shigetaka Kurita were enshrined in New York’s Museum of Modern Art as just that: art. Today, a smartphone contains approximately 2,000 icons that many use as a communication tool. New communicative tools present new problems for users and courts alike; when the recipient of a message including an icon interprets the icon differently than the sender, how should a court view that icon? How does it affect the actus reus or mens rea of a crime? While courts have a myriad of tools to decipher the meaning of new communicative tools, the lack of a universal understanding of these icons has created interesting social and legal consequences.

The first of many problems with the use of an emoji is that there is general disagreement on what the actual icon means. Take this emoji for example: 🙏. In a recent Wall Street Journal interview, people aged 10 to 87 were asked what this symbol meant. Responses varied from hands clapping to praying. The actual title of the emoji is “Person with Folded Hands.”

Secondly, the icons can change over time. Consider the update of the Apple iOS from 9 to 10; many complained that this emoji, 💁, lost its “sass.” It is unclear whether the emoji was intended to have “sass” to begin with, especially since the title of the icon is “Information Desk Person.”

Finally, actual icons vary from device to device. In some instances, when an Apple iPhone user sends a message to an Android phone user, the icon that appears on the recipient’s screen is completely different from what the sender intended. When Apple moved from iOS 9 to iOS 10, it significantly altered its pistol emoji: while an Android user would see something akin to this 🔫, an iPhone user sees a water pistol. Sometimes, an equivalent icon is not present on the recipient’s device, and the only thing that appears on the screen is a black box.
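The official “titles” referenced above are not informal nicknames; they are names fixed by the Unicode standard itself, and only the glyph each vendor draws for a given codepoint varies across platforms. As a quick illustration (a minimal sketch using Python’s standard `unicodedata` module), the names of the three emojis discussed in this post can be checked programmatically:

```python
import unicodedata

# Look up the official Unicode names of the emojis discussed above.
# The name is fixed by the Unicode standard; only the glyph each
# vendor renders for the codepoint differs from device to device.
for ch in ["\U0001F64F", "\U0001F481", "\U0001F52B"]:
    print(f"U+{ord(ch):X}: {unicodedata.name(ch)}")
# → U+1F64F: PERSON WITH FOLDED HANDS
# → U+1F481: INFORMATION DESK PERSON
# → U+1F52B: PISTOL
```

This is why a court can at least anchor its analysis in a stable, standardized name even when the sender’s and recipient’s screens showed different pictures.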

Text messages and emails are extremely common pieces of evidence in a wide variety of cases, from sexual harassment litigation to contract disputes. Recently, the Ohio Court of Appeals was called upon to determine whether the text message “come over” with a “winky-face emoji” was adequate evidence to prove infidelity. State v. Shepherd, 81 N.E.3d 1011, 1020 (Ohio Ct. App. 2017). A Michigan sexual harassment attorney’s client was convinced that an emoji that looked like a horse followed by an icon resembling a muffin meant “stud muffin,” which the client interpreted as an unwelcome advance from a coworker. Luckily, messages consisting entirely of icons rarely determine the outcome of a case on their own; in the sexual harassment arena, a single advance from an emoji message would not be sufficient to make a case.

However, the implications are much more dangerous in the world of contracts. According to the Restatement (Second) of Contracts § 20 (1981),

(1) There is no manifestation of mutual assent to an exchange if the parties attach materially different meanings to their manifestations and

(a) neither party knows or has reason to know the meaning attached by the other; or

(b) each party knows or each party has reason to know the meaning attached by the other.

(2) The manifestations of the parties are operative in accordance with the meaning attached to them by one of the parties if

(a) that party does not know of any different meaning attached by the other, and the other knows the meaning attached by the first party; or

(b) that party has no reason to know of any different meaning attached by the other, and the other has reason to know the meaning attached by the first party.

 

Adhering to this standard with emojis would produce varied and unexpected results. For example, if Adam sent Bob a message “I’ll give you $5 to mow my lawn 😉,” would Bob be free to accept the offer? Would the answer be different if Adam used the 😘 emoji instead of the 😉 emoji? What if Bob received a black box instead of any emoji at all? Conversely, if Adam sent Bob the message without an emoji and Bob replied to Adam “Sure 😉,” should Adam be able to rely upon Bob’s message as acceptance? In 2014, the Michigan Court of Appeals ruled that the emoticon “:P” denoted sarcasm and that the text prior to the message should be interpreted with sarcasm. Does this extend to the emojis 😜, 😝, and 😛, titled “Face with Stuck-Out Tongue And Winking Eye,” “Face With Stuck-Out Tongue And Tightly-Closed Eyes,” and “Face With Stuck-Out Tongue” respectively?

In a recent case in Israel, a judge ruled that the message “✌👯💃🍾🐿☄” constituted acceptance of a rental contract. While the United States has its own standards of contract law, it seems that a judge could find such a message to be acceptance under the Restatement (Second) of Contracts § 20(2). Eric Goldman at the Santa Clara University School of Law hypothesizes that an emoji dictionary might help alleviate this issue. While a new Black’s Emoji Law Dictionary may seem unnecessary to many, without some sort of action it will be the courts deciding what the meaning of an emoji truly is. In a day when courts rule that a jury is entitled to actually see the emoji rather than have a description read to them, we can’t ignore the reality that action is necessary.


E-threat: Imminent Danger in the Information Age

Jacob Weindling, MJLST Staffer

 

One of the basic guarantees of the First Amendment is the right to free speech. This right protects the individual from restrictions on speech by the government, but is often invoked as a rhetorical weapon against private individuals or organizations declining to publish another’s words. On the internet, these organizations include some of the most popular discussion platforms in the U.S., including Facebook, Reddit, Yahoo, and Twitter. A key feature of these organizations is their lack of government control. As recently as 2017, the Supreme Court has identified First Amendment grounds for overturning prohibitions on social media access. Indeed, one of the only major government prohibitions on speech currently in force is the ban on child pornography. Violent rhetoric, meanwhile, continues to fall under the Constitutional protections identified by the Court.

Historically, the Supreme Court has taken a nuanced view of violent speech as it relates to the First Amendment. The Court held in Brandenburg v. Ohio that “the constitutional guarantees of free speech and free press do not permit a State to forbid or proscribe advocacy of the use of force or of law violation except where such advocacy is directed to inciting or producing imminent lawless action and is likely to incite or produce such action.” Contrast this with Noto v. United States, in which the Court held that discussion of a moral responsibility to resort to violence is distinct from preparing a group for imminent violent acts.

With the rise and maturation of the internet, public discourse has entered new and relatively uncharted territory that the Supreme Court would have been hard-pressed to anticipate at the time of the Brandenburg and Noto decisions. Where once geography served to isolate Neo-Nazi groups and the Ku Klux Klan into small local chapters, the internet now provides a centralized meeting place for the dissemination and discussion of violent rhetoric. Historically, the Supreme Court concerned itself mightily with the distinction between an imminent call to action and a general discussion of moral imperatives, making clear delineations between the two.

The context of the Brandenburg decision was a pre-information-age telecommunications regime. While large amounts of information could be transmitted around the world in relatively short order thanks to the development of international commercial air travel, real-time communication was generally limited to telephone conversations between two individuals. An imminent call to action would require substantial real-world logistics, meetings, and preparation, all of which provide significant opportunities for detection and disruption by law enforcement. By comparison, internet forums today provide for near-instant communication between large groups of individuals across the entire world, likely narrowing the window that law enforcement would have to identify and act upon a credible, imminent threat.

At what point does Islamic State recruitment or militant Neo-Nazi organizing on the internet rise to the level of imminent threat? While the Supreme Court has not yet decided the issue, many internet businesses have recently begun to take matters into their own hands. Facebook and YouTube have reportedly been more active in policing Islamic State propaganda, while Reddit has taken some steps to remove communities that advocate for rape and violence. Consequently, while the Supreme Court has not yet elected to draw (or redraw) a bright red line in the internet age, many businesses appear to be taking the first steps to draw the line themselves, on their terms.


Fi-ARRR-e & Fury: Why Even Reading the Pirated Copy of Michael Wolff’s New Book Is Probably Copyright Infringement

By Tim Joyce, MJLST EIC-Emeritus

 

THE SITUATION

Lately I’ve seen several Facebook links to a pirated copy of Fire & Fury: Inside the Trump White House, the juicy Michael Wolff exposé documenting the first nine months of the President’s tenure. The book reportedly gives deep, behind-the-scenes perspectives on many of Mr. Trump’s most controversial actions, including firing James Comey and accusing President Obama of wiretapping Trump Tower.

 

It was therefore not surprising when Trump lawyers slapped a cease & desist letter on Wolff and his publisher. While there are probably volumes yet to be written about the merits of those claims (in my humble opinion: “sorry, bros, that’s not how defamation of a public figure works”), this blog post deals with the copyright implications of sharing and reading the pirated copy of the book, and the ethical quandaries it creates. I’ll start with the straightforward part.

 

THE APPLICABLE LAW

First, it should almost go without saying that the person who initially created the PDF copy of the 300+ page book broke the law. (Full disclosure: I did click on the Google link, but only to verify that it was indeed the book and not just a cover page. It was. Even including the page with copyright information!) I’ll briefly connect the dots for any copyright-novices reading along:

 

    • Wolff is the “author” of the book, a “literary work” that constitutes an “original work[] of authorship fixed in any tangible medium of expression” [see 17 USC 102].
    • As the author, one of his copyrights is to control … well … copying. The US Code calls that “reproduction” [see 17 USC 106].
    • He also gets exclusive right to “display” the literary work “by means of a film, slide, television image, or any other device or process” [see 17 USC 101]. Basically, he controls display in any medium like, say, via a Google Drive folder.
    • Unauthorized reproduction, display, and/or distribution is called “infringement” [see 17 USC 501]. There are several specific exceptions carved into the copyright code for different types of creative works, uses, audiences, and other situations. But this doesn’t fall into one of those exceptions.

 

    • So, the anonymous infringer has broken the law.

 

    • [It’s not clear, yet, whether this person is also a criminal under 17 USC 506, because I haven’t seen any evidence of fraudulent intent or acting “for purposes of commercial advantage or private financial gain.”]

 

Next, anyone who downloads a copy of the book onto their smartphone or laptop is also an infringer. The same analysis applies as above, only with a different starting point. The underlying material’s copyright is still held by Wolff as the author. Downloading creates a “reproduction,” which is still unauthorized by the copyright owner. Unauthorized exercise of rights held exclusively by the author + no applicable exceptions = infringement.

 

Third, I found myself stuck as to whether I, as a person who had intentionally clicked through into the Google Drive hosting the PDF file, had also technically violated copyright law. Here, I hadn’t downloaded, but merely clicked the link, which launched the PDF in a new Chrome tab. The issue I got hung up on was whether that had created a “copy,” that is, a “material object[] . . . in which a work is fixed by any method now known or later developed, and from which the work can be perceived, reproduced, or otherwise communicated, either directly or with the aid of a machine or device.” [17 USC 101]

 

Computer reproductions are tricky, in part because US courts lately haven’t exactly given clear guidance on the matter. (Because I was curious — In Europe and the UK, it seems like there’s an exception for temporary virtual copies, but only when incidental to lawful uses.) There’s some debate as to whether it’s infringement if only the computer is reading the file, and for a purpose different than perceiving the artistic expression. (You may remember the Google Books cases…) However, when it’s humans doing the reading, that “purpose of the copying” argument seems to fall by the wayside.

 

Cases like Cartoon Network v. CSC Holdings have attempted to solve the problem of temporary copies (as when a new browser window opens), but the outcome there (i.e., temporary copies = ok) was based in part on the fact that the streaming service being sued had the right to air the media in question. Their copy-making was merely for the purposes of increasing speed and reducing buffering for their paid subscribers. Here, where the right to distribute the work is decidedly absent, the outcome seems like it should be the opposite. There may be a case out there that deals squarely with this situation, but it’s been a while since copyright class (yay, graduation!) and I don’t have free access to Westlaw anymore. It’s the best I could do in an afternoon.

 

Of course, an efficient solution here would be to first crack down on the entities and individuals that first make the infringement possible – ISPs and content distributors. The Digital Millennium Copyright Act already gives copyright owners a process to make Facebook take bootleg copies of their stuff down. But that only solves half the problem, in my opinion. We have to reconcile our individual ethics of infringement too.

 

ETHICAL ISSUES, FOR ARTISTS IN PARTICULAR

One of the more troubling aspects of this piracy is that the link-shares came from people who make their living in the arts. These are the folks who (rightly, in my opinion) rail against potential “employers” offering “exposure” instead of cold hard cash when they agree to perform. To expect to be paid for your art, while at the same time sharing an illegal copy of someone else’s, is logically inconsistent to me.

 

As a former theater actor and director (read: professional almost-broke person) myself, I can understand the desire to save a few dollars by reading the pirated copy. The economics of making a living performing are tough – often you agree to take certain very-low-paying artistic jobs as loss-leaders toward future jobs. But I have only met a very few of us willing to perform for free, and even fewer who would tolerate rehearsing with the promise of pay only to be stiffed after the performance is done. That’s essentially what’s happening when folks share this bootleg copy of Michael Wolff’s book.

 

I’ve heard some relativistic views on the matter, saying that THIS book containing THIS information is so important NOW, that a little infringement shouldn’t matter. But you could argue that Hamilton, the hit musical about the founding of our nation and government, has equally urgent messages regarding democracy, totalitarianism, individual rights, etc. Should anyone, therefore, be allowed to just walk into the theater and see the show without paying? Should the cast be forced to continue performing even when there is no longer ticket revenue flowing to pay for their efforts? I say that in order to protect justice at all times, we have to protect justice this time.

 

tl;dr

Creating, downloading, and possibly even just viewing the bootleg copy of Michael Wolff’s book circulating around Facebook is copyright infringement. We cannot violate this author’s rights now if we expect to have our own artistic rights protected tomorrow.

 

Contact Me!

These were just some quick thoughts, and I’m sure there’s more to say on the matter. If you’d like to discuss any copyright issues further, I’m all ears.


Sex Offenders on Social Media?!

Young Choo, MJLST Staffer

 

Sex offenders’ access to social media is increasingly problematic, especially considering the vast number of dating apps people use to meet other users. Crimes committed through the use of dating apps (such as Tinder and Grindr) include rape, child sex grooming, and attempted murder, and reports of such crimes have increased seven-fold in just two years. Although sex offenders are required to register with the State, and individuals can get access to each state’s sex offender registry online, there are few laws and regulations designed to combat this specific situation, in which minors or other young adults can become victims of sex crimes. A new dating app called “Gatsby” was introduced to address this problem: when new users sign up for Gatsby, they are put through a criminal background check, which includes sex offender registries.

Should sex offenders even be allowed access to social media? In the recent case Packingham v. North Carolina, the Supreme Court held that a North Carolina law preventing sex offenders from accessing commercial social networking websites was unconstitutional under the First Amendment’s Free Speech Clause. The Court emphasized that access to social media is vital to citizens’ exercise of their First Amendment rights. The North Carolina law was struck down mainly because it wasn’t “narrowly tailored to serve a significant governmental interest,” but the Court noted that its decision does not prevent a State from enacting more specific laws to address and ban certain activities of sex offenders on social media.

The new online dating app Gatsby cannot be the only solution to the current situation. There are already an estimated 50 million Tinder users in the world, and they have no way of determining whether their matches may be sex offenders. New laws narrowly tailored to address the situation, perhaps requiring dating apps to run background checks on users or to otherwise prevent sex offenders from using them, might be necessary to reduce the increasing number of crimes committed through dating apps.


Congress, Google Clash Over Sex-Trafficking Liability Law

Samuel Louwagie, MJLST Staffer

Should web companies be held liable when users engage in criminal sex trafficking on the platforms they provide? Members of both political parties in Congress are pushing to make the answer to that question yes, over the opposition of tech giants like Google.

The Communications Act was enacted in 1934. In 1996, as the Internet went mainstream, Congress added Section 230 to the act as part of the Communications Decency Act. That provision protected providers of web platforms from civil liability for content posted by users of those platforms. The act states that in order to “promote the continued development of the internet . . . No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” That protection, according to the ACLU, “defines Internet culture as we know it.”

Earlier this month, Congress debated an amendment to Section 230 called the Stop Enabling Sex Traffickers Act of 2017. The act would remove that protection from web platforms that knowingly allow sex trafficking to take place. The proposal comes after the First Circuit Court of Appeals held in March of 2016 that even though Backpage.com played a role in trafficking underage girls, section 230 protected it from liability. Sen. Rob Portman, a co-sponsor of the bill, wrote that it is Congress’ “responsibility to change this law” while “women and children have . . . their most basic rights stripped from them.” And even some tech companies, such as Oracle, have supported the bill.

Google, meanwhile, has resisted such emotional pleas. Its lobbyists have argued that Backpage.com could be criminally prosecuted, and that to remove core protections from internet companies will damage the free nature of the web. Critics, such as New York Times columnist Nicholas Kristof, argue the Stop Enabling Sex Traffickers Act was crafted “exceedingly narrowly to target those intentionally engaged in trafficking children.”

The bill has bipartisan support and appears to be gaining steam. The Internet Association, a trade group including Google and Facebook, expressed a willingness at a Congressional hearing to support “targeted amendments” to the Communications Decency Act. Whether Google likes it or not, eventually platforms will be at legal risk if they don’t police their content for sex trafficking.


Faux News vs. Freedom of Speech?

Tyler Hartney, MJLST Staffer

This election season has produced a lot of jokes on social media. Some of the jokes are funny, and others lack an obvious punch line. Multiple outlets are now reporting that this fake news may have influenced voters in the 2016 presidential election. Both Facebook and Google have made conscious efforts to reduce the appearance of these fake news stories on their sites in an attempt to cut off the click bait, and thus the revenue streams, of these faux news outlets. With the expansion of technology and social media, these stories now circulate widely enough to spread misinformation on a massive level. Is this like screaming “fire” in a crowded theatre? How biased would filtering this speech become? Facebook was blown to shreds by the media when it was found to have suppressed conservative news outlets, but as a private business it had every right to do so. Experts are now saying that the Russian government made efforts to help spread this fake news to help Donald Trump win the presidency.

First, the only entity that cannot place limits on speech is the state. If Facebook or Google chose to filter the news broadcast on each site, users still would not have a claim against the entity; this would be considered a private business decision. These faux news outlets circulate stories that appear to be, at times, intentionally and willfully misleading. Is this similar to a man shouting “fire” in a crowded theatre? In essence, the man in that commonly used hypothetical knows that his statement is false and that it has a high probability of inciting panic, but the general public will not be aware of the validity of his statement and will have no time to check. The second part of that statement is key: the general public would not have time to check the validity of the statement. If the government were to begin passing regulations and cracking down on the circulation and creation of these hoax news stories, it would have to prove that the stories create a “clear and present danger” of bringing about substantive evils that Congress has a right to prevent. This standard was created in the Supreme Court’s decision in Schenck v. United States. The government will not likely be capable of banning these types of faux news stories because, while some may consider these stories dangerous, the audience has the capability of validating the content from these untrusted sources.

Even contemplating government action under this circumstance would require the state to walk a fine line with freedom of political expression. What is humorous, and what is dangerously misleading? For example, The Onion posted an article entitled “Biden Forges President’s Signature Executive Order 54723.” Clearly this is a joke; however, it holds the potential to incite fury from those who might believe it and to create a misinformed public that might use it as material information when casting a ballot. This Onion article is not notably different from another post entitled “FBI AGENT SUSPECTED IN HILLARY EMAIL LEAKS FOUND DEAD IN APPARENT MURDER-SUICIDE,” published by the Denver Guardian. With the same potential to mislead the public, there would not be any readily identifiable differences between the two stories. This gray area would make it extremely difficult to methodically stop the production of fake news while ensuring the protection of comedic parody news. The only way to protect the public from the dangers of stories that are apparently being pushed onto the American voting public by the Russian government in an attempt to influence election outcomes is to educate the public on how to verify online accounts.


The Best Process for the Best Evidence

Mary Riverso, MJLST Staffer

Social networking sites are now an integral part of American society. Almost everyone and everything has a profile, typically on multiple platforms. And people like to use them. Companies like having direct contact with their customers, media outlets like having access to viewer opinions, and people like to document their personal lives.

However, as the use of social networking continues to increase in scope, the information placed in the public sphere is playing an increasingly central role in investigations and litigation. Many police departments conduct regular surveillance of public social media posts in their communities because these sites have become conduits for crimes and other wrongful behavior. As a result, litigants increasingly seek to offer records of statements made on social media sites as evidence. So how exactly can content from social media be used as evidence? Ira Robbins explores this issue in his article Writings on the Wall: The Need for an Authorship-Centric Approach to the Authentication of Social-Networking Evidence. The main hurdle is one of reliability. In order to be admitted as evidence, the source of information must be authentic so that a fact-finder may rely on the source and ultimately its content as trustworthy and accurate. However, social media sites are particularly susceptible to forgery, hacking, and alterations. Without a confession, it is often difficult to determine who is the actual author responsible for posting the content.

Courts grapple with this issue – some allow social media evidence only when the record establishes distinctive characteristics of the particular website under Federal Rule of Evidence 901(b)(4), while others believe authentication is a relatively low bar: as long as a witness testifies to the process by which the record was obtained, it is ultimately for the jury to determine the credibility of the content. But is that fair? If evidence is supposed to assist the fact-finder in “ascertaining the truth and securing a just determination,” should it not be of utmost importance to determine the author of the content? Is not a main purpose of authentication to attribute the content to the proper author? Social media records may well be the best evidence against a defendant, but without an authorship-centric approach, the current path to their admissibility may not yet be the best process.


Are News Aggregators Getting Their Fair Share of Fair Use?

Mickey Stevens, MJLST Note & Comment Editor

Fair use is an affirmative defense to copyright infringement that permits the use of copyrighted materials without the author’s permission when doing so fulfills copyright’s goal of promoting the progress of science and useful arts. One factor that courts analyze to determine whether fair use applies is the purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit educational purposes and, in particular, whether the use is “transformative.” Recently, courts have had to determine whether automatic news aggregators can invoke the fair use defense against claims of copyright infringement. An automatic news aggregator scrapes the Internet and republishes pieces of the original source without adding commentary to the original works.

In Spring 2014, MJLST published “Associated Press v. Meltwater: Are Courts Being Fair to News Aggregators?” by Dylan J. Quinn. That article discussed the Meltwater case, in which the United States District Court for the Southern District of New York held that Meltwater—an automatic news aggregator—could not invoke the defense of fair use because its use of copyrighted works was not “transformative.” Meltwater argued that it should be treated like search engines, whose actions do constitute fair use. The court rejected this argument, stating that Meltwater customers were using the news aggregator as a substitute for the original work, instead of clicking through to the original article like a search engine.

In his article, Quinn argued that the Meltwater court’s interpretation of “transformative” was too narrow, and that such an interpretation made an untenable distinction between search engines and automatic news aggregators who function similarly. Quinn asked, “[W]hat if a news aggregator can show that its commercial consumers only use the snippets for monitoring how frequently it is mentioned in the media and by whom? Is that not a different ‘use’?” Well, the recent case of Fox News Network, LLC v. TVEyes, Inc. presented a dispute similar to Quinn’s hypothetical that might indicate support for his argument.

In TVEyes, Fox News claimed that TVEyes, a media-monitoring service that aggregated news reports into a searchable database, had infringed its copyrights in clips of Fox News programs. The TVEyes database allowed subscribers to track when, where, and how words of interest were used in the media—the type of monitoring that Quinn argued should constitute a "transformative" use. In a 2014 ruling, the court held that TVEyes' search engine that displayed clips was transformative because it converted the original work into a research tool by enabling subscribers to research, criticize, and comment. 43 F. Supp. 3d 379 (S.D.N.Y. 2014). In a 2015 decision, the court analyzed a few specific features of the TVEyes service, including an archiving function and a date-time search function. 2015 WL 5025274 (S.D.N.Y. Aug. 25, 2015). The court held that the archiving feature constituted fair use because it allowed subscribers to detect patterns and trends and to save clips for later research and commentary. However, the court held that the date-time search function (allowing users to search for video clips by date and time of airing) was not fair use. The court reasoned that users who have date and time information could easily obtain the clip from the copyright holder or its licensing agents (e.g., by buying a DVD).

While the court did note that a database of video clips differs in kind from a collection of print news articles, the TVEyes decisions suggest that courts may now be willing to let automatic news aggregators invoke the fair use defense when they can show that their services enable consumers to track patterns and trends for purposes of research, criticism, and commentary. Thus, the TVEyes decisions may lead courts to reconsider the distinction between search engines and automatic news aggregators established in Meltwater, a distinction that puts news aggregators at a disadvantage when it comes to fair use.


The Limits of Free Speech

Paul Overbee, MJLST Editor

A large portion of society does not put much thought into what they post on the internet. From tweets and status updates to YouTube comments and message board activity, many individuals post on impulse without regard to how their messages may be interpreted by a wider audience. Anthony Elonis is just one of many internet users who are coming to terms with the consequences of their online activity. Oddly enough, by posting on Facebook, Mr. Elonis took the first steps that ultimately led him to the Supreme Court. The Court is now considering whether the posts were simply a venting of frustration, as Mr. Elonis claims, or whether they constitute a "true threat" that could send Mr. Elonis to jail.

The incident in question began a week after Tara Elonis obtained a protective order against her husband. Upon receiving the order, Mr. Elonis posted to Facebook, "Fold up your PFA [protection-from-abuse order] and put it in your pocket […] Is it thick enough to stop a bullet?" According to Mr. Elonis, he was trying to emulate the rhyming style of the popular rapper Eminem. At a later date, an FBI agent visited Mr. Elonis regarding his threatening posts about his wife. Soon after the agent left, Mr. Elonis again returned to Facebook to state, "Little agent lady stood so close, took all the strength I had not to turn the [expletive] ghost. Pull my knife, flick my wrist and slit her throat."

Due to these posts, Mr. Elonis was sentenced to nearly four years in federal prison, and Elonis v. United States is now before the Supreme Court. Typical state statutes define "true threats" without regard to whether the speaker actually intended to cause terror. For example, Minnesota's "terroristic threats" statute includes "reckless disregard of the risk of causing such terror." Some states allow a showing of "transitory anger" to overcome a "true threat" charge. This type of defense arises where the defendant's actions are short-lived, show no intent to terrorize, and are clearly tied to an inciting event that caused the anger.

The Supreme Court's decision will carry wide First Amendment implications for free speech rights and artistic expression. A decision that comes down harshly on Mr. Elonis may chill speech on the internet. Whether a statement reads as serious or as a joke often depends on the reader's point of view, and many users would rather stop posting altogether than risk having their words misinterpreted and charges brought. On the other hand, if the Court were to require proof of Mr. Elonis's subjective intent, "true threat" statutes may lose much of their force due to evidentiary issues. A decision in favor of Mr. Elonis could thus make the internet more dangerous, giving criminals such as stalkers a longer leash with which to menace their victims. Oral argument in the case was held on December 1, 2014, and a decision is expected in the near future.