
A Nation of Misinformation? The Attack on the Government’s Efforts to Stop Social Media Misinformation

Alex Mastorides, MJLST Staffer

Whether and how misinformation on social media can be curtailed has long been the subject of public debate. This debate has increasingly gained momentum since the beginning of the COVID-19 pandemic, at a time when uncertainty was the norm and people across the nation scrambled for information to help them stay safe. Misinformation regarding things like the origin of the pandemic, the treatment that should be administered to COVID-positive people, and the safety of the vaccine has been widely disseminated via social media platforms like TikTok, Facebook, Instagram, and X (formerly known as Twitter). The federal government under the Biden Administration has sought to curtail this wave of misinformation, characterizing it as a threat to public health. However, many have accused it of unconstitutional acts of censorship in violation of the First Amendment.

The government cannot directly interfere with the content posted on social media platforms; this right is held by the private companies that own the platforms. Instead, the government’s approach has been to communicate with social media companies, encouraging them to address misinformation that is promulgated on their sites. Per the Biden Administration: “The President’s view is that the major platforms have a responsibility related to the health and safety of all Americans to stop amplifying untrustworthy content, disinformation, and misinformation, especially related to COVID-19, vaccinations, and elections.”[1]

Lower Courts Have Ruled that the Government May Not Communicate with Social Media Companies for Purposes of Curtailing Online Misinformation

The case of Murthy v. Missouri may result in further clarity from the Supreme Court regarding the powers of the federal government to combat misinformation on social media platforms. The case began in the United States District Court for the Western District of Louisiana when two states, Missouri and Louisiana, along with several private parties, filed suit against numerous federal government entities, including the White House and agencies such as the Federal Bureau of Investigation, the Centers for Disease Control & Prevention, and the Cybersecurity & Infrastructure Security Agency.[2] These entities had repeatedly communicated with social media companies, allegedly encouraging them to remove or censor the plaintiffs’ online content due to misinformation about the COVID-19 pandemic (including content discussing “the COVID-19 lab-leak theory, pandemic lockdowns, vaccine side-effects, election fraud, and the Hunter Biden laptop story”).[3] The plaintiffs allege that these government entities “‘coerced, threatened, and pressured [the] social-media platforms to censor [them]’ through private communications and legal threats” in violation of the plaintiffs’ First Amendment rights.[4]

The District Court agreed with the plaintiffs, issuing a preliminary injunction on July 4, 2023, that greatly restricted the entities’ ability to contact social media companies (especially with regard to misinformation).[5] This approach was predicated on the idea that government communications with social media companies about misinformation on their platforms are essentially coercive, forcing the companies to censor speech at the government’s demand. The injunction was appealed to the Fifth Circuit, which narrowed its scope to just the White House, the Surgeon General’s office, and the FBI.[6]

Following the Fifth Circuit’s ruling on the preliminary injunction, the government parties to the Murthy case applied for a stay of the injunction with the United States Supreme Court.[7] The government further requested that the Court grant certiorari with regard to the questions presented by the injunction. The government attacked the injunction on three grounds. The first is that the plaintiffs did not have standing to sue under Article III because they did not show that the censoring effect on their posts was “fairly traceable” to the government or “redressable by injunctive relief.”[8]

The second argument is that the conduct at issue does not constitute a First Amendment free speech violation.[9] This claim is based on the state action doctrine, which outlines the circumstances in which the decisions of private entities are considered to be “state action.” If a private social media company’s decisions to moderate content are sufficiently “coerced” by the government, the law treats those decisions as if they were made by the government directly.[10] In that situation, the First Amendment would apply.[11] The Supreme Court has advocated for a strict evaluation of what kind of conduct might be considered “coercive” under this doctrine in an effort to avoid infringing upon the rights of private companies to moderate speech on their platforms.[12] The government’s Application for Stay argues that the Fifth Circuit’s decision is an overly broad application of the doctrine in light of the government’s conduct.[13]

Third, the government maintains that the preliminary injunction is overly broad because it “covers the government’s communications with all social-media platforms (not just those used by respondents) regarding all posts by any person (not just respondents) on all topics.”[14]

The Supreme Court Granted the Requested Stay and Granted Certiorari Regarding Three Key Questions

The Supreme Court granted the government’s request for a stay on the preliminary injunction. The Court simultaneously granted certiorari with respect to the questions posed in the government’s Application for Stay: “(1) Whether respondents have Article III standing; (2) Whether the government’s challenged conduct transformed private social-media companies’ content-moderation decisions into state action and violated respondents’ First Amendment rights; and (3) Whether the terms and breadth of the preliminary injunction are proper.”[15]

The Court gave no explanation for its grant of the request for a stay or for its grant of certiorari. However, Justice Alito, joined by Justice Thomas and Justice Gorsuch, dissented from the grant of the application for a stay, arguing that the government had not shown a likelihood that denial of a stay would result in irreparable harm.[16] He contends that the government’s argument about irreparable harm rests on hypotheticals rather than on actual “concrete” proof that harm is imminent.[17] The dissent further displays a disapproving attitude toward the government’s actions on social media misinformation: “At this time in the history of our country, what the Court has done, I fear, will be seen by some as giving the Government a green light to use heavy-handed tactics to skew the presentation of views on the medium that increasingly dominates the dissemination of news. That is most unfortunate.”[18]

Justice Alito noted in his dissent that the completion of the Court’s review of the case may not come until spring of next year.[19] The stay on the preliminary injunction will hold until that time.

Notes

[1] Press Briefing by Press Secretary Jen Psaki and Secretary of Agriculture Tom Vilsack, The White House (May 5, 2021), https://www.whitehouse.gov/briefing-room/press-briefings/2021/05/05/press-briefing-by-press-secretary-jen-psaki-and-secretary-of-agriculture-tom-vilsack-may-5-2021/.

[2] State v. Biden, 83 F.4th 350, 359 (5th Cir. 2023).

[3] Id. at 359.

[4] Id. at 359-60.

[5] Id. at 360.

[6] Id.

[7] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[8] Id. at 2.

[9] Id. at 3.

[10] Id. at 10.

[11] Id.

[12] Id. at 4 (citing Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1933 (2019)).

[13] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[14] Id. at 5.

[15] Application for Stay, Murthy v. Missouri, No. 23A243 (23-411) (2023).

[16] On Application for Stay at 3, Murthy v. Missouri, No. 23A243 (23-411) (Oct. 20, 2023) (Alito, J., dissenting) (citing Hollingsworth v. Perry, 558 U.S. 183, 190 (2010)).

[17] Id. at 3-4.

[18] Id. at 5.

[19] Id. at 2.


Will Moody v. NetChoice, LLC End Social Media?

Aidan Vogelson, MJLST Staffer

At first, the concept that social media’s days may be numbered seems outlandish. Billions of people use social media every day and, historically, social media companies and other internet services have enjoyed virtually unfettered editorial control over how they manage their services. This freedom stems from 47 U.S.C. § 230.[1] Section 230 shields providers from liability for “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected…”[2] In other words, if someone makes an obscene post on Facebook and Facebook removes the post, Facebook cannot be held liable for restricting that speech, even if the speech is constitutionally protected. Section 230 has long allowed social media companies to self-regulate by removing posts that violate their terms of service, but on September 29, the Supreme Court granted a writ of certiorari in Moody v. NetChoice, LLC, a case that may fundamentally change how social media companies operate by allowing governments at the state or federal level to regulate around their § 230 protections.

At issue in Moody is whether the methods social media companies use to moderate their content are permissible under the First Amendment and whether social media companies may be classified as common carriers.[3] Common carriers are services that hold themselves open to the public and transport people or goods.[4] While the term “common carrier” once referred only to public transportation services like railroads and airlines, the definition now encompasses communications services such as radio and telephone companies.[5] Common carriers are subject to greater regulation, including anti-discrimination regulations, due to their market domination of a necessary public service.[6] For example, given our reliance on airlines and telephone companies to perform necessary services, common carrier regulations ensure that an airline cannot decline to sell tickets to passengers because of their religious beliefs and that a cellular network cannot bar service to customers because it disapproves of the content of their phone conversations. If social media companies are held to be common carriers, the federal government and the state governments could impose regulations on what content those companies restrict.

Moody stems from state efforts to do just that. The Florida legislature passed State Bill 7072 to curtail what it saw as social media censorship of conservative voices.[7] The Florida law allows for significant fines against social media companies that engage in “unfair censorship” or “deplatform” political candidates, as Twitter (now X) did when it removed former President Trump from its platform for falsely claiming that the 2020 election was stolen.[8] Florida is not the only state to pursue a common carrier designation for social media. Texas passed a similar law in 2021 (which is currently enjoined as a result of NetChoice, LLC v. Paxton and will be addressed alongside Moody), and the attorney general of Ohio has sued Google, asking the court to declare that Google is a common carrier in order to prevent the company from prioritizing its own products in search results.[9] Ohio v. Google LLC is ongoing, and while the judge partially granted Google’s motion to dismiss, he found that Ohio’s claim that Google is a common carrier is cognizable.[10] Given the increasing propensity of states to attempt to regulate social media, a ruling from the Supreme Court is needed to settle this vital issue.

Supporters of classifying social media companies as common carriers argue that social media is simply the most recent advancement in communication and should accordingly be designated a common carrier, just as telephone operators and cellular networks are. They explain that designating social media companies as common carriers is actually consistent with the broad protections of § 230, as regulating speech on a social media site regulates the speech of users, not the speech of the company.[11]

However, they ignore that social media companies rely on First Amendment and § 230 protections when they curate the content on their sites. Without the ability to promote or suppress posts and users, these companies would not be able to provide the personalized content that attracts users, and social media would likely become an even greater hotbed of misinformation and hate speech than it already is. The purpose of § 230 is to encourage the development of a thriving online community, which is why Congress chose to shield internet services from liability for content. Treating social media companies as common carriers would stifle that aim.

It is unclear how the Court will rule. In his concurrence in Biden v. Knight First Amend. Inst., Justice Thomas indicated he may be willing to consider social media companies common carriers.[12] The other justices have yet to write or comment on this issue, but whatever their decision may be, the ramifications of this case will be significant. The conservative politicians behind the Florida and Texas laws have specifically decried what they argue is partisan censorship of conservative views about the Covid-19 pandemic and the 2020 election, yet these very complaints demonstrate the need for social media companies to exercise editorial control over their content. Covid-19 misinformation unquestionably led to unnecessary deaths during the pandemic.[13] Misinformation about the 2020 election led to a violent attempted overthrow of our government. These threats of violence and dangerous misinformation are precisely the harms Congress created § 230 to avoid. Without the ability of social media companies to curate content, social media will assuredly contain more racism, misinformation, and calls for violence. Given the omnipresence of social media in our modern world, our reliance on it for communication, and the misinformation it spreads, few would argue that social media does not need some form of regulation. But if the Court allows the Florida and Texas laws implicated in Moody and NetChoice to stand, it will pave the way for a patchwork quilt of laws in every state that may render social media unworkable.

Notes

[1] See 47 U.S.C. § 230.

[2] 47 U.S.C. §230(c)(2)(A).

[3] Moody v. NetChoice, LLC, SCOTUSblog, https://www.scotusblog.com/case-files/cases/moody-v-netchoice-llc/.

[4] Alison Frankel, Are Internet Companies ‘Common Carriers’ of Content? Courts Diverge on Key Question, REUTERS, (May 31, 2022, 5:52 PM), https://www.reuters.com/legal/transactional/are-internet-companies-common-carriers-content-courts-diverge-key-question-2022-05-31/.

[5] Id.

[6] Id.

[7] David Savage, Supreme Court Will Decide if Texas and Florida Can Regulate Social Media to Protect ‘Conservative Speech’, LA TIMES (Sept. 29, 2023, 8:33 AM), https://www.msn.com/en-us/news/us/supreme-court-will-decide-if-texas-and-florida-can-regulate-social-media-to-protect-conservative-speech/ar-AA1hrE2s.

[8] Id.

[9] AG Yost Files Landmark Lawsuit to Declare Google a Public Utility, OHIO ATTORNEY GENERAL’S OFFICE (June 8, 2021), https://www.ohioattorneygeneral.gov/Media/News-Releases/June-2021/AG-Yost-Files-Landmark-Lawsuit-to-Declare-Google-a.

[10] Ohio v. Google LLC, No. 21-CV-H-06-0274 (Ohio Misc. 2022), https://fingfx.thomsonreuters.com/gfx/legaldocs/gdpzyeakzvw/frankel-socialmediacommoncarrier–ohioruling.pdf.

[11] John Villasenor, Social Media Companies and Common Carrier Status: A Primer, BROOKINGS INST. (Oct. 27, 2022), https://www.brookings.edu/articles/social-media-companies-and-common-carrier-status-a-primer/.

[12] Biden v. Knight First Amend. Inst., 141 S. Ct. 1220 (2021),  https://www.law.cornell.edu/supremecourt/text/20-197.

[13] Alistair Coleman, ’Hundreds Dead’ Because of Covid-19 Misinformation, BBC (Aug. 12, 2020), https://www.bbc.com/news/world-53755067.


Fake It ‘Til You Make It: How Should Deepfakes Be Regulated?

Tucker Bender, MJLST Staffer

Introduction

While rapidly advancing artificial intelligence (AI) is certain to elevate technology and human efficiency, AI also poses several threats. Deepfakes use machine learning and AI to essentially photoshop individuals into images and videos. The advancement of AI allows unskilled individuals to quickly create incredibly lifelike fake media, and in an increasingly digital world, deepfakes can be used to rapidly disseminate misinformation and cause irreparable harm to someone’s reputation. Minnesota is an example of a state that has recently enacted a deepfake law. However, some view such laws as a violation of First Amendment rights and as unnecessary given private companies’ incentives to monitor their sites for misinformation.

Minnesota’s Deepfake Law

On August 1, 2023, a deepfake law became effective in Minnesota.[1] In the absence of any federal law, Minnesota joins a handful of states that have enacted legislation to combat deepfakes.[2] Laws vary by state, with some imposing criminal liability in certain situations while others provide a civil cause of action. Specifically, the Minnesota law imposes civil and criminal liability for the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts” and criminal liability for the “use of deep fake technology to influence an election.”[3]

The law imposes severe penalties for each. For creating and disseminating a sexual deepfake, damages can include general and special damages, profit gained from the deepfake, a civil penalty awarded to the plaintiff in the amount of $100,000, and attorney fees.[4] Additionally, criminal penalties can consist of up to three years imprisonment, a fine of up to $5,000, or both.[5] Criminal penalties for use of deepfake technology to influence an election vary depending on whether it is a repeat violation, but can result in up to five years imprisonment, a fine of up to $10,000, or both.[6]

These two deepfake uses appear to elevate the penalties of Minnesota’s criminal defamation statute. The defamation statute allows up to one year of imprisonment, a fine of up to $3,000, or both for whoever “communicates any false and defamatory matter to a third person without the consent of the person defamed.”[7]

It is completely logical for the use of deepfakes to carry harsher penalties than other methods of defamation. Other methods of defamation can be harmful, but they typically consist of publications or statements made by a third party about a victim. Deepfakes, on the other hand, make viewers believe the victim is making the statement or committing the act themselves. This image association understandably creates greater harm, as viewers may find it difficult to dissociate their recollection of the deepfake imagery from the victim.

Almost everyone can agree that the Minnesota deepfake law was needed legislation, as evidenced by the bill passing the House in a 127-0 vote.[8] However, the law may be too narrow. Deepfake technology is indisputably damaging when used to create sexually explicit images of someone or to influence an election. But regardless of the false imagery depicted by the deepfake, the image association makes the harm to one’s reputation much greater than mere spoken or written words by a third party. By prohibiting only two uses of deepfake technology in the law, a door is left open for someone to create a deepfake of a victim spewing hateful rhetoric or committing heinous, non-sexual acts. While victims of these deepfakes can likely find redress through civil defamation suits for damages, the criminal liability of the deepfake creators would appear limited to Minnesota’s criminal defamation statute.[9] Further, defamation statutes are better suited to protect celebrities, but deepfakes are more likely to be damaging to people outside of the public eye.[10] There is a need for deepfake-specific legislation to address the technologically advanced harm that deepfakes can cause to the average person.

As state (and possibly federal) statutes progress to include deepfake laws, legislators should avoid drafting the laws too narrowly. While deepfakes that depict sexual acts or influence elections certainly deserve inclusion, so do other uses of deepfakes that injure a victim’s reputation. Elevated penalties should be implemented for any type of deepfake defamation, with even further elevated penalties for certain uses of deepfakes. 

Opposition to Deepfake Laws

Although many agree that deepfakes present issues worthy of legislation, others are skeptical, worried both about First Amendment rights and about broad legislation undermining valuable uses of the technology.[11] Specifically, skeptics are concerned about legislation that targets political speech, such as the Minnesota statute, as political speech is arguably the category of free speech protected above any other.[12]

Another real concern with broad deepfake legislation is that it would place a burden on innocent creators while doing little to stop those spreading malicious deepfakes. This is due, in part, to the difficulty of tracking down malicious deepfake uploaders, who post anonymously. Proposed federal legislation would require that “any advanced technological false personation record which contains a moving visual element shall contain an embedded digital watermark clearly identifying such record as containing altered audio or visual elements.”[13] However, opponents view this as useless legislation: deepfake creators and others wanting to spread misinformation clearly have the technical ability to remove a watermark if they can create advanced deepfakes in the first instance.

Role of Private Parties

Social media sites such as X (formerly known as Twitter) and Facebook should also be motivated to keep harmful deepfakes from being disseminated on their platforms, as users of these sites generally want to be free from harassment and misinformation. This has led to solutions such as X’s “Community Notes,” which allows videos created using deepfake technology to remain on the platform but clearly labels them as fake or altered.[14] Private solutions such as this may be the best compromise: viewers understand the media is fake, while creators can still share their work without believing their free speech is being impinged upon. However, the sheer amount of content posted on social media sites makes it inevitable that some harmful deepfakes are not marked accordingly and thus cause misinformation and reputational injury.

Although altered images and misinformation are nothing new, deepfakes and today’s social media platforms present novel challenges resulting from the realism and rapid dissemination of the modified media. Whether the solution is through broad, narrow, or nonexistent state laws is left to be determined and will likely be a subject of debate for the foreseeable future. 

Notes

[1] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[2] https://www.pymnts.com/artificial-intelligence-2/2023/states-regulating-deepfakes-while-federal-government-remains-deadlocked/

[3] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[4] https://www.revisor.mn.gov/bills/text.php?number=HF1370&type=bill&version=3&session=ls93&session_year=2023&session_number=0

[5] Id.

[6] Id.

[7] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[8] https://www.revisor.mn.gov/bills/bill.php?b=House&f=HF1370&ssn=0&y=2023

[9] https://www.revisor.mn.gov/statutes/cite/609.765#:~:text=Whoever%20with%20knowledge%20of%20its,one%20year%20or%20to%20payment

[10] https://www.ebglaw.com/wp-content/uploads/2021/08/Reif-Fellowship-2021-Essay-2-Recommendation-for-Deepfake-Law.pdf

[11] https://rtp.fedsoc.org/paper/deepfake-laws-risk-creating-more-problems-than-they-solve/

[12]  Id.

[13] https://www.congress.gov/bill/117th-congress/house-bill/2395/text

[14] https://communitynotes.twitter.com/guide/en/about/introduction


After Hepp: Section 230 and State Intellectual Property Law

Kelso Horne IV, MJLST Staffer

Although hardly a competitive arena, Section 230(c) of the Communications Decency Act (the “CDA”) is almost certainly the best known of all telecommunications laws in the United States. Shielding Internet Service Providers (“ISPs”) and websites from liability for the content published by their users, § 230(c)’s policy goals are laid out succinctly, if a bit grandly, in § 230(a) and § 230(b).[1] These two sections speak about the internet as a force for economic and social good, characterizing it as a “vibrant and competitive free market” and “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity.”[2] But where §§ 230(a) and (b) speak broadly of a utopian vision for the internet, and § 230(c) grants websites substantial privileges, § 230(e) gets down to brass tacks.[3]

CDA: Goals and Text

The CDA lays out certain limitations on the shield protections provided by § 230(c).[4] Among these is § 230(e)(2), which states in full: “Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.”[5] This particular section, despite its seeming clarity, has been the subject of litigation for over a decade, and in 2021 a clear circuit split opened between the 9th and 3rd Circuits over how this short sentence applies to state intellectual property laws. The 9th Circuit follows the principle that the policy portions of § 230, as stated in §§ 230(a) and (b), should be controlling, and that, as a consequence, state intellectual property claims should be barred. The 3rd Circuit follows the principle that the plain text of § 230(e)(2) unambiguously allows for state intellectual property claims.

Who Got There First? Lycos and Perfect 10

In Universal Commc’n Sys., Inc. v. Lycos, Inc., the 1st Circuit faced this question obliquely; the court assumed, without deciding, that § 230 did not immunize the defendant from state intellectual property claims, and it dismissed those claims on other grounds.[6] Consequently, when the 9th Circuit released its opinion in Perfect 10, Inc. v. CCBILL LLC only one month later, it felt free to craft its own rule on the issue.[7] Consisting of a few short paragraphs, the court’s decision on state intellectual property rights is nicely summarized in a single sentence: “As a practical matter, inclusion of rights protected by state law within the ‘intellectual property’ exemption would fatally undermine the broad grant of immunity provided by the CDA.”[8] The court’s analysis in Perfect 10 was based almost entirely on what allowing state intellectual property claims would do to the policy goals stated in § 230(a) and § 230(b), and it did not attempt, or rely on, a particularly thorough reading of § 230(e)(2). Here the court looked at both the policy stated in §§ 230(a) and (b) and the text of § 230(e)(2) and attempted to reconcile them. The court clearly saw the possibility of issues arising from allowing plaintiffs to bring cases through fifty different state systems against websites and ISPs for the postings of their users. This insight may be little more than hindsight, however, given the date of the CDA’s drafting.

Hepp Solidifies a Split

Perfect 10 remained the authoritative appellate-level case on the issue of the CDA and state intellectual property law until 2021, when the 3rd Circuit stepped into the ring.[9] In Hepp v. Facebook, Pennsylvania newsreader Karen Hepp sued Facebook for hosting advertisements promoting a dating website and other services that had used her likeness without her permission.[10] In a much longer analysis, the 3rd Circuit held that the 9th Circuit’s interpretation, argued for by Facebook, “stray[ed] too far from the natural reading of § 230(e)(2).”[11] Instead, the 3rd Circuit argued for a closer reading of the text of § 230(e)(2), which it said aligned closely with a more balanced selection of policy goals, including allowance for state intellectual property law.[12] The court also addressed the structural arguments relied on by Facebook, mostly examining how narrow the other exceptions in § 230(e) are, which the majority stated “cuts both ways,” since Congress easily cabined meanings when it wanted to.[13]

The dissent in Hepp agreed with the 9th Circuit that the policy goals stated in §§ 230(a) and (b) should be considered controlling.[14] It also noted two cases in other circuits where courts had shown hesitancy toward allowing state intellectual property claims under the CDA to go forward, although both claims had been dismissed on other grounds.[15] Perhaps unsurprisingly, the dissent found the structural arguments compelling, and in Facebook’s favor.[16] With the circuits now definitively split on the issue, the proper reading of § 230 would certainly seem to demand that the Supreme Court, or Congress, step in and provide a clear standard.

What Next? Analyzing the CDA

Despite being a pair of decisions ostensibly focused on parsing out what exactly Congress intended when it drafted § 230, both Perfect 10 and Hepp omit any citation to legislative history in discussing the § 230(e)(2) issue. However, this is not as odd as it seems at first glance. The Communications Decency Act is large, over a hundred pages in length, and § 230 makes up about a page and a half.[17] Most of the content of the legislative reports published after the CDA was passed focused instead on its landmark provisions that attempted, mostly unsuccessfully, to regulate obscene material on the internet.[18] Section 230 gets a passing mention, less than a page, some of which is taken up with assurances that it would not interfere with civil liability for those engaged in “cancelbotting,” a controversial anti-spam method of the Usenet era.[19] It is perhaps unfair to say that § 230 was an afterthought, but lawmakers likely did not understand its importance at the time of passage. This may be an argument for eschewing the 9th Circuit’s analysis, which seemingly imparts the CDA’s drafters with an overly high degree of foresight into § 230’s use by internet companies over a decade later.

Indeed, although one may wish that Congress had drafted it differently, the text of § 230(e)(2) is clear, and the inclusion of “any” as a modifier to “law” makes it difficult to argue that state intellectual property claims are not exempted from the general grant of immunity in § 230.[20] Congressional inaction should not give way to courts stepping in to determine what they believe would be a better Act. Indeed, the 3rd Circuit majority in Hepp may be correct in stating that Congress did in fact want state intellectual property claims to stand. Either way, we are faced with no easy judicial answer: to follow the clear text of the section would be to undermine what many in the e-commerce industry clearly see as an important protection, and to follow the purported vision of the Act stated in §§ 230(a) and (b) would be to remove a protection for intellectual property that victims of infringement may use to defend themselves. The circuit split has made it clear that this is a question on which reasonable jurists can disagree. Congress, as an elected body, is in the best position to balance these equities, and it should use its lawmaking powers to definitively clarify the issue.

Notes

[1] 47 U.S.C. § 230.

[2] Id.

[3] 47 U.S.C. § 230(e).

[4] Id.

[5] 47 U.S.C. § 230(e)(2).

[6] Universal Commc’n Sys., Inc. v. Lycos, Inc., 478 F.3d 413 (1st Cir. 2007) (“UCS’s remaining claim against Lycos was brought under Florida trademark law, alleging dilution of the “UCSY” trade name under Fla. Stat. § 495.151. Claims based on intellectual property laws are not subject to Section 230 immunity.”).

[7] 488 F.3d 1102 (9th Cir. 2007).

[8] Id. at 1119 n.5.

[9] Kyle Jahner, Facebook Ruling Splits Courts Over Liability Shield Limits for IP, Bloomberg Law, (Sep. 28, 2021, 11:32 AM).

[10] 14 F.4th 204, 206-7 (3d Cir. 2021).

[11] Id. at 210.

[12] Id. at 211.

[13] Hepp v. Facebook, 14 F.4th 204 (3d Cir. 2021) (“[T]he structural evidence it cites cuts both ways. Facebook is correct that the explicit references to state law in subsection (e) are coextensive with federal laws. But those references also suggest that when Congress wanted to cabin the interpretation about state law, it knew how to do so—and did so explicitly.”).

[14] 14 F.4th at 216-26 (Cowen, J., dissenting).

[15] Almeida v. Amazon.com, Inc., 456 F.3d 1316 (11th Cir. 2006); Doe v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016).

[16] 14 F.4th at 220 (Cowen, J., dissenting) (“[T]he codified findings and policies clearly tilt the balance in Facebook’s favor.”).

[17] Communications Decency Act of 1996, Pub. L. 104-104, § 509, 110 Stat. 56, 137-39.

[18] H.R. REP. NO. 104-458 at 194 (1996) (Conf. Rep.); S. Rep. No. 104-230 at 194 (1996) (Conf. Rep.).

[19] Benjamin Volpe, From Innovation to Abuse: Does the Internet Still Need Section 230 Immunity?, 68 Cath. U. L. Rev. 597, 602 n.27 (2019); see Denise Pappalardo & Todd Wallack, Antispammers Take Matters Into Their Own Hands, Network World, Aug. 11, 1997, at 8 (“cancelbots are programs that automatically delete Usenet postings by forging cancel messages in the name of the authors. Normally, they are used to delete postings by known spammers. . . .”).

[20] 47 U.S.C. § 230(e)(2).


Digital Literacy, a Problem for Americans of All Ages and Experiences

Justice Shannon, MJLST Staffer

According to the American Library Association, “digital literacy” is “the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills.” The term has existed since 1997, when Paul Gilster coined it, defining digital literacy as “the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers.” In this way, the definition of digital literacy has broadened from how a person absorbs digital information to how one develops, absorbs, and critiques digital information.

The Covid-19 pandemic taught Americans of all ages the value of digital literacy. Elderly populations were forced online without prior training due to the health risks presented by Covid-19, and digitally illiterate parents were unable to help their children with classes.

Separate from Covid-19, the rise of cryptocurrency has created a need for digital literacy in spaces that are not federally regulated.

Elderly

The Covid-19 pandemic did not create the need for digital literacy training for the elderly, but it highlighted a national need to address digital literacy among America’s oldest population. Elderly family members quarantined during the pandemic were quickly separated from their families. Teaching family members how to use Zoom and Facebook Messenger became a substitute for some, but not all, forms of connectivity. However, teaching an elderly family member how to use Facebook Messenger to speak to loved ones does not enable them to communicate with peers or teach them other digital literacy skills.

To address digital literacy issues within the elderly population, states have approved Senior Citizen Technology grants. Pennsylvania’s Department of Aging has granted funds to adult education centers to provide technology training for senior citizens. Programs like this have been developing throughout the nation. For example, Prince George’s Community College in Maryland uses state funds to teach technology skills to its older population.

It is difficult to tell if these programs are working. States like Pennsylvania and Maryland had programs before the pandemic. Still, these programs alone did not reduce the distance between America’s aging population and the rest of the nation during the pandemic. However, when looking at the scale of the program in Prince George’s County, this likely was not the goal. Beyond that, there is a larger question: Is the purpose of digital literacy for the elderly to ensure that they can connect with the world during a pandemic, or is the goal simply ensuring that the elderly have the skills to communicate with the world? With this in mind, programs that predate the pandemic, such as the programs in Pennsylvania and Maryland, likely had the right approach even if they weren’t of a large enough scale to ensure digital literacy for the entirety of our elderly population.

Parents

The pandemic highlighted a similar problem for many American families. While state, federal, and local governments stepped up to provide laptops and access to the internet, many families still struggled to get their children into online classes; this is an issue of what is known as “last mile infrastructure.” During the pandemic, the nation quickly provided families with access to the internet without ensuring they were ready to navigate it, leaving families feeling ill-prepared to support their children’s educational growth from home. Providing families with access to broadband without digital literacy training disproportionately impacted families of color by limiting their children’s capacity for growth online compared to their peers. While this wasn’t an intended result, it is a result of hasty bureaucracy in response to a national emergency. Nationally, the 2022 Workforce Innovation Opportunity Act aims to address digital literacy issues among adults by increasing funding for teaching workplace technology skills to working adults. However, this will not ensure that American parents can manage their children’s technological needs.

Crypto

Separate from issues created by Covid-19 is cryptocurrency. One of the largest selling points of cryptocurrency is that it is largely unregulated, and users see it as “digital gold, free from hyper-inflation.” While these claims can be valid, consumers frequently are not aware of the risks of cryptocurrency. Last year the Chair of the SEC called cryptocurrencies “the wild west of finance rife with fraud, scams, and abuse.” This year the Department of the Treasury announced that it would release instructional materials to explain how cryptocurrencies work. While this will not directly regulate cryptocurrencies, providing Americans with more tools to understand them may help reduce cryptocurrency scams.

Conclusion

Digital literacy was a problem for years before the Covid-19 pandemic, and as new technologies become popular, there are new lessons for all age groups to learn. Covid-19 appropriately shined a light on the need to address digital literacy issues within our borders. However, if we only go so far as to get Americans networked and prepared for the next national emergency, we’ll find that there are still disparities between those who excel online and those who are ill-equipped to use the internet to connect with family, educate their kids, and participate in e-commerce.


Extending Trademark Protections to the Metaverse

Alex O’Connor, MJLST Staffer

After a 2020 bankruptcy and steadily decreasing revenue that it attributes to the coronavirus pandemic, restaurant and arcade chain Chuck E. Cheese is hoping to revitalize its business model by entering a pandemic-proof virtual world: the metaverse. In February, Chuck E. Cheese filed two intent-to-use trademark applications with the USPTO, for the marks “CHUCK E. VERSE” and “CHUCK E. CHEESE METAVERSE.”

Under Section 1 of the Lanham Act, the two most common types of applications for registration of a mark on the Principal Register are (1) a use-based application, for which the applicant must have used the mark in commerce, and (2) an intent-to-use (ITU) application, for which the applicant must possess a bona fide intent to use the mark in trade in the near future. Chuck E. Cheese has filed ITU applications for its two marks.

The metaverse is a still-developing virtual and immersive world that will be inhabited by digital representations of people, places, and things. Its appeal lies in the possibility of living a parallel, virtual life. The pandemic has provoked a wave of investment in virtual technologies, and brands are hurrying to extend protection to virtual renditions of their marks by registering specifically for the metaverse. A series of lawsuits over allegedly infringing uses of registered marks via this still-developing technology has spooked mark holders into taking preemptive action. In the face of this uncertainty, the USPTO could provide mark holders with a measure of predictability by extending analogue protections of marks used in commerce to substantially similar virtual renditions.

Most notably, Hermes International S.A. sued the artist Mason Rothschild for both infringement and dilution over the use of the term “METABIRKINS” for his collection of non-fungible tokens (NFTs). Hermes alleges that the NFTs are confusing customers about the source of the digital artwork and diluting the distinctive quality of Hermes’ popular line of handbags. The argument continues that “META” is merely a generic term, so that “METABIRKINS” simply means “BIRKINS in the metaverse,” and that Rothschild’s use of the mark constitutes trading on Hermes’ reputation as a brand.

Many companies and individuals are rushing to the USPTO to register trademarks for their brands to use in virtual reality. Household names such as McDonald’s (“MCCAFE” for a virtual restaurant featuring actual and virtual goods), Panera Bread (“PANERAVERSE” for virtual food and beverage items), and others have recently filed applications for registration with the USPTO for virtual marks. The rush of filings signals a recognition among companies that the digital marketplace presents countless opportunities for them to expand their brand awareness, or, if they’re not careful, for trademark copycats to trade on their hard-earned goodwill among consumers.

Luckily for Chuck E. Cheese and other companies that seek to extend their brands into the metaverse, trademark protection in the metaverse is governed by the same rules as regular analogue trademark protection. That is, the mark the company is seeking to protect must be distinctive, it must be used in commerce, and it must not be covered by a statutory bar to protection. For example, if a mark’s exclusive use by one firm would leave other firms at a significant non-reputation-related disadvantage, the mark is said to be functional, and it cannot be protected. The metaverse does not present any additional obstacles to trademark protection, so as long as Chuck E. Cheese eventually uses its two marks, it will enjoy their exclusive use among consumers in the metaverse.

However, the relationship between new virtual marks and analogue marks is a subject of some uncertainty. Most notably, should a mark find broad success and achieve fame in the metaverse, would that virtual fame confer fame in the real world? What will trademark expansion into the metaverse mean for licensing agreements? Clarification from the USPTO could help put mark holders at ease as they venture into the virtual market. 

Additionally, trademarks in the metaverse present another venue in which trademark trolls can attempt to register an already well-known mark with no actual intent to use it, although the requirement under U.S. law that mark holders either use or possess a bona fide intent to use the mark can help mitigate this problem. Finally, observers contend that the expansion of commerce into the virtual marketplace will present opportunities for copycats to exploit marks. Already, third parties are seeking to register marks for virtual renditions of existing brands. In response, trademark lawyers are encouraging their clients to register their virtual marks as quickly as possible to head off any potential copycat users. The USPTO could ensure brands’ security by providing more robust protections to virtual trademarks based on a substantially similar, already registered analogue trademark.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report it to the police and hold the criminal accountable. When someone is wronged, they can seek retribution in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general understanding that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those involved in the legal profession to imagine how to apply the law to a technology that is not yet fully developed, but Congress and other law-making bodies will need to consider how they can regulate the use of the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in the form of video games. For virtual reality, think about the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As the venture capitalist Matthew Ball has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.

The Metaverse will allow people to do activities that they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not currently exist, as there is no single virtual reality world that all can access, there are some examples that come close to what experts imagine the Metaverse to look like. The game, Second Life, is a simulation that allows users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in the stages of development. However, there are many popular culture references to the concepts involved in the Metaverse, such as Ready Player One and Snow Crash, a novel written by Neal Stephenson. Many people are excited about the possibilities that the Metaverse will bring in the future, such as creating new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns that need to be addressed.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not yet fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This horrifying experience showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so life-like that a person assaulted in a virtual world feels as if they actually experienced the assault in real life. This should be raising red flags. The problem, however, arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in a virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this will need to be addressed, as there need to be laws that prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse because users can mask their identity and remain anonymous, so it could be difficult to figure out who committed certain prohibited acts. At the moment, some virtual realities have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. However, the problem remains how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place in order to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that, per TikTok’s terms of service, users must be at least 13 to use the platform and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”


Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice notes that teenagers would no longer be able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate, algorithm-free platforms for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have also noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.


What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.


Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its impact on users under 18, who would no longer be able to use the optimized, personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


The Uniform Domain Name Dispute Resolution Policy (“UDRP”): Not a Trademark Court but a Narrow Administrative Procedure Against Abusive Registrations

Thao Nguyen, MJLST Staffer

Anyone can register a domain name through one of thousands of registrars, on a first-come, first-served basis and at low cost. This ease of entry has created so-called "cybersquatters," who register domain names that reflect trademarks before the true trademark owners are able to do so. Cybersquatters typically aim to profit from their squatting, whether by selling the domain names back to the trademark holders at a higher price, by generating confusion in order to trade on the trademark's goodwill, or by diluting the trademark and disrupting a competitor's business. A single cybersquatter can squat on several thousand domain names that incorporate well-known trademarks.

To combat this abuse, ICANN adopted the Uniform Domain Name Dispute Resolution Policy ("UDRP"), an administrative procedure that ICANN-accredited registrars incorporate into their registration agreements. Paragraph 4(a) of the UDRP provides that the complainant must successfully establish all three of the following elements: (i) that the disputed domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; (ii) that the registrant has no rights or legitimate interests in respect of the domain name; and (iii) that the registrant registered and is using the domain name in bad faith. Remedies for a successful complainant are limited to cancellation of the disputed domain name or its transfer to the complainant.
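
Because paragraph 4(a)'s elements are conjunctive, failure on any single element defeats the complaint regardless of how strong the others are. A minimal schematic of that structure (an illustration of the Policy's logic, not of any panel's actual analysis):

```python
def udrp_complaint_succeeds(confusingly_similar: bool,
                            no_rights_or_legitimate_interests: bool,
                            bad_faith_registration_and_use: bool) -> bool:
    """Paragraph 4(a) is conjunctive: the complainant must establish all
    three elements, and a single failure defeats the complaint."""
    return (confusingly_similar
            and no_rights_or_legitimate_interests
            and bad_faith_registration_and_use)

# Even a domain name identical to the mark fails if the respondent has a
# legitimate interest in it (element (ii) not established):
print(udrp_complaint_succeeds(True, False, True))  # False
```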

Although prized for being focused, expedient, and inexpensive, the UDRP is not without criticism, the bulk of which focuses on fairness. The frequent charge is that the UDRP is inherently biased in favor of trademark owners and against domain name holders, not all of whom are "cybersquatters." Statistics seem to bear this out: 75% to 90% of UDRP decisions each year are decided against the domain name owner.

Nonetheless, the asymmetry of outcomes, rather than being a sign of an unfair arbitration process, may simply reflect the reality that most UDRP complaints are brought when there is a clear case of abuse, and most respondents are true cybersquatters who knowingly and willfully violated the Policy. What may appear to be the UDRP's shortcomings are in fact signs that the UDRP is fulfilling its primary purpose. Furthermore, to appreciate the UDRP proceeding and understand an asymmetry that would normally raise red flags in an adjudication, one must understand that the UDRP is not meant to resolve trademark disputes. A representative case addressing this purpose is Cameron & Company, Inc. v. Patrick Dudley, FA1811001818217 (FORUM Dec. 26, 2018), where the Panel wrote, "cases involving disputes regarding trademark rights and usage, trademark infringement, unfair competition, deceptive trade practices and related U.S. law issues are beyond the scope of the Panel's limited jurisdiction under the Policy." In other words, the UDRP's scope is limited to detecting and reversing the damage of cybersquatting, and the administrative dispute-resolution procedure is streamlined for this purpose.[1]

That the UDRP is not a trademark court is evident in its refusal to handle cases where multiple legitimate complainants assert rights to a single domain name registered by a cybersquatter. UDRP Rule 3(a) states: "Any person or entity may initiate an administrative proceeding by submitting a complaint." The Forum's Supplemental Rule 1(e) defines "The Party Initiating a Complaint Concerning a Domain Name Registration" as a "single person or entity claiming to have rights in the domain name, or multiple persons or entities who have a sufficient nexus who can each claim to have rights to all domain names listed in the Complaint." UDRP cases with two or more complainants in a proceeding are possible only when the complainants are so affiliated with each other as to share rights in a single trademark,[2] for example, when the complainant is assigned rights to a trademark registered by another entity,[3] or when the complainant has a subsidiary relationship with the trademark registrant.[4]

Since the UDRP does not resolve good-faith trademark disputes but intervenes only when there is clear abuse, the respondent's bad faith is central: a domain name may be confusingly similar or even identical to a trademark, and yet the complainant cannot prevail if the respondent has rights or legitimate interests in the domain name and/or did not register and use it in bad faith.[5] For this reason, the UDRP sets a high standard for establishing the respondent's bad faith. For example, the UDRP provides a defense if the registrant has made demonstrable preparations to use the domain name in a bona fide offering of goods or services, whereas the Anticybersquatting Consumer Protection Act ("ACPA") provides a defense only for prior good-faith use of the domain name, not mere preparation to use it. Another distinction is that the UDRP requires the complainant to prove bad faith in both registration and use of the disputed domain name, whereas the ACPA requires bad faith in either registration or use, as the comparison sketched below illustrates.
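
Stripped to its logical core, the distinction is one of conjunctive versus disjunctive elements. The following sketch is only a schematic simplification (both regimes involve further factors and defenses), but it captures the difference:

```python
def udrp_bad_faith(bad_faith_registration: bool, bad_faith_use: bool) -> bool:
    # UDRP: bad faith must be shown in BOTH registration AND use.
    return bad_faith_registration and bad_faith_use

def acpa_bad_faith(bad_faith_registration: bool, bad_faith_use: bool) -> bool:
    # ACPA: bad faith in EITHER registration OR use suffices.
    return bad_faith_registration or bad_faith_use

# A domain name registered innocently but later put to bad-faith use
# fails the UDRP's element yet can still be actionable under the ACPA:
print(udrp_bad_faith(False, True))  # False
print(acpa_bad_faith(False, True))  # True
```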

Such a high standard for bad faith indicates that the UDRP is not equipped to resolve disputes in which both parties assert legitimate rights in the trademark. In fact, when abuse is nonexistent or not obvious, a UDRP panel will refuse to transfer the disputed domain name from the respondent to the complainant.[6] Instead, the parties must resolve their claims in the regular courts, under either the ACPA or the Lanham Act. Limiting itself to cybersquatting allows the UDRP to be extremely efficient at combating this widespread and highly damaging abuse of the Internet age, and that efficiency and ease are appreciated by trademark-owning businesses and individuals, who prefer that disputes be handled promptly and economically. From the UDRP's creation until now, ICANN has shown no intention of reforming the Policy despite the existing criticisms,[7] and for good reason.


[Notes]

[1] Gerald M. Levine, Domain Name Arbitration: Trademarks, Domain Names, and Cybersquatting at 102 (2019).

[2] Tasty Baking, Co. & Tastykake Invs., Inc. v. Quality Hosting, FA 208854 (FORUM Dec. 28, 2003) (treating the two complainants as a single entity where both parties held rights in trademarks contained within the disputed domain names.)

[3] Golden Door Properties, LLC v. Golden Beauty / goldendoorsalon, FA 1668748 (FORUM May 7, 2016) (finding rights in the GOLDEN DOOR mark where Complainant provided evidence of assignment of the mark, naming Complainant as assignee); Remithome Corp v. Pupalla, FA 1124302 (FORUM Feb. 21, 2008) (finding the complainant held the trademark rights to the federally registered mark REMITHOME, by virtue of an assignment); Stevenson v. Crossley, FA 1028240 (FORUM Aug. 22, 2007) (“Per the annexed U.S.P.T.O. certificates of registration, assignments and license agreement executed on May 30, 1997, Complainants have shown that they have rights in the MOLD-IN GRAPHIC/MOLD-IN GRAPHICS trademarks, whether as trademark holder, or as a licensee. The Panel concludes that Complainants have established rights to the MOLD-IN GRAPHIC SYSTEMS mark pursuant to Policy ¶ 4(a)(i).”)

[4] Provide Commerce, Inc v Amador Holdings Corp / Alex Arrocha, FA 1529347 (FORUM Jan. 3, 2014) (finding that the complainant shared rights in a mark through its subsidiary relationship with the trademark holder); Toyota Motor Sales, U.S.A., Inc. v. Indian Springs Motor, FA 157289 (FORUM June 23, 2003) (“Complainant has established that it has rights in the TOYOTA and LEXUS marks through TMC’s registration with the USPTO and Complainant’s subsidiary relationship with TMC.”)

[5] Levine, supra note 1, at 99; see e.g., Dr. Alan Y. Chow, d/b/a Optobionics v. janez bobnik, FA2110001967817 (FORUM Nov. 23, 2021) (refusing to transfer the <optobionics.com> domain name despite its being identical to Complainant’s OPTOBIONICS mark and formerly owned by Complainant, since “[t]he Panel finds no evidence in the Complainant’s submissions . . . [that] the Respondent a) does not have a legitimate interest in the domain name and b) registered and used the domain name in bad faith.”).

[6] Swisher International, Inc. v. Hempire State Smoke Shop, FA2106001952939 (FORUM July 27, 2021).

[7] Levine, supra note 1, at 359.


Counter Logic Broadband

Justice C. Shannon, MJLST Staffer

In 2015, Zaqueri "Aphromoo" Black won his first North American League of Legends Championship Series ("LCS") title playing support for Counter Logic Gaming. Since 2013, at least forty players have filled the starting lineups of the eight to ten LCS teams. Aphromoo is the only African American player to win an LCS MVP award. Aphromoo is the only African American player to win multiple LCS finals. Aphromoo is the only African American player to win a single LCS final. Aphromoo is the only African American player to make it to an LCS final. Aphromoo is the only African American player to participate in the LCS playoffs. Indeed, Aphromoo is the only African American player to have a starting role on an LCS team. Why? At least in part, because of the digital divide.

More than a quarter of African Americans do not have broadband, and nearly 40% of African Americans in the rural South do not. A quarter of the Latinx population likewise lacks broadband. These discrepancies mean that fewer African American and Latinx gamers can play online video games like League of Legends. Okay, but if the digital divide only affected esports, why should the nation care? The divide seen in esports also runs through the American educational system: more than 15% of American households lacked broadband at the start of the pandemic, and the gap was more pronounced in African American and Latinx households. These statistics demonstrate a national need to address the digital divide for entertainment purposes and, more importantly, for educational purposes. So what are some legal solutions to the digital divide? Municipal internet, subsidies, and low-income broadband laws.

Municipal Internet

Municipal broadband is not a new concept, but it has recently been seen as a way to help address the digital divide. While the up-front cost to a city may be substantial, the long-term advantages can be significant. Highland, IL, and other communities across the United States provide high-speed internet for as low as $35 a month, and cities offering low-cost municipal broadband frequently have competitive prices for gigabit speeds as well. The most significant downside to this solution is that these cities are frequently rural and serve small populations. In addition, when municipalities attempt to provide broadband outside their borders, state laws preempt them in order to protect ISPs. ISPs lobby for such laws on the theory that they are necessary to prevent unfair competition; that fear of unfair competition, however, keeps communities from getting connected.

To avoid the preemption issue, some cities established narrow versions of municipal broadband during the pandemic, providing free connectivity in heavily populated communities. Chattanooga, Tennessee, for example, offered free broadband to low-income students during the pandemic. If these solutions stay in place, they will set an industry precedent for providing broadband to low-income communities.

Subsidies

The Emergency Broadband Benefit provides up to $50 per month toward broadband services for eligible households and $75 a month for households on tribal lands. To qualify for the program, a household must meet one of five standards. Congress created the program to help low-income households stay connected during the pandemic and allocated $3.2 billion to the FCC to fund the discount. The benefit also includes a one-time device discount of up to $100, so that recipients not only have broadband but also the tools to use it. The advantage of this subsidy is that it directly addresses low-income recipients' inability to afford broadband, immediately reaching the more than 15% of American households without it.
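
A back-of-envelope calculation using the program's own figures suggests how far the appropriation stretches. This is a rough estimate that ignores enrollment rates, device take-up, and administrative costs:

```python
APPROPRIATION = 3.2e9    # dollars Congress allocated to the FCC
MONTHLY_BENEFIT = 50     # standard household benefit, dollars per month
DEVICE_DISCOUNT = 100    # one-time device discount, dollars

# If every enrolled household drew the full monthly benefit for a year
# plus the device discount:
cost_per_household_year = 12 * MONTHLY_BENEFIT + DEVICE_DISCOUNT  # $700
households_funded = APPROPRIATION / cost_per_household_year
print(f"{households_funded:,.0f} households for one year")  # ~4.6 million
```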

The downside of this solution is that, to qualify, a recipient must share income information on an unfamiliar webpage, which can feel invasive. Further, the plan does not permanently address the cost of broadband; once it ends, the same Americans who could not afford broadband before may again lose access to the internet. Additionally, when the average cost of a laptop in America is $700, a $100 discount does not do much to ensure that users can actually benefit from their new broadband connection. If the goal is for users to attend classes, complete homework assignments, and perhaps play esports on the side, even a lower-cost tablet ($350 on average) far exceeds the $100 discount, so the hardware barrier remains.

However, a program like this could be a reasonable start if things continue in the right direction. A fair price for broadband is $60 a month; applying the $50 benefit would bring a recipient's out-of-pocket cost down to $10 for competitive speeds and reliability. That could be a powerful tool for eliminating the digital divide, so long as the subsidy persists after the pandemic.

Low-Income Broadband Laws

Low-cost broadband laws would require internet service providers to offer broadband plans to low-income recipients at a low price. This approach directly addresses Americans who have physical access to broadband but cannot pay for it, thus helping bridge the digital divide. New York's proposed Affordable Broadband Act, for example, would require all internet service providers serving more than 20,000 households to provide two low-cost plans to qualifying (low-income) customers. However, New York's law was stymied by ISPs arguing that it is an illegal way to close the digital divide because states are preempted from rate regulation of broadband by the Federal Communications Commission.

The ISPs argued that the Affordable Broadband Act operated within the field of interstate commerce and was thus likely preempted by the Communications Act of 1934. Because broadband is almost always interstate commerce, other state laws similar to New York's Affordable Broadband Act would probably run into the same problem. Thus, a low-income broadband law would likely need to come from the federal level to avoid these roadblocks.

The Future of Broadband and the Digital Divide

An overlapping theme among these solutions is that they were implemented during the pandemic, which raises the question: are they short-term responses to an unexpected, life-changing event, or rational long-term solutions to problems that long predate it? If cities, states, and the nation stay the course and implement more low-cost broadband solutions such as municipal internet, subsidies, and low-income broadband laws, it will be possible to close the digital divide. However, if jurisdictions treat these solutions as short-term stopgaps, communities that cannot afford traditional broadband will again lose access. Students will again go to McDonald's to do homework assignments, and Aphromoo may continue to be the only African American LCS starter.