
Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article on the bill published by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

 

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concerns that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice notes that teenagers would no longer be able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate platforms with no algorithmic functions for those under 18. Additionally, in order to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents have also noted that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.

 

What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.

 

Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


Social Media Influencers Ask What “Intellectual Property” Means

Henry Killen, MJLST Staffer

Today, just about anyone can name their favorite social media influencer. The most popular influencers are athletes, musicians, politicians, entrepreneurs, or models. Ultra-famous influencers, such as Kylie Jenner, can charge over 1 million dollars for a single post featuring a company’s product. So what are the risks of being an influencer? TikTok star Charli D’Amelio has been on both sides of intellectual property disputes. A photo of Charli was included in media mogul Sheeraz Hasan’s video promoting his ability to “make anyone famous.” The video featured many other celebrities, such as Logan Paul and Zendaya. Charli’s legal team sent a cease-and-desist letter to Sheeraz demanding that her portion of the promotional video be scrubbed. Her lawyers assert that her presence in the promo “is not approved and will not be approved.” Charli has also been on the other side of celebrity intellectual property issues. The star published her first book in December and has come under fire from photographer Jake Doolittle for allegedly using photos he took without his permission. Though no lawsuit has been filed, Jake posted a series of Instagram posts blaming Charli’s team for not compensating him for his work.

Charli’s controversies highlight a bigger question society is facing: is content shared on social media platforms considered intellectual property? A good place to begin is figuring out what exactly intellectual property is. Intellectual property “refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names, and images used in commerce.” Social media platforms make it possible to access endless displays of content – from images to ideas – creating a cultural norm of sharing many aspects of life. Legal teams at the major social media platforms already have policies in place that make it against the rules to take images from a social media feed and use them as one’s own. For example, bloggers may not be aware that what they write may already be trademarked or copyrighted, or that the images they pull off the internet for their posts may not be freely reposted. Influencers get reposted on sites like Instagram all the time, and not just by loyal fans. These reposts may seem harmless to many influencers, but it is actually against Instagram’s policy to repost a photo without the creator’s consent. This may not seem like a big deal – what influencer doesn’t want more attention? However, sometimes influencers’ work gets taken and then becomes a sensation. A group of BIPOC TikTok users are fighting to copyright a dance they created that eventually became one of the biggest dances in TikTok history. A key fact in their case is that the dance only became wildly popular after the most famous TikTok users began doing it.

There are few examples of social media copyright issues being litigated, but in August 2021, a Manhattan federal judge ruled that the practice of embedding social media posts on third-party websites, without permission from the content owner, could violate the owner’s copyright. In reaching this decision, the judge rejected the “server test” from the 9th Circuit, which holds that embedding content from a third party’s social media account only violates the content owner’s copyright if a copy is stored on the defendant’s servers. The general copyright laws enacted by Congress lay out four considerations when deciding if a work should be granted copyright protection: originality, fixation, idea versus expression, and functionality. These considerations notably leave a gray area in determining if dances or expressions on social media sites can be copyrighted. Congress should enact a more comprehensive law to better address intellectual property as it relates to social media.


The Uniform Domain Name Dispute Resolution Policy (“UDRP”): Not a Trademark Court but a Narrow Administrative Procedure Against Abusive Registrations

Thao Nguyen, MJLST Staffer

Anyone can register a domain name through one of the thousands of registrars on a first-come, first-served basis at a low cost. The ease of entry has created so-called “cybersquatters,” who register for domain names that reflect trademarks before the true trademark owners are able to do so. Cybersquatters often aim to profit from cybersquatting activities, either by selling the domain names back to the trademark holders for a higher price, by generating confusion in order to take advantage of the trademark’s goodwill, or by diluting the trademark and disrupting the business of a competitor. A single cybersquatter can cybersquat on several thousand domain names that incorporate well-known trademarks.

Paragraph 4(a) of the UDRP provides that the complainant must successfully establish all three of the following elements: (i) that the disputed domain name is identical or confusingly similar to a trademark or service mark in which the complainant has rights; (ii) that the registrant has no rights or legitimate interests in respect of the domain name; and (iii) that the registrant registered and is using the domain name in bad faith. Remedies for a successful complainant include cancellation or transfer to the complainant of the disputed domain name.

Although prized for being focused, expedient, and inexpensive, the UDRP is not without criticism, the bulk of which focuses on the issue of fairness. The frequent charge is that the UDRP is inherently biased in favor of trademark owners and against domain name holders, not all of whom are “cybersquatters.” This bias is suggested by statistics: 75% to 90% of UDRP decisions each year are decided against the domain name owner.

Nonetheless, the asymmetry of outcomes, rather than being a sign of an unfair arbitration process, may simply reflect the reality that most UDRP complaints are brought when there is a clear case of abuse, and most respondents in the proceeding are true cybersquatters who knowingly and willfully violated the UDRP. Therefore, what may appear to be the UDRP’s shortcomings are in fact signs that the UDRP is fulfilling its primary purpose. Furthermore, to appreciate the UDRP proceeding and understand the asymmetry that might normally raise red flags in an adjudication, one must understand that the UDRP is not meant to resolve trademark disputes. A representative case addressing this purpose is Cameron & Company, Inc. v. Patrick Dudley, FA1811001818217 (FORUM Dec. 26, 2018), where the Panel wrote, “cases involving disputes regarding trademark rights and usage, trademark infringement, unfair competition, deceptive trade practices and related U.S. law issues are beyond the scope of the Panel’s limited jurisdiction under the Policy.” In other words, the UDRP’s scope is limited to detecting and reversing the damages of cybersquatting, and the administrative dispute-resolution procedure is streamlined for this purpose.[1]

That the UDRP is not a trademark court is evident in its refusal to handle cases where multiple legitimate complainants assert rights to a single domain name registered by a cybersquatter. UDRP Rule 3(a) states: “Any person or entity may initiate an administrative proceeding by submitting a complaint.” The Forum’s Supplemental Rule 1(e) defines “The Party Initiating a Complaint Concerning a Domain Name Registration” as a “single person or entity claiming to have rights in the domain name, or multiple persons or entities who have a sufficient nexus who can each claim to have rights to all domain names listed in the Complaint.” UDRP cases with two or more complainants in a proceeding are possible only when the complainants are affiliated with each other so as to share a single license to a trademark,[2] for example, when the complainant is assigned rights to a trademark registered by another entity,[3] or when the complainant has a subsidiary relationship with the trademark registrant.[4]

Since the UDRP does not resolve a good faith trademark dispute but intervenes only when there is clear abuse, the respondent’s bad faith is central: a domain name may be confusingly similar or even identical to a trademark, and yet a complainant cannot prevail if the respondent has rights and legitimate interests in the domain name and/or did not register and use the domain name in bad faith.[5] For this reason, the UDRP sets a high standard for the complainant to establish respondent’s bad faith. For example, UDRP provides a defense if the domain name registrant has made demonstrable preparations to use the domain name in a bona fide offering of goods or services. On the other hand, the Anticybersquatting Consumer Protection Act (“ACPA”) only provides a defense if there is prior good faith use of the domain name, not simply preparation to use. Another distinction between the UDRP and the ACPA is that the UDRP requires that complainant prove bad faith in both registration and use of the disputed domain to prevail, whereas the ACPA only requires complainant to prove bad faith in either registration or use.

Such a high standard for bad faith indicates that the UDRP is not equipped to resolve issues where both parties dispute their respective rights in the trademark. In fact, when abuse is non-existent or not obvious, the UDRP Panel will refuse to transfer the disputed domain name from the respondent to the complainant.[6] Instead, the parties would need to resolve these claims in regular courts under either the ACPA or the Lanham Act. Limiting itself to addressing cybersquatting allows the UDRP to be extremely efficient in dealing with cybersquatting practices, a widespread and highly damaging abuse of the Internet age. This efficiency and ease of the UDRP process is appreciated by trademark-owning businesses and individuals, who prefer that disputes be handled promptly and economically. From the time of the UDRP’s creation until now, ICANN has shown no intention of reforming the Policy despite existing criticisms,[7] and for good reasons.

 

[Notes]

[1] Gerald M. Levine, Domain Name Arbitration: Trademarks, Domain Names, and Cybersquatting at 102 (2019).

[2] Tasty Baking, Co. & Tastykake Invs., Inc. v. Quality Hosting, FA 208854 (FORUM Dec. 28, 2003) (treating the two complainants as a single entity where both parties held rights in trademarks contained within the disputed domain names.)

[3] Golden Door Properties, LLC v. Golden Beauty / goldendoorsalon, FA 1668748 (FORUM May 7, 2016) (finding rights in the GOLDEN DOOR mark where Complainant provided evidence of assignment of the mark, naming Complainant as assignee); Remithome Corp v. Pupalla, FA 1124302 (FORUM Feb. 21, 2008) (finding the complainant held the trademark rights to the federally registered mark REMITHOME, by virtue of an assignment); Stevenson v. Crossley, FA 1028240 (FORUM Aug. 22, 2007) (“Per the annexed U.S.P.T.O. certificates of registration, assignments and license agreement executed on May 30, 1997, Complainants have shown that they have rights in the MOLD-IN GRAPHIC/MOLD-IN GRAPHICS trademarks, whether as trademark holder, or as a licensee. The Panel concludes that Complainants have established rights to the MOLD-IN GRAPHIC SYSTEMS mark pursuant to Policy ¶ 4(a)(i).”)

[4] Provide Commerce, Inc v Amador Holdings Corp / Alex Arrocha, FA 1529347 (FORUM Jan. 3, 2014) (finding that the complainant shared rights in a mark through its subsidiary relationship with the trademark holder); Toyota Motor Sales, U.S.A., Inc. v. Indian Springs Motor, FA 157289 (FORUM June 23, 2003) (“Complainant has established that it has rights in the TOYOTA and LEXUS marks through TMC’s registration with the USPTO and Complainant’s subsidiary relationship with TMC.”)

[5] Levine, supra note 1, at 99; see e.g., Dr. Alan Y. Chow, d/b/a Optobionics v. janez bobnik, FA2110001967817 (FORUM Nov. 23, 2021) (refusing to transfer the <optobionics.com> domain name despite its being identical to Complainant’s OPTOBIONICS mark and formerly owned by Complainant, since “[t]he Panel finds no evidence in the Complainant’s submissions . . . [that] the Respondent a) does not have a legitimate interest in the domain name and b) registered and used the domain name in bad faith.”).

[6] Swisher International, Inc. v. Hempire State Smoke Shop, FA2106001952939 (FORUM July 27, 2021).

[7] Id. at 359.


TikTok Settles in Class Action Data Privacy Lawsuit – Will Pay $92 Million Settlement

Sarah Nelson, MJLST Staffer

On November 15, 2021, TikTok users received the following notification within the app: “Class Action Settlement Notice: U.S. residents who used Tik Tok before 01 OCT 2021 may be eligible for a class settlement payment – visit https://www.TikTokDataPrivacySettlement.com for details.” The notification was immediately met with skepticism, with users taking to Twitter and TikTok itself to joke about how the notification was likely a scam. However, for those familiar with TikTok’s litigation track record on data privacy, this settlement does not come as a surprise. Specifically, in 2019, TikTok – then known as Musical.ly – settled with the Federal Trade Commission over alleged violations of the Children’s Online Privacy Protection Act for $5.7 million. This new settlement is notable for the size of the payout and for what it tells us about the current state of data privacy and biometric data law in the United States.

Allegations in the Class Action

Twenty-one federal lawsuits against TikTok were consolidated into one class action to be overseen by the United States District Court for the Northern District of Illinois. All of the named plaintiffs in the class action are from either Illinois or California, and many are minors. The class action comprises two classes – one covers TikTok users nationwide and the other only includes TikTok users who are residents of Illinois.

In the suit, plaintiffs allege TikTok improperly used their personal data. This improper use includes accusations that TikTok, without consent, shared consumer data with third parties. These third parties allegedly include companies based in China, as well as well-known companies in the United States like Google and Facebook. The class action also accuses TikTok of unlawfully using facial recognition technology and of harvesting data from draft videos – videos that users made but never officially posted. Finally, plaintiffs allege TikTok actively took steps to conceal these practices.

What State and Federal Laws Were Allegedly Violated?

On the federal law level, plaintiffs allege TikTok violated the Computer Fraud and Abuse Act (CFAA) and the Video Privacy Protection Act (VPPA). As the name suggests, the CFAA was enacted to combat computer fraud and prohibits accessing “protected computers” in the absence of authorization or beyond the scope of authorization. Here, the plaintiff-users allege TikTok went beyond the scope of authorization by secretly transmitting personal data, “including User/Device Identifiers, biometric identifiers and information, and Private Videos and Private Video Images never intended for public consumption.” As for the VPPA, the count alleges the Act was violated when TikTok gave “personally identifiable information” to Facebook and Google. TikTok allegedly provided Facebook and Google with information about what videos a TikTok user had watched and liked, and what TikTok content creators a user had followed.

On the state level, the entire class alleged violations of the California Comprehensive Computer Data Access and Fraud Act and a violation of the right to privacy under the California Constitution. Interestingly, the plaintiffs within the Illinois subclass were able to allege violations under the Biometric Information Privacy Act (BIPA). Under BIPA, before collecting user biometric information, companies must inform the consumer in writing that the information is being collected and why. The company must also say how long the information will be stored and get the consumer to sign off on the collection. The complaint alleges TikTok did not provide the required notice or receive the required written consent.

Additionally, plaintiffs allege intrusion upon seclusion, unjust enrichment, and violation of both a California unfair competition law and a California false advertising law.

In settling the class action, TikTok denies any wrongdoing and maintains that this settlement is only to avoid the cost of further litigation. TikTok gave the following statement to the outlet Insider: “While we disagree with the assertions, we are pleased to have reached a settlement agreement that allows us to move forward and continue building a safe and joyful experience for the TikTok community.”

Terms of the Settlement

To be eligible for a settlement payment, a TikTok user must be a United States resident and must have used the app prior to October of 2021. If an individual meets these criteria, they must submit a claim before March 1, 2022. An estimated 89 million users are eligible to receive payment. However, members of the Illinois subclass are eligible to receive six shares of the settlement, as compared to the one share the nationwide class is eligible for. This difference is due to the added protection the Illinois subclass has from BIPA.

In addition to the payout, the settlement will require TikTok to revise its practices. Under the agreed upon settlement reforms, TikTok will no longer mine data from draft videos, collect user biometric data unless specified in the user agreement, or use GPS data to track user location unless specified in the user agreement. TikTok also said they would no longer send or store user data outside of the United States.

All of the above settlement terms are subject to final approval by the U.S. District Judge.

Conclusion

The lawyers representing TikTok users remarked that this settlement was “among the largest privacy-related payouts in history.” And, as noted by NPR, this settlement is similar to the one agreed to by Facebook in 2020 for $650 million. It is possible the size of these settlements will push technology companies to preemptively search out and cease practices that may violate privacy.

It is also worth noting the added protection extended to residents of Illinois because of BIPA and its private right of action, which can be utilized even where there has not been a data breach.

Users of the TikTok app often muse about how amazingly curated their “For You Page” – the videos that appear when you open the app and scroll without doing any particular search – seems to be. For this reason, even with potential privacy concerns, the app is hard to give up. Hopefully, users can rest a bit easier now knowing TikTok has agreed to the settlement reforms.


Counter Logic Broadband

Justice C. Shannon, MJLST Staffer

In 2015, Zaqueri “Aphromoo” Black won his first North American League of Legends Championship Series (“LCS”) title playing support for Counter Logic Gaming. Since 2013, at least forty players have made the starting lineups of the league’s eight to ten LCS teams. Aphromoo is the only African American to win an LCS MVP. Aphromoo is the only African American player to win multiple LCS finals. Aphromoo is the only African American player to win a single LCS final. Aphromoo is the only African American player to make it to an LCS final. Aphromoo is the only African American player to participate in LCS playoffs. Indeed, Aphromoo is the only African American player to have a starting role on an LCS team. Why? At least in part, because of the digital divide.

More than a quarter of African Americans do not have broadband. Further, nearly 40% of African Americans in the rural South do not have broadband. One quarter of the Latinx population does not have broadband. These discrepancies mean fewer African American and Latinx people can play online video games like League of Legends. Okay, but if the digital divide only affects esports, why should the nation care? The digital divide seen in esports is also seen in the American educational system. More than 15% of American households lacked broadband at the start of the pandemic. This gap was more pronounced in African American and Latinx households. These statistics demonstrate a national need to address the digital divide for entertainment purposes and, more importantly, educational purposes. So, what are some legal solutions to the digital divide? Municipal internet, subsidies, and low-income broadband laws.

Municipal Internet

Municipal broadband is not a new concept, but it has recently been seen as a way to help address the digital divide. While the up-front cost to a city may be substantial, the long-term advantages can be significant. Highland, IL, and other communities across the United States provide high-speed internet for as low as $35 a month. Cities providing low-cost broadband through municipalities frequently offer competitive prices for gigabit speeds as well. The most significant downside to this solution is that these cities are frequently in rural locations and do not serve large populations. In addition, when municipalities attempt to provide broadband outside of their borders, state laws preempt them to protect ISPs. ISPs lobby for laws to deter or prevent municipal internet on the basis that they are necessary to prevent unfair competition; this fear of unfair competition, however, keeps communities from getting connected.

To avoid the preemption issue during the pandemic, some cities have established narrow versions of municipal broadband. In addition, these cities are providing free connectivity in heavily populated communities. For example, during the pandemic, Chattanooga, Tennessee, offered free broadband to low-income students. If these solutions stay in place, they will set an industry precedent for providing broadband to low-income communities.

Subsidies

The Emergency Broadband Benefit provides up to $50 per month towards broadband services for eligible households and $75 a month for households on tribal lands. To qualify for the program, a household must meet one of five standards. Congress created the program to help low-income households stay connected during the pandemic. Congress allocated $3.2 billion to the FCC to enable the agency to provide the discount. This discount also comes with a one-time device discount of up to $100 so that users not only have broadband but have the tools to utilize broadband. The advantage of this subsidy is that it directly addresses the issue of low-income recipients not being able to afford broadband, which can immediately affect the 15% of Americans who do not have broadband.

The downside of this solution is that, to qualify, a recipient must share income information on a webpage they have never visited before, which can feel invasive. Further, this plan does not permanently address the cost of broadband, and once it ends, it is possible that the same groups of Americans who could not afford broadband before will again lose access to the internet. Additionally, when the average cost of a laptop in America is $700, a discount of $100 does not do very much to ensure that users can actually benefit from their new broadband connection. If the goal is to ensure that users can attend classes, complete homework assignments, and maybe play esports on the side, even a lower-cost tablet ($350 on average) costs well more than the discount covers, leaving unaddressed the problem of needing hardware to access broadband.

However, a program like this could be viewed as a reasonable start if things continue to go in the right direction. A fair price for broadband is $60 a month. Reducing the cost of broadband to $10 per recipient for competitive speeds and reliability after subsidization could be a great tool for eliminating the digital divide, so long as the subsidy persists after the pandemic.

Low-Income Broadband Laws

Low-cost broadband laws would require internet service providers to offer broadband plans to low-income recipients at a low price. This approach would directly address Americans who have physical access to broadband but cannot pay for it, thus helping to bridge the digital divide. Low-cost broadband plans such as New York’s proposed Affordable Broadband Act would require all internet service providers serving more than 20,000 households to provide two low-cost plans to qualifying (low-income) customers. However, New York’s law was stymied by ISPs arguing that it is an illegal way to close the digital divide, as states are preempted from rate regulation of broadband by the Federal Communications Commission.

The ISPs argued that the Affordable Broadband Act operated within the field of interstate commerce and was thus likely preempted by the Federal Communications Act of 1934. However, as broadband is almost always interstate commerce, other state laws similar to New York’s Affordable Broadband Act would probably run into the same issue. Thus, a low-income broadband law would likely need to come from the federal level to avoid the same road bumps.

The Future of Broadband and the Digital Divide

An overlapping theme among many of these solutions is that they were implemented during the pandemic; this raises the question: are these short-term solutions to an unexpected, life-changing event, or rational long-term solutions to various long-term problems, including the pandemic? If cities, states, and the nation stay the course and implement more low-cost broadband solutions such as municipal internet, subsidies, and low-income broadband laws, it will be possible to address the digital divide. However, if jurisdictions treat these solutions like short-term stopgaps, communities that cannot afford traditional broadband will again lose access. Students will again go to McDonald’s to do homework assignments, and Aphromoo may continue to be the only active African American LCS player.


NFTs and the Tweet Worth $2.9 Million: Beliefs Versus the Legal Reality

Emily Newman, MJLST Staffer

A clip of Lebron James dunking a basketball, a picture of Lindsay Lohan’s face, and an X-ray of William Shatner’s teeth—what do all these seemingly random things have in common? They’ve all been sold as NFTs for thousands to hundreds of thousands of dollars. It seems like almost everyone, from celebrities to your “average Joe,” is taking part in this newest trend, but do all parties really know what they’re getting themselves into? Before addressing that point, let’s look at what exactly these “NFTs” are.

What are they?

NFT stands for “non-fungible token.” Unlike a fungible item, a non-fungible token is unique and can’t be traded for or replaced with an identical one. As explained by Mitchell Clark from The Verge, “a bitcoin is fungible — trade one for another bitcoin, and you’ll have exactly the same thing. A one-of-a-kind trading card, however, is non-fungible. If you traded it for a different card, you’d have something completely different.” NFTs can be basically anything digital, and while headlines have been made over Twitter founder Jack Dorsey selling his first tweet as an NFT for $2.9 million, their popularity has really exploded within the world of digital art. Examples include the Nyan Cat meme selling for around $700,000 and the artist Beeple selling a collage of his work at Christie’s for $69 million (for reference, Monet’s “Nymphéas” sold for $54 million in 2014).
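The fungible/non-fungible distinction can be sketched in a few lines of code. This is a toy illustration only, using plain Python dictionaries rather than a real blockchain; the holder names and token IDs are hypothetical, though the per-token owner mapping loosely mirrors how NFT contracts such as ERC-721 track ownership.

```python
# Fungible assets: only quantities matter, so a single balance per
# holder is enough. Any one unit is interchangeable with any other.
coin_balances = {"alice": 3, "bob": 5}

def transfer_coins(sender, receiver, amount):
    assert coin_balances[sender] >= amount
    coin_balances[sender] -= amount
    coin_balances[receiver] += amount

# Non-fungible tokens: each token has a unique ID, and ownership is
# tracked per token (loosely like ERC-721's tokenId -> owner mapping).
nft_owners = {"token_42": "alice", "token_99": "bob"}

def transfer_nft(token_id, sender, receiver):
    assert nft_owners[token_id] == sender  # only the owner may transfer
    nft_owners[token_id] = receiver

transfer_coins("bob", "alice", 2)         # which two coins? It doesn't matter.
transfer_nft("token_42", "alice", "bob")  # this specific token changes hands
```

The point of the contrast: for the coins, only the totals change; for the NFT, a particular, uniquely identified item moves from one owner to another.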

Anyone can download and view NFTs for free, so what is all the hype about? Buyers get ownership of the NFT. “To put it in terms of physical art collecting: anyone can buy a Monet print. But only one person can own the original.” This originality provides a sense of authenticity to the art, which is important these days “when forged art is proliferating online.” To facilitate this buying, selling, and reselling of digital art, several online marketplaces have emerged such as OpenSea (where one can purchase their very own CryptoKitties), Nifty Gateway, and Rarible.

NFTs, Copyright Law, and Consumer Protection

As mentioned above, NFT purchasers can own an original piece of digital art—but there’s a catch. Owning the NFT itself does not necessarily equate to ownership of the original work and its underlying copyright. In other words, buying an NFT “does not mean that the copyright to that artwork transfers to the buyer,” it is simply a “digital receipt showing that the holder owns a version of the work.” Without the underlying copyright, the purchaser of an NFT does not have the right to reproduce the work, prepare derivative works, or distribute the work—those rights belong exclusively to the copyright owner.

Mike Shinoda, one of the musicians behind Linkin Park and an NFT artist himself, states that “there’s nobody who’s serious about NFTs who really humors the idea that what you’re selling is the copyright . . . .” However, as Pramod Chintalapoodi from the Chip Law Group points out, oftentimes “buyers’ beliefs about what they own do not translate to legal reality.” Chintalapoodi also describes how companies that sell NFTs are not transparent about this either; for instance, Decentraland describes itself as the “first-ever virtual world owned by its users,” but “according to Article 12.1 of Decentraland’s Terms of Use, it is Metaverse Holdings Ltd. that owns all IP rights on the site.” Nevertheless, its users still spend millions of dollars on the site buying NFTs.

Going forward, NFT purchasers should clarify with the seller exactly what they are purchasing. Preston J. Byrne from CoinDesk encourages consumers to ask “are you buying information, copyrights, bragging rights or none or all of those things? Do you have the documentation to back all of that up?” Additionally, are you even buying an original work, or did the seller create an NFT of someone else’s work? Asking these questions early on can help with avoiding “significant financial or legal pain down the road.” While it may not be the norm to receive the underlying copyright when purchasing an NFT today, and while lawmakers may not step in anytime soon (or at all) and force sellers to display their terms explicitly, it is predicted that transferring copyrights to the purchaser will be a “valued feature for NFT platforms” in the future.

 


Robinhood Changed the Game(Stop) of Modern Day Investing but Did They Go Too Far?

Amanda Erickson, MJLST Staffer

It is likely that you have heard about the video game chain GameStop in the news more frequently than normal. GameStop is a publicly traded company known for selling, trading, and purchasing gaming devices and accessories. Along with many other retailers during the COVID-19 pandemic, GameStop has been struggling. Not only did COVID-19 affect its operations, but online distribution also beat the company’s outdated business model. Prior to January 2021, GameStop’s stock price reflected the apparent new reality of gaming. In March 2015, GameStop’s closing price was around $40 a share, but at the beginning of January 2021, it was at $20 a share. With a downward trend like this, it might come as a shock to learn that on January 27, 2021, GameStop’s closing price was $347.51 a share, with the stock briefly peaking at $483 the following day.

This dramatic surge can be attributed to a large group of amateur traders on the Reddit forum r/WallStreetBets, who promoted investments in the stock. The surge forced large-scale institutional investors, who had originally bet against the stock through short positions, to buy the stock to cover those positions. Short selling involves “borrowing” shares of a company and quickly selling the borrowed shares into the market. The short seller hopes that the shares will fall in price so they can be bought back at a lower price. If this happens, the short seller can return the “borrowed” shares and keep the difference as profit. The practice of short selling is controversial: it can lead to stock price manipulation and generate misinformation about a company, but it can also serve as a check and balance on the markets. The group on Reddit knew that short sellers held positions betting against GameStop and wanted to take advantage of those positions. The stock price soared as those short sellers were forced to repurchase their borrowed shares.
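The economics the Reddit traders exploited can be made concrete with a short worked example. The share count and prices below are hypothetical, chosen only to echo the GameStop figures mentioned above, and the sketch ignores borrowing fees and margin requirements:

```python
def short_sale_pnl(shares, sell_price, buyback_price):
    """Profit (negative = loss) on a short sale, ignoring borrowing fees.

    The short seller sells borrowed shares at sell_price, later buys
    them back at buyback_price, and returns them to the lender.
    """
    return shares * (sell_price - buyback_price)

# Short 100 shares at $20: if the price falls to $15, the seller
# pockets the difference.
gain = short_sale_pnl(100, 20.00, 15.00)   # a $500 gain

# In a squeeze, being forced to buy back at $347.51 turns the same
# position into a large loss (roughly -$32,751) -- and the forced
# buying itself pushes the price up even further.
loss = short_sale_pnl(100, 20.00, 347.51)
```

The asymmetry is the key point: a short seller’s gain is capped (the price cannot fall below zero), but the potential loss is unbounded, which is what makes a squeeze so punishing.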

This historic scene drew many day traders to place bets on GameStop and the other stocks the Reddit group was promoting. Many chose to use Robinhood, a free online trading app, to make these trades. Robinhood introduced a radical business model in 2014 by offering consumers a platform that allowed them to trade with zero commissions, and it ultimately changed the way the industry operated. That is, until Robinhood issued a statement on January 28, 2021, announcing that “in light of recent volatility, we restricted transactions for certain securities,” including GameStop. Later that day, Robinhood issued another statement saying it would allow limited buying of those securities starting the next day. This came as a shock to many Robinhood users, because Robinhood’s mission is to “democratize finance for all.” These events exacerbated previous questions about Robinhood’s profitability model and ultimately left many users questioning its mission.

The first lawsuit was filed by a Robinhood user on January 28, 2021, alleging that Robinhood blocked its users from purchasing any of GameStop’s stock “in the midst of an unprecedented stock rise thereby depriv[ing] retail investors of the ability to invest in the open-market and manipulating the open market.” Robinhood is now facing over 30 lawsuits, with that number only rising. The chaos surrounding GameStop stock has caught lawmakers’ attention, and they are now calling for congressional action. On January 29, 2021, the Securities and Exchange Commission issued a statement saying it is “closely monitoring and evaluating the extreme price volatility of certain stocks’ trading prices” and that it will “closely review actions taken by regulated entities that may disadvantage investors.” Robinhood issued another statement on January 29, 2021, stating that it did not want to stop people from buying these stocks, but that it had to take these steps to conform with its regulatory capital requirements.

The frenzy has since calmed down but left many Americans with questions about the legality of Robinhood’s actions. While it may seem like Robinhood went against everything the free market stands for, legal experts disagree, and it all boils down to the contract. The Robinhood contract states, “I understand Robinhood may at any time, in its sole discretion and without prior notice to Me, prohibit or restrict My ability to trade securities.” Just how broad is that discretion, though? The issue now is whether Robinhood treated some users differently than others. Columbia Law School professor Joshua Mitts said, “when hedge funds are going to lose from a trading suspension, they don’t face any lockup like this, any suspension, any halt at the retail level, but when retail investors find themselves locked in, they find themselves unable to exit the trade.” This protective action also sits uneasily with the language in the Robinhood contract stating that the user agrees Robinhood does not “provide investment advice in connection with this Account.” Each clause may seem clear on its own, but Robinhood’s restrictions leave room to question what constitutes advice when a platform restricts retail investors’ trades.

Robinhood’s practices are now under scrutiny from retail investors who question the company’s priorities. The current lawsuits against Robinhood could potentially impact how fintech companies are able to generate profits and what federal oversight they might face moving forward. This episode of confusion between retail investors and their platform of choice points to potential weaknesses in this new form of trading. While GameStop’s stock price may have declined since January 28, the events that unfolded will likely change the guidelines of retail investing in the future.

 


Lawyers in Flame Wars: The ABA Says Be Nice Online

Parker von Sternberg, MJLST Staffer

The advent of Web 2.0 around the turn of the millennium brought with it an absolute tidal wave of new social interactions. To this day we are in the throes of figuring out how best to engage with one another online, particularly when things get heated or otherwise out of hand. In this new wild west, lawyers sit at a perhaps unfortunate junction. Lawyers are indelibly linked to problems and moments of breakdown—precisely the events that lead to lashing out online. At the same time, a lawyer, more than members of many other professions, relies upon their personal reputation to drive their business. When these factors collide, they create pressure on the lawyer to defend themselves, but doing so properly can be a tricky thing.

When it comes to questions of ethics for lawyers, the first step is generally to crack open the Model Rules of Professional Conduct (MRPC), given that they have been adopted in 49 states and are kept up to date by the American Bar Association (ABA). While these model rules are customized to some extent from state to state, by and large the language used in the MRPC is an effective starting point for professional ethics issues across the country. Recently, the ABA has stepped into the fray with Formal Opinion 496, which lays out the official interpretation of MRPC 1.6 and how it comes into play in these situations.

MRPC 1.6 protects confidentiality of client information. For our purposes, the pertinent sections are

(a) A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by paragraph (b) and . . .

(b)(5) to establish a claim or defense on behalf of the lawyer in a controversy between the lawyer and the client, to establish a defense to a criminal charge or civil claim against the lawyer based upon conduct in which the client was involved, or to respond to allegations in any proceeding concerning the lawyer’s representation of the client.

So, when someone goes on Google Maps and excoriates your practice, a review that will pop up to everyone who even looks for directions to the office, what can be done? The first question is whether or not they are in fact a former client. If not, feel free to engage! Just wade in there and start publicly fighting with a stranger (really though, don’t do this. Even the ABA knows what the Streisand Effect is). However, if they are a former client, MRPC 1.6 or the local equivalent applies.

In Minnesota we have the MNRPC, with 1.6(b)(8) mirroring MRPC 1.6(b)(5). At its core, the ABA’s interpretation turns on the fact that an online review, on its own, does not qualify as a “controversy” or “proceeding.” That is not to say that it cannot give rise to one though! In 2016 an attorney in Florida took home $350,000 after former clients repeatedly defamed her because their divorce didn’t go how they wanted. But short of the outright lies in that case, lawyers suffering from online hit pieces are more limited in their options. The ABA lays out four possible responses to poor online reviews:

1) do not respond to the negative post or review at all, because as was brought up above, you tempt the Streisand Effect at your peril;

2) request that the website or search engine remove the review;

3) post an invitation to contact the lawyer privately to resolve the matter; and

4) indicate that professional considerations preclude a response.

While none of these options exactly inspires images of righteous fury, defending your besmirched professional honor, or righting the wrongs done to your name, it appears unlikely that they will get you in trouble with the ethics board either. The ABA’s formal opinion lays out an impressive list of authorities from nearly a dozen states establishing that lawyers can and will face consequences for public review-related dust-ups. The only option, it seems, for an attorney really looking to have it out online is to move to Washington, D.C., where the local rules allow for disclosure “to the extent reasonably necessary to respond to specific allegations by the client concerning the lawyer’s representation of the client.”


Google It: Justice Department Files Antitrust Case Against Google

Amanda Erickson, MJLST Staffer

Technology giants such as Google have the ability to influence the data and information that flow through our day-to-day lives by tailoring what each user sees on their platforms. Big Tech companies have been under scrutiny for years, but they continue to grow more powerful and amass more user data even as the global economy tanks. As Google’s influence broadens, so does concern over monopolization of the market. This concern peaked on October 20, 2020, when the Justice Department filed an antitrust lawsuit against Google for abusing its dominance in the general search services, search advertising, and general search text advertising markets through anticompetitive and exclusionary practices.

The Department of Justice, along with eleven state attorneys general, raised three claims in the lawsuit, all under Section 2 of the Sherman Antitrust Act. The Department of Justice claims that, because of Google’s contracts with companies like Apple and Samsung, and its multiple products and services, such as search, video, photo, map, and email, competitors in search do not stand a chance. The complaint is rather broad, but it details the cause of action well, even including several graphs and figures for additional support. For instance, the complaint states that Google has a market value of $1 trillion and annual revenue exceeding $160 billion. This allows Google to pay “billions of dollars each year to distributors . . . to secure default status for its general search engine.” According to the government, actions like these have the potential to curb competition and harm consumers.

The complaint states that “between its exclusionary contracts and owned-and-operated properties, Google effectively owns or controls search distribution channels accounting for roughly 80 percent of the general search queries in the United States.” It further notes that “Google” is not only a noun meaning the company, but a verb now used when talking about general searches on the internet. It has become common for people to say, “Google it,” even when they complete an internet search with a different search engine. If Google is considered to be a monopoly, who is harmed by Google’s market power? The complaint addresses the harm to both advertisers and consumers. Advertisers have little choice but to pay fees to Google’s search advertising and general search text monopolies, and consumers are forced to accept all of Google’s policies, including its privacy, security, and personal-data policies. This also creates a barrier to entry for new companies entering the market and struggling to gain market share.

Google claims that it is not dominant in the industry, but rather simply the platform users prefer. Google argues that its competitors are just a click away and that its users are free to switch to other search engines if they prefer. Google points out that its deals with companies such as Apple and Microsoft are completely legal, and that such deals violate antitrust law only if they exclude competition. Since switching to another search engine takes only a few clicks, Google claims it is not excluding competition. As for its next steps, Google is “confident that a court will conclude that this suit doesn’t square with either the facts or the law” and will “remain focused on delivering the free services that help Americans every day.”

Antitrust laws are in place to protect the free market economy and to allow competitive practices. Attorney General William Barr stated, “[t]oday, millions of Americans rely on the Internet and online platforms for their daily lives. Competition in this industry is vitally important, which is why today’s challenge against Google—the gatekeeper of the Internet—for violating antitrust laws is a monumental case.” This is just the beginning of a potentially historic case as it aims to protect competition and innovation in the technology markets. Consumers should consider the impacts of their daily searches and the implications a monopoly could have on the future structure of internet searching.

 


Watching an APA Case Gestate Live!

Parker von Sternberg, MJLST Staffer

On October 15th the FCC published an official Statement of Chairman Pai on Section 230. Few statutes have come under greater fire in recent memory than the Protection for “Good Samaritan” Blocking and Screening of Offensive Material, and the FCC’s decision to wade into the fray is almost certain to prompt a suit regardless of which side of the issue the Commission comes down on.

As a brief introduction, 47 U.S. Code § 230 provides protections from civil suits for providers of Interactive Computer Services, which for our purposes can simply be considered websites. The statute was drafted and passed as a direct response by Congress to a pair of cases, namely Cubby, Inc. v. CompuServe Inc., 776 F.Supp. 135 (S.D.N.Y. 1991), and Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995). Cubby held that the defendant, CompuServe, was not responsible for third-party posted content on its message board. The decisive reasoning by the court was that CompuServe was a distributor, not a publisher, and thus “must have knowledge of the contents of a publication before liability can be imposed.” Cubby, 776 F.Supp. at 139. On the other hand, in Stratton Oakmont, the defendant’s exertion of “editorial control” over a message board otherwise identical to the one in Cubby “opened [them] up to a greater liability than CompuServe and other computer networks that make no such choice.” Stratton Oakmont, 1995 WL 323710 at *5.

Congress thus faced an issue: active moderation of online content, which is generally going to be a good idea, created civil liability where leaving message boards open as a completely lawless zone protects the owner of the board. The answer to this conundrum was § 230 which states, in part:

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability – No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected . . . .

Judicial application of the statute has so far largely read the language expansively. Zeran v. AOL held that “[b]y its plain language, § 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997). The court also declined to recognize a difference between a defendant acting as a publisher versus a distributor. Speaking to Congress’s legislative intent, the court charted a course that aimed both to immunize service providers and to encourage self-regulation. Id. at 331-334. Zeran has proved immensely influential, having been cited over a hundred times in the years since.

Today, however, the functioning of § 230 has become a lightning rod for the complaints of many on social media. Rather than encouraging interactive computer services to self-regulate, the story goes that it instead protects them despite their “engaging in selective censorship that is harming our national discourse.” Republicans in the Senate have introduced a bill to amend the Communications Decency Act specifically to reestablish liability for website owners in a variety of ways that § 230 currently protects them from. The Supreme Court has also dipped its toes in the turbulent waters of online censorship fights, with Justice Thomas saying that “courts have relied on policy and purpose arguments to grant sweeping protection to Internet platforms” and that “[p]aring back the sweeping immunity courts have read into §230 would not necessarily render defendants liable for online misconduct.”

On the other hand, numerous private entities and individuals hold that § 230 forms part of the backbone of the internet as we know it today. Congress and the courts, up until a couple of years ago, stood in agreement that it was vitally important to draw a bright line between the provider of an online service and those that used it. It goes without saying that some of the largest tech companies in the world directly benefit from the protections offered by this law, and it can be argued that the economic impact is not limited to those larger players alone.

What all of this hopefully goes to show is that, no matter what happens to this statute, someone somewhere will be willing to spend the time and the money necessary to litigate over it. The question is what shape that litigation will take. As it currently stands, the new bill in the Senate has little chance of getting through the House of Representatives to the President’s desk. The Supreme Court just recently denied cert in yet another § 230 case, leaving existing precedent intact. Enter Ajit Pai and the FCC, with their legal authority to interpret 47 U.S. Code § 230. Under the cover of Chevron deference, which protects agency interpretations of statutes the legislature has empowered the agency to enforce, the FCC wields massive influence over the meaning of § 230. Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984).

While the FCC’s engagement is currently limited to a statement that it intends to “move forward with rulemaking to clarify [§ 230’s] meaning,” there are points to discuss. What limits are there on the power to alter the statute’s meaning? Based on the Chairman’s statement, can we tell generally which side the Commission is going to come down on? As to the former, as was said above, the limit is set largely by Chevron deference and by § 706 of the APA. The key question is whether whoever ends up unhappy with the FCC’s interpretation can prove that it is “arbitrary and capricious” or goes beyond a “permissible construction of the statute.” Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984).

The FCC Chairman’s statement lays out that issues exist surrounding §230 and establishes that the FCC believes the legal authority exists for it to interpret the statute. It finishes by saying “[s]ocial media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.” Based on this statement alone, it certainly sounds like the FCC intends to narrow the protections for interactive computer services providers in some fashion. At the same time, it raises questions. For example, does § 230 provide websites with special forms of free speech that other individuals and groups do not have? The statute does not on its face make anything legal that without it would not be. Rather, it ensures that legal responsibility for speech lies with the speaker, rather than the digital venue in which it is said.

The current divide on liability for speech and content moderation on the internet draws our attention to issues of power as the internet continues to pervade all aspects of life. When the President of the United States is being publicly fact-checked, people will sit up and take notice. The current Administration, parts of the Supreme Court, some Senators, and now the FCC all appear to feel that legal proceedings are a necessary response to this happening. At the same time, alternative views do exist outside of Washington D.C., and at many points they may be more democratic than those proposed within our own government.

There is a chance that, if the FCC takes too long to produce a “clarification” of § 230, Chairman Pai will be replaced after the upcoming Presidential election. Even if this happens, I feel that outlining the basic positions surrounding this statute is nonetheless worthwhile. A change in administrations simply means that the fight will occur via proposed statutory amendments or in the Supreme Court, rather than via the FCC.