Internet

Twitter Troubles: The Upheaval of a Platform and Lessons for Social Media Governance

Gordon Unzen, MJLST Staffer

Elon Musk’s Tumultuous Start

On October 27, 2022, Elon Musk officially completed his $44 billion deal to purchase the social media platform Twitter.[1] When Musk's bid to buy Twitter was initially accepted in April 2022, proponents spoke of a grand ideological vision for the platform under Musk. Musk himself emphasized the importance of free speech to democracy and called Twitter "the digital town square where matters vital to the future of humanity are debated."[2] Twitter co-founder Jack Dorsey called Twitter the "closest thing we have to a global consciousness," and expressed his support of Musk: "I trust his mission to extend the light of consciousness."[3]

Yet only two weeks into Musk’s rule, the tone has quickly shifted towards doom, with advertisers fleeing the platform, talk of bankruptcy, and the Federal Trade Commission (“FTC”) expressing “deep concern.” What happened?

Free Speech or a Free for All?

Critics were quick to read Musk’s pre-purchase remarks about improving ‘free speech’ on Twitter to mean he would change how the platform would regulate hate speech and misinformation.[4] This fear was corroborated by the stream of racist slurs and memes from anonymous trolls ‘celebrating’ Musk’s purchase of Twitter.[5] However, Musk’s first major change to the platform came in the form of a new verification service called ‘Twitter Blue.’

Musk took control of Twitter during a substantial pullback in advertising spending in the tech industry, a problem that has impacted other tech giants like Meta, Spotify, and Google.[6] His solution was to seek revenue directly from consumers through Twitter Blue, a program where users could pay $8 a month for verification with the "blue check" that previously served to tell users whether an account of public interest was authentic.[7] Musk claimed this new system would give "power to the people," which proved correct in an ironic and unintended fashion.

Twitter Blue let users pay $8 for a blue check and impersonate politicians, celebrities, and company media accounts, and that is exactly what happened. Musk, Rudy Giuliani, O.J. Simpson, LeBron James, and even the Pope were among the many impersonated by Twitter users.[8] Companies received the same treatment, with an account impersonating Eli Lilly and Company writing "We are excited to announce insulin is free now," causing the company's stock to drop 2.2%.[9] This led advertising firms like Omnicom and IPG's Mediabrands to conclude that brand safety measures are currently impeded on Twitter, and advertisers have subsequently begun to announce pauses on ad spending.[10] Musk responded by suspending Twitter Blue only 48 hours after it launched, but the damage may already be done for Twitter, a company whose revenue was 90% ad sales in the second quarter of this year.[11] During his first mass call with employees, Musk said he could not rule out bankruptcy in Twitter's future.[12]

It also remains to be seen whether the Twitter impersonators will escape civil liability under theories of defamation[13] or misappropriation of name or likeness,[14] or criminal liability under state identity theft[15] or false representation of a public employee statutes,[16] which have been legal avenues used to punish instances of social media impersonation in the past.

FTC and Twitter’s Consent Decree

On the first day of Musk's takeover of Twitter, he immediately fired the CEO, CFO, head of legal policy, trust and safety, and general counsel.[17] By the following week, mass layoffs were in full swing, with 3,700 Twitter jobs, or 50% of the total workforce, to be eliminated.[18] This move has already landed Twitter in legal trouble for potentially violating the California WARN Act, which requires 60 days' advance notice of mass layoffs.[19] More ominously, however, these layoffs, as well as the departure of the company's head of trust and safety, chief information security officer, chief compliance officer, and chief privacy officer, have attracted the attention of the FTC.[20]

In 2011, Twitter entered a consent decree with the FTC in response to data security lapses, requiring the company to establish and maintain a program ensuring that new features do not misrepresent "the extent to which it maintains and protects the security, privacy, confidentiality, or integrity of nonpublic consumer information."[21] Twitter also agreed to implement two-factor authentication without collecting personal data, limit employee access to information, provide training for employees working on user data, designate executives to be responsible for decision-making regarding sensitive user data, and undergo a third-party audit every six months.[22] Twitter was most recently fined $150 million in May of this year for violating the consent decree.[23]

With many of Twitter’s former executives gone, the company may be at an increased risk for violating regulatory orders and may find itself lacking the necessary infrastructure to comply with the consent decree. Musk also reportedly urged software engineers to “self-certify” legal compliance for the products and features they deployed, which may already violate the court-ordered agreement.[24] In response to these developments, Douglas Farrar, the FTC’s director of public affairs, said the commission is watching “Twitter with deep concern” and added that “No chief executive or company is above the law.”[25] He also noted that the FTC had “new tools to ensure compliance, and we are prepared to use them.”[26] Whether and how the FTC will employ regulatory measures against Twitter remains uncertain.

Conclusions

The fate of Twitter is by no means set in stone. In two weeks the platform has lost advertisers, key employees, and some degree of public legitimacy; yet at the speed Musk has moved so far, in two more weeks the company could be in a very different position. Beyond the immediate consequences to the company, Musk's leadership of Twitter illuminates some important lessons about social media governance, both internal and external to a platform.

First, social media is foremost a business and not the "digital town square" Musk imagines. Twitter's regulation of hate speech and verification of public accounts served an important role in maintaining community standards, promoting brand safety for advertisers, and protecting users. Loosening that regulatory control runs a great risk of delegitimizing a platform that corporations and politicians alike have taken seriously as a tool for public communication.

Second, social media stability is important to government regulators, and further oversight may not be far off on the horizon. Musk is setting a precedent and bringing the spotlight onto the dangers of a destabilized social media platform and the risks this may pose to data privacy, efforts to curb misinformation, and even the stock market. In addition to the FTC, Senate Majority Whip and Senate Judiciary Committee chair Dick Durbin has already commented negatively on the Twitter situation.[27] Musk may have given powerful regulators, and even legislators, the opportunity they were looking for to impose greater control over social media. For better or worse, Twitter's present troubles could lead to a new era of government involvement in digital social spaces.

Notes

[1] Adam Bankhurst, Elon Musk’s Twitter Takeover and the Chaos that Followed: The Complete Timeline, IGN (Nov. 11, 2022), https://www.ign.com/articles/elon-musks-twitter-takeover-and-the-chaos-that-followed-the-complete-timeline.

[2] Monica Potts & Jean Yi, Why Twitter is Unlikely to Become the ‘Digital Town Square’ Elon Musk Envisions, FiveThirtyEight (Apr. 29, 2022), https://fivethirtyeight.com/features/why-twitter-is-unlikely-to-become-the-digital-town-square-elon-musk-envisions/.

[3] Bankhurst, supra note 1.

[4] Potts & Yi, supra note 2.

[5] Drew Harwell et al., Racist Tweets Quickly Surface After Musk Closes Twitter Deal, Washington Post (Oct. 28, 2022), https://www.washingtonpost.com/technology/2022/10/28/musk-twitter-racist-posts/.

[6] Bobby Allyn, Elon Musk Says Twitter Bankruptcy is Possible, But is That Likely?, NPR (Nov. 12, 2022), https://www.wglt.org/2022-11-12/elon-musk-says-twitter-bankruptcy-is-possible-but-is-that-likely.

[7] Id.

[8] Keegan Kelly, We Will Never Forget These Hilarious Twitter Impersonations, Cracked (Nov. 12, 2022), https://www.cracked.com/article_35965_we-will-never-forget-these-hilarious-twitter-impersonations.html; Shirin Ali, The Parody Gold Created by Elon Musk’s Twitter Blue, Slate (Nov. 11, 2022), https://slate.com/technology/2022/11/parody-accounts-of-twitter-blue.html.

[9] Ali, supra note 8.

[10] Mehnaz Yasmin & Kenneth Li, Major Ad Firm Omnicom Recommends Clients Pause Twitter Ad Spend – Memo, Reuters (Nov. 11, 2022), https://www.reuters.com/technology/major-ad-firm-omnicom-recommends-clients-pause-twitter-ad-spend-verge-2022-11-11/; Rebecca Kern, Top Firm Advises Pausing Twitter Ads After Musk Takeover, Politico (Nov. 1, 2022), https://www.politico.com/news/2022/11/01/top-marketing-firm-recommends-suspending-twitter-ads-with-musk-takeover-00064464.

[11] Yasmin & Li, supra note 10.

[12] Katie Paul & Paresh Dave, Musk Warns of Twitter Bankruptcy as More Senior Executives Quit, Reuters (Nov. 10, 2022), https://www.reuters.com/technology/twitter-information-security-chief-kissner-decides-leave-2022-11-10/.

[13] Dorrian Horsey, How to Deal With Defamation on Twitter, Minc, https://www.minclaw.com/how-to-report-slander-on-twitter/ (last visited Nov. 12, 2022).

[14] Maksim Reznik, Identity Theft on Social Networking Sites: Developing Issues of Internet Impersonation, 29 Touro L. Rev. 455, 456 n.12 (2013), https://digitalcommons.tourolaw.edu/cgi/viewcontent.cgi?article=1472&context=lawreview.

[15] Id. at 455.

[16] Brett Snider, Can a Fake Twitter Account Get You Arrested?, FindLaw Blog (April 22, 2014), https://www.findlaw.com/legalblogs/criminal-defense/can-a-fake-twitter-account-get-you-arrested/.

[17] Bankhurst, supra note 1.

[18] Sarah Perez & Ivan Mehta, Twitter Sued in Class Action Lawsuit Over Mass Layoffs Without Proper Legal Notice, Techcrunch (Nov. 4, 2022), https://techcrunch.com/2022/11/04/twitter-faces-a-class-action-lawsuit-over-mass-employee-layoffs-with-proper-legal-notice/.

[19] Id.

[20] Natasha Lomas & Darrell Etherington, Musk's Lawyer Tells Twitter Staff They Won't be Liable if Company Violates FTC Consent Decree, Techcrunch (Nov. 11, 2022), https://techcrunch.com/2022/11/11/musks-lawyer-tells-twitter-staff-they-wont-be-liable-if-company-violates-ftc-consent-decree/.

[21] Id.

[22] Scott Nover, Elon Musk Might Have Already Broken Twitter’s Agreement With the FTC, Quartz (Nov. 11, 2022), https://qz.com/elon-musk-might-have-already-broken-twitter-s-agreement-1849771518.

[23] Tom Espiner, Twitter Boss Elon Musk ‘Not Above the Law’, Warns US Regulator, BBC (Nov. 11, 2022), https://www.bbc.com/news/business-63593242.

[24] Nover, supra note 22.

[25] Espiner, supra note 23.

[26] Id.

[27] Kern, supra note 10.


Target Number One, the Consequences of Being the Best

Ben Lauter, MJLST Staffer

The World of Chess

Since 2013, Norwegian Magnus Carlsen has been the reigning World Champion in chess. This achievement was not shocking to many; Magnus has been an elite chess prodigy and a Grandmaster since the age of thirteen (nine years before his eventual champion title). Many regard Magnus as the best chess player ever, surpassing the legends Fischer and Kasparov,[1] two great former world champions. During Kasparov's reign, he drew, or tied, a classical game[2] of chess against Magnus when Magnus was just thirteen. With all this said, it seems impossible to quantify the talent and genius that Magnus possesses and continues to refine in chess. However, that is exactly what the ELO rating system intends to do.

An ELO rating is a calculation of a chess player's current skill level. Magnus boasts the highest classical ELO rating ever attained: 2882. Along the way to that all-time high was a stretch spanning nearly two and a half years in which Magnus did not lose a single classical game, going 125 games unbeaten. All of this is to say, Magnus Carlsen is an unstoppable force in chess. However, on September 4th, 2022, Magnus played a game that snapped his then-current 53-game unbeaten streak. On that date, at the St. Louis-based Sinquefield Cup tournament, he lost to Hans Niemann, a 19-year-old San Francisco-born American prodigy ranked as the 49th best player in the world with an ELO rating of 2688.
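
For readers unfamiliar with the system, an Elo-style rating turns the gap between two ratings into an expected score and then adjusts each player's rating toward the actual result. Below is a minimal Python sketch of the two standard formulas, using the ratings mentioned above purely as an illustration; the K-factor of 10 is the value FIDE applies to players rated 2400 and above, and the function names are illustrative, not any official implementation.

```python
def expected_score(rating_a, rating_b):
    """Expected score (0 to 1) of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def updated_rating(rating, expected, actual, k=10):
    """New rating after one game: win = 1.0, draw = 0.5, loss = 0.0.

    k=10 is the K-factor FIDE applies to players rated 2400 and above;
    other rating pools use larger values.
    """
    return rating + k * (actual - expected)

# Illustration using the ratings discussed above (2882 vs. 2688).
exp = expected_score(2882, 2688)
print(f"Expected score for the higher-rated player: {exp:.2f}")            # ~0.75
print(f"Rating after an upset loss: {updated_rating(2882, exp, 0.0):.1f}")  # ~2874.5
```

Run on those numbers, the sketch shows why an upset is so costly at the top: the heavy favorite sheds roughly seven and a half rating points from a single loss.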

The Match

This match had anything but a quiet result, despite the silence in the interviews afterwards. All the reigning World Champion offered was a tweet stating that he would be withdrawing from the tournament, a measure that is nearly unprecedented for a World Champion at such a major world tournament. Attached to that tweet was a clip of the famous soccer (football) manager Jose Mourinho saying, "If I speak, I am in big trouble." The chess world speculated that this was Magnus's informal way of accusing the teenage Hans of cheating in an "over the board" chess match, a conjecture with which the chess world has not yet made peace: article after article, interview after interview, and Grandmaster after Grandmaster have given their two cents.

There were many aftershocks to Magnus's tweet, but it seems the legal ones, namely a potential defamation case for slander or libel, may be the worst for Magnus. For the past several weeks Hans Niemann has been under the magnifying glass. He has faced harassment, attacks on his character, and irreparable reputational damage. Yet Magnus has still failed to present any evidence as to why he withdrew or sent that tweet out to the world, and has not clarified or disclaimed any of the rumors that shadow Hans.
For a while, it looked like Hans would have only actions and innuendo as his evidence in a slander or libel case. Then, after an online chess tournament in which both Magnus and Hans were participants, Magnus put out his official position on the matter. Magnus declared that on top of cheating in his match in St. Louis, Hans was a serial chess cheater and should be punished proportionately to the crime he committed. In his declaration, Magnus said that he believed his accusation whole-heartedly and would never again participate in an invitational event in which Hans plays. Throughout the rest of the statement Magnus provided zero evidence of the alleged cheating and stated he could not release his evidence without the approval of the player he accused.

Consequences

There are two massive consequences likely to result from Magnus's statement. The first is that Hans's professional career will likely be in ruins. Invitationals are a priority for top-ranked chess professionals, allowing them to play in official, rated matches in addition to receiving prize money. If an invitational has to choose between a candidate for the best player of all time, Magnus, and a rising teenager, Hans, there might not be a long discussion. The second consequence is that because no evidence has been released to validate the statements Magnus made based on his gut feeling, Hans may have a case for slander or libel.

There are four elements to prove in a slander case. The plaintiff must show that there was a false statement purporting to be fact, a publication of that statement to a third person, fault amounting to at least negligence, and damages incurred. Two of these elements are quite clear and likely provable: there was publication of a statement, and there were damages to Hans's reputation. The other two elements require further analysis. The third element, fault, looks to Magnus's state of mind when he made his statements; to make out a prima facie case for slander, Hans must show that Magnus spoke in order to tarnish Hans's name or was at the very least negligent in making the statements. This standard is notoriously hard to prove and will undoubtedly act as a roadblock to a slander case. However, it will likely be even harder for Hans to prove the first element, that the statement was a false statement purporting to be fact. This element causes an issue because of the difficulty of proving that something that didn't happen, didn't happen: Hans would have to show that he did not cheat in order to prove that Magnus's cheating accusation was false.

Further complicating the issue is surfacing evidence from other sources that makes Magnus's claim of cheating more believable. Statistical analysis of Hans's performances shows that he has been playing games matching a computer's moves 90% of the time or more, compared to the likes of Fischer, Kasparov, or Magnus, who were only around 70% during their all-time peaks, and to typical 2700-ELO-rated Grandmasters, who average between 50% and 60%. Reports indicate that, based on Hans's last 18 months of performance, the chance that he played games at that rate without computer assistance is one in over 60,000. Unless Hans can prove that Magnus's statements are at least unlikely to be true, he will likely fail to prove slander, and his career will likely be derailed after the events of September.
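
The figures above come from third-party analyses, but the arithmetic behind an "engine correlation" percentage is straightforward. The following is a toy Python sketch of that calculation, not the methodology the cited reports actually used; the move lists, the exact-match rule, and the function name are all invented for illustration.

```python
# A toy illustration of "engine correlation": the fraction of a player's
# moves that coincide with a chess engine's top choice. Real analyses are
# far more sophisticated (multiple engines, depth, position filtering);
# this sketch only shows the basic arithmetic behind a match-rate percentage.

def engine_match_rate(player_moves, engine_choices):
    """Return the percentage of moves matching the engine's top choice.

    player_moves   -- moves the player actually made, e.g. ["e4", "Nf3", ...]
    engine_choices -- the engine's preferred move in each corresponding position
    """
    if not player_moves:
        return 0.0
    matches = sum(1 for played, best in zip(player_moves, engine_choices)
                  if played == best)
    return 100.0 * matches / len(player_moves)

# Hypothetical data: a typical 2700-rated GM might match 50-60% of the time.
player_moves   = ["e4", "Nf3", "Bb5", "Ba4", "O-O", "Re1", "Bb3", "c3", "h3", "d4"]
engine_choices = ["e4", "Nf3", "Bb5", "Ba4", "O-O", "d3",  "Bb3", "c3", "a4", "d4"]

print(f"Engine match rate: {engine_match_rate(player_moves, engine_choices):.0f}%")
# Prints 80% for this invented ten-move sample.
```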

Notes

[1]  Kasparov is the longest reigning World Champion to date.

[2] A “Classical Game” is a time format of chess that allows for 120 minutes of play per person for the first forty moves; it allows for the deepest level of consideration on every move. As a result, classical games of chess are an incredibly accurate and sound measure of a player’s talent. They are used to determine the World Champion every two years.


It’s Social Media – A Big Lump of Unregulated Child Influencers!

Tessa Wright, MJLST Staffer

If you've been on TikTok lately, you're probably familiar with the Corn Kid. Seven-year-old Tariq went viral on TikTok in August after appearing in an 85-second video clip professing his love of corn.[1] Due to his accidental viral popularity, Tariq has become a social media celebrity. He has been featured in content collaborations with notable influencers, starred in a social media ad for Chipotle, and even created an account on Cameo.[2] At seven years old, he has become a child influencer, a minor celebrity, and a major financial contributor to his family. Corn Kid is not alone. There are a growing number of children rising to fame via social media. In fact, child influencers today have created an eight-billion-dollar social media advertising industry, with some children generating as much as $26 million a year through advertising and sponsored content.[3] Yet, despite this rapidly growing industry, there are still very few regulations protecting the financial earnings of child entertainers in the social media industry.[4]

What Protects Children’s Financial Earnings in the Entertainment Industry?

Normally, children in the entertainment industry have their financial earnings protected under the California Child Actor's Bill (also known as the Coogan Law).[5] The Coogan Law was passed in 1939 by the state of California in response to the plight of Jackie Coogan.[6] Coogan was a child star who earned millions of dollars as a child actor only to discover upon reaching adulthood that his parents had spent almost all of his money.[7] Over the years the law has evolved, and today it provides that earnings by minors in the entertainment industry are the property of the minor.[8] Specifically, the California law creates a fiduciary relationship between the parent and child and requires that 15% of all earnings be set aside in a blocked trust.[9]

What Protections do Child Social Media Stars Have? 

Social media stars are not legally considered actors, so the Coogan Law does not apply to their earnings.[10] So, are there other laws protecting these social media stars? The short answer is no.

Technically, there are laws that prevent children under the age of 13 from using social media apps, which in theory should protect the youngest social media stars.[11] However, even though these social media platforms claim to require users to be at least thirteen years old to create accounts, there are still ways children end up working in content-creation jobs.[12] The most common scenario is that parents of these children make content in which they feature their children.[13] These "family vloggers" are a popular genre of YouTube videos in which parents frequently feature their children and share major life events; sometimes they even feature the birth of their children. Often these parents also make separate social media accounts for their children, which are technically run by the parents and are therefore allowed despite the age restrictions.[14] There are no restrictions or regulations preventing parents from making social media accounts for their children, and therefore no restriction on the parents' collection of the income generated from such accounts.[15]

New Attempts at Legislation 

So far, there has been very little intervention by lawmakers. The state of Washington has attempted to turn the tide with a proposed state bill that would protect children working in social media.[16] The bill was introduced in January of 2022 and, if passed, would offer protection to children living within the state of Washington who are on social media.[17] Specifically, the bill's introduction reads, "Those children are generating interest in and revenue for the content, but receive no financial compensation for their participation. Unlike in child acting, these children are not playing a part, and lack legal protections."[18] The bill would help protect the finances of these child influencers.

Additionally, California passed a similar bill in 2018.[19] Unfortunately, it only applies to videos that are longer than one hour and involve direct payment to the child.[20] This means that a child who, for example, streams on Twitch for three hours and receives direct donations during the stream would be covered by the bill, while a child featured in a 10-minute YouTube video or a 15-second TikTok would not be financially protected under it.

The Difficulties in Regulating Social Media Earnings for Children

Currently, France is the only country in the world with regulations for children working in the social media industry.[21] There, children working in the entertainment industry (whether as child actors, models, or social media influencers) have to register for a license and their earnings must be put into a dedicated bank account for them to access when they’re sixteen.[22] However, the legislation is still new and it is too soon to see how well these regulations will work. 

The problem with creating legislation in this area is attributable to the ad hoc nature of making social media content.[23] It is not realistic to simply extend existing legislation applicable to child entertainers to child influencers,[24] as their work differs greatly. Moreover, it is extremely difficult to regulate an industry in which influencers can post content from any location at any time, and in which parents may be the ones filming and posting the videos of their children in order to boost their household income. For example, it would be hard to draw a clear line between when a child is being filmed casually for a home video and when it is being done for work, and when an entire family is featured in a video it would be difficult to determine how much money is attributable to each family member.

Is There a Solution?

While there is no easy solution, changing the current regulations or creating new ones is the clearest route. Traditionally, tech platforms have taken the view that governments should make rules and they will then enforce them.[25] All major social media sites have their own safety rules, but the extent to which they are responsible for the oversight of child influencers is not clearly defined.[26] However, if any new regulation is going to be effective, big tech companies will need to get involved. As it stands today, parents have found loopholes that allow them to feature their child stars on social media without violating age restrictions. To keep similar loopholes out of new regulations, it will be essential that big tech companies work with legislators to create technical features that prevent them.

The hope is that one day, children like Corn Kid will have total control of their financial earnings, and will not reach adulthood only to discover their money has already been spent by their parents or guardians. The future of entertainment is changing every day, and the laws need to keep up. 

Notes

[1] Madison Malone Kircher, N.Y. Times (Sept. 21, 2022), https://www.nytimes.com/2022/09/21/style/corn-kid-tariq-tiktok.html.

[2] Id.

[3] Marina Masterson, When Play Becomes Work: Child Labor Laws in the Era of ‘Kidfluencers’, 169 U. Pa. L. Rev. 577, 577 (2021).

[4] Coogan Accounts: Protecting Your Child Star’s Earnings, Morgan Stanley (Jan. 10, 2022), https://www.morganstanley.com/articles/trust-account-for-child-performer.

[5] Coogan Law, https://www.sagaftra.org/membership-benefits/young-performers/coogan-law (last visited Oct. 16, 2022).

[6] Id.

[7] Id.

[8] Cal. Fam. Code § 6752.

[9] Id.

[10] Morgan Stanley, supra note 4.

[11] Sapna Maheshwari, Online and Making Thousands, at Age 4: Meet the Kidfluencers, N.Y. Times (Mar. 1, 2019), https://www.nytimes.com/2019/03/01/business/media/social-media-influencers-kids.html.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Katie Collins, TikTok Kids Are Being Exploited Online, but Change is Coming, CNET (Aug. 8, 2022 9:00 AM), https://www.cnet.com/news/politics/tiktok-kids-are-being-exploited-online-but-change-is-coming/.

[17] Id.

[18] Id.

[19] E.W. Park, Child Influencers Have No Child Labor Regulations. They Should, Lavoz News (May 16, 2022), https://lavozdeanza.com/opinions/2022/05/16/child-influencers-have-no-child-labor-regulations-they-should/.

[20] Id.

[21] Collins, supra note 16.

[22] Id.

[23] Id.

[24] Id.

[25] Id.

[26] Collins, supra note 16.


Meta Faces Class Action Lawsuits Over Pixel Tool Data Controversy

Ray Mestad, MJLST Staffer

With a market capitalization of $341 billion, Meta Platforms is one of the most valuable companies in the world.[1] Information is a prized asset for Meta, but how that information is acquired continues to be a source of conflict. Meta's "Pixel" tool is a piece of code that allows websites to track visitor activity.[2] However, what Meta does with the data after it is acquired may violate a variety of privacy laws. Because of that, Meta is now facing almost fifty class action lawsuits over Pixel's use of data from video players and healthcare patient portals.[3]

What is Pixel?

Pixel is an analytical tool that tracks visitor actions on a website.[4] The tracked actions include purchases, registrations, cart additions, searches, and more. This information can then be used by website owners to better understand user behavior. Website owners can spend advertising budgets more efficiently by tailoring ads to relevant users and finding more receptive users based on Pixel's analysis.[5]

In the world of search engine optimization and web analysis, tools like Pixel are common, and other services, like Google Analytics, provide similar functions. However, there are two key differences between these other tools and Pixel. First, Pixel has in some cases accidentally scraped private, identifiable information from websites. Second, Pixel can connect that information to users' social profiles on Meta's flagship website, Facebook. Whether intentionally or accidentally, Pixel has been found to have grabbed personal information beyond the simple user web actions it was supposed to be limited to and connected it to Facebook profiles.[6]
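
To make that matching step concrete, the Python sketch below is a hypothetical, heavily simplified picture of what the complaints describe: an on-site event arrives with a browser identifier that the platform can already tie to a profile. It is not Meta's actual code; the data structure, field names, and function are invented for illustration.

```python
# A hypothetical, simplified illustration of how a tracking-pixel backend
# might link on-site events to existing social profiles. This is NOT
# Meta's actual implementation; all names and fields are invented.

# Profiles keyed by a browser cookie the platform has already set.
profiles_by_cookie = {
    "cookie_abc123": {"user_id": 42, "name": "Jane Doe"},
}

def record_pixel_event(cookie, event_name, metadata):
    """Attach an on-site event to a known profile when the cookie matches."""
    profile = profiles_by_cookie.get(cookie)
    if profile is None:
        # No match: the event stays anonymous.
        return {"event": event_name, "user": None, **metadata}
    # The privacy concern: the event is now personally identifiable.
    return {"event": event_name, "user": profile["user_id"], **metadata}

event = record_pixel_event(
    "cookie_abc123",
    "PageView",
    {"page": "/patient-portal/appointments"},
)
print(event)
# {'event': 'PageView', 'user': 42, 'page': '/patient-portal/appointments'}
```

The sketch shows why the page being visited matters so much in the healthcare cases: once the event is tied to a user ID, the URL itself (here, a patient portal page) becomes identifiable information.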

Pixel and Patient Healthcare Information

It's estimated that, until recently, one third of the top 100 hospitals in the country used Pixel on their websites.[7] However, that number may decrease after Meta's recent data privacy issues. Meta faced both criticism and legal action in the summer of 2022 for its treatment of user data on healthcare websites. Pixel incorrectly retrieved private patient information, including names, conditions, email addresses, and more. Meta then targeted hospital website users with ads on Facebook, using the information Pixel collected from hospital websites and patient portals and matching it with users' Facebook accounts.[8] Novant Health, a healthcare provider, ran advertisements promoting vaccinations in 2020 and then added Pixel code to its website to evaluate the effectiveness of the campaign; Pixel proceeded to send private and identifiable user information to Meta.[9] Another provider (and Meta's co-defendant in the resulting lawsuit), the University of California San Francisco together with Dignity Health ("UCSF"), was accused of illegally gathering patient information via Pixel code on its patient portal, with private medical information then distributed to Meta. It is claimed that pharmaceutical companies at some point gained access to this medical information and sent out targeted ads based on it.[10] That is just one example; all in all, more than 1 million patients have been affected by this Pixel breach.[11]

Pixel and Video Tracking

The problems did not stop there. Following its patient portal controversy, Meta again faced criticism for obtaining protected user data with Pixel, this time in the context of video consumption. There are currently 47 proposed class actions alleging that Pixel's video tracking violates the Video Privacy Protection Act (the "VPPA"). The VPPA was created in the 1980s to cover videotape and audio-visual materials. No longer confined to the rental store, the VPPA has taken on a much broader meaning with the growth of the internet.

These class actions concern Pixel's collection of video user data from a variety of company websites, including those of the NFL, NPR, the Boston Globe, Bloomberg Law, and many more. The classes allege that collecting video viewing activity in a personally identifiable manner without consent (matching Facebook user IDs to the activity rather than anonymizing it), so that Pixel users could target ads at the viewers, violated the VPPA. Under the VPPA, the defendants in these lawsuits are not Meta but rather the companies that shared user information with Meta.[12]

Causes of Action

The relatively new area of data privacy is scarcely litigated by the federal government due to the lack of federal statutes protecting consumer privacy. Because of that, the number of data protection civil litigants can be expected to continue to grow.[13] HIPAA, the Health Insurance Portability and Accountability Act, was created in 1996 to protect patient information from disclosure without patient consent. In the patient portal cases, however, HIPAA actions would have to be initiated by the US government. Claimants are therefore suing Meta under consumer protection and other privacy laws like the California Confidentiality of Medical Information Act, the Federal Wiretap Act, and the Comprehensive Computer Data Access and Fraud Act.[14] These laws allow individuals to sue, whereas under federal acts like HIPAA the government may move slowly, or not at all. And in the video tracking cases, the litigants may only sue the video provider, not Meta itself.[15] Despite that wrinkle of benefit to Meta, involvement in more privacy disputes is not ideal for the tech giant, as it may hurt Meta Platforms' trustworthiness in the eyes of the public.

Possible Outcomes

If the defendants are found liable, the VPPA violations could result in damages of $2,500 per class member.[16] Punitive damages for the healthcare data breaches could run into the millions as well, and would vary from state to state given the variety of acts under which the claims are brought.[17] Specifically, in the UCSF data case class members are seeking punitive damages of $5 million.[18] One possible hang-up for claimants is arbitration agreements. If the terms and conditions of hospital patient portals or video provider websites contain arbitration clauses, litigants may have difficulty overcoming them. On the one hand, these terms and conditions may be binding and force the parties into mandatory arbitration. On the other hand, consumer rights attorneys may argue that consent needs to come from forms separate from online user agreements.[19] If more lawsuits emerge from Pixel's conduct, companies may well move away from such web analytics tools to avoid potential liability. It remains to be seen whether the convenience and utility of Meta Pixel remain worth the risk these tools present to websites.

Notes

[1] Meta Nasdaq, https://www.google.com/finance/quote/META:NASDAQ (last visited Oct. 21, 2022).

[2] Meta Pixel, Meta for Developers, https://developers.facebook.com/docs/meta-pixel/.

[3] Sky Witley, Meta Pixel's Video Tracking Spurs Wave of Data Privacy Suits, Bloomberg Law (Oct. 13, 2022, 3:55 AM), https://news.bloomberglaw.com/privacy-and-data-security/meta-pixels-video-tracking-spurs-wave-of-consumer-privacy-suits.

[4] Meta Pixel, https://adwisely.com/glossary/meta-pixel/ (last visited Oct. 21, 2022).

[5] Ted Vrountas, What Is the Meta Pixel & What Does It Do?, https://instapage.com/blog/meta-pixel.

[6] Steve Adler, Meta Facing Further Class Action Lawsuit Over Use of Meta Pixel Code on Hospital Websites, HIPAA Journal (Aug. 1, 2022), https://www.hipaajournal.com/meta-facing-further-class-action-lawsuit-over-use-of-meta-pixel-code-on-hospital-websites/.

[7] Id.

[8] Id.

[9] Bill Toulas, Misconfigured Meta Pixel exposed healthcare data of 1.3M patients, Bleeping Computer (Aug. 22, 2022, 2:16 PM), https://www.bleepingcomputer.com/news/security/misconfigured-meta-pixel-exposed-healthcare-data-of-13m-patients/.

[10] Adler, supra note 6.

[11] Toulas, supra note 9.

[12] Witley, supra note 3. 

[13] Id.

[14] Adler, supra note 6.

[15] Witley, supra note 3.

[16] Id.

[17] Dave Muoio, Northwestern Memorial the latest hit with a class action over Meta’s alleged patient data mining, Fierce Healthcare (Aug. 12, 2022 10:30AM), https://www.fiercehealthcare.com/health-tech/report-third-top-hospitals-websites-collecting-patient-data-facebook.

[18] Id.

[19] Witley, supra note 3.




After Hepp: Section 230 and State Intellectual Property Law

Kelso Horne IV, MJLST Staffer

Although hardly a competitive arena, Section 230(c) of the Communications Decency Act (the "CDA") is almost certainly the best known of all telecommunications laws in the United States. Section 230(c) shields Internet Service Providers ("ISPs") and websites from liability for the content published by their users, and its policy goals are laid out succinctly, if a bit grandly, in § 230(a) and § 230(b).[1] These two sections speak about the internet as a force for economic and social good, characterizing it as a "vibrant and competitive free market" and "a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity."[2] But where §§ 230(a),(b) both speak broadly of a utopian vision for the internet, and (c) grants websites substantial privileges, § 230(e) gets down to brass tacks.[3]

CDA: Goals and Text

The CDA lays out certain limitations on the shield protections provided by § 230(c).[4] Among these is § 230(e)(2), which states in full, "Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property."[5] This particular section, despite its seeming clarity, has been the subject of litigation for over a decade, and in 2021 a clear circuit split opened between the 9th and 3rd Circuits over how this short sentence applies to state intellectual property laws. The 9th Circuit follows the principle that the policy portions of § 230, as stated in §§ 230(a),(b), should be controlling and that, as a consequence, state intellectual property claims should be barred. The 3rd Circuit follows the principle that the plain text of § 230(e)(2) unambiguously allows for state intellectual property claims.

Who Got There First? Lycos and Perfect 10

In Universal Commc'n Sys., Inc. v. Lycos, Inc., the 1st Circuit faced this question obliquely; the court assumed that state intellectual property claims were not immunized by § 230, and the claims were dismissed on other grounds.[6] Consequently, when the 9th Circuit released its opinion in Perfect 10, Inc. v. CCBILL LLC only one month later, it felt free to craft its own rule on the issue.[7] Consisting of a few short paragraphs, the court's discussion of state intellectual property rights is nicely summarized in a single sentence: "As a practical matter, inclusion of rights protected by state law within the 'intellectual property' exemption would fatally undermine the broad grant of immunity provided by the CDA."[8] The court's analysis in Perfect 10 was almost entirely based on what allowing state intellectual property claims would do to the policy goals stated in § 230(a) and § 230(b), and did not attempt, or rely on, a particularly thorough reading of § 230(e)(2). The court looked at both the policy stated in §§ 230(a),(b) and the text of § 230(e)(2) and attempted to reconcile them. The court clearly saw the possibility of issues arising from allowing plaintiffs to bring cases through fifty different state systems against websites and ISPs for the postings of their users. This insight may be little more than hindsight, however, given the date of the CDA's drafting.

Hepp Solidifies a Split

Perfect 10 would remain the authoritative appellate-level case on the issue of the CDA and state intellectual property law until 2021, when the 3rd Circuit stepped into the ring.[9] In Hepp v. Facebook, Pennsylvania newsreader Karen Hepp sued Facebook for hosting advertisements promoting a dating website and other services which had used her likeness without her permission.[10] In a much longer analysis, the 3rd Circuit held that the 9th Circuit's interpretation, argued for by Facebook, "stray[ed] too far from the natural reading of § 230(e)(2)."[11] Instead, the 3rd Circuit argued for a closer reading of the text of § 230(e)(2), which it said aligned with a more balanced selection of policy goals, including allowance for state intellectual property law.[12] The court also addressed structural arguments relied on by Facebook, mostly examining how narrow the other exceptions in § 230(e) are, which the majority said "cuts both ways," since Congress easily cabined meanings when it wanted to.[13]

The dissent in Hepp agreed with the 9th Circuit that the policy goals stated in §§ 230(a),(b) should be considered controlling.[14] It also noted two cases in other circuits where courts had shown hesitancy toward allowing state intellectual property claims under the CDA to go forward, although both claims had been dismissed on other grounds.[15] Perhaps unsurprisingly, the dissent saw the structural arguments as compelling, and in Facebook's favor.[16] With the circuits now definitively split on the issue, the Supreme Court, or Congress, would seem to need to step in and provide a clear standard.

What Next? Analyzing the CDA

Despite being a pair of decisions ostensibly focused on parsing out what exactly Congress intended when it drafted § 230, both Perfect 10 and Hepp left out any citation to legislative history when discussing the § 230(e)(2) issue. However, this is not as odd as it seems at first glance. The Communications Decency Act is large, over a hundred pages in length, and § 230 makes up about a page and a half.[17] Most of the content of the legislative reports published after the CDA was passed instead focused on its landmark provisions which attempted, mostly unsuccessfully, to regulate obscene materials on the internet.[18] Section 230 gets a passing mention, less than a page, some of which is taken up with assurances that it would not interfere with civil liability for those engaged in "cancelbotting," a controversial anti-spam method of the Usenet era.[19] It is perhaps unfair to say that § 230 was an afterthought, but it is likely that lawmakers did not understand its importance at the time of passage. This may be an argument for eschewing the 9th Circuit's analysis, which seemingly credits the CDA's drafters with an overly high degree of foresight into § 230's use by internet companies over a decade later.

Indeed, although one may wish that Congress had drafted it differently, the text of § 230(e)(2) is clear, and the inclusion of "any" as a modifier to "law" makes it difficult to argue that state intellectual property claims are not exempted from the general grant of immunity in § 230.[20] Congressional inaction should not lead to courts stepping in to determine what they believe would be a better Act. Indeed, the 3rd Circuit majority in Hepp may be correct in stating that Congress did in fact want state intellectual property claims to stand. Either way, we are faced with no easy judicial answer: to follow the clear text of the section would be to undermine what many in the e-commerce industry clearly see as an important protection, and to follow the purported vision of the Act stated in §§ 230(a),(b) would be to remove a protection for intellectual property which victims of infringement may use to defend themselves. The circuit split has made it clear that this is a question on which reasonable jurists can disagree. Congress, as an elected body, is in the best position to balance these equities, and it should use its lawmaking powers to definitively clarify the issue.

Notes

[1] 47 U.S.C. § 230.

[2] Id.

[3] 47 U.S.C. § 230(e).

[4] Id.

[5] 47 U.S.C. § 230(e)(2).

[6] Universal v. Lycos, 478 F.3d 413 (1st Cir. 2007)(“UCS’s remaining claim against Lycos was brought under Florida trademark law, alleging dilution of the “UCSY” trade name under Fla. Stat. § 495.151. Claims based on intellectual property laws are not subject to Section 230 immunity.”).

[7] 488 F.3d 1102 (9th Cir. 2007).

[8] Id. at 1119 n.5.

[9] Kyle Jahner, Facebook Ruling Splits Courts Over Liability Shield Limits for IP, Bloomberg Law (Sep. 28, 2021, 11:32 AM).

[10] 14 F.4th 204, 206-7 (3d Cir. 2021).

[11] Id. at 210.

[12] Id. at 211.

[13] Hepp v. Facebook, 14 F.4th 204 (3d Cir. 2021)(“[T]he structural evidence it cites cuts both ways. Facebook is correct that the explicit references to state law in subsection (e) are coextensive with federal laws. But those references also suggest that when Congress wanted to cabin the interpretation about state law, it knew how to do so—and did so explicitly.”).

[14] 14 F.4th at 216-26 (Cowen, J., dissenting).

[15] Almeida v. Amazon.com, Inc., 456 F.3d 1316 (11th Cir. 2006); Doe v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016).

[16] 14 F.4th at 220 (Cowen, J., dissenting) (“[T]he codified findings and policies clearly tilt the balance in Facebook’s favor.”).

[17] Communications Decency Act of 1996, Pub. L. 104-104, § 509, 110 Stat. 56, 137-39.

[18] H.R. REP. NO. 104-458 at 194 (1996) (Conf. Rep.); S. Rep. No. 104-230 at 194 (1996) (Conf. Rep.).

[19] Benjamin Volpe, From Innovation to Abuse: Does the Internet Still Need Section 230 Immunity?, 68 Cath. U. L. Rev. 597, 602 n.27 (2019); see Denise Pappalardo & Todd Wallack, Antispammers Take Matters Into Their Own Hands, Network World, Aug. 11, 1997, at 8 (“cancelbots are programs that automatically delete Usenet postings by forging cancel messages in the name of the authors. Normally, they are used to delete postings by known spammers. . . .”).

[20] 47 U.S.C. § 230(e)(2).


iMessedUp – Why Apple’s iOS 16 Update Is a Mistake in the Eyes of Litigators.

Carlisle Ghirardini, MJLST Staffer

Have you ever wished you could unsend a text message? Has autocorrect ever created a typo you would give anything to edit? Apple’s recent iOS 16 update makes these dreams come true. The new software allows you to edit a text message a maximum of five times for up to 15 minutes after delivery and to fully unsend a text for up to two minutes after delivery.[1] While this update might be a dream for a sloppy texter, it may become a nightmare for a victim hoping to use text messages as legal evidence. 
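
As described above, the rules reduce to simple timestamp arithmetic: a two-minute unsend window, a fifteen-minute edit window, and a five-edit cap. The Python sketch below is a toy model of those reported rules, not Apple's implementation; the function and variable names are invented.

```python
from datetime import datetime, timedelta

UNSEND_WINDOW = timedelta(minutes=2)   # iOS 16: unsend within 2 minutes
EDIT_WINDOW = timedelta(minutes=15)    # iOS 16: edit within 15 minutes
MAX_EDITS = 5                          # iOS 16: at most five edits per message

def can_unsend(sent_at, now):
    """Whether a message may still be fully retracted."""
    return now - sent_at <= UNSEND_WINDOW

def can_edit(sent_at, now, edits_so_far):
    """Whether a message may still be edited."""
    return now - sent_at <= EDIT_WINDOW and edits_so_far < MAX_EDITS

sent = datetime(2022, 11, 14, 12, 0, 0)
print(can_unsend(sent, sent + timedelta(minutes=1)))    # True
print(can_unsend(sent, sent + timedelta(minutes=3)))    # False: window closed
print(can_edit(sent, sent + timedelta(minutes=10), 4))  # True: fifth edit allowed
print(can_edit(sent, sent + timedelta(minutes=10), 5))  # False: edit cap reached
```

The point of the sketch is how short these windows are on both sides: the same two minutes that let a sender erase a message are all the time a recipient has to preserve it.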

But I Thought my Texts Were Private?

Regardless of the passcode on your phone or other security measures you may use to keep your correspondence private, text messages can be used as relevant evidence in litigation so long as they can be authenticated.[2] Under Federal Rule of Evidence 901(a), such authentication only requires proof sufficient to support a finding that the evidence at issue is what you claim it is.[3] Absent access to the defendant's phone, a key way to authenticate texts is to demonstrate the personal nature of the messages and their resemblance to the parties' earlier communications.[4] However, for texts to be admitted as evidence over hearsay objections, proof of the messages through screenshots, printouts, or other tangible methods of authentication is vital.[5]

A perpetrator may easily abuse the iOS 16 features by crafting harmful messages and then editing or unsending them. This has several negative effects. First, the availability of this capability may make perpetrators more willing to harass by text, knowing that disappearing harassment will be easier to get away with. Further, victims will be less likely to capture the evidence in the short window after the damage has been done but before the proof is rescinded. Attorney Michelle Simpson Tuegel, who spoke out against this software, shared that "victims of trauma cannot be relied upon, in that moment, to screenshot these messages to retain them for any future legal proceedings."[6] Finally, when a victim is without proof and the perpetrator denies sending the messages, psychological pain may result from such "gaslighting" and undermining of the victim's experience.[7]

Why are Text Messages so Important?

Text messages have been critical evidence in proving defendants' guilt in many types of cases. One highly publicized example is the trial of Michelle Carter, who sent manipulative text messages encouraging her 18-year-old boyfriend to commit suicide.[8] Not only were these texts of value in proving reckless conduct, they also proved Carter guilty of involuntary manslaughter, as her words were shown to be the cause of the victim's death. Without evidence of this communication, the case may have turned out very differently. Who is to say that Carter would not have succeeded in her abuse by sending and then unsending or editing her messages?

Text messaging is also a popular tool for perpetrators of sexual harassment, and it happens every day. In a Rhode Island Supreme Court case, communication via iMessage was central to a finding of first-degree sexual assault, as the 17-year-old victim felt too afraid to receive a hospital examination after her attack.[9] Fortunately, she had saved photos of inappropriate messages the perpetrator sent after the incident, among other records of their texting history, which properly authenticated the texts and connected him to the crime. It is important to note, however, that the incriminating screenshots were not taken until the morning after, and with the help of a family member. This demonstrates that immediately memorializing evidence is often not a victim's first instinct, especially when the content may be associated with shame or trauma. The new iOS feature may take away this opportunity to help one's case through messages which can paint a picture of the incident or the relationship between the parties.

Apple Recognized That They Messed Up

The current iOS 16 update, offering two minutes to recall messages and 15 minutes to edit them, is actually an amendment to Apple's originally offered timeframe of 15 minutes to unsend. This change came in light of efforts by an advocate for survivors of sexual harassment and assault, who wrote a letter to Apple's CEO warning of the dangers of the new unsending capability.[10] While the decreased timeframe leaves less room for abuse of the feature, editing is just as dangerous as unsending. With no limit to how much text you can edit, one could send full sentences of verbal abuse only to later edit and replace them with a one-word message. Furthermore, if someone is reading the harmful messages in real time, the shorter window gives them less time to react and less time to save the messages for evidence. While we can hope that the newly decreased window makes perpetrators think harder before sending a text they may not be able to delete, this is wishful thinking. The fact that almost half of young people have reported being victims of cyberbullying even when there was no option to rescind or edit one's messages shows that the length of the window likely does not matter. The abilities of the new Apple software should be disabled; Apple's "fix" to the update is not enough. The costs of what such a feature will do to victims and their chances of success in litigation outweigh the benefits to the careless texter.

Notes

[1] Sofia Pitt, Apple Now Lets You Edit and Unsend iMessages on Your iPhone. Here's How to Do It, CNBC (Sep. 12, 2022, 1:12 PM), https://www.cnbc.com/2022/09/12/how-to-unsend-imessages-in-ios-16.html.

[2] FED. R. EVID. 901(a).

[3] Id.

[4] United States v. Teran, 496 Fed. Appx. 287 (4th Cir. 2012).

[5] State v. Mulcahey, 219 A.3d 735 (R.I. 2019).

[6] Jess Hollington, Latest iOS 16 Beta Addresses Rising Safety Concerns for Message Editing, Digital Trends (Jul. 27, 2022), https://www.digitaltrends.com/mobile/ios-16-beta-4-message-editing-unsend-safety-concerns-fix/.

[7] Id.

[8] Commonwealth v. Carter, 115 N.E.3d 559 (Mass. 2018).

[9] Mulcahey, 219 A.3d at 740.

[10] Hollington, supra note 6.

[11] 45 Cyberbullying Statistics and Facts to Make Texting Safer, SlickText (Jan. 4, 2022), https://www.slicktext.com/blog/2020/05/cyberbullying-statistics-facts/.




Freedom to Moderate? Circuits Split Over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas Legislatures passed substantively similar laws restricting social media platforms' ability to moderate posts expressing "viewpoints" and requiring platforms to explain why they chose to censor certain content. These laws seemingly stem from the perception among conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. The laws are in direct conflict with the currently prevalent understanding of social media platforms' First Amendment protections, which include the right to moderate content as an expression of free speech.

While the 11th Circuit declared the Florida law unconstitutional in May for violating social media platforms' First Amendment rights, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning the injunction previously issued by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full decision explaining its reinstatement of the censorship statute, immediately raising constitutional alarm bells in the news. Following this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida's petition for review of the 11th Circuit's May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies which ban certain content or at least require a sensitivity warning before it can be viewed. Twitter restricts hate speech and imagery, gratuitous violence, and sexual violence, and requires sensitive content warnings on adult content. Facebook sets Community Standards, and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations' access to free speech protections was well understood under settled Supreme Court precedent, and was further confirmed in the controversial 2010 Supreme Court decision Citizens United, which established the right of corporations to make political expenditures as an exercise of free speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this through their establishment and enforcement of internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected challenges to social media platforms' ability to set and enforce their own content guidelines, recognizing the platforms' free speech protections under the First Amendment. The 5th Circuit's rejection of this widely accepted standard has created a circuit split which will lead to further litigation and leave social media platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s opinion in May of this year was consistent with the general understanding of social media’s place as private businesses which hold First Amendment rights. It rejected Florida’s argument that social media platforms are common carriers and stated that editorial discretion by the platforms is a protected First Amendment right.[2] The Court recognized the platforms’ freedom to abide by their own community guidelines and choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] This opinion was attacked directly by the 5th Circuit’s later decision, challenging the 11th Circuit’s adherence to existing First Amendment jurisprudence. 

In its September 16th opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiffs' argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also invoked the common carrier doctrine, which empowers states to enforce nondiscriminatory practices for services that the public uses en masse (a classification that the 11th Circuit explicitly rejected), embracing it in the context of social media platforms.[5] Therefore, the court held with "no doubts" that Section 7 of the Texas law, which prevents platforms from censoring users' "viewpoints" (with exceptions for blatantly illegal speech provoking violence, etc.), was constitutional.[6] Section 2 of the contested statute, requiring social media platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest without unduly burdening the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

On September 21st, 2022, Florida petitioned for a writ of certiorari asking the Supreme Court to review the 11th Circuit's May 2022 decision. The petition referenced the 5th Circuit opinion, calling for the Supreme Court to weigh in on the circuit split. Considering recent Supreme Court decisions cutting back Fourth and Fifth Amendment rights, it is anticipated that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws involved in these decisions were Republican-proposed bills, a Supreme Court ruling would impact blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would shape both Republican and Democratic legislatures' ability to regulate social media platforms.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. AG, Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] NetChoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sep. 16, 2022).

[5] Id. at *59.

[6] Id. at *52.

[7] Id. at *102.


Digital Literacy, a Problem for Americans of All Ages and Experiences

Justice Shannon, MJLST Staffer

According to the American Library Association, “digital literacy” is “the ability to use information and communication technologies to find, evaluate, create, and communicate information, requiring both cognitive and technical skills.” The term has existed since 1997, when Paul Gilster coined it to mean “the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers.” In this way, the definition of digital literacy has broadened from how a person absorbs digital information to how one develops, absorbs, and critiques digital information.

The Covid-19 pandemic taught Americans of all ages the value of digital literacy. Elderly populations were forced online without prior training due to the health risks presented by Covid-19, and digitally illiterate parents were unable to help their children with classes.

Separate from Covid-19, the rise of cryptocurrency has created a need for digital literacy in spaces that are not federally regulated.

Elderly

The Covid-19 pandemic did not create the need for digital literacy training for the elderly, but it highlighted a national need to address digital literacy among America’s oldest population. Elderly family members quarantined during the pandemic were suddenly cut off from their families. Teaching them how to use Zoom and Facebook Messenger became a substitute for some, but not all, forms of connectivity. However, teaching an elderly family member how to use Facebook Messenger to speak to loved ones does not enable them to communicate with peers or build other digital literacy skills.

To address digital literacy issues within the elderly population, states have approved Senior Citizen Technology grants. Pennsylvania’s Department of Aging, for instance, has granted funds to adult education centers for senior citizen technology programs. Programs like this have been developing throughout the nation. For example, Prince George’s Community College in Maryland uses state funds to teach technology skills to its older population.

It is difficult to tell whether these programs are working. States like Pennsylvania and Maryland had programs before the pandemic, yet those programs alone did not close the distance between America’s aging population and the rest of the nation when Covid-19 hit. Given the scale of the program in Prince George’s County, however, that likely was not the goal. There is also a larger question: is the purpose of digital literacy training for the elderly to ensure that they can connect with the world during a pandemic, or simply to ensure that they have the skills to communicate with the world at all? With this in mind, programs that predate the pandemic, such as those in Pennsylvania and Maryland, likely had the right approach, even if they were not of a large enough scale to ensure digital literacy for the entirety of our elderly population.

Parents

The pandemic highlighted a similar problem for many American families. While state, federal, and local governments stepped up to provide laptops and access to the internet, many families still struggled to get their children into online classes; this is an issue of what is known as “last mile infrastructure.” During the pandemic, the nation quickly provided families with access to the internet without ensuring they were ready to navigate it, leaving families feeling ill-prepared to support their children’s educational growth from home. Providing families with broadband access without digital literacy training disproportionately impacted families of color by limiting their children’s capacity to grow online compared to their peers. While this was not an intended result, it followed from hasty bureaucracy in response to a national emergency. Nationally, the 2022 Workforce Innovation and Opportunity Act aims to address digital literacy issues among adults by increasing funding for teaching workplace technology skills to working adults. However, this will not ensure that American parents can manage their children’s technological needs.

Crypto

Separate from the issues created by Covid-19 is cryptocurrency. One of the largest selling points of cryptocurrency is that it is largely unregulated; users see it as “digital gold, free from hyper-inflation.” While these claims can be valid, consumers frequently are not aware of the risks. Last year, the Chair of the SEC called cryptocurrencies “the wild west of finance rife with fraud, scams, and abuse.” This year, the Department of the Treasury announced it would release instructional materials explaining how cryptocurrencies work. While this will not directly regulate cryptocurrencies, providing Americans with more tools to understand them may help reduce cryptocurrency scams.

Conclusion

Digital literacy was a problem for years before the Covid-19 pandemic, and as new technologies become popular, there are new lessons for all age groups to learn. Covid-19 appropriately shined a light on the need to address digital literacy issues within our borders. However, if we only go so far as to get Americans networked and prepared for the next national emergency, we will find disparities between those who excel online and those who are ill-equipped to use the internet to connect with family, educate their kids, and participate in e-commerce.


Extending Trademark Protections to the Metaverse

Alex O’Connor, MJLST Staffer

After a 2020 bankruptcy and steadily decreasing revenue that it attributes to the coronavirus pandemic, restaurant and arcade center Chuck E. Cheese is hoping to revitalize its business model by entering the metaverse and making the transition to a pandemic-proof virtual world. In February, Chuck E. Cheese filed two intent-to-use trademark applications with the USPTO, for the marks “CHUCK E. VERSE” and “CHUCK E. CHEESE METAVERSE.”

Under Section 1 of the Lanham Act, the two most common types of applications for registration of a mark on the Principal Register are (1) a use-based application, for which the applicant must have used the mark in commerce, and (2) an intent-to-use (ITU) application, for which the applicant must possess a bona fide intent to use the mark in trade in the near future. Chuck E. Cheese has filed ITU applications for its two marks.

The metaverse is a still-developing virtual and immersive world that will be inhabited by digital representations of people, places, and things. Its appeal lies in the possibility of living a parallel, virtual life. The pandemic has provoked a wave of investment in virtual technologies, and brands are hurrying to extend protection to virtual renditions of their marks by registering specifically for the metaverse. A series of lawsuits over alleged infringing uses of registered marks via still-developing technology has spooked mark holders into taking preemptive action. In the face of this uncertainty, the USPTO could provide mark holders with a measure of predictability by extending analogue protections of marks used in commerce to substantially similar virtual renditions.

Most notably, Hermes International S.A. sued the artist Mason Rothschild for both infringement and dilution over the use of the term “METABIRKINS” in his collection of Non-Fungible Tokens (NFTs). Hermes alleges that the NFTs are confusing customers about the source of the digital artwork and diluting the distinctive quality of Hermes’ popular line of handbags. The argument continues that “META” is merely a generic prefix, so the mark simply means “BIRKINS in the metaverse,” and Rothschild’s use of it constitutes trading on Hermes’ reputation as a brand.

Many companies and individuals are rushing to the USPTO to register trademarks for their brands to use in virtual reality. Household names such as McDonald’s (“MCCAFE” for a virtual restaurant featuring actual and virtual goods), Panera Bread (“PANERAVERSE” for virtual food and beverage items), and others have recently filed applications with the USPTO for virtual marks. The rush of filings signals a recognition among companies that the digital marketplace presents countless opportunities to expand their brand awareness, or, if they are not careful, for trademark copycats to trade on their hard-earned good will among consumers.

Luckily for Chuck E. Cheese and other companies that seek to extend their brands into the metaverse, trademark protection there is governed by the same rules as regular analogue trademark protection. That is, the mark the company seeks to protect must be distinctive, it must be used in commerce, and it must not be covered by a statutory bar to protection. For example, if a mark’s exclusive use by one firm would leave other firms at a significant non-reputation-related disadvantage, the mark is said to be functional and cannot be protected. The metaverse does not present any additional obstacles to trademark protection, so as long as Chuck E. Cheese eventually uses its two marks, it will enjoy their exclusive use among consumers in the metaverse.

However, the relationship between new virtual marks and analogue marks is a subject of some uncertainty. Most notably, should a mark find broad success and achieve fame in the metaverse, would that virtual fame confer fame in the real world? What will trademark expansion into the metaverse mean for licensing agreements? Clarification from the USPTO could help put mark holders at ease as they venture into the virtual market. 

Additionally, trademarks in the metaverse present another venue in which trademark trolls can attempt to register an already well-known mark with no actual intent to use it, although the requirement under U.S. law that mark holders either use or possess a bona fide intent to use the mark can help mitigate this problem. Finally, observers contend that the expansion of commerce into the virtual marketplace will present opportunities for copycats to exploit marks. Already, third parties are seeking to register marks for virtual renditions of existing brands. In response, trademark lawyers are encouraging their clients to register their virtual marks as quickly as possible to head off potential copycat users. The USPTO could ensure brands’ security by providing more robust protections to virtual trademarks based on a substantially similar, already-registered analogue trademark.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, he or she can report the crime to the police and hold the criminal accountable. When someone is wronged, they can seek retribution in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general sense that they can hold wrongdoers accountable and of the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and those in the legal profession to imagine applying the law to a technology that is not yet fully developed. However, Congress and other law-making bodies will need to consider how they can control how people use the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often using a camera. Both virtual and augmented reality are used today, often in video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokemon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As Matthew Ball, a venture capitalist, has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it to be the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.
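For readers curious what a ledger that “records the provenance of a digital asset” looks like in practice, below is a minimal, hypothetical Python sketch of the underlying data structure. The item, owner names, and helper functions are invented for illustration; this is not any platform’s actual implementation, and real blockchains layer consensus, digital signatures, and peer-to-peer replication on top of this basic idea.

import hashlib
import json

def make_block(prev_hash, data):
    # Chain each record to the one before it by storing the prior hash.
    body = {"prev_hash": prev_hash, "data": data}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify(chain):
    # Re-hash every block and check the links; any edit breaks the chain.
    for i, block in enumerate(chain):
        body = {"prev_hash": block["prev_hash"], "data": block["data"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Record the provenance of a hypothetical virtual item as it changes hands.
chain = [make_block("0" * 64, {"item": "virtual parcel #42", "owner": "alice"})]
chain.append(make_block(chain[-1]["hash"], {"item": "virtual parcel #42", "owner": "bob"}))
print(verify(chain))               # True: the ledger is intact
chain[0]["data"]["owner"] = "eve"  # attempt to rewrite ownership history...
print(verify(chain))               # False: the tampering is detected

Because each entry depends on the hash of the entry before it, no single participant can quietly rewrite who owned what; that tamper-evidence is the property the “provenance” language above refers to.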

The Metaverse will allow people to do the activities they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have their own avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not yet exist as a single virtual world that all can access, there are some examples that come close to what experts imagine it will look like. The game Second Life is a simulation that gives users access to a virtual reality where they can eat, shop, work, and do any other real-world activity. Decentraland is another example, which allows people to buy and sell land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in development. However, there are many popular culture references to the concepts involved, such as Ready Player One and Snow Crash, a novel by Neal Stephenson. Many people are excited about the possibilities the Metaverse will bring, such as new ways of learning through real-world simulations. However, with such great change on the horizon, there are still many concerns to address.

Because the Metaverse is such a novel concept, it is unclear how exactly the legal community will respond to it. How do lawmakers create laws that regulate the use of something not yet fully understood, and how do they make sure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response other than an apology from Meta and statements that it would make improvements. This was a horrifying experience that showcased the issues surrounding the Metaverse. As explained by Nina Patel, the co-founder and VP of Metaverse Research, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so life-like that a person assaulted in a virtual world feels as though they actually experienced the assault in real life. This should be raising red flags. The problem, however, arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. And because people know they are in a virtual world, they think they can do whatever they want with no consequences.

At present, there are no laws regarding conduct in the Metaverse. This will certainly need to be addressed, as laws are needed to prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse, given users’ ability to mask their identities and remain anonymous, so it could be hard to figure out who committed certain prohibited acts. At the moment, some virtual realities have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. Even where such terms exist, the problem remains how to enforce them; banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse is outside the real world should not mean that people can do whatever they want, whenever they want.