Social Media

Freedom to Moderate? Circuits Split Over First Amendment Interpretation

Annelise Couderc, MJLST Staffer

Recently, the Florida and Texas Legislatures passed substantively similar laws restricting social media platforms’ ability to moderate posts expressing “viewpoints” and requiring platforms to explain why they chose to censor certain content. These laws seemingly stem from a perception among conservative-leaning users that their views are disproportionately censored, despite evidence showing otherwise. The laws conflict directly with the prevailing understanding of social media platforms’ First Amendment protections, which include the right to moderate content as an expression of free speech.

While the 11th Circuit declared the Florida law unconstitutional in May for violating social media platforms’ First Amendment rights, only four months later the 5th Circuit reinstated the similar Texas law without explanation, overturning a prior injunction issued by the U.S. District Court for the Western District of Texas. On September 16, 2022, the 5th Circuit released its full opinion explaining the reinstatement, immediately raising constitutional alarm bells in the news. Following this circuit split, social media platforms must navigate a complicated legal minefield. The issue is likely to be resolved by the Supreme Court in response to Florida’s petition for review of the 11th Circuit’s May decision.

Social Media Platforms Are Generally Free to Moderate Content

The major social media platforms all have policies that ban certain content, or at least require a sensitivity warning before it can be viewed. Twitter restricts hate speech and imagery, gratuitous violence, and sexual violence, and requires sensitive-content warnings on adult content. Facebook sets Community Standards, and YouTube (a Google subsidiary) sets Community Guidelines that restrict similar content.[1] Social media corporations’ access to free speech protections was well understood under settled Supreme Court precedent, and was further confirmed in the controversial 2010 decision Citizens United, which established corporations’ right to make independent political expenditures as an exercise of free speech. In sum, courts have generally allowed social media platforms to moderate and censor sensitive content as they see fit, and platforms have embraced this freedom through their establishment and enforcement of internal guidelines.

Circuits Split Over First Amendment Concerns

Courts have generally rejected arguments challenging social media platforms’ ability to set and uphold their own content guidelines, upholding social media platforms’ free speech protections under the First Amendment. The 5th Circuit’s rejection of this widely accepted standard has created a circuit split which will lead to further litigation and leave social media platforms uncertain about the validity of their policies and the extent of their constitutional rights.

The 11th Circuit’s opinion in May of this year was consistent with the general understanding of social media’s place as private businesses which hold First Amendment rights. It rejected Florida’s argument that social media platforms are common carriers and stated that editorial discretion by the platforms is a protected First Amendment right.[2] The Court recognized the platforms’ freedom to abide by their own community guidelines and choose which content to prioritize as expressions of editorial judgment protected by the First Amendment.[3] This opinion was attacked directly by the 5th Circuit’s later decision, challenging the 11th Circuit’s adherence to existing First Amendment jurisprudence. 

In its September 16th opinion, the 5th Circuit refused to recognize censorship as speech, rejecting the plaintiffs’ argument that content moderation is a form of editorial discretion (a recognized form of protected speech for newspapers).[4] The court also invoked the common carrier doctrine—which empowers states to enforce nondiscriminatory practices for services that the public uses en masse, and a classification that the 11th Circuit explicitly rejected—embracing it in the context of social media platforms.[5] The court therefore held with “no doubts” that Section 7 of the Texas law—which prevents platforms from censoring users’ “viewpoints” (with exceptions for blatantly illegal speech, speech provoking violence, etc.)—was constitutional.[6] Section 2 of the contested statute, which requires social media platforms to justify and announce their moderation choices, was similarly upheld as serving a sufficiently important government interest without unduly burdening the businesses.[7] The law allows individuals to sue for enforcement.

The Supreme Court’s Role and Further Implications

On September 21, 2022, Florida petitioned for a writ of certiorari asking the Supreme Court to review the May 2022 decision. The petition referenced the 5th Circuit’s opinion, calling on the Supreme Court to weigh in on the circuit split. Given recent Supreme Court decisions cutting back Fourth and Fifth Amendment rights, some anticipate that the First Amendment rights of online platforms may be next.

Although the Florida and Texas laws at issue in these Circuit Court decisions were Republican-proposed bills, a Supreme Court decision would affect blue states as well. California, for example, has proposed a bill requiring social media platforms to make public their policies on hate speech and disinformation. A decision in either direction would shape both Republican and Democratic legislatures’ ability to regulate social media platforms.

Notes

[1] Studies have found that platforms like YouTube may actually push hateful content through their algorithms despite what their official policies may state.

[2] NetChoice, LLC v. AG, Fla., 34 F.4th 1196, 1222 (11th Cir. 2022).

[3] Id. at 1204.

[4] NetChoice, L.L.C. v. Paxton, No. 21-51178, 2022 U.S. App. LEXIS 26062, at *28 (5th Cir. Sept. 16, 2022).

[5] Id. at *59.

[6] Id. at *52.

[7] Id. at *102.


“I Don’t Know What to Tell You. It’s the Metaverse—I’ll Do What I Want.” How Rape Culture Pervades Virtual Reality

Zanna Tennant, MJLST Staffer

When someone is robbed or injured by another, they can report the crime to the police and hold the offender accountable. When someone is wronged, they can seek redress in court. Although there are certainly roadblocks in the justice system, such as the inability to afford an attorney or a lack of understanding of how to use the system, most people have a general sense that they can hold wrongdoers accountable and know the basic steps in the process. In real life, there are laws explicitly written that everyone must abide by. But what happens to laws and the justice system as technology changes how we live? When the internet came into widespread public use, Congress enacted new laws to control how people are allowed to use it. Now, a new form of the internet, known as the Metaverse, has both excited big companies about what it could mean for the future and sparked controversy about how to adapt the law to this new technology. It can be hard for lawyers and others in the legal profession to imagine how to apply the law to a technology that is not yet fully developed. Nevertheless, Congress and other lawmaking bodies will need to consider how they can control how people use the Metaverse and ensure that it is not abused.

The Metaverse is a term that has recently gained a lot of attention, although the concept is by no means new. Essentially, the Metaverse is a “simulated digital environment that uses augmented reality (AR), virtual reality (VR), and blockchain, along with concepts from social media, to create spaces for rich user interaction mimicking the real world.” Many people are aware that virtual reality is a completely simulated environment that takes a person out of the real world. Augmented reality, on the other hand, uses the real world and adds or changes things, often through a camera. Both virtual and augmented reality are used today, often in video games. For virtual reality, think of the headsets that allow you to immerse yourself in a game. I, myself, have tried virtual reality video games, such as Job Simulator. Unfortunately, I burned down the kitchen in the restaurant I was working at. An example of augmented reality is Pokémon Go, which many people have played. Blockchain technology, the third aspect, is a decentralized, distributed ledger that records the provenance of a digital asset. The Metaverse is a combination of these three aspects, along with other possibilities. As the venture capitalist Matthew Ball has described it, “the metaverse is a 3D version of the internet and computing at large.” Many consider it the next big technology that will revolutionize the way we live. Mark Zuckerberg has even changed the name of his company, Facebook, to “Meta” and is focusing his attention on creating a Metaverse.

The Metaverse will allow people to do the activities they do in the real world, such as spending time with friends, attending concerts, and engaging in commerce, but in a virtual world. People will have avatars that represent them in the Metaverse and allow them to interact with others. Although the Metaverse does not yet exist, as there is no single virtual reality world that all can access, some examples come close to what experts imagine the Metaverse will look like. The game Second Life is a simulation that gives users access to a virtual world where they can eat, shop, work, and do other real-world activities. Decentraland is another example, which allows people to buy and sell virtual land using digital tokens. Other companies, such as Sony and Lego, have invested billions of dollars in the development of the Metaverse. The idea of the Metaverse is not entirely thought out and is still in development. There are, however, many popular culture references to the concepts involved, such as Ready Player One and Neal Stephenson’s novel Snow Crash. Many people are excited about the possibilities the Metaverse may bring, such as new ways of learning through real-world simulations. But with such great change on the horizon, there are still many concerns to address.

Because the Metaverse is such a novel concept, it is unclear how the legal community will respond to it. How do lawmakers create laws regulating the use of something not fully understood, and how do they ensure that people do not abuse it? Already, there have been numerous instances of sexual harassment, threats of rape and violence, and even sexual assault. Recently, a woman was gang raped in the VR platform Horizon Worlds, which was created by Meta. Unfortunately, and perhaps unsurprisingly, little action was taken in response, other than an apology from Meta and statements that it would make improvements. This horrifying experience showcased the issues surrounding the Metaverse. As Nina Patel, co-founder and VP of Metaverse Research, explained, “virtual reality has essentially been designed so the mind and body can’t differentiate virtual/digital experiences from real.” In other words, the Metaverse is so lifelike that a person assaulted in a virtual world would feel as if they had actually experienced the assault in real life. This should raise red flags. The problem, however, arises when trying to regulate activities in the Metaverse. Sexually assaulting someone in virtual reality is different from assaulting someone in the real world, even if it feels the same to the victim. Because people are aware that they are in a virtual world, they think they can do whatever they want without consequences.

At present, there are no laws governing conduct in the Metaverse. Certainly, this will need to be addressed, as there need to be laws that prevent this kind of behavior. But how does one regulate conduct in a virtual world? Does a person’s avatar have personhood and rights under the law? This has yet to be decided. It is also difficult to track someone in the Metaverse because users can mask their identities and remain anonymous, so it could be hard to determine who committed certain prohibited acts. At the moment, some virtual realities have terms of service that attempt to regulate conduct by restricting certain behaviors and providing remedies for violations, such as banning. It is worth noting that Meta does not have any terms of service or rules regarding conduct in Horizon Worlds. Even so, the problem remains how to enforce these terms of service. Banning someone for a week or so is not enough. Actual laws need to be put in place to protect people from sexual assault and other violent acts. The fact that the Metaverse exists outside the real world should not mean that people can do whatever they want, whenever they want.


Save the Children . . . From Algorithms?

Sarah Nelson, MJLST Staffer

Last week, a bill advanced out of the Minnesota House Commerce Finance and Policy Committee that would ban social media platforms from utilizing algorithms to suggest content to those under the age of 18. Under the bill, known as HF 3724, social media platforms with more than one million account holders that operate in Minnesota, like Instagram, Facebook, and TikTok, would no longer be able to use their algorithms to recommend user-generated content to minors.

The sponsor of the bill, Representative Kristin Robbins, a Republican from Maple Grove, said that she was motivated to sponsor HF 3724 after reading two articles from the Wall Street Journal. In the first, the Wall Street Journal created dozens of automated accounts on the app TikTok, which it registered as being between the ages of 13 and 15. The outlet then detailed how the TikTok algorithm, used to create a user’s For You feed, would inundate teenage users with sex- and drug-related content if they engaged with that content. Similarly, in the second article, the Wall Street Journal found that TikTok would repeatedly present teenagers with extreme weight loss and pro-eating disorder videos if they continued to interact with that content.

In response to the second article, TikTok said it would alter its For You algorithm “to avoid showing users too much of the same content.” It is also important to note that per TikTok’s terms of service, to use the platform, users must be over 13 and must have parental consent if they are under 18. TikTok also already prohibits “sexually explicit material” and works to remove pro-eating disorder content from the app while providing a link to the National Eating Disorders Association helpline.

As to enforcement, HF 3724 says social media platforms are liable to account holders if the account holder “received user-created content through a social media algorithm while the individual account holder was under the age of 18” and the social media platform “knew or had reason to know that the individual account holder was under the age of 18.” Social media platforms would then be “liable for damages and a civil penalty of $1,000 for each violation.” However, the bill provides an exception for content “that is created by a federal, state, or local government or by a public or private school, college, or university.”

According to an article written on the bill by the legislature, Robbins is hopeful that HF 3724 “could be a model for the rest of the country.”

 

Opposition from Tech

As TechDirt points out, algorithms are useful; they help separate relevant content from irrelevant content, which optimizes use of the platform and stops users from being overwhelmed. The bill would essentially stop young users from reaping the benefits of smarter technology.

A similar argument was raised by NetChoice, which expressed concern that HF 3724 “removes the access to beneficial technologies from young people.” According to NetChoice, the definition of “social media” used in the bill is unacceptably broad and would rope in sites that teenagers use “for research and education.” For example, NetChoice cites teenagers no longer being able to get book recommendations from the algorithm on Goodreads or additional article recommendations on a research topic from an online newspaper.

NetChoice also argues that HF 3724 needlessly involves the state in a matter that should be left to the discretion of parents. NetChoice explains that parents, likely knowing their child best, can decide on an individual basis whether they want their children on a particular social media platform.

Opponents of the bill also emphasize that complying with HF 3724 would prove difficult for social media companies, which would essentially have to maintain separate platforms with no algorithmic functions for those under 18. Additionally, to comply with the bill, social media platforms would have to collect more personal data from users, including age and location. Finally, opponents note that some platforms actually use algorithms to present appropriate content to minors. Similarly, TikTok has begun utilizing its algorithms to remove videos that violate platform rules.

 

What About the First Amendment?

In its letter to the Minnesota House Commerce Committee, NetChoice said that HF 3724 would be found to violate the First Amendment. NetChoice argued that “multiple court cases have held that the distribution of speech, including by algorithms such as those used by search engines, are protected by the First Amendment” and that HF 3724 would be struck down if passed because it “result[s] in the government restraining the distribution of speech by platforms and Minnesotans access to information.”

NetChoice also cited to Ashcroft v. ACLU, a case in which “the Supreme Court struck down a federal law that attempted to prevent the posting of content harmful to teenagers on the web due to [the fact it was so broad it limited adult access] as well as the harm and chilling effect that the associated fines could have on legal protected speech.”

As Ars Technica notes, federal courts blocked laws pertaining to social media in both Texas and Florida last year. Both laws were challenged for violating the First Amendment.

 

Moving Forward

HF 3724 advanced unanimously out of the House Judiciary Finance and Civil Law Committee on March 22. The committee made some changes to the bill, specifying that the legislation would not impact algorithms associated with email and internet search providers. Additionally, the committee addressed a criticism by the bill’s opponents and exempted algorithms used to filter out age-inappropriate content. There is also a companion bill to HF 3724, SF3922, being considered in the Senate.

It will be interesting to see whether legislators are dissuaded from voting for HF 3724 given its uncertain constitutionality and its potential impact on those under the age of 18, who would no longer be able to use the optimized and personalized versions of social media platforms. So far, however, technology companies have not put their best foot forward with legislators, sending lobbyists in their stead to advocate against the bill.


Social Media Influencers Ask What “Intellectual Property” Means

Henry Killen, MJLST Staffer

Today, just about anyone can name their favorite social media influencer. The most popular influencers are athletes, musicians, politicians, entrepreneurs, or models. Ultra-famous influencers, such as Kylie Jenner, can charge over $1 million for a single post featuring a company’s product. So what are the risks of being an influencer? TikTok star Charli D’Amelio has been on both sides of intellectual property disputes. A photo of Charli was included in media mogul Sheeraz Hasan’s video promoting his ability to “make anyone famous.” The video featured many other celebrities, such as Logan Paul and Zendaya. Charli’s legal team sent a cease-and-desist letter to Sheeraz demanding that her portion of the promotional video be scrubbed. Her lawyers assert that her presence in the promo “is not approved and will not be approved.” Charli has also been on the other side of celebrity intellectual property issues. The star published her first book in December and has come under fire from photographer Jake Doolittle for allegedly using photos he took without his permission. Though no lawsuit has been filed, Jake has posted a series of Instagram posts blaming Charli’s team for not compensating him for his work.

Charli’s controversies highlight a bigger question society is facing: is content shared on social media platforms intellectual property? A good place to begin is figuring out what exactly intellectual property is. Intellectual property “refers to creations of the mind, such as inventions; literary and artistic works; designs; and symbols, names, and images used in commerce.” Social media platforms make it possible to access endless displays of content, from images to ideas, creating a cultural norm of sharing many aspects of life. Legal teams at the major social media platforms already have policies in place that make it against the rules to take images from a social media feed and use them as one’s own. Bloggers, for example, may not be aware that what they write may already be trademarked or copyrighted, or that the images they pull off the internet for their posts may not be freely reposted. Influencers get reposted on sites like Instagram all the time, and not just by loyal fans. These reposts may seem harmless to many influencers, but it is actually against Instagram’s policy to repost a photo without the creator’s consent. This may not seem like a big deal, because what influencer doesn’t want more attention? Sometimes, however, influencers’ work gets taken and then becomes a sensation. A group of BIPOC TikTok users are fighting to copyright a dance they created that eventually became one of the biggest dances in TikTok history. A key fact in their case is that the dance only became wildly popular after the most famous TikTok users began doing it.

There are few examples of social media copyright issues being litigated, but in August 2021, a Manhattan federal judge ruled that the practice of embedding social media posts on third-party websites, without permission from the content owner, could violate the owner’s copyright. In reaching this decision, the judge rejected the 9th Circuit’s “server test,” which holds that embedding content from a third party’s social media account violates the content owner’s copyright only if a copy is stored on the defendant’s servers. General copyright law lays out four considerations when deciding whether a work should be granted copyright protection: originality, fixation, idea versus expression, and functionality. These considerations notably leave a gray area in determining whether dances or expressions on social media sites can be copyrighted. Congress should enact a more comprehensive law to better address intellectual property as it relates to social media.


Whitelist for Thee, but Not for Me: Facebook File Scandals and Section 230 Solutions

Warren Sexson, MJLST Staffer

When I was in 7th grade, I convinced my parents to let me get my first social media account. Back in the stone age, that phrase was synonymous with Facebook. I never thought much about how growing up in the digital age affected me, but looking back, the cultural red flags are easy to see. So it came as no surprise when, this fall, the Wall Street Journal broke what has been dubbed “The Facebook Files,” among them an internal company study showing that Instagram is toxic to teen girls. While tragic, this conclusion is something many Gen-Zers and late Millennials have known for years. The “Facebook Files” contain another, perhaps even more jarring, finding: Facebook exempts many celebrities and elite influencers from its rules of conduct. This revelation demands a discussion of the legal troubles the company may find itself in and of the proposed solutions to the “whitelisting” problem.

The Wall Street Journal’s reporting describes an internal Facebook process called “whitelisting,” in which the company “exempted high-profile users from some or all of its rules, according to company documents . . . .” The whitelist includes individuals from a wide range of industries and political viewpoints, from soccer megastar Neymar to Elizabeth Warren and Donald Trump (prior to January 6th). The practice put the tech giant in legal jeopardy after a whistleblower, later identified as Frances Haugen, filed a complaint with the Securities and Exchange Commission (SEC) alleging that Facebook has “violated U.S. securities laws by making material misrepresentations and omissions in statements to investors and prospective investors . . . .” See 17 CFR § 240.14a-9 (enforcement provision on false or misleading statements to investors). Mark Zuckerberg himself has made statements about Facebook’s neutral application of its standards that are directly at odds with the Facebook Files. Regardless of any potential SEC investigation, the whitelist has opened up a conversation about the need for serious reform in big tech to ensure that no company can maintain lists of privileged users again. All of the potential solutions involve 47 U.S.C. § 230, known colloquially as “Section 230.”

Section 230 allows big tech companies to censor content while still being treated as platforms rather than publishers (which would incur liability for what appears on their websites). Specifically, § 230(c)(2)(A) provides that no “interactive computer service” shall be held liable for acting in good faith to restrict “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable [content] . . . .” It is the last phrase, “otherwise objectionable,” that tech companies have used as justification for removing “hate speech” or “misinformation” from their platforms without incurring publisher-like liability. The desire to police such speech has led Facebook to develop stringent platform rules, which in turn created the need for whitelisting. This brings us to the first proposal: eliminating the phrase “otherwise objectionable” from Section 230 itself. The proposed “Stop the Censorship Act of 2020,” introduced by Republican Paul Gosar of Arizona, does just that. Proponents argue that it would force tech companies to be neutral or lose liability protections. No big tech company would ever create standards stringent enough to require a “whitelist” or an exempted class, because the permissible moderation standard would hew close to First Amendment protections—problem solved! However, the current governing majority has serious concerns about forced neutrality, which would ignore problems of misinformation and the mental health effects of social media in the aftermath of January 6th.

Elizabeth Warren, in an approach similar to a recent proposal in the House Judiciary Committee, takes a different tack: breaking up big tech. Warren proposes legislation to limit big tech companies from competing with the small businesses that use their platforms and to reverse or block mergers, such as Facebook’s purchase of Instagram. Her plan does not necessarily stop companies from keeping whitelists, but it does limit the power held by Facebook and others, which could, in turn, make them think twice before applying their rules unevenly. Furthermore, Warren has called on regulators to use “every tool in the toolbox” with regard to Facebook.

Third, some have claimed that Google, Facebook, and Twitter have crossed the line under existing legal doctrines to become state actors. So, the argument goes, government cannot “induce” or “encourage” private persons to do what the government cannot. See Norwood v. Harrison, 413 U.S. 455, 465 (1973). Since some in Congress have warned big tech executives to restrict what they see as bad content, the government has essentially co-opted the hand of industry to block out constitutionally protected speech. See Railway Employee’s Department v. Hanson, 351 U.S. 225 (1956) (finding state action despite no actual mandate by the government for action). If the Supreme Court were to adopt this reasoning, Facebook may be forced to adopt a First Amendment centric approach since the current hate speech and misinformation rules would be state action; whitelists would no longer be needed since companies would be blocked from policing fringe content. Finally, the perfect solution! The Court can act where Congress cannot agree. I am skeptical of this approach—needless to say, such a monumental decision would completely shift the nature of social media. While Justice Thomas has hinted at his openness to this argument, it is unclear if the other justices will follow suit.

All in all, Congress and the Court have tools at their disposal to combat the disturbing actions taken by Facebook. Beyond the potential SEC violations, Section 230 is a complicated but necessary issue Congress must confront in the coming months. “The Facebook Files” have exposed the need for systemic change in social media. What I once used to play Farmville has become a machine with rules for me, but not for thee.


Lawyers in Flame Wars: The ABA Says Be Nice Online

Parker von Sternberg, MJLST Staffer

The advent of Web 2.0 around the turn of the millennium brought with it an absolute tidal wave of new social interactions. To this day we are in the throes of figuring out how to best engage with one another online, particularly when things get heated or otherwise out of hand. In this new wild west, lawyers sit at a perhaps unfortunate junction. Lawyers are indelibly linked to problems and moments of breakdown—precisely the events that lead to lashing out online. At the same time a lawyer, more so than many professions, relies upon their personal reputation to drive their business. When these factors collide, it creates pressure on the lawyer to defend themselves, but doing so properly can be a tricky thing.

When it comes to questions of ethics for lawyers, the first step is generally to crack open the Model Rules of Professional Conduct (MRPC), given that they have been adopted in 49 states and are kept up to date by the American Bar Association (ABA). While these model rules are customized to some extent from state to state, by and large the language used in the MRPC is an effective starting point for professional ethics issues across the country. Recently, the ABA has stepped into the fray with Formal Opinion 496, which lays out the official interpretation of MRPC 1.6 and how it comes into play in these situations.

MRPC 1.6 protects the confidentiality of client information. For our purposes, the pertinent sections are:

(a) A lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation or the disclosure is permitted by paragraph (b) and . . .

(b)(5) to establish a claim or defense on behalf of the lawyer in a controversy between the lawyer and the client, to establish a defense to a criminal charge or civil claim against the lawyer based upon conduct in which the client was involved, or to respond to allegations in any proceeding concerning the lawyer’s representation of the client.

So, when someone goes on Google Maps and excoriates your practice, a review that will pop up to everyone who even looks for directions to the office, what can be done? The first question is whether or not they are in fact a former client. If not, feel free to engage! Just wade in there and start publicly fighting with a stranger (really though, don’t do this. Even the ABA knows what the Streisand Effect is). However, if they are a former client, MRPC 1.6 or the local equivalent applies.

In Minnesota we have the MNRPC, with 1.6(b)(8) mirroring MRPC 1.6(b)(5). At its core, the ABA’s interpretation turns on the fact that an online review, on its own, does not qualify as a “controversy” or “proceeding.” That is not to say that it cannot give rise to one though! In 2016 an attorney in Florida took home $350,000 after former clients repeatedly defamed her because their divorce didn’t go how they wanted. But short of the outright lies in that case, lawyers suffering from online hit pieces are more limited in their options. The ABA lays out four possible responses to poor online reviews:

1) do not respond to the negative post or review at all, because as was brought up above, you tempt the Streisand Effect at your peril;

2) request that the website or search engine remove the review;

3) post an invitation to contact the lawyer privately to resolve the matter; and

4) indicate that professional considerations preclude a response.

While none of these options exactly inspires images of righteous fury, of defending your besmirched professional honor, or of righting the wrongs done to your name, it appears unlikely that they will get you in trouble with the ethics board either. The ABA’s formal opinion lays out an impressive list of authorities from nearly a dozen states establishing that lawyers can and will face consequences for public review-related dust-ups. The only option, it seems, for an attorney really looking to have it out online is to move to Washington, D.C., where the local rules allow for disclosure “to the extent reasonably necessary to respond to specific allegations by the client concerning the lawyer’s representation of the client.”


Watching an APA Case Gestate Live!

Parker von Sternberg, MJLST Staffer

On October 15th the FCC published an official Statement of Chairman Pai on Section 230. Few statutes have come under greater fire in recent memory than the Protection for “Good Samaritan” Blocking and Screening of Offensive Material, and the FCC’s decision to wade into the fray is almost certain to prompt a lawsuit regardless of which side of the issue the Commission comes down on.

As a brief introduction, 47 U.S. Code § 230 provides protections from civil suits for providers of interactive computer services, which for our purposes can simply be considered websites. The statute was drafted and passed as a direct response by Congress to a pair of cases: Cubby, Inc. v. CompuServe Inc., 776 F. Supp. 135 (S.D.N.Y. 1991), and Stratton Oakmont, Inc. v. Prodigy Services Co., 1995 WL 323710 (N.Y. Sup. Ct. 1995). Cubby held that the defendant, CompuServe, was not responsible for third-party content posted on its message board. The decisive reasoning by the court was that CompuServe was a distributor, not a publisher, and thus “must have knowledge of the contents of a publication before liability can be imposed.” Cubby, 776 F. Supp. at 139. On the other hand, in Stratton Oakmont, the defendant’s exertion of “editorial control” over a message board otherwise identical to the one in Cubby “opened [them] up to a greater liability than CompuServe and other computer networks that make no such choice.” Stratton Oakmont, 1995 WL 323710 at *5.

Congress thus faced an issue: active moderation of online content, which is generally going to be a good idea, created civil liability, while leaving message boards open as completely lawless zones protected the owner of the board. The answer to this conundrum was § 230, which states, in part:

(c) Protection for “Good Samaritan” blocking and screening of offensive material

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability – No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected . . . .

Judicial application of the statute has so far largely read the language expansively. Zeran v. AOL held that “[b]y its plain language, § 230 creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.” Zeran v. Am. Online, Inc., 129 F.3d 327, 330 (4th Cir. 1997). The court also declined to recognize a difference between a defendant acting as a publisher and one acting as a distributor. Speaking to Congress’s legislative intent, the court charted a course that aimed both to immunize service providers and to encourage self-regulation. Id. at 331–34. Zeran has proved immensely influential, having been cited over a hundred times in the years since.

Today, however, the functioning of § 230 has become a lightning rod for the complaints of many on social media. Rather than encouraging interactive computer services to self-regulate, the story goes that it instead protects them despite their “engaging in selective censorship that is harming our national discourse.” Republicans in the Senate have introduced a bill to amend the Communications Decency Act specifically to reestablish liability for website owners in a variety of ways that § 230 currently protects them from. The Supreme Court has also dipped its toes in the turbulent waters of online censorship fights, with Justice Thomas saying that “courts have relied on policy and purpose arguments to grant sweeping protection to Internet platforms” and that “[p]aring back the sweeping immunity courts have read into §230 would not necessarily render defendants liable for online misconduct.”

On the other hand, numerous private entities and individuals hold that § 230 forms part of the backbone of the internet as we know it today. Congress and the courts, up until a couple of years ago, stood in agreement that it was vitally important to draw a bright line between the provider of an online service and those that used it. It goes without saying that some of the largest tech companies in the world directly benefit from the protections offered by this law, and it can be argued that the economic impact is not limited to those larger players alone.

What all of this hopefully goes to show is that, no matter what happens to this statute, someone somewhere will be willing to spend the time and money necessary to litigate over it. The question is what shape that litigation will take. As it currently stands, the new bill in the Senate has little chance of getting through the House of Representatives to the President’s desk. The Supreme Court just recently denied cert in yet another § 230 case, leaving existing precedent intact. Enter Ajit Pai and the FCC, with their legal authority to interpret 47 U.S. Code § 230. Under cover of Chevron deference, which protects an agency’s interpretation of statutes the legislature has empowered it to enforce, the FCC wields massive influence over the meaning of § 230. Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984).

While the FCC’s engagement is currently limited to a statement that it intends to “move forward with rulemaking to clarify [§ 230’s] meaning,” there are points to discuss. What limits are there on the power to alter the statute’s meaning? Based on the Chairman’s statement, can we tell generally which side the Commission will come down on? As to the former, as was said above, the limit is set largely by Chevron deference and by § 706 of the APA. The key question will be whether whoever ends up unhappy with the FCC’s interpretation can prove that it is “arbitrary and capricious” or exceeds a “permissible construction of the statute.” Chevron U.S.A., Inc. v. Natural Resources Defense Council, Inc., 467 U.S. 837 (1984).

The FCC Chairman’s statement lays out that issues exist surrounding § 230 and establishes that the FCC believes it has the legal authority to interpret the statute. It finishes by saying “[s]ocial media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.” Based on this statement alone, it certainly sounds like the FCC intends to narrow the protections for providers of interactive computer services in some fashion. At the same time, it raises questions. For example, does § 230 provide websites with special forms of free speech that other individuals and groups do not have? The statute does not on its face make anything legal that would not be legal without it. Rather, it ensures that legal responsibility for speech lies with the speaker, rather than with the digital venue in which it is said.

The current divide on liability for speech and content moderation on the internet draws our attention to issues of power as the internet continues to pervade all aspects of life. When the President of the United States is being publicly fact-checked, people will sit up and take notice. The current Administration, parts of the Supreme Court, some Senators, and now the FCC all appear to feel that legal proceedings are a necessary response to this happening. At the same time, alternative views do exist outside of Washington D.C., and at many points they may be more democratic than those proposed within our own government.

There is a chance that, if the FCC takes too long to produce a “clarification” of § 230, Chairman Pai will be replaced after the upcoming presidential election. Even if this does happen, I feel that outlining the basic positions surrounding this statute is nonetheless worthwhile. A change in administrations simply means that the fight will occur via proposed statutory amendments or in the Supreme Court, rather than via the FCC.

 


Privacy, Public Facebook Posts, and the Medicalization of Everything

Peter J. Teravskis, MD/JD Candidate, MJLST Staffer

Medicalization is “a process by which human problems come to be defined and treated as medical problems.” Medicalization is not a formalized process, but is instead “a social meaning embedded within other social meanings.” As the medical domain has expanded in recent years, scholars have begun to point to problems with “over-medicalization” or “corrupted medicalization.” Specifically, medicalization is used to describe “the expansion of medicine in people’s lives.” For example, scholars have problematized the medicalization of obesity, shyness, housing, poverty, normal aging, and even dying, amongst many others. The process of medicalization has become so pervasive in recent years that various sociologists have begun to discuss it as the medicalization “of everyday life,” “of society,” “of culture,” of the human condition, and “the medicalization of everything”—i.e., turning all human difference into pathology. Similarly, developments in “technoscientific biomedicine” have led scholars to blur the line of what is exclusively “medical” into a broader process of “biomedicalization.”

Medicalization does not carry a valence of “good” or “bad” per se: medicalization and demedicalization can both restrict and expand personal liberties. However, when everyday living is medicalized there are many attendant problems. First, medicalization places problems outside a person’s control: rather than the result of choice, personality, or character, a medicalized problem is considered biologically preordained or “curable.” Medicalized human differences are no longer considered normal; therefore, “treatment” becomes a “foregone conclusion.” Because of this, companies are incentivized to create pharmacological and biotechnological solutions to “cure” the medicalized problem. From a legal perspective, Professor Adele E. Clarke and colleagues note that through medicalization, “social problems deemed morally problematic . . . [are] moved from the professional jurisdiction of the law to that of medicine.” This process is referred to, generally, as the “medicalization of deviance.” Further, medicalization can de-normalize aspects of the human condition and classify people as “diseased.”

Medicalization is important to the sociological study of social control. Social control is defined as the “mechanisms, in the form of patterns of pressure, through which society maintains social order and cohesion.” Thus, once medicalized, an illness is subject to control by medicinal interventions (drugs, surgery, therapy, etc.), and sick people are expected to take on the “sick role,” whereby they become the subjects of physicians’ professional control. A recent example of medical social control is the social pressure to engage in hygienic habits, precautionary measures, and “social distancing” in response to the novel coronavirus, COVID-19. The COVID-19 pandemic is an expressly medical problem; however, when normal life, rather than a viral outbreak, is medicalized, medical social control becomes problematic. For example, the sociologist Peter Conrad argues that medical social control can take the form of “medical surveillance.” He states that “this form of medical social control suggests that certain conditions or behaviors become perceived through a ‘medical gaze’ and that physicians may legitimately lay claim to all activities concerning the condition” (quoting Michel Foucault’s seminal book The Birth of the Clinic).

The effects of medical social control are amplified by the communal nature of medicine and healthcare, leading to “medical-legal hybrid[]” social control and, I argue, medical-corporate social control. For example, employers and insurers have interests in encouraging healthful behavior when it reduces members’ health care costs. Similarly, employers are interested in maximizing healthy working days, decreasing worker turnover, and maximizing healthy years, thus expanding the workforce. The State has similar interests, as well as interests in reducing end-of-life and old-age medical costs. At first glance, this would seem to militate against overmedicalization. However, modern epidemiological methods have revealed the long-term consequences of untreated medical problems. Thus, medicalization may result in the diversion of health care dollars toward less expensive preventative interventions and away from more expensive therapy that would otherwise be needed later in life.

An illustrative example is the medicalization of obesity. Historically, obesity was not considered a disease but a socially desirable condition, demonstrating wealth; the ability to afford expensive, energy-dense foods; and a life of leisure rather than manual labor. Changing social norms, increased life expectancy, highly sensitive biomedical technologies for identifying subtle metabolic changes in blood chemistry, and population-level associations between obesity and later-life health complications have all contributed to the medicalization of this condition. Obesity, unlike many other conditions, is not attributable to a single biological process; rather, it is hypothesized to result from the contribution of multiple genetic and environmental factors. As such, there is no “silver bullet” treatment for obesity. Instead, “treatment” for obesity requires profound changes reaching deep into how a patient lives her life. Many of these interventions have profound psychosocial implications. Medicalized obesity has led, in part, to the stigmatization of people with obesity. Further, medical recommendations for the treatment of obesity, including gym memberships and expensive “health” foods, are costly for the individual.

Because medicalized problems are considered social problems affecting whole communities, governments and employers have stepped in to treat the problem. Politically, the so-called “obesity epidemic” has led to myriad policy changes and proposals. Restrictions designed to combat the obesity epidemic have included taxes, bans, and advertising restrictions on energy-dense food products. On the other hand, states and the federal government have implemented proactive measures to address obesity, for example public funds have been allocated to encourage access to and awareness of “healthy foods,” and healthy habits. Further, Social Security Disability, Medicare and Medicaid, and the Supplemental Nutrition Assistance Program have been modified to cope with economic and health effects of obesity.

Other tools of control are available to employers and insurance providers. Most punitively, corporate insurance plans can increase rates for obese employees. As Abby Ellin, writing for Observer, explained, “[p]enalizing employees for pounds is perfectly legal [under the Affordable Care Act]” (citing a policy brief published in the Health Affairs journal). Alternatively, employers and insurers have paid for or provided incentives for gym memberships and use, some going so far as to provide exercise facilities in the workplace. Similarly, some employers have sought to modify employee food choices by providing or restricting food options available in the office. The development of wearable computer technologies has presented another option for enforcing obesity-focused behavioral control. Employer-provided FitBits are “an increasingly valuable source of workforce health intelligence for employers and insurance companies.” In fact, Apple advertises Apple Watch to corporate wellness divisions, and various media outlets have noted how Apple Watch and iPhone applications can be used by employers for health surveillance.

Indeed, medicalization as a pretense for technological surveillance and social control is not exclusively used in the context of obesity prevention. For instance, the medicalization of old age has coincided with the technological surveillance of older people. Most troubling, medicalization, in concert with other social forces, has spawned an emerging field of technological surveillance of mental illness. Multiple studies, and current NIH-funded research, are aimed at developing algorithms for the diagnosis of mental illness based on data mined from publicly accessible social media and internet forum posts. This process is called “social media analysis.” These technologies are actively medicalizing the content of digital communications. They subject people’s social media postings to an algorithmic imitation of the medical gaze, whereby “physicians may legitimately lay claim to” those social media interactions. If social media analysis performs as hypothesized, certain combinations of words and phrases will constitute evidence of disease. Similar technology has already been co-opted as a mechanism of social control to detect potential perpetrators of mass shootings. Policy makers have already seized upon the promise of medical social media analysis as a means to enforce “red flag” laws. Red flag laws “authorize courts to issue a special type of protection order, allowing the police to temporarily confiscate firearms from people who are deemed by a judge to be a danger to themselves or to others.” Similarly, it is conceivable that this type of evidence will be used in civil commitment proceedings. If implemented, such programs would constitute a link by which medical surveillance, under the banner of medicalization, could be used as grounds to deprive individuals of civil liberty, demonstrating an explicit medical-legal hybrid social control mechanism.
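For readers curious what flagging “certain combinations of words and phrases” might look like mechanically, the sketch below shows lexicon-based scoring, the simplest ancestor of the statistical models the studies above describe. Every detail here (the word list, the weights, and the threshold) is invented purely for illustration and has no clinical validity; published research systems use trained models rather than fixed lists.

```python
# Hypothetical sketch of keyword-based "social media analysis": score public
# posts against a hand-picked lexicon and flag those crossing a threshold.
import re
from typing import List, Tuple

# Invented example lexicon: words loosely associated with a screening
# construct, each with an arbitrary weight. Real systems learn these.
LEXICON = {"hopeless": 2.0, "alone": 1.0, "tired": 0.5, "worthless": 2.0}
THRESHOLD = 2.5  # arbitrary cutoff chosen for this sketch

def score_post(post: str) -> float:
    """Sum the lexicon weights of every word appearing in one post."""
    words = re.findall(r"[a-z']+", post.lower())
    return sum(LEXICON.get(w, 0.0) for w in words)

def flag_posts(posts: List[str]) -> List[Tuple[str, float]]:
    """Return (post, score) pairs whose score meets the threshold."""
    return [(p, s) for p in posts if (s := score_post(p)) >= THRESHOLD]

posts = [
    "feeling hopeless and alone tonight",  # crosses the invented threshold
    "great game last night!",              # matches nothing in the lexicon
]
flagged = flag_posts(posts)
```

Even this toy version makes the legal point concrete: the flagged output is new information that exists nowhere in any single post, which is precisely the transformation the Fourth Amendment discussion below turns on.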

What protections does the law offer? The Fourth Amendment protects people from unreasonable searches. To determine whether a “search” has occurred courts ask whether the individual has a “reasonable expectation of privacy” in the contents of the search. Therefore, whether a person had a reasonable expectation of privacy in publicly available social media data is critical to determining whether that data can be used in civil commitment proceedings or for red flag law protective orders.

Public social media data is, obviously, public, so courts have generally held that individuals have no reasonable expectation of privacy in its contents. By contrast, the Supreme Court has ruled that individuals have a reasonable expectation of privacy in the data contained on their cell phones and personal computers, as well as their personal location data (cell-site location information) legally collected by third-party cell service providers. Therefore, it is an open question how far a person’s reasonable expectation of privacy extends in the case of digital information. Specifically, when public social media data is used for medical surveillance and for making psychological diagnoses, the legal calculation may change. One interpretation of the “reasonable expectation of privacy” test argues that it is an objective test—asking whether a reasonable person would actually have a privacy interest. Indeed, some scholars have suggested using polling data to define the perimeter of Fourth Amendment protections. In that vein, an analysis of the American Psychiatric Association’s “Goldwater Rule” is illustrative.

The Goldwater Rule emerged after the media outlet “Fact” published psychiatrists’ medical impressions of 1964 presidential candidate Barry Goldwater. Goldwater filed a libel suit against Fact, and the jury awarded him $1.00 in compensatory damages and $75,000 in punitive damages resulting from the publication of the psychiatric evaluations. None of the quoted psychiatrists had met or examined Goldwater in person. Subsequently, concerned primarily about the inaccuracies of “diagnoses at a distance,” the APA adopted the Goldwater Rule, prohibiting psychiatrists from engaging in such practices. It is still in effect today.

The Goldwater Rule does not speak to privacy per se, but it does speak to the importance of personal, medical relationships between psychiatrists and patients when arriving at a diagnosis. Courts generally treat those types of relationships as private and protect them from needless public exposure. Further, using social media surveillance to diagnose mental illness is precisely the type of diagnosis-at-a-distance that concerns the APA. However, big-data techniques promise to obviate the diagnostic inaccuracies the 1960s APA was concerned with.

The jury verdict in favor of Goldwater is more instructive. While the jury awarded only nominal compensatory damages, it nevertheless chose to punish Fact magazine. This suggests that the jury took great umbrage at the publication of psychiatric diagnoses, even though they were obtained from publicly available data. Could this be because psychiatric diagnoses are private? The Second Circuit, upholding the jury verdict, noted that running roughshod over privacy interests is indicative of malice in cases of libel. Under an objective test, this seems to suggest that subjecting public information to the medical gaze, especially the psychiatrist’s gaze, unveils information that is private. In essence, applying big-data computer science techniques to public posts reveals private information contained in the publicly available words themselves. Even though the public social media posts are not subject to a reasonable expectation of privacy, a psychiatric diagnosis based on those words may be objectively private. In sum, the medicalization and medical surveillance of normal interactions on social media may create a Fourth Amendment privacy interest where none previously existed.


Zoinks! Can the FTC Unmask Advertisements Disguised by Social Media Influencers?

Jennifer Satterfield, MJLST Staffer

Social media sites like Instagram and YouTube are filled with people known as “influencers.” Influencers are people with a following on social media who use their online fame to promote a brand’s products and services. But with all that power comes great responsibility, and influencers, as a whole, are not being responsible. One huge example of irresponsible influencer activity is the epic failure and fraudulent music festival known as Fyre Festival. Although Fyre Festival promised a luxury, VIP experience on a remote Bahamian island, it was a true nightmare where “attendees were stranded with half-built huts to sleep in and cold cheese sandwiches to eat.” The most prominent legal action was against Fyre’s founders and organizers, Billy McFarland and Ja Rule, including a six-year criminal sentence for wire fraud for McFarland. Nonetheless, a class action lawsuit also targeted the influencers. According to the lawsuit, the influencers did not comply with Federal Trade Commission (“FTC”) guidelines and disclose that they were being paid to advertise the festival. Instead, “influencers gave the impression that the guest list was full of the Social Elite and other celebrities.” Yet the blowback against influencers since the Fyre Festival fiasco appears to be minimal.

According to a Mediakix report, “[i]n one year, a top celebrity will post an average of 58 sponsored posts and only 3 may be FTC compliant.” The endorsement guidelines specify that if there is a “material connection” between the influencer and the seller of an advertised product, this connection must be fully disclosed. The FTC even created a nifty guide for influencers to ensure compliance. While disclosure is a small burden and there are several resources informing influencers of their duty to disclose, these guidelines are still largely ignored.

Even so, the FTC has sent several warning letters to individual influencers over the years, which indicates it is monitoring top influencers’ posts. However, a mere letter is not doing much to stop the ongoing, flippant, and ignorant disregard of the FTC guidelines. Besides the letters, the FTC rarely takes action against individual influencers. Instead, if the FTC goes after a bad actor, “it’s usually a brand that[] [has] failed to issue firm disclosure guidelines to paid influencers.” Consequently, even though it appears as if the FTC is cracking down on influencers, it is really only going after the companies. Without actual penalties, it is no wonder most influencers either are unaware of the FTC guidelines or continue to blatantly ignore them.

Considering this problem, there is a question of what the FTC can really do about it. One solution is for the FTC to dig in and actually enforce its guidelines against influencers, as it did in 2017 with CSGO Lotto and two individual influencers, Trevor Martin and Thomas Cassell. CSGO Lotto was a website on which users could gamble virtual items called “skins” from the game Counter-Strike: Global Offensive. According to the FTC’s complaint, Martin and Cassell endorsed CSGO Lotto but failed to disclose that they were both the owners and officers of the company. CSGO Lotto also paid other influencers to promote the website. The complaint notes that numerous YouTube videos by these influencers either failed to include a sponsorship disclosure in the videos or inconspicuously placed such disclosures “below the fold” in the description box. While the CSGO Lotto action was a huge scandal in the video game industry, it was not widely publicized to the general population. Moreover, Martin and Cassell got away with a mere slap on the wrist—“[t]he [FTC] order settling the charges requires Martin and Cassell to clearly and conspicuously disclose any material connections with an endorser or between an endorser and any promoted product or service.” Thus, it was not enough to compel other influencers into compliance. Instead, if the FTC started enforcement actions against big-name influencers, others might fear retribution and comply.

On the other hand, the FTC could continue its enforcement against the companies themselves, but this time with more teeth. Currently, the FTC is preparing to take further steps to ensure consumer protection in the world of social media influencers. Recently, FTC Commissioner Rohit Chopra acknowledged in a public statement that “it is not clear whether our actions are deterring misconduct in the marketplace, due to the limited sanctions we have pursued.” Although Chopra is interested in pursuing not small influencers but rather the advertisers that pay them, it is possible that enforcement against the companies will cause influencers to comply as well.

Accordingly, Chopra’s next steps include: (1) “[d]eveloping requirements for technology platforms (e.g. Instagram, YouTube, and TikTok) that facilitate and either directly or indirectly profit from influencer marketing;” (2) “[c]odifying elements of the existing endorsement guides into formal rules so that violators can be liable for civil penalties under Section 5(m)(1)(A) and liable for damages under Section 19;” and (3) “[s]pecifying the requirements that companies must adhere to in their contractual arrangements with influencers, including through sample terms that companies can include in contracts.” By pushing some of the enforcement duties onto social media platforms themselves, the FTC gains more monitoring and enforcement capabilities. Furthermore, codifying the guidelines into formal rules gives the FTC teeth to impose civil penalties and creates tangible consequences for those who previously ignored the guidelines. Finally, by actually requiring companies to adhere to these rules via their contracts with influencers, influencers will be compelled to follow the guidelines as well. Therefore, under these next steps, paid advertising disclosures on social media could become commonplace. But only time will tell whether the FTC achieves them.


Google Fined for GDPR Non-Compliance, Consumers May Not Like the Price

Julia Lisi, MJLST Staffer

On January 14th, 2019, France’s Data Protection Authority (“DPA”) fined Google 50 million euros in one of the first enforcement actions taken under the EU’s General Data Protection Regulation (“GDPR”). The GDPR, which took effect in May of 2018, sent many U.S. companies scrambling to update their privacy policies. You, as a consumer, probably had to re-accept updated privacy policies from your social media accounts, phones, and many other data-based products. Google’s fine makes it the first U.S. tech giant to face GDPR enforcement. While a 50 million euro (roughly 57 million dollar) fine may sound hefty, it is actually relatively small compared to the maximum fine allowed under the GDPR, which, for Google, would be roughly five billion dollars.

The French fine clarifies a small portion of the uncertainty surrounding GDPR enforcement. In particular, the French DPA rejected Google’s methods for obtaining consumers’ consent to its Privacy Policy and Terms of Service. The French DPA took issue with (1) the numerous steps users faced before they could opt out of Google’s data collection, (2) the pre-checked box indicating users’ consent, and (3) users’ inability to consent to individual data processes, being required instead to accept both Google’s Privacy Policy and Terms of Service whole cloth.

The three practices rejected by the French DPA are commonplace in the lives of many consumers. Imagine turning on your new phone for the first time and scrolling through seemingly endless provisions detailing exactly how your daily phone use is tracked and processed by both the phone manufacturer and your cell provider. Imagine if you had to then scroll through the same thing for each major app on your phone. You would have much more control over your digital footprint, but would you spend hours reading each provision of the numerous privacy policies?

Google’s fine could mark the beginning of sweeping changes to the data privacy landscape. What once took a matter of seconds—e.g., checking one box consenting to Terms of Service—could now take hours. If Google’s fine sets a precedent, consumers could face another wave of re-consenting to data use policies, as other companies fall in line with the GDPR’s standards. While data privacy advocates may applaud the fine as the dawn of a new day, it is unclear how the average consumer will react when faced with an in-depth consent process.