Benjamin Riley

Social Media’s Rise to the Forefront

Over the last few decades, social media platforms have gained immense popularity with Americans,[1] and statistics point to the average American having accounts on multiple platforms.[2] Yet, as is the case with many trends, this growth has not come without its fair share of controversy. These platforms have taken center stage in many recent legal battles, perhaps most notably a high-profile case decided by the Supreme Court this summer that explored First Amendment issues and the dissemination of information through social media platforms.[3] Moreover, there has also been a wide array of legislative proposals relating to social media in 2024.[4] Apart from constitutional disputes and state legislation, questions have also been raised about worrisome political ramifications[5] and potential health effects.[6] Needless to say, social media’s rise to the forefront of the American consciousness has not been unanimously applauded.

Government Officials Take Action

Recently, concerns over social media’s health effects on children and teenagers have become a frequent topic of discussion.[7] The Surgeon General of the United States, Vivek H. Murthy, addressed this concern in an advisory released in mid-2023, warning that social media can affect the well-being of the country’s young people.[8] In June 2024, he escalated that advisory into a powerfully worded public message to Congress and the country, arguing that a Surgeon General’s warning on social media platforms is needed.[9] The message, which appeared as an opinion piece in The New York Times, draws attention to the effect social media has on children’s anxiety, depression, and self-image.[10] It also points to how surgeon general’s warning labels helped combat tobacco use, in an attempt to establish the efficacy of such warnings.[11] Along with calling for warnings on the platforms, the Surgeon General challenged parents, medical professionals, schools, and companies to play a role in limiting the adverse effects of social media.[12]

This opinion received a powerful show of support when a coalition of forty-two attorneys general, including North Carolina’s Attorney General Josh Stein, wrote a letter in support of the Surgeon General’s call for a warning on social media platforms.[13] The letter, which was addressed to Speaker of the House Mike Johnson, Senate Majority Leader Chuck Schumer, and Senate Minority Leader Mitch McConnell, argues that Congress can take action against the threats of social media and “protect future generations of Americans.”[14]

The letter explains that social media is contributing to a “mental health crisis” in children and teenagers.[15] This language makes clear the urgency with which the writers believe the issue needs to be addressed. More specifically, the letter takes issue with “algorithm-driven social media platforms” and reinforces many of the concerns presented in the Surgeon General’s New York Times opinion.[16] The letter highlights previous legislation and legal action taken by both state legislatures and State Attorneys General, as well as ongoing state investigations and litigation against the social media powerhouse TikTok.[17] However, it contends that “this ubiquitous problem requires federal action.”[18] According to the group, a surgeon general’s warning on social media platforms “would be a consequential step” in addressing this problem.[19] The letter follows legal action taken by a similar coalition of State Attorneys General last fall, when lawsuits were filed against social media giant Meta alleging that features on Meta’s social media platforms adversely affect children.[20]

One of the more interesting aspects of this letter is the impressively bipartisan nature of the coalition. The alliance of forty-two attorneys general comprises officials of differing political ideologies from across the country. The uniqueness of this cooperation is not lost on the letter’s authors, who explain that “[a]s State Attorneys General we sometimes disagree about important issues, but all of us share an abiding concern for the safety of the kids in our jurisdiction.”[21] The willingness of officials to work together to combat the adverse effects of social media can also be seen in recent legislation at the federal level. The Kids Online Safety Act, which was proposed by Senator Richard Blumenthal, a Democrat, has been cosponsored by many lawmakers on both sides of the aisle.[22]

It is also worth noting what this letter signals to social media companies. The letter accuses them of complacency in the crisis, stating that the “problem will not solve itself and the social media platforms have demonstrated an unwillingness to fix the problem on their own.”[23] Moreover, with attorneys general making children’s online safety a priority,[24] this letter should serve as a reminder to social media companies that policymakers are unlikely to relent in their pursuit of greater safety measures on social media.

Future Implications

At this time, it is unclear if Congress will follow the advice given by the Surgeon General and subsequently endorsed by many attorneys general. Similarly, it is also unclear whether these warnings would have any effect on children’s social media usage and the associated health effects.

However, while the viability of a surgeon general’s warning and its actual efficacy cannot yet be known, developments like this show that officials are unlikely to ease the pressure they have placed on social media companies. The call for these warnings should be interpreted as an escalation in the fight against the youth mental health crisis, and consequently an escalation against social media companies. In short, social media companies should expect further bipartisan action to counteract the negative side effects of social media, and citizens should be prepared for the possibility that some of their favorite platforms may soon carry a warning about the potential health effects of scrolling.


[1] See Belle Wong, Top Social Media Statistics and Trends of 2024, Forbes Advisor, https://www.forbes.com/advisor/business/social-media-statistics/ (May 18, 2023, 2:09 PM).

[2] Id.

[3] See Murthy v. Missouri, 144 S. Ct. 1972 (2024).

[4] See Social Media and Children 2024 Legislation, National Conference of State Legislatures, https://www.ncsl.org/technology-and-communication/social-media-and-children-2024-legislation (June 14, 2024).

[5] See Stephanie Burnett & Helen Coster, Fake U.S. Election-Related Accounts Proliferating on X, Study Says, Reuters (May 24, 2024, 8:31 AM), https://www.reuters.com/world/us/fake-us-election-related-accounts-proliferating-x-study-says-2024-05-24/; U.S. Groups Urge Social Media Companies to Fight ‘Big Lie,’ Election Misinformation, Reuters (May 12, 2022, 10:07 AM), https://www.reuters.com/world/us/us-groups-urge-social-media-companies-fight-big-lie-election-disinformation-2022-05-12/; Tiffany Hsu, Steven Lee Myers & Stuart A. Thompson, Elections and Disinformation Are Colliding Like Never Before in 2024, N.Y. Times, https://www.nytimes.com/2024/01/09/business/media/election-disinformation-2024.html (Jan. 11, 2024).

[6] See Teens and Social Media Use: What’s the Impact?, Mayo Clinic (Jan. 18, 2024), https://www.mayoclinic.org/healthy-lifestyle/tween-and-teen-health/in-depth/teens-and-social-media-use/art-20474437.

[7] See Claire Cain Miller, Everyone Says Social Media Is Bad for Teens. Proving It Is Another Thing., N.Y. Times: The Upshot (June 17, 2023), https://www.nytimes.com/2023/06/17/upshot/social-media-teen-mental-health.html; Natalie Proulx, Does Social Media Harm Young People’s Mental Health?, N.Y. Times (May 25, 2023), https://www.nytimes.com/2023/05/25/learning/does-social-media-harm-young-peoples-mental-health.html.

[8] Surgeon General Issues New Advisory About Effects Social Media Use Has on Youth Mental Health, U.S. Department of Health and Human Services (May 23, 2023), https://www.hhs.gov/about/news/2023/05/23/surgeon-general-issues-new-advisory-about-effects-social-media-use-has-youth-mental-health.html.

[9] See Vivek H. Murthy, Surgeon General: Why I’m Calling for a Warning Label on Social Media Platforms, N.Y. Times (June 17, 2024), https://www.nytimes.com/2024/06/17/opinion/social-media-health-warning.html.

[10] Id.

[11] Id.

[12] Id.

[13] Letter from Rob Bonta, Cal. Att’y Gen., Phil Weiser, Colo. Att’y Gen., Russell Coleman, Ky. Att’y Gen., Lynn Fitch, Miss. Att’y Gen., Matthew J. Platkin, N.J. Att’y Gen., Letitia James, N.Y. Att’y Gen., Jonathan Skrmetti, Tenn. Att’y Gen., Steve Marshall, Ala. Att’y Gen., Fainu’ulelei Falefatu Ala’ilima-Uta, Am. Sam. Att’y Gen., Tim Griffin, Ark. Att’y Gen., William Tong, Conn. Att’y Gen., Kathleen Jennings, Del. Att’y Gen., Brian Schwalb, D.C. Att’y Gen., Ashley Moody, Fla. Att’y Gen., Christopher M. Carr, Ga. Att’y Gen., Anne E. Lopez, Haw. Att’y Gen., Raúl Labrador, Idaho Att’y Gen., Kwame Raoul, Ill. Att’y Gen., Todd Rokita, Ind. Att’y Gen., Aaron M. Frey, Me. Att’y Gen., Anthony G. Brown, Md. Att’y Gen., Andrea Joy Campbell, Mass. Att’y Gen., Dana Nessel, Mich. Att’y Gen., Keith Ellison, Minn. Att’y Gen., Aaron D. Ford, Nev. Att’y Gen., John M. Formella, N.H. Att’y Gen., Raúl Torrez, N.M. Att’y Gen., Josh Stein, N.C. Att’y Gen., Drew H. Wrigley, N.D. Att’y Gen., Gentner Drummond, Okla. Att’y Gen., Ellen F. Rosenblum, Or. Att’y Gen., Michelle Henry, Pa. Att’y Gen., Peter F. Neronha, R.I. Att’y Gen., Alan Wilson, S.C. Att’y Gen., Marty Jackley, S.D. Att’y Gen., Gordon C. Rhea, V.I. Att’y Gen. (Nominee), Sean D. Reyes, Utah Att’y Gen., Charity Clark, Vt. Att’y Gen., Jason S. Miyares, Va. Att’y Gen., Robert W. Ferguson, Wash. Att’y Gen., Joshua L. Kaul, Wis. Att’y Gen., Bridget Hill, Wyo. Att’y Gen., to Mike Johnson, Speaker of the House, Chuck Schumer, Senate Majority Leader, Mitch McConnell, Senate Minority Leader (Sept. 9, 2024) (on file with the National Association of Attorneys General).

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] Id.

[19] Id.

[20] See Barbara Ortutay, States Sue Meta Claiming its Social Platforms are Addictive and Harm Children’s Mental Health, Associated Press, https://apnews.com/article/instagram-facebook-children-teens-harms-lawsuit-attorney-general-1805492a38f7cee111cbb865cc786c28 (Oct. 24, 2023); Cristiano Lima-Strong & Naomi Nix, 41 States Sue Meta, Claiming Instagram, Facebook are Addictive, Harm Kids, Wash. Post, https://www.washingtonpost.com/technology/2023/10/24/meta-lawsuit-facebook-instagram-children-mental-health/ (Oct. 24, 2023, 3:25 PM).

[21] Letter from Rob Bonta et al. to Mike Johnson et al., supra note 13.

[22] Kids Online Safety Act, S. 1409, 118th Cong. (2023).

[23] Letter from Rob Bonta et al. to Mike Johnson et al., supra note 13.

[24] Attorney General Josh Stein Urges Congress to Require Warning on Social Media Platforms, N.C. Department of Justice (Sept. 11, 2024), https://ncdoj.gov/attorney-general-josh-stein-urges-congress-to-require-warning-on-social-media-platforms/; see Ortutay, supra note 20.


Will Coltzer

The Supreme Court is set to determine whether the government can regulate the way social media platforms (“Platforms”) like X,[1] Facebook, and YouTube moderate third-party content.[2] Although social media has become ubiquitous and has been described as the modern “public forum,”[3] there remain serious questions about the authority of the government to require private entities to host certain third-party content. Must people rely on Elon Musk and Mark Zuckerberg—two of the wealthiest people in the world—to ensure “free speech around the globe”?[4]

The Freedom of Speech is one of the most essential tenets of American democracy, yet that right is not absolute.[5] The First Amendment prohibits States from passing laws that “abridg[e] the Freedom of Speech.”[6] Thus, because Platforms are private businesses, individuals cannot use the First Amendment to pursue recourse against censorship on a private platform.[7] Instead, States have attempted to enforce the ideals of free speech by regulating Platforms’ content moderation policies.[8] The question remains whether this regulation infringes the Platforms’ own right to control their “speech.”

On February 26, 2024, the Court will hear oral arguments to address these questions in Moody v. NetChoice[9] and NetChoice v. Paxton.[10] In 2021, Texas and Florida passed laws that prevented large Platforms from censoring third-party content.[11] The proponents of these laws argue Platforms “have unfairly censored” and “shadow banned” users based on political speech, particularly conservative speech.[12] In response, NetChoice, a trade association that represents large technology businesses including Meta,[13] filed actions in the Northern District of Florida and the Western District of Texas seeking preliminary injunctions against the States’ regulation of Platforms.[14]

On appeal, the Eleventh and Fifth Circuits split on the key constitutional questions. Now, the two main issues before the Court are: (1) whether Platforms’ moderation of content is considered “speech” for First Amendment analysis, and (2) whether Platforms are “common carriers” who hold themselves open to the public.[15] This article will address both issues in turn, concluding that the Court should uphold the States’ regulations under the common carrier doctrine.

I. The “Speech” Issue

The Court must first ascertain whether Texas and Florida’s regulations affect the Platforms’ “speech.”[16] In exercising some “doctrinal gymnastics,”[17] the Eleventh Circuit found Florida’s statute violates the Platforms’ First Amendment rights because it removes their “editorial judgment” over the content published on their private platforms.[18] On the other hand, the Fifth Circuit found the Texas statute “does not regulate the Platforms’ speech at all; it protects other people’s speech and regulates the Platforms’ conduct.”[19]

These conflicting interpretations derive from a complex body of case law that has attempted to apply the same First Amendment principles to vastly different mediums of communication.[20] The Court is tasked with comparing social media to the mediums in four major cases: Miami Herald Pub. Co. v. Tornillo,[21] Hurley v. Irish-Am. Gay, Lesbian & Bisexual Grp. of Bos.,[22] PruneYard Shopping Center v. Robins,[23] and Rumsfeld v. Forum for Acad. & Inst. Rights, Inc. (“FAIR”).[24] These cases establish two lines of precedent.

A. Editorial Judgments

The first line of precedent, which derives from Miami Herald and Hurley, establishes the right of publishers to exercise “editorial judgment” over the content they publish.[25] In Miami Herald, the Court held that a newspaper’s “choice of material” and the “treatment of public issues and public officials—whether fair or unfair—constitute the exercise of editorial control and judgment” protected by the First Amendment.[26] The Court later extended the editorial-judgment principle in Hurley.[27] There, the Court rejected a Massachusetts public accommodation statute because it infringed on a parade organizer’s First Amendment right to control the message of the parade.[28]

Together, these editorial judgment cases can be read two ways. First, they may establish that a private entity’s decisions about disseminating third-party content are “editorial judgments protected by the First Amendment,” as the Eleventh Circuit found.[29] Alternatively, editorial judgments may be merely a factor rather than a “freestanding category of protected expression,” as the Fifth Circuit found.[30] The first reading is more persuasive; the decision to accept or reject third-party content creates a message that a reasonable user would perceive. A private speaker “who chooses to speak may also decide ‘what not to say’ and ‘tailor’ the content of his message as he sees fit.”[31] The message need not be substantially tailored.[32] Before evaluating the first issue here, these editorial judgment cases must be placed in contrast to the “hosting speech” cases.

B. Hosting Speech

The second line of precedent, which derives from PruneYard and FAIR, establishes that the government may sometimes compel private actors to “host other’s speech.”[33] In PruneYard, the Court affirmed a state court’s decision that required a privately owned shopping mall to allow members of the public to circulate pamphlets on its property.[34] Importantly, the mall owner did not allege that this circulation affected the owner’s autonomy to speak.[35] Extending PruneYard, the Court in FAIR unanimously upheld a federal statute—the Solomon Amendment—that required law schools to allow military recruiters the same access to campuses as other employers.[36] The Court distinguished FAIR from the editorial judgment cases by noting “the schools are not speaking when they host interviews and recruiting receptions.”[37] Together, these cases apply to a narrow set of facts where “hosting” third-party speech does not interfere with the owner’s right to speak.[38]

How will the Court decide the “Speech” issue?

The Court is likely to find Platforms have First Amendment protections under the editorial judgment line of cases. Platforms require terms and conditions, remove content based on their guidelines, and are in the business of curating certain edited experiences.[39] Algorithms curate content for users based on past activity.[40] The fact that this is accomplished by an algorithm does not change the constitutional analysis.[41] Because Platforms are in the business of curating a tailored experience and exercise substantial control over the content they publish, the Court will likely find social media more analogous to the newspaper publisher in Miami Herald than to the law school in FAIR. Indeed, the very justification Texas and Florida offered for these statutes was the alleged threat that a leftist agenda in Big Tech poses to conservative speech.[42] Overall, social media companies should retain First Amendment protection over the third-party speech published on their platforms. However, platforms that uniquely hold themselves out as public forums may still be vulnerable to public accommodation laws under the common carrier doctrine.

II. The Common Carrier Issue

The State has an alternative argument that is gaining steam among key Supreme Court Justices: the “common carrier” doctrine.[43] This common law doctrine allows States to pass public accommodation laws that regulate businesses that hold themselves open to the public, even if that regulation affects the private actor’s speech.[44] The doctrine derives from English common law and was incorporated early on into the Court’s analysis of the original meaning of “Freedom of Speech.”[45]

The Supreme Court’s recent decision in 303 Creative v. Elenis[46] illuminates the doctrine’s potential application to online platforms. In 303 Creative, the Court held that a Colorado statute requiring a private website to accommodate certain messages was an unconstitutional infringement on the website’s Freedom of Speech because the website did not have “monopoly power” over a public utility.[47] Importantly, the three dissenting Justices critiqued the majority for requiring “monopoly power,” which may signal a lower threshold for upholding public accommodation laws among the liberal wing of the Court.[48] Still, the Court has not addressed the unique application of the doctrine to social media, which is likely distinguishable from the small website in 303 Creative.

The common carrier doctrine is the States’ best argument for upholding Texas and Florida’s regulations for three reasons. First, several key Justices have signaled support for the theory.[49] Second, it is the best tool to align our modern understanding of social media with the original meaning of the Constitution while leaving needed room to apply the same legal principles to past and future technology. Finally, using the monopoly power concept espoused in 303 Creative, the Court could distinguish large social media companies that hold themselves out as “public forums” from other websites that do not receive the liability benefits of this common carrier designation.[50] Social media companies are not liable for the content of third parties under Section 230.[51] Because these Platforms receive the legal benefit of being a common carrier by avoiding liability, States should have the power to ensure the platforms comply with constitutionally permissible public accommodation laws.[52] You cannot have your cake and eat it too: either social media businesses open their Platforms to the public, like a restaurant, or they close their doors and should be liable for the third-party content circulated, like a newspaper publisher.

III. Conclusion

In short, the Court should uphold the regulations in Moody and Paxton to promote public discourse. The Court must reconcile competing precedents and use century-old doctrines to evaluate our First Amendment rights on social media.[53] If social media is to remain a “public square,”[54] the Court should ensure these businesses are subject to some legal accountability. The States’ best argument is perhaps the most intuitive: the First Amendment should not be morphed into a tool for upholding censorship of political speech on the modern equivalent of the public square.[55] The Court should recognize the unique way social media affects modern discourse and use these flexible legal standards, especially the common carrier doctrine, to uphold the ideals of free speech.


[1] Twitter was renamed X in the summer of 2023. See Ryan Mac & Tiffany Hsu, From Twitter to X: Elon Musk Begins Erasing an Iconic Internet Brand, N.Y. Times (July 24, 2023), https://www.nytimes.com/2023/07/24/technology/twitter-x-elon-musk.html.

[2] NetChoice, L.L.C. v. Paxton, 49 F.4th 439, 447 (5th Cir. 2022), cert. granted in part sub nom. NetChoice, LLC v. Paxton, 216 L. Ed. 2d 1313 (Sept. 29, 2023) (hereinafter “Paxton”); NetChoice, LLC v. Att’y Gen., Fla., 34 F.4th 1196, 1212 (11th Cir. 2022), cert. granted in part sub nom. Moody v. NetChoice, LLC, 216 L. Ed. 2d 1313 (Sept. 29, 2023), and cert. denied sub nom. NetChoice, LLC v. Moody, 144 S. Ct. 69 (2023) (hereinafter “Moody”).

[3] Packingham v. North Carolina, 582 U.S. 98, 107–08 (2017).

[4] Billy Perrigo, ‘The Idea Exposes His Naiveté.’ Twitter Employees On Why Elon Musk Is Wrong About Free Speech, Time (Apr. 14, 2022, 2:04 PM), https://time.com/6167099/twitter-employees-elon-musk-free-speech/ (noting that Musk claimed his reason for purchasing Twitter was to spread free speech in an SEC filing report).

[5] Gitlow v. New York, 268 U.S. 652, 666 (1925) (“It is a fundamental principle, long established, that the freedom of speech and of the press which is secured by the Constitution, does not confer an absolute right to speak or publish, without responsibility[.]”); Schenck v. United States, 249 U.S. 47, 52 (1919) (“The most stringent protection of free speech would not protect a man in falsely shouting fire in a theatre and causing a panic.”).

[6] U.S. Const. amend. I (“Congress shall make no law . . . prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press.”); Gitlow, 268 U.S. at 666 (incorporating the Freedom of Speech against the States through the Due Process Clause of the Fourteenth Amendment).

[7] Grace Slicklen, For Freedom or Full of It? State Attempts to Silence Social Media, 78 U. Miami L. Rev. 297, 319–23 (2023); see also Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1926 (2019) (noting that the Freedom of Speech is a shield that “constrains governmental actors and protects private actors”).

[8] See S.B. 7072, 123rd Reg. Sess. (Fla. 2021); H.B. 20, 87th Leg. Sess. § 1201.002(a) (Tex. 2021).

[9] Supreme Court Docket for Moody v. NetChoice, Supreme Court, https://www.supremecourt.gov/docket/docketfiles/html/public/22-277.html (last visited Jan. 21, 2023).

[10] Supreme Court Docket for NetChoice v. Paxton, Supreme Court, https://www.supremecourt.gov/docket/docketfiles/html/public/22-555.html (last visited Jan. 21, 2023).

[11] See S.B. 7072, 123rd Reg. Sess. (Fla. 2021); H.B. 20, 87th Leg. Sess. § 1201.002(a) (Tex. 2021).

[12] Moody, 34 F.4th at 1205.

[13] Slicklen, supra note 7, at 307.

[14] NetChoice, LLC v. Moody, 546 F. Supp. 3d 1082, 1096 (N.D. Fla. 2021) (finding Florida’s legislation “is plainly content-based and subject to strict scrutiny . . . [which] [t]he legislation does not survive”); NetChoice, LLC v. Paxton, 573 F. Supp. 3d 1092, 1100–01 (W.D. Tex. 2021) (granting a preliminary injunction against the State’s enforcement of Texas legislation, but finding the constitutional question a close call).

[15] Moody, 34 F.4th at 1210.

[16] Id. at 1209 (“In assessing whether the Act likely violates the First Amendment, we must initially consider whether it triggers First Amendment scrutiny in the first place—i.e., whether it regulates ‘speech’ within the meaning of the Amendment at all. In other words, we must determine whether social-media platforms engage in First Amendment-protected activity.” (citations omitted)).

[17] Paxton, 49 F.4th at 455 (rejecting the “Platforms’ efforts to reframe their censorship as speech” because “no amount of doctrinal gymnastics can turn the First Amendment’s protections for free speech into protections for free censoring”).

[18] Moody, 34 F.4th at 1213–14 (“Social-media platforms exercise editorial judgment that is inherently expressive.”).

[19] Paxton, 49 F.4th at 448.

[20] Brown v. Ent. Merchants Ass’n, 564 U.S. 786, 790 (2011) (“[W]hatever the challenges of applying the Constitution to ever-advancing technology, ‘the basic principles of freedom of speech and the press, like the First Amendment’s command, do not vary’ when a new and different medium for communication appears.”).

[21] 418 U.S. 241 (1974).

[22] 515 U.S. 557 (1995).

[23] 447 U.S. 74 (1980).

[24] 547 U.S. 47 (2006).

[25] Moody, 34 F.4th at 1210–11.

[26] Miami Herald, 418 U.S. at 258.

[27] Id.; see Moody, 34 F.4th at 1211 (describing the extension of Miami Herald’s editorial judgment principle to several subsequent Supreme Court decisions); Pac. Gas & Elec. Co. v. Pub. Utilities Comm’n of California, 475 U.S. 1, 9–12 (1986) (plurality opinion); Turner Broad. Sys., Inc. v. F.C.C., 512 U.S. 622, 636 (1994).

[28] Hurley, 515 U.S. at 570–75 (noting that the choice “not to propound a particular point of view” was a form of expressive speech that was “presumed to lie beyond the government’s power to control”).

[29] Moody, 34 F.4th at 1210–12.

[30] Paxton, 49 F.4th at 463.

[31] Hurley, 515 U.S. at 576.

[32] See id. at 574–75 (finding parade organizer exercised editorial control over its message by rejecting a “particular point of view” even though they generally did not provide “considered judgment” for most forms of content).

[33] Paxton, 49 F.4th at 462.

[34] PruneYard Shopping Center v. Robins, 447 U.S. 74, 76–77 (1980).

[35] Moody, 34 F.4th at 1215 (noting that the PruneYard decision was narrowed significantly by Pacific Gas and Hurley and arguing that “PruneYard is inapposite” to social-media content); Hurley, 515 U.S. at 580 (“The principle of speaker’s autonomy was simply not threatened in [PruneYard].”).

[36] FAIR, 547 U.S. at 70.

[37] Id. at 56, 60, 64.

[38] 303 Creative LLC v. Elenis, 600 U.S. 570, 588–89 (2023) (noting that the key factor in Hurley and other editorial-judgment cases was the regulation “affect[ed] their message”).

[39] See Moody, 34 F.4th at 1204–05 (noting that “social-media platforms aren’t ‘dumb pipes,’” and that “the platforms invest significant time and resources into editing and organizing—the best word, we think, is curating—users’ posts into collections of content that they then disseminate to others”).

[40] Id.

[41] Slicklen, supra note 7, at 332.

[42] Moody, 34 F.4th at 1203.

[43] See NetChoice, L.L.C. v. Paxton, 142 S. Ct. 1715, 1716 (2022) (Alito, J., joined by Thomas and Gorsuch, JJ., dissenting from grant of application to vacate stay) (noting that the issue of whether social media platforms are common carriers raises “issues of great importance that will plainly merit this Court’s review”); see also Biden v. Knight First Amend. Inst., 141 S. Ct. 1220, 1224 (2021) (Thomas, J., concurring) (“There is a fair argument that some digital platforms are sufficiently akin to common carriers or places of accommodation to be regulated in this manner.”); Paxton, 49 F.4th at 493 (“The Eleventh Circuit quickly dismissed the common carrier doctrine without addressing its history or propounding a test for how it should apply.”).

[44] For a more in-depth discussion of the common carrier doctrine, see Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377 (2021); Ashutosh Bhagwat, Why Social Media Platforms Are Not Common Carriers, 2 J. Free Speech L. 127 (2022); Christopher S. Yoo, The First Amendment, Common Carriers, and Public Accommodations: Net Neutrality, Digital Platforms, and Privacy, 1 J. Free Speech L. 463 (2021).

[45] Paxton, 49 F.4th at 469–73 (describing the historical root of common carrier and its application prior to the 20th century); Adam Candeub, Bargaining for Free Speech: Common Carriage, Network Neutrality, and Section 230, 22 Yale J.L. & Tech. 391, 401–402 (2020).

[46] 600 U.S. 570 (2023).

[47] Id. at 590–92.

[48] Id. at 610–611 (Sotomayor, J., joined by Kagan and Jackson, JJ., dissenting).

[49] NetChoice, L.L.C. v. Paxton, 142 S. Ct. 1715, 1716 (2022) (Alito, J., joined by Thomas and Gorsuch, JJ., dissenting from grant of application to vacate stay).

[50] See Candeub, supra note 45, at 403–13 (noting that the “history of telecommunications regulation” demonstrates the common carriage doctrine was a “regulatory deal” where the carrier gets “special liability breaks in return for the carrier refraining from using some market power to further some public good”); id. at 418–22 (“Section 230 can be seen as a common carriage-type deal—but without the government demanding much in return from internet platforms.”).

[51] Communications Decency Act of 1996, 47 U.S.C. § 230 (2018); Candeub, supra note 45, at 395 (“[S]ection 230 exempts internet platforms from liability arising from third-party speech.”).

[52] Id. at 429–433.

[53] Biden v. Knight First Amend. Inst. at Columbia Univ., 141 S. Ct. 1220, 1221 (2021) (“Today’s digital platforms provide avenues for historically unprecedented amounts of speech, including speech by government actors. Also unprecedented, however, is the concentrated control of so much speech in the hands of a few private parties. We will soon have no choice but to address how our legal doctrines apply to highly concentrated, privately owned information infrastructure such as digital platforms.”).

[54] Packingham, 582 U.S. at 107–08 (2017) (“[Social media platforms] are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge.” (emphasis added)).

[55] Paxton, 49 F.4th at 445 (“[W]e reject the idea that corporations have a freewheeling First Amendment right to censor what people say.”); id. at 455 (“We reject the Platforms’ efforts to reframe their censorship as speech. . . . [N]o amount of doctrinal gymnastics can turn the First Amendment’s protection for free speech into protections for free censoring.”).



Trinity Chapman 

On October 24, 2023, thirty-three states filed suit against Meta,[1] alleging that its social media content harms and exploits young users.[2] The plaintiffs further allege that Meta’s services are intentionally addictive, promoting compulsive use and leading to severe mental health problems in younger users.[3]  The lawsuit points to specific aspects of Meta’s services that the states believe cause harm. The complaint asserts that “Meta’s recommendation Algorithms encourage compulsive use” and are harmful to minors’ mental health,[4] and that the use of “social comparison features such as ‘likes’” causes further harm.[5]  The suit further asserts that push notifications from Meta’s products disrupt minors’ sleep and that the company’s use of visual filters “promote[s] eating disorders and body dysmorphia in youth.”[6]

Social media plays a role in the lives of most young people.  A recent Advisory by the U.S. Surgeon General revealed that 95% of teens ages thirteen to seventeen and 40% of children ages eight to twelve report using social media.[7] The report explains that social media has both negative and positive effects.[8]  On one hand, social media connects young people with like-minded individuals online, offers a forum for self-expression, fosters a sense of acceptance, and promotes social connections.[9]  Despite these positive effects, social media harms many young people; researchers have linked greater social media use to poor sleep, online harassment, lower self-esteem, and symptoms of depression.[10]  Social media content undoubtedly impacts the minds of young people—often negatively.  However, the question remains as to whether companies like Meta should be held liable for these effects.

This is far from the first time that Meta has faced suit for its alleged harm to minors.  For example, in Rodriguez v. Meta Platforms, Inc., the mother of Selena Rodriguez, an eleven-year-old social media user, sued Meta after her daughter’s death by suicide.[11]  There, the plaintiff alleged that Selena’s tragic death was caused by her “addictive use and exposure to [Meta’s] unreasonabl[y] dangerous and defective social media products.”[12]  Similarly, in Heffner v. Meta Platforms, Inc., a mother sued Meta after her eleven-year-old son’s suicide.[13]  That complaint alleged that Meta’s products “psychologically manipulat[ed]” the boy, leading to social media addiction.[14]  Rodriguez and Heffner are illustrative of the type of lawsuit regularly filed against Meta.

A.        The Communications Decency Act

In defending such suits, Meta invariably invokes the Communications Decency Act.  Section 230 of the act dictates that interactive online services “shall not be treated as the publisher or speaker of any information provided by another information content provider.”[15] In effect, the statute shields online services from liability arising from the effects of third-party content.  In asserting the act, defendant internet companies present a “hands off” picture of their activities; rather than playing an active role in the content that users consume, companies depict themselves as merely opening a forum through which third parties may produce content.[16]

Plaintiffs have responded with incredulity to this application of the act by online service providers, and the act’s exact scope is unsettled.[17]  In Gonzalez v. Google LLC, the parents of a man who died during an ISIS terrorist attack sued Google, alleging that YouTube’s algorithm recommended ISIS videos to some users, leading to increased success by ISIS in recruitment efforts.[18]  In defense, Google relied on Section 230 of the Communications Decency Act.[19]  The Ninth Circuit ruled that Section 230 barred the plaintiff’s claims,[20] but the Supreme Court vacated the Ninth Circuit’s ruling on other grounds, leaving unanswered questions about the act’s scope.[21]

Despite that uncertainty, the defense retains a high likelihood of success. In the October 24 lawsuit, Meta’s success on the Section 230 defense will depend on how active a role the court determines Meta played in recommending and exposing minors to the harmful content.

B.        Product Liability

The October 24 complaint against Meta alleges theories of product liability.[22] In framing their product liability claims, plaintiffs focus on the harmful design of Meta’s “products” rather than the harmful content to which users may be exposed.[23] The most recent lawsuit alleges that “Meta designed and deployed harmful and psychologically manipulative product features to induce young users’ compulsive and extended use.”[24]

A look at Meta’s defense in Rodriguez is predictive of how the company will respond to the October 24 suit. There, the company disputed that Instagram even qualifies as a “product.”[25] Meta’s Motion to Dismiss remarked that product liability law focuses on “tangible goods” or “physical articles” and contrasted these concepts with the “algorithm” used by Instagram to recommend content.[26]  Given traditional notions about what constitutes a “product,” Meta’s defenses are poised to succeed.  As suggested by Meta in its motion to dismiss Rodriguez’s suit, recommendations about content, features such as “likes,” and communications from third parties fall outside of what courts typically consider a “product.”[27]

To succeed on a product liability theory, plaintiffs must advocate for a more modernized conception of what counts as a “product” for purposes of product liability law.  Strong arguments may exist for shifting this conception; the world of technology has transformed completely since the ALI defined product liability in the Restatement (Second) of Torts.[28]  Still, considering this well-settled law, plaintiffs are likely to face an uphill battle.

C.        Whose Job Is It Anyway?

Lawsuits against Meta pose large societal questions about the role of courts and parents in ensuring minors’ safety.  Some advocates place the onus on companies themselves, urging top-down prevention of access by minors to social media.[29]  Others emphasize the role of parents and families in preventing minors’ unsafe exposure to social media content;[30] parents, families, and communities may be in better positions than tech giants to know, understand, and combat the struggles that teens face.  Regardless of who is to blame, nearly everyone can agree that the problem needs to be addressed.


[1] In 2021, the Facebook Company changed its name to Meta. Meta now encompasses social media apps like WhatsApp, Messenger, Facebook, and Instagram. See Introducing Meta: A Social Technology Company, Meta (Oct. 28, 2021), https://about.fb.com/news/2021/10/facebook-company-is-now-meta/.

[2] Complaint at 1, Arizona v. Meta Platforms, Inc., No. 4:23-cv-05448 (N.D. Cal. Oct. 24, 2023) [hereinafter October 24 Complaint] (“[Meta’s] [p]latforms exploit and manipulate its most vulnerable users: teenagers and children.”).

[3] Id. at 23.

[4] Id. at 28.

[5] Id. at 41.

[6] Id. at 56.

[7] U.S. Surgeon General, Advisory: Social Media and Youth Mental Health 4 (2023).

[8] Id. at 5.

[9] Id. at 6.

[10] Id. at 7.

[11] Complaint at 2, Rodriguez v. Meta Platforms, Inc., No. 3:22-cv-00401 (Jan. 20, 2022) [hereinafter Rodriguez Complaint].

[12] Id.

[13] Complaint at 2, Heffner v. Meta Platforms, Inc., No. 3:22-cv-03849 (June 29, 2022).

[14] Id. at 13.

[15] 47 U.S.C.S. § 230 (LEXIS through Pub. L. No. 118-19).

[16] See, e.g., DiMeo v. Max, 433 F. Supp. 2d 523 (E.D. Pa. 2006), aff’d, 248 F. App’x 280 (3d Cir. 2007). DiMeo is just one example of the strategy used repeatedly by Meta and other social media websites.

[17] Gonzalez v. Google LLC, ACLU, https://www.aclu.org/cases/google-v-gonzalez-llc (last updated May 18, 2023).

[18] Gonzalez v. Google LLC, 2 F.4th 871, 880–81 (9th Cir. 2021).

[19] Id. at 882.

[20] Id. at 881.

[21] Gonzalez v. Google LLC, 598 U.S. 617, 622 (2023).

[22] October 24 Complaint, supra note 2, at 145–98.

[23] Id. at 197.

[24] Id. at 1.

[25] Motion to Dismiss, Rodriguez v. Meta Platforms, Inc., No. 3:22-cv-00401 (June 24, 2022).

[26] Id.

[27] Id.

[28] Restatement (Second) of Torts § 402A (Am. L. Inst. 1965).

[29] Rachel Sample, Why Kids Shouldn’t Get Social Media Until They Are Eighteen, Medium (June 14, 2020), https://medium.com/illumination/why-kids-shouldnt-get-social-media-until-they-are-eighteen-2b3ef6dcbc3b.

[30] Jill Filipovic, Opinion: Parents, Get Your Kids off Social Media, CNN (May 23, 2023, 6:10 PM), https://www.cnn.com/2023/05/23/opinions/social-media-kids-surgeon-general-report-filipovic/index.html.


By Mary Catherine Young

Last month, an Azerbaijani journalist was forced to deactivate her social media accounts after receiving sexually explicit and violent threats in response to a piece she wrote about Azerbaijan’s cease-fire with Armenia.[1] Some online users called for the Azerbaijani government to revoke columnist Arzu Geybulla’s citizenship—others called for her death.[2] Days later, an Irish man, Brendan Doolin, was criminally charged with online harassment of four female journalists.[3] The charges came on the heels of a three-year jail sentence rendered in 2019 for stalking six female writers and journalists online, one of whom reported receiving over 450 messages from Doolin.[4] Online harassment of journalists is pervasive on an international scale.

Online harassment of journalists abounds in the United States as well, with women bearing the brunt of the abuse.[5] According to a 2019 survey conducted by the Committee to Protect Journalists, 90 percent of female or gender-nonconforming American journalists said that online harassment is “the biggest threat facing journalists today.”[6] Fifty percent of those surveyed reported that they have been threatened online.[7] While online harassment plagues journalists around the world, the legal ramifications of such harassment are far from uniform.[8] Before diving into how the law can protect journalists from this abuse, it is necessary to expound on what online harassment actually looks like in the United States.

In a survey conducted in 2017 by the Pew Research Center, 41 percent of 4,248 American adults reported that they had personally experienced harassing behavior online.[9] The same study found that 66 percent of Americans said that they have witnessed harassment targeted at others.[10] Online harassment, however, takes many shapes.[11] For example, people may experience “doxing,” which occurs when one’s personal information is revealed on the internet.[12] Or they may experience a “technical attack,” which includes harassers hacking an email account or preventing traffic to a particular webpage.[13] Much of online harassment takes the form of “trolling,” which occurs when “a perpetrator seeks to elicit anger, annoyance or other negative emotions, often by posting inflammatory messages.”[14] Trolling can encompass situations in which harassers intend to silence women with sexualized threats.[15]

The consequences of online harassment can be significant, causing mental distress and sometimes fear for one’s physical safety.[16] In the context of journalists, however, the implications of harassment commonly reach beyond the individual journalist—the free flow of information in the media is frequently disrupted by journalists’ fear of cyberbullying.[17] How legal systems punish those who harass journalists online varies greatly both internationally and domestically.[18]

For example, the United States provides several federal criminal and civil paths to recourse for victims of online harassment, though none specifically geared toward journalists.[19] In terms of criminal law, provisions protecting individuals against cyberstalking are included in 18 U.S.C. § 2261A, which criminalizes stalking in general.[20] According to this statute, “[w]hoever . . . with the intent to kill, injure, harass, intimidate, or place under surveillance with intent to . . . harass, or intimidate another person, uses . . . any interactive computer service . . . [and] causes, attempts to cause, or would be reasonably expected to cause substantial emotional distress to a person . . .” may be imprisoned.[21] In terms of civil law, plaintiffs may be able to allege defamation or copyright infringement claims.[22] For example, when the harassment takes the form of sharing an individual’s self-taken photographs without the photographer’s consent, whether they are explicit or not, the circumstances may allow the victim to pursue a claim under the Digital Millennium Copyright Act.[23]

Some states provide their own online harassment criminal laws, though states differ in whether the provisions are included in anti-harassment legislation or in their anti-stalking laws.[24] For example, Alabama,[25] Arizona,[26] and Hawaii[27] all provide for criminal prosecution of cyberbullying in their laws against harassment, whereas Wyoming,[28] California,[29] and North Carolina[30] include anti-online-harassment provisions in their laws against stalking.[31] North Carolina’s stalking statute, however, was recently held unconstitutional as applied under the First Amendment after a defendant was charged for posting a slew of Google Plus posts about his bizarre wishes to marry the victim.[32] The North Carolina Court of Appeals decision in Shackelford seems to reflect a distinctly American reluctance to interfere with individuals’ ability to post freely online out of extreme deference to First Amendment rights.

Other countries have taken more targeted approaches to legally protecting journalists from online harassment.[33] France, in particular, has several laws pertaining to cyberbullying and online harassment in general, and these laws have recently provided relief for journalists.[34] For example, in July 2018, two perpetrators were given six-month suspended prison sentences after targeting a journalist online.[35] The defendants subjected Nadia Daam, a French journalist and radio broadcaster, to months of online harassment after she condemned users of an online platform for harassing feminist activists.[36] Scholars who examine France’s willingness to prosecute perpetrators of online harassment against journalists and non-journalists alike point to the fact that while the country certainly holds freedom of expression in high regard, this freedom is held in check against other rights, including individuals’ right to privacy and “right to human dignity.”[37]

Some call for more rigorous criminalization of online harassment in the United States, particularly against journalists, to reduce the potential for online harassment to create a “crowding-out effect” that prevents actually helpful online speech from being heard.[38] It seems, however, that First Amendment interests may prevent many journalists from finding relief—at least for now.


[1] Aneeta Mathur-Ashton, Campaign of Hate Forces Azeri Journalist Offline, VOA (Jan. 8, 2021), https://www.voanews.com/press-freedom/campaign-hate-forces-azeri-journalist-offline.

[2] Id.

[3] Tom Tuite, Dubliner Charged with Harassing Journalists Remanded in Custody, The Irish Times (Jan. 18, 2021), https://www.irishtimes.com/news/crime-and-law/courts/district-court/dubliner-charged-with-harassing-journalists-remanded-in-custody-1.4461404.

[4] Brion Hoban & Sonya McLean, ‘Internet Troll’ Jailed for Sending Hundreds of Abusive Messages to Six Women, The Journal.ie (Nov. 14, 2019), https://www.thejournal.ie/brendan-doolin-court-case-4892196-Nov2019/.

[5] Lucy Westcott & James W. Foley, Why Newsrooms Need a Solution to End Online Harassment of Reporters, Comm. to Protect Journalists (Sept. 4, 2019), https://cpj.org/2019/09/newsrooms-solution-online-harassment-canada-usa/.

[6] Id.

[7] Id.

[8] See Anya Schiffrin, How to Protect Journalists from Online Harassment, Project Syndicate (July 1, 2020), https://www.project-syndicate.org/commentary/french-laws-tackle-online-abuse-of-journalists-by-anya-schiffrin-2020-07.

[9] Maeve Duggan, Online Harassment in 2017, Pew Rsch. Ctr. (July 11, 2017), https://www.pewresearch.org/internet/2017/07/11/online-harassment-2017/.

[10] Id.

[11] Autumn Slaughter & Elana Newman, Journalists and Online Harassment, Dart Ctr. for Journalism & Trauma (Jan. 14, 2020), https://dartcenter.org/resources/journalists-and-online-harassment.

[12] Id.

[13] Id.

[14] Id.

[15] Id.

[16] Duggan, supra note 9.

[17] Law Libr. of Cong., Laws Protecting Journalists from Online Harassment 1 (2019), https://www.loc.gov/law/help/protecting-journalists/compsum.php.

[18] See id. at 3–4; Marlisse Silver Sweeney, What the Law Can (and Can’t) Do About Online Harassment, The Atl. (Nov. 12, 2014), https://www.theatlantic.com/technology/archive/2014/11/what-the-law-can-and-cant-do-about-online-harassment/382638/.

[19] Hollaback!, Online Harassment: A Comparative Policy Analysis for Hollaback! 37 (2016), https://www.ihollaback.org/app/uploads/2016/12/Online-Harassment-Comparative-Policy-Analysis-DLA-Piper-for-Hollaback.pdf.

[20] 18 U.S.C. § 2261A.

[21] § 2261A(2)(b).

[22] Hollaback!, supra note 19, at 38.

[23] Id.; see also 17 U.S.C. §§ 1201–1332.

[24] Hollaback!, supra note 19, at 38–39.

[25] Ala. Code § 13A-11-8.

[26] Ariz. Rev. Stat. Ann. § 13-2916.

[27] Haw. Rev. Stat. § 711-1106.

[28] Wyo. Stat. Ann. § 6-2-506.

[29] Cal. Penal Code § 646.9.

[30] N.C. Gen. Stat. § 14-277.3A.

[31] Hollaback!, supra note 19, at 39 (providing more states that cover online harassment in their penal codes).

[32] State v. Shackelford, 825 S.E.2d 689, 701 (N.C. Ct. App. 2019), https://www.nccourts.gov/documents/appellate-court-opinions/state-v-shackelford. After meeting the victim once at a church service, the defendant promptly made four separate Google Plus posts in which he referenced the victim by name. Id. at 692. In one post, the defendant stated that “God chose [the victim]” to be his “soul mate,” and in a separate post wrote that he “freely chose [the victim] as his wife.” Id. After nearly a year of increasingly invasive posts in which he repeatedly referred to the victim as his wife, defendant was indicted by a grand jury on eight counts of felony stalking. Id. at 693–94.

[33] Law Libr. of Cong., supra note 17, at 1–2.

[34] Id. at 78–83.

[35] Id. at 83.

[36] Id.

[37] Id. at 78.

[38] Schiffrin, supra note 8.

Post Image by Kaur Kristjan on Unsplash.

Composite image created using an original photograph by Gage Skidmore of President Donald Trump, via flickr.com.

By Christopher R. Taylor

On August 6, 2020, President Trump issued Executive Order 13,942 (“TikTok Prohibition Order”) prohibiting transactions with ByteDance Ltd. (“ByteDance”), TikTok’s parent company, because of the company’s data collection practices regarding U.S. users and its close relationship with the People’s Republic of China (“PRC”).[1] Eight days later, President Trump issued a subsequent order (“Disinvestment Order”) calling for ByteDance to disinvest from Musical.ly, an application that was acquired by ByteDance and later merged with TikTok’s application.[2] TikTok is now engulfed in a legal battle against the Trump administration fighting both of these orders and was recently granted a partial preliminary injunction against the TikTok Prohibition Order.[3] However, the question remains—how successful will TikTok be in stopping the orders, and what effect will this have on future cross-border transactions?

The foundation for President Trump’s TikTok orders was laid over a year earlier with Executive Order 13,873.[4] This order declared a national emergency under the International Emergency Economic Powers Act (“IEEPA”) because of the “unusual and extraordinary threat” of “foreign adversaries . . . exploiting vulnerabilities in information and communication technology services.”[5] This national emergency was renewed for another year on May 13, 2020.[6] Shortly after this renewal, the Trump administration issued both TikTok orders.

The TikTok Prohibition Order delegated to the Secretary of Commerce the task of defining specific prohibited transactions with ByteDance within 45 days of the execution of the order.[7] Following the president’s directive, the Secretary issued five phased prohibitions on transactions with TikTok, all with the stated purpose of limiting TikTok’s spread of U.S. users’ sensitive personal information to the PRC.[8] The Department of Commerce implemented these prohibitions based primarily on two threats: (1) TikTok would share U.S. users’ personal data with the PRC to further efforts of espionage on the U.S. government, U.S. corporations, and U.S. persons; and (2) TikTok would use censorship on the application to shape U.S. users’ perspective of the PRC.[9]

While the Trump administration was at work attempting to remove or substantially change TikTok’s U.S. presence, TikTok did not stand idly by. Instead, TikTok and ByteDance initiated an action challenging the Trump administration’s authority under the Administrative Procedure Act (“APA”) and the U.S. Constitution.[10] After filing the action in the U.S. District Court for the District of Columbia, TikTok moved for a preliminary injunction.[11] On September 27, the court partially granted the preliminary injunction.[12]

Among the various arguments presented for the preliminary injunction, TikTok’s strongest was that the Trump administration’s actions violated APA § 706(2)(C) by exceeding the government’s statutory authority under the IEEPA.[13] The IEEPA prohibits the President from “directly or indirectly” regulating “personal communication, which does not involve a transfer of anything of value” or the importation or exportation of “information or information materials.”[14] The IEEPA does not define “information materials”; however, it does provide examples, which include photographs, films, artworks, and news wire feeds.[15]

TikTok argued both of these exceptions applied, making the Trump administration’s prohibitions unlawful.[16] First, TikTok argued that the information exchanged by its global users includes art, films, photographs, and news.[17] Therefore, the information exchanged on TikTok fits within the definition of information materials.[18] Second, TikTok argued most of the communications exchanged on the application are among friends, and thus do not involve anything of value.[19]

The government countered by arguing that neither exception applied, contending for a narrower interpretation of the IEEPA exceptions.[20] First, the government argued the information-materials exception did not apply because the TikTok prohibitions regulate only “business-to-business economic transactions” and do not regulate the exchange of “information materials” by TikTok users themselves.[21] In the alternative, the government asserted Congress did not intend to create so broad an exception that it would allow foreign adversaries to control data services.[22] Second, the government argued that some communications on TikTok are of value to users and, even if all communications are not of value to all users, they are of value to TikTok itself.[23] The government asserted that use of the application alone provides value to TikTok, making the exchanged communications fall outside of the IEEPA exception.[24]

In partially granting TikTok’s preliminary injunction, the court found both exceptions applied to TikTok.[25] First, the court held the content on TikTok’s application constitutes “information materials.”[26] Although the government only regulates economic transactions, the prohibitions still indirectly regulate the exchange of “information materials.”[27] Thus, the Trump administration’s actions directly fit within the IEEPA exception barring indirect regulation of information materials.[28]

Turning to the second exception on value, the court recognized some information on TikTok was of value.[29] However, it found the majority of the information provided no value to users.[30] Furthermore, the government’s argument regarding the value of communications to TikTok was at odds with Congressional intent.[31] The court found if Congress meant to look at the value provided to the company, as opposed to the value provided to users, the exception would be read out of existence.[32]

After finding that both exceptions applied, the court concluded that irreparable harm to TikTok and the balance of equities supported partially granting the preliminary injunction.[33] However, the court refused to enjoin the whole TikTok Prohibition Order because only one of the prohibitions posed an imminent threat to TikTok.[34] The injunction blocked only the prohibition on TikTok downloads and updates from online application stores and marketplaces, leaving the remaining four prohibitions unaffected.[35]

While it appears TikTok has won the first round of this legal dispute, the fight is likely far from over. In response to the grant of the partial preliminary injunction, the Department of Commerce explained it is prepared to “vigorously defend the . . . [Executive order] and the Secretary’s implementation efforts from legal challenges.”[36] Based on this strong reaction, the dispute seems ripe for further litigation on the merits of both executive orders.

The current TikTok dispute and the Trump administration’s willingness to use the IEEPA will likely also have broader implications for cross-border transactions, especially those involving the People’s Republic of China or personal data. Since its enactment in 1977, presidential use of the IEEPA has become more frequent and broader in scope.[37] Thus, presidential use of the IEEPA will likely continue to grow no matter who holds the office. Furthermore, the Trump administration’s strong stance toward the PRC has exacerbated tensions and led to an uptick in investigations into cross-border deals with Chinese companies.[38] Therefore, in-depth scrutiny of deals with Chinese companies will likely remain the norm, at least for the remainder of the Trump presidency. To avoid disputes similar to TikTok’s, dealmakers should obtain clearance from the Committee on Foreign Investment in the United States before completing any cross-border transaction, especially those involving the PRC or personal data.[39]


[1] Exec. Order No. 13,942, 85 Fed. Reg. 48,637 (Aug. 6, 2020).

[2] Order on the Acquisition of Musical.ly by ByteDance Ltd, 2020 Daily Comp. Pres. Doc. 608 (Aug. 14, 2020).

[3] TikTok, Inc. v. Trump, No. 1:20-cv-02658, 2020 U.S. Dist. LEXIS 177250, at *11, *26 (D.D.C. Sept. 27, 2020).

[4] Exec. Order No. 13,873, 84 Fed. Reg. 22,689 (May 15, 2019).

[5] Id.

[6] Notice on Continuation of the National Emergency with Respect to Securing the Information and Communications Technology and Services Supply Chain, 2020 Daily Comp. Pres. Doc. 361 (May 13, 2020).

[7] Exec. Order 13,942, at 48,638.

[8] See Identification of Prohibited Transactions to Implement Executive Order 13942 and Address the Threat Posed by TikTok and the National Emergency with Respect to the Information and Communications Technology and Services Supply Chain, 85 Fed. Reg. 60,061 (Sept. 24, 2020) (prohibiting new downloads and updates from the app store; servers supporting TikTok in the U.S.; content delivery services used by TikTok; internet transit or peering agreements; and the use of TikTok code, services, or functions). The Secretary set up a phased implementation of this order, making the app-store ban effective September 20, 2020, and the remaining four prohibitions effective November 12, 2020. Id.

[9] Defendants’ Memorandum in Opposition to Plaintiffs’ Motion for a Preliminary Injunction at Ex. 1, TikTok, Inc. v. Trump, No. 1:20-cv-02658, 2020 U.S. Dist. LEXIS 177250 (D.D.C. Sept. 27, 2020).

[10] Complaint at 30–42, TikTok, Inc. v. Trump, No. 1:20-cv-02658, 2020 U.S. Dist. LEXIS 177250 (D.D.C. Sept. 27, 2020). The specific counts in the complaint include allegations of (1) violations of APA § 706(2)(A) and § 706(2)(E); (2) violations of the First Amendment right to free speech; (3) violations of the Due Process Clause of the Fifth Amendment; (4) ultra vires action under the IEEPA because there is no national emergency; (5) ultra vires action because the prohibitions restrict personal communications and information in violation of the IEEPA; (6) violation of the nondelegation doctrine under the IEEPA; and (7) violation of the Fifth Amendment Takings Clause. Id.

[11] TikTok, Inc. v. Trump, No. 1:20-cv-02658, 2020 U.S. Dist. LEXIS 177250, at *11–12 (D.D.C. Sept. 27, 2020).

[12] Id. at *26.

[13] See id. at *21. 

[14] 50 U.S.C. § 1702(b)(1), (3).

[15] Id. § 1702(b)(3).

[16] TikTok, 2020 U.S. Dist. LEXIS 177250, at *14.

[17] Id. at *15–16.

[18] Id. at *15.

[19] See id. at *20.

[20] See id. at *16, *17–18, *20.

[21] Id. at *16.

[22] Id. at *17–18.

[23] Id. at *20. The government’s argument was that value is provided to TikTok simply by users’ presence on the application. Id.

[24] Id.

[25] See id. at *20–21 (“Plaintiffs have demonstrated that they are likely to succeed on their claim that the prohibitions constitute indirect regulation of ‘personal communication[s]’ or the exchange of ‘information or information materials.'”).

[26] Id. at *16

[27] Id. at *16–17.

[28] See id. at *17.

[29] See id. at *20.

[30] Id.

[31] Id.

[32] Id.

[33] Id. at *21–25.

[34] Id. at *26.

[35] Id. at *25–26.

[36] Commerce Department Statement on U.S. District Court Ruling on TikTok Preliminary Injunction, U.S. Dep’t of Commerce (Sept. 27, 2020), https://www.commerce.gov/news/press-releases/2020/09/commerce-department-statement-us-district-court-ruling-tiktok.

[37] Christopher A. Casey et al., Cong. Rsch. Serv., R45618, The International Emergency Economic Powers Act: Origins, Evolution, and Use 17 (2020).

[38] See Julia Horowitz, Under Trump, the US Government Gives Many Foreign Deals a Closer Look, CNN (Mar. 16, 2018, 12:11 AM), https://money.cnn.com/2018/03/16/news/economy/trump-cfius-china-technology/index.html; Jeanne Whalen, TikTok was Just the Beginning: Trump Administration is Stepping Up Scrutiny of Past Chinese Tech Investments, Wash. Post. (Sept. 29, 2020, 3:12 PM), https://www.washingtonpost.com/technology/2020/09/29/cfius-review-past-chinese-investment/.

[39] See Adam O. Emmerich et al., Cross-Border M&A–2019 Checklist for Successful Acquisitions in the United States, Harv. L. Sch. F. on Corp. Governance (Jan. 30, 2019), https://corpgov.law.harvard.edu/2019/01/30/cross-border-ma-2019-checklist-for-successful-acquisitions-in-the-united-states/.

By Gabriel L. Marx

Donald Trump is once again at the center of a legal dispute. The Forty-Fifth President of the United States has been no stranger to legal controversies during and before his presidency,[1] but the latest update in Knight First Amendment Institute at Columbia University v. Trump[2] has President Trump petitioning for a writ of certiorari to the Supreme Court after more than three years of litigation.[3]  

The case began in July 2017 when the Knight First Amendment Institute at Columbia University (“Knight Institute”) filed a lawsuit against President Trump in federal court alleging that he violated the First Amendment by blocking Twitter users from his @realDonaldTrump account after they criticized his policies and presidency.[4] The U.S. District Court for the Southern District of New York found that Donald Trump, as President, exercised sufficient control over the Twitter account such that the @realDonaldTrump account was “susceptible to analysis under the Supreme Court’s [First Amendment] forum doctrines, and is properly characterized as a designated public forum.”[5] The District Court then held that President Trump’s blocking of these Twitter users was discrimination based on the users’ viewpoints and impermissible under the First Amendment.[6] In July 2019, a three-judge panel of the U.S. Court of Appeals for the Second Circuit unanimously affirmed the district court’s decision,[7] and the Second Circuit subsequently denied rehearing en banc in March of this year.[8] Despite this lack of success, the administration has continued its fight against the Knight Institute, as Acting Solicitor General Jeffrey Wall submitted a petition for a writ of certiorari to the Supreme Court at the end of August.[9]

The petition includes both legal and policy-based arguments about the importance of the case.[10] In terms of legal arguments, Solicitor General Wall argues that the Second Circuit wrongly concluded that (1) President Trump’s blocking of the Twitter users was a state action susceptible to the First Amendment rather than an act of a private citizen; (2) the @realDonaldTrump account was a designated public forum; and (3) the government-speech doctrine, which would exempt President Trump’s account from a First Amendment challenge, did not apply to President Trump’s actions.[11] Putting the legal arguments aside, Solicitor General Wall also argues that “the court of appeals’ decision . . . has important legal and practical implications that reach beyond the circumstances of this case.”[12] That is, public officials are “increasingly likely to maintain social media accounts to communicate their views, both personal and official,”[13] so if the Second Circuit’s decision were allowed to stand, it would significantly hinder the ability of these public officials to choose whom they want to interact with on their own accounts: a choice afforded to every other social media user.[14] According to the petition, this choice—or lack thereof—takes on an even greater significance when the public official in question is the President of the United States.[15]

In response, the Knight Institute filed its brief in opposition on Sept. 21.[16] The Knight Institute first argues that there is no reason for the Court to hear the case because the various lower courts that have dealt with this issue all agree that public officials’ blocking of critics from their social media accounts violates the First Amendment.[17] It additionally argues that the Second Circuit properly concluded that blocking users from the @realDonaldTrump account was state action, that the blocking was not government speech, and that the account itself is a public forum.[18] The Knight Institute also counters Solicitor General Wall’s policy-based arguments, asserting that the Second Circuit’s decision has not hindered, and will not hinder, the President’s or other public officials’ use of social media to communicate with the general public.[19] Finally, the Knight Institute maintains that the only cases in which the Court has granted certiorari solely due to presidential implications, absent a circuit split, are those dealing with “fundamental issues of executive power” (such as separation-of-powers concerns), unlike the case at hand, which concerns only whether President Trump can block Twitter users from his @realDonaldTrump account.[20]

Given the procedural history, the above arguments, and the fact that the Court usually only hears cases that have “national significance, might harmonize conflicting decisions in the federal circuit courts, and/or could have precedential value,”[21] it seems unlikely that the Court will grant certiorari. Looking at the procedural history, the two lower courts agreed that President Trump violated the First Amendment, with the Second Circuit panel so holding unanimously.[22] The Court therefore has little incentive to rehear a case that has already been decided so clearly unless, as Solicitor General Wall argues, the court of appeals erred in its conclusions. The Second Circuit, however, denied the petition for rehearing en banc,[23] so the decision has already been affirmed in some sense. Along similar lines, as the Knight Institute points out, there is no conflict among federal circuit or district courts on the issue of public officials blocking users from their social media accounts.[24] On the other hand, there has been an influx of cases dealing with this issue as of late,[25] so the Court might want to decide the issue once and for all to deter future litigation. Nevertheless, given that so many lower courts agree on the issue, the Court probably will not wish to devote time and resources to a well-settled area of the law simply to deter future litigation, particularly because the case does not implicate a matter of traditional significance in executive authority, such as a separation-of-powers issue. As a final matter, neither the Court’s current make-up of Justices nor the projected addition of Amy Coney Barrett should have much effect on the decision-making process in light of the above factors weighing so heavily against granting certiorari.

While it is unlikely that the Court will grant President Trump’s petition, the case would be interesting to watch unfold at the nation’s highest court if certiorari is granted. If heard, Knight First Amendment Institute at Columbia University could set the precedent on the increasingly prevalent issue of freedom of speech on social media, so it is certainly worth keeping an eye out for the Court’s decision on the petition for writ of certiorari in the coming weeks.


[1] See Peter Baker, Trump Is Fighting So Many Legal Battles, It’s Hard to Keep Track, N.Y. Times (Nov. 6, 2019), https://www.nytimes.com/2019/11/06/us/politics/donald-trump-lawsuits-investigations.html.

[2] Knight First Amendment Inst. at Colum. Univ. v. Trump, 302 F. Supp. 3d 541 (S.D.N.Y. 2018), aff’d, 928 F.3d 226 (2d Cir. 2019).

[3] See Tucker Higgins, White House Asks Supreme Court to Let Trump Block Critics on Twitter, CNBC (Aug. 20, 2020, 12:00 PM), https://www.cnbc.com/2020/08/20/white-house-asks-supreme-court-to-let-trump-block-critics-on-twitter.html.

[4] See Knight Institute v. Trump, Knight First Amendment Inst. at Colum. Univ., https://knightcolumbia.org/cases/knight-institute-v-trump (last visited Oct. 8, 2020).

[5] Knight Inst., 302 F. Supp. 3d at 580.

[6] Id.

[7] See Knight First Amendment Inst. at Colum. Univ. v. Trump, 928 F.3d 226 (2d Cir. 2019); Knight First Amendment Inst. at Colum. Univ., supra note 4.

[8] See Knight First Amendment Inst. at Colum. Univ. v. Trump, 953 F.3d 216 (2d Cir. 2020) (en banc); Knight First Amendment Inst. at Colum. Univ., supra note 4.

[9] See Petition for Writ of Certiorari, Knight First Amendment Inst. at Colum. Univ. v. Trump, No. 20-197 (Aug. 20, 2020), https://www.supremecourt.gov/DocketPDF/20/20-197/150726/20200820102824291_Knight%20First%20Amendment%20Inst.pdf.

[10] See id.

[11] Id. at 11–27.

[12] See id. at 27.

[13] See id. at 27–28.

[14] Id. at 28–29.

[15] See id. at 29.

[16] See Brief in Opposition, Knight Inst., No. 20-197 (Sept. 21, 2020), https://www.supremecourt.gov/DocketPDF/20/20-197/154505/20200921141934655_20-197%20BIO.pdf.

[17] See id. at 11–15.

[18] See id. at 15–28.

[19] See id. at 29.

[20] See id. at 30.

[21] Supreme Court Procedures, U.S. Cts., https://www.uscourts.gov/about-federal-courts/educational-resources/about-educational-outreach/activity-resources/supreme-1 (last visited Oct. 8, 2020).

[22] See supra notes 5–8 and accompanying text.

[23] See supra note 8 and accompanying text.

[24] See supra note 17 and accompanying text.

[25] See Petition for Writ of Certiorari, supra note 9, at 28 n.2 (noting six recent cases from around the country concerning public officials’ blocking social media users on their personal accounts).

by: Hanna Monson and Sarah Spangenburg

Introduction

One recent issue circulating the legal world involves whether schools can discipline students for social media posts. In January 2018, the University of Alabama expelled a nineteen-year-old freshman after she posted two videos of her racist rantings to her Instagram account.[1] Another user recorded and posted the video on Twitter, where it went viral and sparked anger both on the University of Alabama campus and across the country. Because the University of Alabama is a public university, the student’s expulsion has raised questions about the constitutionality of dismissing a student for offensive speech. To further consider this constitutional issue, this post highlights some of the arguments made in a factually similar case, Keefe v. Adams (8th Cir. 2016).[2] There, the Eighth Circuit concluded that a student removed from a college nursing program over Facebook posts expressing frustration with other students in the program suffered no violation of either his First Amendment or his due process rights. While this Eighth Circuit case is the focus of our discussion, it is worth noting that a similar case arose in the Fifth Circuit, Bell v. Itawamba County School Board, where that court also ruled against the student, holding that his First Amendment free speech rights were not violated.[3]

Facts

Craig Keefe was a student in the Associate Degree Nursing Program at Central Lakes College (“CLC”).[4] Two students complained about posts that Keefe made on his Facebook account.[5] After a meeting with CLC Director of Nursing Connie Frisch during which “[Keefe] was defensive and did not seem to feel responsible or remorseful,” Frisch decided that Keefe should no longer be in the program.[6] In a letter sent to Keefe after the meeting, Frisch expressed concerns about Keefe’s professionalism and his inability to represent the nursing profession in light of his posts.[7] All students enrolled in the program had to follow the Nurses Association Code of Ethics, which included guidance on issues such as “relationships with colleagues and others,” “professional boundaries,” and “wholeness of character.”[8] Keefe appealed the decision to the Vice President of Academic Affairs, Kelly McCalla, but the appeal was denied, prompting this lawsuit.[9]

First Amendment Claims

Keefe first contends that his First Amendment rights were violated because “a college student may not be punished for off-campus speech . . . unless it is speech that is unprotected by the First Amendment, such as obscenity.”[10] The Eighth Circuit first addressed the threshold question of whether a public university may even adopt this Code of Ethics.[11] The court held that the state has a substantial interest in regulating the health professions and that, “[b]ecause professional codes of ethics are broadly worded, they can be cited to restrict protected speech.”[12]

The court then considered Keefe’s contention that the university violated his First Amendment rights. The court held that “college administrators and educators in a professional school have discretion to require compliance with recognized standards of the profession, both on and off campus, ‘so long as their actions are reasonably related to legitimate pedagogical concerns.’”[13] Keefe’s words showed that he was acting contrary to the Code of Ethics, and “compliance with the Nurses Association Code of Ethics is a legitimate part of the Associate Degree Nursing Program’s curriculum . . . .”[14] The posts targeted and threatened his classmates and impacted their education, as one of the students stated she no longer wished to be in the same clinical as Keefe.[15] Keefe’s words also had the possibility of impacting patient care because adequate patient care requires the nurses to communicate and work together.[16] The court did not wish to interfere with Frisch’s discretion in deciding that Keefe’s actions showed that he was not fit for the profession, and the First Amendment did not prevent Frisch from making this decision.[17] Given that the district court had granted the defendant’s motion for summary judgment on the First Amendment claims, the Eighth Circuit affirmed.[18]

Due Process Claims

The second issue presented in this case was whether a violation of due process existed. Keefe argued that the Defendants violated his Fourteenth Amendment right to due process when he was removed from the Associate Degree Nursing Program.[19] Supreme Court precedent states that “federal courts can review an academic decision of a public educational institution under a substantive due process standard.”[20] One key inquiry is whether the removal was based on academic judgment that is not beyond the pale of reasoned academic decision making.[21] Even if a substantive due process claim is cognizable in these circumstances, there is no violation of substantive due process unless misconduct of government officials that violates a fundamental right is “so egregious, so outrageous, that it may fairly be said to shock the contemporary conscience” of federal judges.[22] Here, the court determined that Keefe’s removal rested on academic judgment that was not beyond the pale of reasoned academic decision making.[23] Ultimately, the court determined that Keefe had no substantive due process claim.[24]

The court also analyzed Keefe’s procedural due process claim. Citing Goss v. Lopez,[25] the Eighth Circuit highlighted that the Supreme Court has held that even a short disciplinary suspension requires that the student “be given oral or written notice of the charges against him and, if he denies them, an explanation of the evidence the authorities have and an opportunity to present his side of the story.”[26] The court concluded that Keefe’s removal after a disciplinary proceeding provided the kind of inquiry that afforded effective notice and allowed Keefe to give his version of events, thereby preventing erroneous action.[27] Ultimately, the court concluded that Keefe received the process he was due under the Fourteenth Amendment.

Conclusion

Ultimately, this issue presents free speech concerns for students. The decisions of the Eighth and Fifth Circuits suggest that students’ free speech rights stop at the schoolhouse door, a result in tension with much Supreme Court precedent. The prevalence of social media in today’s society ensures that this issue will persist, and the Supreme Court may one day weigh in.

****

[1] Marwa Eltagouri, She was expelled from college after her racist chants went viral. Her mother thinks she deserves it., Wash. Post (Jan. 19, 2018), https://www.washingtonpost.com/news/grade-point/wp/2018/01/19/she-was-expelled-from-college-after-her-racist-rants-went-viral-her-mother-thinks-she-deserves-it/?utm_term=.b0cd4c397d35.

[2] The full opinion can be found at: http://media.ca8.uscourts.gov/opndir/16/10/142988P.pdf.

[3] Mark Joseph Stern, Judges Have No Idea What to Do About Student Speech on the Internet, Slate (Feb. 18, 2016, 5:15 PM), http://www.slate.com/articles/technology/future_tense/2016/02/in_bell_v_itawamba_county_school_board_scotus_may_rule_on_the_first_amendment.html.

[4] Keefe v. Adams, 840 F.3d 523, 525 (8th Cir. 2016).

[5] Id. at 526.

[6] Id. at 526–27.

[7] Id. at 527–28.

[8] Id. at 528–29.

[9] Id. at 526, 529.

[10] Id. at 529.

[11] Id. at 529–30.

[12] Id. at 530.

[13] Id. at 531 (quoting Hazelwood Sch. Dist. v. Kuhlmeier, 484 U.S. 260, 273 (1988)).

[14] Id.

[15] Id. at 532.

[16] Id.

[17] Id. at 533.

[18] Id.

[19] Id. at 533.

[20] Regents of Univ. of Mich. v. Ewing, 474 U.S. 214, 222 (1985).

[21] Keefe, 840 F.3d at 533–34.

[22] Cnty. of Sacramento v. Lewis, 523 U.S. 833, 847 n.8 (1998) (quotation omitted).

[23] Keefe, 840 F.3d at 534.

[24] Id.

[25] 419 U.S. 565, 581 (1975).

[26] Keefe, 840 F.3d at 535.

[27] Id.

By: Kristina Wilson

On Monday, March 20, 2017, the Fourth Circuit issued a published opinion in the civil case Grutzmacher v. Howard County. The Fourth Circuit affirmed the District Court for the District of Maryland’s grant of summary judgment in favor of the defendant, holding that the defendant’s termination of the plaintiffs did not violate the plaintiffs’ First Amendment free speech rights. The plaintiffs raised two arguments on appeal.

Facts and Procedural History

Prior to initiating this action, plaintiffs worked for the defendant, the Howard County, Maryland Department of Fire and Rescue Services. In 2011, the defendant started drafting a Social Media Policy (“the Policy”) in response to a volunteer firefighter’s inflammatory and racially discriminatory social media posts that attracted negative media attention. The Policy prevented employees from posting any statements that may be perceived as discriminatory, harassing, or defamatory or that would impugn the defendant’s credibility. Additionally, in 2012, the defendant promulgated a Code of Conduct (“the Code”) that prohibited disrespectful conduct toward authority figures or the chain of command established by the defendant. Finally, the Code required employees to conduct themselves in a manner that reflected favorably on the defendant.

On January 20, 2013, one of the plaintiffs advocated killing “liberals” on his Facebook page while on duty for the defendant. The defendant asked the plaintiff to review the Policy and remove any postings that did not conform. Although the plaintiff maintained that he was in compliance with the Policy, he removed the January 20th posting. On January 23, 2013, the plaintiff posted a series of statements that accused the defendant of stifling his First Amendment rights. On February 17, 2013, the plaintiff also “liked” a Facebook post by a coworker that was captioned “For you, chief” and displayed a photo of an obscene gesture. Shortly thereafter, the defendant served the plaintiff with charges of dismissal and afforded the plaintiff an opportunity for a preliminary hearing on March 8, 2013. On March 14, 2013, the defendant terminated the plaintiff.

At the district court, the plaintiff argued that the defendant fired him in retaliation for his use of his First Amendment Free Speech rights and that the Policy and Code were facially unconstitutional for restricting employees’ Free Speech. The district court granted the defendant’s motion for summary judgment regarding the retaliation claims, holding that the plaintiff’s January 20th posts and “likes” were capable of disrupting the defendant’s ability to perform its duties and thus did not constitute protected speech. Similarly, the January 23rd post and February 17th “like” were not protected speech because they did not implicate a matter of public concern. In June of 2015, the defendant revised its Policy and Code to eliminate all the challenged provisions. As a result, the district court dismissed the plaintiff’s facial challenge as moot.

The Plaintiff’s Free Speech Rights Did Not Outweigh the Defendant’s Interest

In evaluating the plaintiff’s First Amendment retaliation claim, the Fourth Circuit applied the McVey v. Stacy three-prong test. 157 F.3d 271 (4th Cir. 1998). Under McVey, a plaintiff must show the following three conditions: i) that he was a public employee speaking on a matter of public concern, ii) that his interest in speaking about a matter of public concern outweighed the government’s interest in providing effective and efficient services to the public, and iii) that such speech was a “substantial factor” in the plaintiff’s termination. Id. at 277–78.

The first prong is satisfied when a plaintiff demonstrates that his speech involved an issue of social, political, or other interest to a community. Urofsky v. Gilmore, 216 F.3d 401, 406 (4th Cir. 2000) (en banc). To determine whether the issue was social, political, or of interest to a community, courts examine the speech’s content, context, and form in view of the entire record. Id. The Fourth Circuit concluded that at least some of the content of the plaintiff’s posts and “likes” involved matters of public concern because the public has an interest in the opinions of public employees. Although not all of the postings were of public concern, the Fourth Circuit advocated examining the entirety of the speech in context and therefore proceeded to the second prong of the McVey analysis.

The McVey Factors Weighed More Heavily in Favor of the Defendant

The Fourth Circuit next balanced the plaintiff’s interest in speaking about matters of public concern with the government’s interest in providing efficient and effective public services. The Fourth Circuit used the McVey multifactor test to weigh the following considerations: whether a public employee’s speech (1) impaired the maintenance of discipline by supervisors; (2) impaired harmony among coworkers; (3) damaged close personal relationships; (4) impeded the performance of the public employee’s duties; (5) interfered with the operation of the institution; (6) undermined the mission of the institution; (7) was communicated to the public or to coworkers in private; (8) conflicted with the responsibilities of the employee within the institution; and (9) abused the authority and public accountability that the employee’s role entailed. McVey, 157 F.3d at 278.

The Fourth Circuit held that all of the factors weighed in favor of the defendant. The first factor was satisfied because the plaintiff was a battalion chief, a leadership position, and allowing the plaintiff to violate the Policy and Code without repercussions would encourage others to engage in similar violations. The second and third factors weighed in the defendant’s favor because several minority firefighters issued complaints and refused to work with the plaintiff after the posts. Similarly, the fourth factor weighed in the government’s favor because of the plaintiff’s responsibilities as a leader. The plaintiff’s leadership duties depended on his subordinates taking him seriously and looking to him as an example. By violating the policies he was supposed to uphold, the plaintiff failed to act as a leader and carry out his duties as battalion chief. Finally, the plaintiff’s actions also “undermined community trust” by advocating violence against certain groups of people. Community trust and preventing violence are central to the defendant’s mission because the defendant’s function is to protect the community. Therefore, although the plaintiff’s speech did involve some matters of public concern, those matters were not of sufficient gravity to outweigh all nine factors of the McVey multifactor test. Thus, the government’s interest in effectively providing public services outweighed the plaintiff’s interest in speech about public concerns.

The District Court’s Dismissal of the Facial Challenge on Mootness Grounds Was Proper

While defendant repealed all the challenged sections of the Policy and Code, a party’s voluntary repeal of provisions can only moot an action if the wrongful behavior can be reasonably expected not to recur. The Fourth Circuit affirmed the district court’s dismissal of the facial challenge for mootness because the current Fire Chief issued a sworn affidavit asserting that the defendant will not revert to the former Policy or Code. Additionally, the defendant’s counsel at oral argument declared that the defendant has no hint of an intent to return to the former guidelines. The Fourth Circuit held that these formal declarations were sufficient to meet the defendant’s mootness burden.

Conclusion

The Fourth Circuit affirmed both the district court’s grant of summary judgment and its grant of a motion to dismiss on mootness grounds.