

By Trinity Chapman

On October 24, 2023, thirty-three states filed suit against Meta[1], alleging that its social media content harms and exploits young users.[2] The plaintiffs go on to allege that Meta’s services are intentionally addictive, promoting compulsive use and leading to severe mental health problems in younger users.[3]  The lawsuit points to specific aspects of Meta’s services that the states believe cause harm. The complaint asserts that “Meta’s recommendation Algorithms encourage compulsive use” and are harmful to minors’ mental health,[4] and that the use of “social comparison features such as ‘likes’” causes further harm.[5]  The suit further asserts that the push notifications from Meta’s products disrupt minors’ sleep and that the company’s use of visual filters “promote[s] eating disorders and body dysmorphia in youth.”[6]

Social media plays a role in the lives of most young people.  A recent Advisory by the U.S. Surgeon General revealed that 95% of teens ages thirteen to seventeen and 40% of children ages eight to twelve report using social media.[7] The report explains that social media has both negative and positive effects.[8]  On one hand, social media connects young people with like-minded individuals online, offers a forum for self-expression, fosters a sense of acceptance, and promotes social connections.[9]  Despite these positive effects, social media harms many young people; researchers have linked greater social media use to poor sleep, online harassment, lower self-esteem, and symptoms of depression.[10]  Social media content undoubtedly impacts the minds of young people—often negatively.  However, the question remains as to whether companies like Meta should be held liable for these effects.

This is far from the first time that Meta has faced suit for its alleged harm to minors.  For example, in Rodriguez v. Meta Platforms, Inc., the mother of Selena Rodriguez, an eleven-year-old social media user, sued Meta after her daughter’s death by suicide.[11]  There, the plaintiff alleged that Selena’s tragic death was caused by her “addictive use and exposure to [Meta’s] unreasonabl[y] dangerous and defective social media products.”[12]  Similarly, in Heffner v. Meta Platforms, Inc., a mother sued Meta after her eleven-year-old son’s suicide.[13]  That complaint alleged that Meta’s products “psychologically manipulat[ed]” the boy, leading to social media addiction.[14]  Rodriguez and Heffner are illustrative of the type of lawsuit regularly filed against Meta.

A.        The Communications Decency Act

In defending such suits, Meta invariably invokes the Communications Decency Act.  Section 230 of the act dictates that interactive online services “shall not be treated as the publisher or speaker of any information provided by another information content provider.”[15]  In effect, the statute shields online services from liability arising from the effects of third-party content.  In asserting the act, defendant internet companies present a “hands off” picture of their activities; rather than playing an active role in the content that users consume, companies depict themselves as merely opening a forum through which third parties may produce content.[16]

Plaintiffs have responded with incredulity to this application of the act by online service providers, and the act’s exact scope is unsettled.[17]  In Gonzalez v. Google LLC, the parents of a man who died during an ISIS terrorist attack sued Google, alleging that YouTube’s algorithm recommended ISIS videos to some users, leading to increased success by ISIS in recruitment efforts.[18]  In defense, Google relied on Section 230 of the Communications Decency Act.[19]  The Ninth Circuit ruled that Section 230 barred the plaintiff’s claims,[20] but the Supreme Court vacated the Ninth Circuit’s ruling on other grounds, leaving unanswered questions about the act’s scope.[21]

Despite that uncertainty, the defense retains a high likelihood of success. In the October 24 lawsuit, Meta’s success on the Section 230 defense depends on how active a role the court determines Meta played in suggesting and exposing the harmful content to minors.

B.        Product Liability

The October 24 complaint against Meta alleges theories of product liability.[22] In framing their product liability claims, plaintiffs focus on the harmful design of Meta’s “products” rather than the harmful content to which users may be exposed.[23] The most recent lawsuit alleges that “Meta designed and deployed harmful and psychologically manipulative product features to induce young users’ compulsive and extended use.”[24]

A look at Meta’s defense in Rodriguez is predictive of how the company will respond to the October 24 suit.  There, the company disputed that Instagram even qualifies as a “product.”[25]  Meta’s Motion to Dismiss remarked that product liability law focuses on “tangible goods” or “physical articles” and contrasted these concepts with the “algorithm” used by Instagram to recommend content.[26]  Given traditional notions about what constitutes a “product,” Meta’s defenses are poised to succeed.  As suggested by Meta in its motion to dismiss Rodriguez’s suit, recommendations about content, features such as “likes,” and communications from third parties fall outside of what courts typically consider a “product.”[27]

To succeed on a product liability theory, plaintiffs must advocate for a more modernized conception of what counts as a “product” for purposes of product liability law.  Strong arguments may exist for shifting this conception; the world of technology has transformed completely since the ALI defined product liability in the Restatement (Second) of Torts.[28]  Still, considering this well-settled law, plaintiffs are likely to face an uphill battle.

C.        Whose job is it anyway?

Lawsuits against Meta pose large societal questions about the role of courts and parents in ensuring minors’ safety.  Some advocates place the onus on companies themselves, urging top-down prevention of minors’ access to social media.[29]  Others emphasize the role of parents and families in protecting minors from unsafe exposure to social media content;[30] parents, families, and communities may be in better positions than tech giants to know, understand, and combat the struggles that teens face.  Regardless of who is to blame, nearly everyone can agree that the problem needs to be addressed.


[1] In 2021, the Facebook Company changed its name to Meta.  Meta now encompasses social media apps like WhatsApp, Messenger, Facebook, and Instagram.  See Introducing Meta: A Social Technology Company, Meta (Oct. 28, 2021), https://about.fb.com/news/2021/10/facebook-company-is-now-meta/.

[2] Complaint at 1, Arizona v. Meta Platforms, Inc., 4:23-cv-05448 (N.D. Cal. Oct. 24, 2023) [hereinafter October 24 Complaint] (“[Meta’s] [p]latforms exploit and manipulate its most vulnerable users: teenagers and children.”).

[3] Id. at 23.

[4] Id. at 28.

[5] Id. at 41.

[6] Id. at 56.

[7] U.S. Surgeon General, Advisory: Social Media and Youth Mental Health 4 (2023).

[8] Id. at 5.

[9] Id. at 6.

[10] Id. at 7.

[11] Complaint at 2, Rodriguez v. Meta Platforms, Inc., 3:22-cv-00401 (Jan. 20, 2022) [hereinafter Rodriguez Complaint].

[12] Id.

[13] Complaint at 2, Heffner v. Meta Platforms, Inc., 3:22-cv-03849 (June 29, 2022).

[14] Id. at 13.

[15] 47 U.S.C.S. § 230 (LEXIS through Pub. L. No. 118-19).

[16] See, e.g., DiMeo v. Max, 433 F. Supp. 2d 523 (E.D. Pa. 2006), aff’d, 248 F. App’x 280 (3d Cir. 2007).  DiMeo is just one example of the strategy used repeatedly by Meta and other social media websites.

[17] Gonzalez v. Google LLC, ACLU, https://www.aclu.org/cases/google-v-gonzalez-llc (last updated May 18, 2023).

[18] Gonzalez v. Google LLC, 2 F.4th 871, 880–81 (9th Cir. 2021).

[19] Id. at 882.

[20] Id. at 881.

[21] Gonzalez v. Google LLC, 598 U.S. 617, 622 (2023).

[22] October 24 Complaint, supra note 2, at 145–98.

[23] Id. at 197.

[24] Id. at 1.

[25] Motion to Dismiss, Rodriguez v. Meta Platforms, Inc., 3:22-cv-00401 (June 24, 2022).

[26] Id.

[27] Id.

[28] Restatement (Second) of Torts § 402A (Am. L. Inst. 1965).

[29] Rachel Sample, Why Kids Shouldn’t get Social Media Until they are Eighteen, Medium (June 14, 2020), https://medium.com/illumination/why-kids-shouldnt-get-social-media-until-they-are-eighteen-2b3ef6dcbc3b.

[30] Jill Filipovic, Opinion: Parents, Get your Kids off Social Media, CNN (May 23, 2023, 6:10 PM), https://www.cnn.com/2023/05/23/opinions/social-media-kids-surgeon-general-report-filipovic/index.html.



By Allison Lizotte

In the early hours of the morning on January 1st, 2017, a gunman opened fire in a nightclub in Istanbul, Turkey.[1]  The attack, for which ISIS claimed responsibility, killed 39 people and left nearly 70 others injured.[2]  Six years later, a lawsuit related to the massacre has made its way before the United States Supreme Court, threatening to hold large tech companies accountable and shake up the way they run their businesses.[3] 


Shortly after the Istanbul attack, American relatives of Nawras Alassaf, one of the 39 people killed, filed a complaint in the Northern District of California against Twitter, Google, and Facebook, alleging violations of the Anti-Terrorism Act (“ATA”)[4].  In the complaint, the Plaintiffs argued that the Internet companies played a central role in ISIS’s growth by permitting the organization to “recruit members, issue terrorist threats, spread propaganda, instill fear, and intimidate civilian populations.”[5]  The Plaintiffs claim that, despite having the ability to remove and review content posted by users, Twitter, Google, and Facebook have allowed terrorist organizations like ISIS to use their platforms for many years with “‘little or no interference.’”[6]  The issue now before the Supreme Court is whether these Internet giants may be held liable for aiding and abetting international terrorism by failing to remove pro-ISIS content from their websites.[7]


This case is one of two currently before the Supreme Court on whether Internet companies can be held accountable for inflammatory content posted by users.[8]  The second case, similar in nature to the first, is a lawsuit against YouTube brought “by the family of an American woman killed in a Paris attack by Islamist militants.”[9]  While both cases bring claims under the ATA, the second raises an additional and controversial question regarding the scope of Section 230 of the Communications Decency Act, which provides certain legal immunity to Internet companies.[10]  Should the Supreme Court rule in favor of the Internet companies in the case related to the Istanbul attack, it might avoid tackling the stickier Section 230 issue raised by the second case.[11]


The Communications Decency Act (“the Act”) was enacted by Congress in 1996,[12] “when websites were young and perceived to be vulnerable.”[13]  Section 230 of the Act ensured that website companies “would not get bogged down in lawsuits if users posted material to which others might object, such as bad restaurant reviews or complaints about neighbours.”[14]  The relevant provision states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[15]  Rather than risk “chilling free speech,” Congress “made a policy choice … not to deter harmful online speech through the separate route of imposing tort liability on companies that serve as intermediaries for other parties’ potentially injurious messages.”[16]


In passing Section 230, Congress sought “‘to empower interactive computer service providers to self-regulate.’”[17]  However, as the Internet has evolved over the last thirty-plus years, cases like the two currently before the Supreme Court highlight the issues that come with allowing Internet companies to self-regulate.  As the current cases suggest, many of these issues arise when Internet companies take a minimalist approach to self-regulation and allow users to post controversial content with “‘little or no interference.’”[18]  Should the Supreme Court decide to restrict the scope of Section 230, tech companies could potentially be held liable for harm caused by content posted by users of their platforms, such as propaganda posted by terrorist organizations.  It is not difficult to imagine how such a restriction could result in an onslaught of litigation and cause detrimental financial burdens for these companies.


After hearing oral arguments on February 21st and 22nd of this year, it remains unclear whether the Court will reach the Section 230 issue in the cases at hand.[19]  Justice Amy Coney Barrett, for example, “suggested that the [law]suit . . . lacks the kind of facts” necessary to hold the Internet companies liable under the ATA, and Justice Neil Gorsuch said he did not see how the Plaintiffs’ complaint “lines up” with the elements required under the ATA statute.[20]  If the Court dismisses the lawsuits due to these ATA-related shortcomings, it could “avoid” addressing Section 230 altogether.[21]


However, with the public becoming increasingly critical of the legal immunity afforded to large tech companies under Section 230,[22] it will be interesting to see if the Supreme Court will choose to narrow the scope of the current law.  Additionally, President Biden and former President Trump have each called for an overhaul of Section 230, suggesting that the issue before the Court will be of particular interest heading into the 2024 presidential election.[23]  Given the heightened public interest in the scope of the Act, it remains possible that the Court will confront Section 230 again in the near future, even if the Court fails to reach the issue in the current cases.


[1] Istanbul New Year Reina Nightclub Attack ‘Leaves 39 Dead’, BBC News (Jan. 1, 2017, 4:03 AM),  https://www.bbc.com/news/world-europe-38481521.

[2] Doreen McCallister, ISIS Claims Responsibility in Turkish Nightclub Attack; U.S. Man Among Wounded, NPR (Jan. 2, 2017), https://www.npr.org/sections/thetwo-way/2017/01/02/507848348/isis-claims-responsibility-in-turkish-nightclub-attack-u-s-man-among-the-wounded.

[3] Andrew Chung & John Kruzel, U.S. Supreme Court Raises Doubts About Suit Against Twitter Over Istanbul Massacre, Reuters, https://www.reuters.com/legal/us-supreme-court-weighs-suit-against-twitter-over-istanbul-massacre-2023-02-22/ (Feb. 22, 2023, 4:51 PM).

[4] Gonzalez v. Google LLC, 2 F.4th 871, 879, 883 (9th Cir. 2021).

[5] Id. at 883.

[6] Id.

[7] Jessica Gresko & Mark Sherman, Supreme Court Seems to Favor Tech Giants in Terror Case, AP (Feb. 22, 2023), https://apnews.com/article/us-supreme-court-technology-crime-business-internet-6e4551a3f39461e77a82ff577e24e6e7.

[8] Chung & Kruzel, supra note 3.

[9] US Supreme Court Weighs Suit Against Twitter Over 2017 Istanbul Massacre, The Economic Times, https://economictimes.indiatimes.com/tech/technology/us-supreme-court-weighs-suit-against-twitter-over-2017-istanbul-massacre/articleshow/98153645.cms (Feb. 22, 2023, 5:13 PM).

[10] Gonzalez, 2 F.4th at 882–83, 886.

[11] Adam Liptak, Supreme Court Wrestles With Suit Claiming Twitter Aided Terrorists, N.Y. Times (Feb. 22, 2023), https://www.nytimes.com/2023/02/22/us/supreme-court-twitter-terrorism.html.

[12] Gonzalez, 2 F.4th at 886.

[13] What is Section 230? A Law Regulating Web Communications Comes Before the Supreme Court, The Economist (Feb. 20, 2023), https://www.economist.com/the-economist-explains/2023/02/20/what-is-section-230.

[14] Id.

[15] 47 U.S.C. § 230(c)(1).

[16] Gonzalez, 2 F.4th at 886 (quoting Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1123 (9th Cir. 2003)).

[17] Gonzalez, 2 F.4th at 886 (quoting Force v. Facebook, Inc., 934 F.3d 53, 78–79 (2d Cir. 2019)).

[18] Gonzalez, 2 F.4th at 883.

[19] See Gresko & Sherman, supra note 7.

[20] Id.

[21] Id.

[22] What is Section 230? A Law Regulating Web Communications Comes Before the Supreme Court, supra note 13.

[23] Id.


By Greg Berman

Controversy erupted last week after a George Washington University professor, Dave Karpf, tweeted a joke at New York Times columnist Bret Stephens’s expense.  Quoting an 8-word post about a bedbug infestation in the Times’ newsroom, Karpf joked that “[t]he bedbugs are a metaphor.  The bedbugs are Bret Stephens.”[1]  Although this tweet did not initially gain much traction, it later went viral when Stephens personally emailed Karpf, as well as the George Washington University provost, demanding an apology for the insult.[2]  After several more tweets and an off-schedule column by Stephens alluding to the controversy, both sides of the feud seem to be slowing down.[3]  Although this back-and-forth is just one isolated incident between two individuals, it highlights a growing trend in our discourse.  With the growing use of social media in our society, these sorts of ideological clashes have seemingly become more prevalent than ever.[4]  And even though these virtual arguments tend to be more of an annoyance than a liability, reputation-damaging attacks (even those made on the internet) still run the risk of triggering a costly libel lawsuit.[5]

The tort of libel is defined by Black’s Law Dictionary as “[a] defamatory statement expressed in a fixed medium, esp[ecially] writing but also a picture, sign, or electronic broadcast.”[6]  The enforcement of libel laws in the United States predates the ratification of the Constitution, most notably with the trial of John Peter Zenger, whose 1735 jury acquittal established the idea that someone cannot be charged with libel if the remark is true.[7]  Even today, the accuracy of the allegedly libelous statements continues to be one of the key factors for courts to consider in libel cases, with each state setting its own standards for liability.[8]  Another key consideration for courts comes from New York Times v. Sullivan, where the Supreme Court differentiated defamation claims involving public figures and private individuals, holding that any libel suit against a public figure requires the inaccurate statement to be made with “actual malice.”[9]  Actual malice has been defined by the Court as “knowledge that (the statement) was false or with reckless disregard of whether it was false or not.”[10]  Additional protections against libel claims were recognized nine years later, when the Supreme Court limited libel laws to apply only to intentionally false statements of fact, even if a trial court is presented with baseless opinions that are similarly incorrect.[11]

Our ever-increasing move toward a digitalized world raises the question of how these libel laws can be applied to internet publications.  To start, no claim for libel can be made against any social media site, such as Facebook or Twitter, for content posted by a user of that social media site.[12]  This is primarily due to the expansive legal protections given to these “interactive computer services” by Section 230 of the Communications Decency Act of 1996.[13]  That being said, individuals may still be held liable for content that they post on the internet, with each state continuing to apply its own standards for libelous conduct even as information crosses state lines.[14]  When it comes to the question of jurisdiction, the Supreme Court clarified in Keeton v. Hustler Magazine, Inc. that a state can claim jurisdiction over a non-resident when injurious information is intentionally disseminated to its citizens.[15]  Specifically, the Court cited each state’s interest in protecting its citizens from intentional falsehoods as a key consideration in its decision.[16]  While online information is disseminated in a different manner than the magazines from Keeton, courts have begun to allow jurisdiction in internet libel cases when the online post directly targets one or more residents of the state.[17]

When applying libel laws to online statements, courts have used similar substantive principles to those used for print publications.  In 2009, musician Courtney Love was sued by her former attorney after tweeting allegedly libelous remarks.[18]  As this was the first reported case to go to a jury decision for remarks made over Twitter, the trial court was left with a case of first impression.[19]  In a landmark decision, the court opted to apply traditional libel laws.  A jury found that Love did not know that the statements were false at the time they were made; she therefore lacked the actual malice required for libel liability.[20]

There have also been other cases involving allegedly libelous comments made over Twitter.[21]  For example, one such case took place after a tenant complained on her personal Twitter account about her “moldy apartment.”[22]  After seeing the post, the landlord sued the tenant under Illinois libel laws; the case was later dismissed with prejudice because the tweet was too vague to meet the requisite legal standards for libel.[23]  Another lawsuit took place after a mid-game conversation between an NBA coach and a referee was overheard and tweeted out by an AP reporter.[24]  The referee insisted that the reported conversation never took place, and the subsequent lawsuit ultimately resulted in a $20,000 settlement.[25]  Each of these cases presents a factually unique scenario, but together they indicate a growing trend: even as the medium for public discourse rapidly shifts toward the digital sphere, traditional libel laws continue to apply.

In addition to substantive treatment, there also remain unresolved legal questions stemming from courts’ application of the single publication rule.  The single publication rule provides that “any one edition of a book or newspaper, or any one radio or television broadcast, exhibition of a motion picture or similar aggregate communication is a single publication” and therefore “only one action for damages can be maintained.”[26]  The justification behind this rule is simple: by aggregating all damages allegedly caused by a publication into a single action, a party will not be perpetually bombarded with litigation long after their active role in publication has ended.[27]  This rule has already been adopted in “the great majority of states” and was implemented within the Fourth Circuit in Morrissey v. William Morrow & Co.[28]  However, some academics have proposed that the single publication rule should not always be applied to social media posts, citing the possibility that a publisher could personally solicit shares or retweets and thereby maintain an active role in republishing libelous information.[29]  The issue of continual dissemination by means of retweeting seems primed to be raised in later litigation, but thus far has not been brought before any court.[30]  Still, many circuits have already begun applying the single publication rule to online posts in general (so far these cases have been litigated over personal blogs rather than Facebook or Twitter posts), so it will be interesting to see how courts handle the issue if eventually raised by litigants down the road.[31]

As the social media presence in our society grows stronger each day, only time will tell if courts will craft separate libel principles for online publications.  There are arguments to be made on both sides, especially now that online mediums are increasingly taking over many of the informational functions previously held by their print counterparts.[32]  For now, at least, courts are continuing to use the same traditional libel laws that have been evolving and changing since John Peter Zenger’s 1735 acquittal. [33]  And while the jury is still out on whether Dave Karpf actually thinks Bret Stephens is a metaphorical bedbug, he can likely rest easy knowing that current libel laws will protect his joke from any future legal trouble.


[1] Dave Karpf (@davekarpf), Twitter (Aug. 26, 2019, 5:07 PM), https://twitter.com/davekarpf/status/1166094950024515584.

[2] See Dave Karpf (@davekarpf), Twitter (Aug. 26, 2019, 9:22 PM), https://twitter.com/davekarpf/status/1166159027589570566; Dave Karpf (@davekarpf), Twitter (Aug. 26, 2019, 10:13 PM), https://twitter.com/davekarpf/status/1166171837082079232; see also Tim Elfrink & Morgan Krakow, A Professor Called Bret Stephens a ‘Bedbug.’ The New York Times Columnist Complained to the Professor’s Boss, Wash. Post (Aug. 27, 2019), https://www.washingtonpost.com/nation/2019/08/27/bret-stephens-bedbug-david-karpf-twitter/ (summarizing the context of Karpf’s tweet and the resulting controversy).

[3] See Dave Karpf (@davekarpf), Twitter (Aug. 30, 2019, 7:58 PM), https://twitter.com/davekarpf/status/1167587392292892672; Bret Stephens, Opinion, World War II and the Ingredients of Slaughter, N.Y. Times (Aug. 30, 2019), https://www.nytimes.com/2019/08/30/opinion/world-war-ii-anniversary.html.

[4] Jasmine Garsd, In An Increasingly Polarized America, Is It Possible To Be Civil On Social Media?, NPR (Mar. 31, 2019) https://www.npr.org/2019/03/31/708039892/in-an-increasingly-polarized-america-is-it-possible-to-be-civil-on-social-media.

[5] See id.; Adeline A. Allen, Twibel Retweeted: Twitter Libel and the Single Publication Rule, 15 J. High Tech. L. 63, 81 n.99 (2014).

[6]  Libel, Black’s Law Dictionary (11th ed. 2019).

[7] Michael Kent Curtis, J. Wilson Parker, William G. Ross, Davison M. Douglas & Paul Finkelman, Constitutional Law in Context 1038 (4th ed. 2018).

[8] James L. Pielemeier, Constitutional Limitations on Choice of Law: The Special Case of Multistate Defamation, 133 U. Pa. L. Rev. 381, 384 (1985).

[9] 376 U.S. 254, 279–80 (1964); see also Gertz v. Robert Welch, Inc., 418 U.S. 323, 351 (1974) (defining a public figure as either “an individual achiev[ing] such pervasive fame or notoriety” or an individual who “voluntarily injects himself or is drawn into a particular public controversy”).

[10] Sullivan, 376 U.S. at 280.

[11] See Gertz, 418 U.S. at 339 (“[u]nder the First Amendment, there is no such thing as a false idea.”).

[12] See Allen, supra note 5, at 82.  Of course, Facebook and Twitter are not immunized against suits for content that they post on their own platforms.  Cf. Force v. Facebook, Inc., ___ F.3d ___, No. 18-397, 2019 WL 3432818, slip op. at 41 (2d Cir. July 31, 2019), http://www.ca2.uscourts.gov/decisions/isysquery/a9011811-1969-4f97-bef7-7eb025d7d66c/1/doc/18-397_complete_opn.pdf (“If Facebook was a creator or developer, even ‘in part,’ of the terrorism-related content upon which plaintiffs’ claims rely, then Facebook is an ‘information content provider’ of that content and is not protected by Section 230(c)(1) immunity.”).

[13] 47 U.S.C. § 230(c)(1) (2017) (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”).  “Interactive computer service” is defined by the act as “any information service, system, or access software provider that provides or enables computer access by multiple users to a computer server.”  Id. § 230(f)(2); see also Allen, supra note 5, at 82 n.100 (describing additional protections provided by the Communications Decency Act, including how Twitter falls under its definition of “interactive computer service”).

[14] See Allen, supra note 5, at 84; Pielemeier, supra note 8, at 384.

[15] 465 U.S. 770, 777 (1984); see also Calder v. Jones, 465 U.S. 783, 791 (1984) (holding that personal jurisdiction is proper over defendants who purposefully directed libelous information at the plaintiff’s home state with the intent of causing harm).

[16] Keeton, 465 U.S. at 777.

[17] See, e.g., Zippo Mfg. Co. v. Zippo Dot Com, Inc., 952 F. Supp. 1119, 1124 (W.D. Pa. 1997); Young v. New Haven Advocate, 315 F.3d 256, 263 (4th Cir. 2002); Tamburo v. Dworkin, 601 F.3d 693, 707 (7th Cir. 2010) (each applying traditional libel tests for personal jurisdiction to online publications, requiring the publication to be intentionally targeted towards citizens of the state).

[18] Gordon v. Love, No. B256367, 2016 WL 374950, at *2 (Cal. Ct. App. Feb. 1, 2016). The exact language of the tweet in question was “I was fucking devastated when Rhonda J. Holmes, Esquire, of San Diego was bought off @FairNewsSpears perhaps you can get a quote.”  Id.  The tweet was deleted five to seven minutes after it was posted.  Id. at *3.  This was Love’s second time being sued for defamation over comments made on her Twitter account, although the first lawsuit resulted in a $430,000 settlement before trial. Matthew Belloni, Courtney Love to Pay $430,000 in Twitter Case, Reuters (Mar. 3, 2011), https://www.reuters.com/article/us-courtneylove/courtney-love-to-pay-430000-in-twitter-case-idUSTRE7230F820110304.

[19] See Allen, supra note 5, at 81 n.99.

[20] Love, 2016 WL 374950, at *3.  The reason actual malice was required in the case is because Love’s attorney had gained public figure status, which was not disputed at trial. Id.

[21] See Joe Trevino, From Tweets to Twibel*: Why the Current Defamation Law Does Not Provide for Jay Cutler’s Feelings, 19 Sports Law J. 49, 61–63 (2012) (describing a series of libel lawsuits stemming from social media posts).

[22] Id. at 61.

[23] Andrew L. Wang, Twitter Apartment Mold Libel Suit Dismissed, Chi. Trib. (Jan. 22, 2010), https://www.chicagotribune.com/news/ct-xpm-2010-01-22-1001210830-story.html.

[24] Trevino, supra note 21, at 63. 

[25] Lauren Dugan, The AP Settles Over NBA Twitter Lawsuit, Pays $20,000 Fine, Adweek (Dec. 8, 2011), https://www.adweek.com/digital/the-ap-settles-over-nba-twitter-lawsuit-pays-20000-fine/.

[26] Restatement (Second) of Torts § 577A(3–4) (Am. Law Inst. 1977).

[27] Id. § 577A cmt. b.

[28] 739 F.2d 962, 967 (4th Cir. 1984) (quoting Keeton, 465 U.S. at 777 n.8).

[29] Allen, supra note 5, at 87–88.

[30] See Lori A. Wood, Cyber-Defamation and the Single Publication Rule, 81 B.U. L. Rev. 895, 915 (2001) (calling for courts to define “republication” in the context of internet publications).

[31] See, e.g., Firth v. State, 775 N.E.2d 463, 466 (N.Y. 2002); Van Buskirk v. N.Y. Times Co., 325 F.3d 87, 90 (2d Cir. 2003); Oja v. U.S. Army Corps of Eng’rs, 440 F.3d 1122, 1130–31 (9th Cir. 2006); Nationwide Bi-Weekly Admin., Inc. v. Belo Corp., 512 F.3d 137, 144 (5th Cir. 2007).  But see Swafford v. Memphis Individual Prac. Ass’n, 1998 Tenn. App. LEXIS 361, at *38 (Tenn. App. 1998).

[32] See Allen, supra note 5, at 91 n.157.

[33] See Trevino, supra note 21, at 69.