By Tom Budzyn

On February 8, 2024, the Federal Communications Commission (“FCC”) issued a unanimous declaratory ruling providing agency guidance on the applicability of the Telephone Consumer Protection Act (“TCPA”) to unwanted and illegal robocalls that use artificial intelligence.[1] In the ruling, the FCC stated its belief that unwanted spam and robocalls making use of artificial intelligence violate existing consumer protections.[2] The FCC’s analysis focused on protecting consumers from the novel and unpredictable threats posed by artificial intelligence.[3] The ruling may be a harbinger of things to come, as other agencies (and various tribunals) are forced to consider the applicability of older consumer protection laws to the unique challenge of artificial intelligence.[4] As federal agencies are often the first line of defense for consumers against predation,[5] the onus is on them to react to the dangers posed by artificial intelligence.

The FCC considered the TCPA, passed in 1991, which prohibits the use of “artificial” or “prerecorded” voices to call any residential phone line if the recipient has not previously consented to receiving such a call.[6] This blanket prohibition is effective unless there is an applicable statutory exception or the call is otherwise exempted by an FCC rule or order.[7] However, the statute does not define what an “artificial” or “prerecorded” voice is.[8] Thus, on November 16, 2023, the FCC solicited comments from the public on the applicability of the TCPA to artificial intelligence in response to the technology’s rapid and ongoing development.[9] In its preliminary inquiry, the FCC noted that some artificial intelligence-based technologies, such as voice cloning,[10] facially appear to violate the TCPA.[11]

Following this initial inquiry, the FCC confirmed its original belief that phone calls made using artificial intelligence-generated technologies without the prior consent of the recipient violate the TCPA.[12] In doing so, the FCC looked to the rationale underlying the TCPA and its immediate applicability to artificial intelligence.[13] As a consumer protection statute, the TCPA safeguards phone users from deceptive, misleading, and harassing phone calls.[14] Artificial intelligence, and the almost limitless technological possibilities it offers,[15] presents a uniquely dangerous threat to consumers. While most phone users today are well-equipped to recognize and handle robocalls or unwanted advertisements, they are likely far less able to deal with the shock of hearing the panicked voice of a loved one asking for help.[16] Pointing to these severe dangers, the FCC found that the TCPA must extend to artificial intelligence to adequately protect consumers.[17]

As a result, the FCC contemplates future enforcement of the TCPA against callers using artificial intelligence technology without the prior consent of the recipients of the calls.[18] The threat of enforcement looms large, as twenty-six state attorneys general wrote to the FCC in support of the decision and, more impressively, the state attorneys general are in near-unanimous accord in their understanding of this law.[19]

It is worth noting that the FCC’s ruling is possibly not legally binding.[20] The ruling serves to explain the agency’s interpretation of the TCPA and, as such, is not necessarily binding on the agency.[21] Moreover, the possible downfall of Chevron would mean that the FCC’s interpretation of the TCPA would likely be afforded little, if any, deference.[22] Legal technicalities notwithstanding, the FCC’s common-sense declaratory ruling states the obvious: unsolicited phone calls using artificial intelligence-generated voices are covered by the TCPA’s prohibition on “artificial” or “prerecorded” voices in unsolicited phone calls.[23] If there was any doubt before that callers should avoid using artificial intelligence-generated voices without the consent of call recipients, it is gone now.

Perhaps the most interesting part of the FCC’s ruling is its straightforward application of the facts to the law. Other federal agencies will certainly be asked to conduct similar analyses in the future as artificial intelligence becomes ever more ubiquitous. In the TCPA context, the analysis is straightforward. It is much less so in the context of other consumer protection statutes.[24] For example, the Federal Trade Commission (“FTC”) is authorized by 15 U.S.C. § 45 to prevent “persons, partnerships, or corporations” from using unfair methods of competition affecting commerce or unfair or deceptive acts affecting commerce.[25] Unsurprisingly, “person” is not defined by the statute,[26] as the law was originally enacted in 1914.[27] If the statute remains in its current form, artificial intelligence could fall outside the reach of one of the most significant consumer protections in the modern United States. While artificial intelligence has not been recognized as a person in other contexts,[28] it should be recognized as such where it can do as much harm as, if not more than, a person could.

This statute is only one of many traditional consumer protection statutes that, as written, may not adequately protect consumers from the dangers of artificial intelligence.[29] While amending the law is certainly possible, legislative gridlock and inherent delays place greater importance on agencies being proactive in responding to developments in artificial intelligence. The FCC’s ruling is a step in the right direction, a sign that agencies will not wait for artificial intelligence to run rampant before seeking to rein it in. Hopefully, other agencies will follow suit and issue similar guidance, using existing laws to protect consumers from new threats.


[1] F.C.C., CG Docket No. 23-362, Declaratory Ruling (2024) [hereinafter F.C.C. Ruling].

[2] Id.

[3] Id.

[4] Fed. Trade Comm’n, FTC Chair Khan and Officials from DOJ, CFPB, and EEOC Release Joint Statement on AI (2023), https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai.

[5] See, e.g., J. Harvie Wilkinson III, Assessing the Administrative State, 32 J. L. & Pol. 239 (2017) (discussing modern administrative state and its goals, including stabilizing financial institutions, making homes affordable and protecting the rights of employees to unionize).  

[6] Telephone Consumer Protection Act of 1991, 47 U.S.C. § 227.

[7] Id.

[8] Id.

[9] F.C.C. Ruling, supra note 1.

[10] See Fed. Trade Comm’n, Preventing the Harms of AI-enabled Voice Cloning (2023), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2023/11/preventing-harms-ai-enabled-voice-cloning.

[11] F.C.C. Ruling, supra note 1.

[12] Id.

[13] Id.

[14] See Telephone Consumer Protection Act of 1991, 47 U.S.C. § 227.

[15] See, e.g., Cade Metz, What’s the Future for AI?, N.Y. Times (Mar. 31, 2023).

[16] Ali Swenson & Will Weissert, New Hampshire investigating fake Biden robocall meant to discourage voters ahead of primary, Associated Press (Jan. 22, 2024), https://apnews.com/article/new-hampshire-primary-biden-ai-deepfake-robocall-f3469ceb6dd613079092287994663db5.

[17] F.C.C. Ruling, supra note 1.

[18] Id.

[19] Id.

[20] Azar v. Allina Health Servs., 139 S. Ct. 1804, 1811 (2019) (explaining that interpretive rules, which are exempt from notice and comment requirements under the Administrative Procedure Act, “merely advise” the public of the agency’s interpretation of a statute).

[21] Chang Chun Petrochemical Co. Ltd. v. United States, 37 Ct. Int’l Trade 514, 529 (2013) (“Unlike a statute or regulations promulgated through notice and comment procedures, an agency’s policy is not binding on itself.”).

[22] See generally Caleb B. Childers, The Major Question Left for the Roberts Court: Will Chevron Survive?, 112 Ky. L.J. 373 (2023).

[23] F.C.C. Ruling, supra note 1.

[24] See 15 U.S.C. §§ 1601–1616 (consumer credit cost disclosure statute defines “person” as a “natural person” or “organization”).

[25] 15 U.S.C. § 45.

[26] Id.

[27] Id.  

[28] See Thaler v. Hirshfeld, 558 F. Supp. 3d 328 (2021) (affirming United States Patent and Trademark Office’s finding that the term “individual” in the Patent Act referred only to natural persons, and thus artificial intelligence could not be considered an inventor of patented technology).

[29] See, e.g., 15 U.S.C. §§ 1601–1616, supra note 24.

By Kyle Brantley

It’s that time of day.  Your child is positioning the antenna just right in order to catch their favorite broadcast TV show.  No, that doesn’t sound quite right.  They are actually dialing up the old FM radio for their favorite weekly jamboree!  No, that’s definitely not happening.  Instead, kids today consume their entertainment through mobile devices—a recent study estimates that 90 percent of children have cell phones by the age of eleven and that on average they spend over three hours on that device per day.[1]

Given the realities of how today’s children access content, one would think that the legal doctrine for policing explicit TV and radio content would morph to accommodate the internet.  However, a double standard is currently in place.  Broadcast airwaves are subject to strict limits on obscene, indecent, and profane content.[2]  In contrast, there is no discernible regulation of expression on the internet.[3]

The lack of internet content policing stems from the First Amendment right to freedom of expression.[4]  While the First Amendment sets a broad baseline of protection,[5] the government may limit speech in a few key areas, including (but not limited to) fighting words,[6] incitement,[7] obscenity,[8] and indecent speech that invades the privacy of the home.[9]  The overarching authority for the latter still has its roots in FCC v. Pacifica Foundation.[10]  In Pacifica, a New York radio station aired a previously recorded monologue by the comedian George Carlin entitled Filthy Words, in which he recited all of the curse words that he thought were disallowed on the public airwaves.[11]  The Supreme Court took issue with the airing of that segment in the middle of the afternoon and homed in on two overriding motivators for censoring the curse words used in the segment: (1) the uniquely pervasive presence of the broadcast airwaves, and (2) the susceptibility of children to exposure to the content.[12]

Those overarching reasons delineated in Pacifica still form the basis for the FCC guidance that broadcast providers must follow.[13]  The FCC currently prohibits indecent content that “portrays sexual or excretory organs” and profane content like “‘grossly offensive’ language that is considered a public nuisance.”[14]  Notably, these rules apply only to the major broadcast TV stations (e.g., ABC, NBC, FOX, PBS)[15] and FM/AM radio from 6:00 a.m. to 10:00 p.m.[16]  Cable and satellite TV are excluded because those are pay-for-service options.[17]

Nearly two decades later, the federal government saw a need to implement baseline measures against explicit content that children could access on the internet when it included provisions targeting “indecent and patently offensive communications” in the Communications Decency Act.[18]  The Supreme Court struck down that portion of the act in Reno v. ACLU,[19] reasoning that, “[u]nlike communications received by radio or television, ‘the receipt of information on the Internet requires a series of affirmative steps more deliberate and directed than merely turning a dial.  A child requires some sophistication and some ability to read to retrieve material and thereby to use the Internet unattended.’”[20]  The Court went further, asserting that “the Internet is not as ‘invasive’ as radio or television”[21] and that “users seldom encounter [sexually explicit] content accidentally.”[22]

Times have changed since the Court decided Reno in 1997.  Today, internet access is often unrestricted, and one can easily stumble upon far more sexually explicit material than could be fathomed on the traditional broadcast airwaves.[23]  How many deliberate and affirmative steps does it take for a TikTok video to pop up in front of your face?[24]  How about an Instagram post as you scroll down your home page?  What about a tailored ad on the side of an otherwise mundane web page?  Apps like TikTok and Instagram automatically present an endless stream of new revealing and potentially vulgar images and sounds; a new video simply appears after the previous one ends.[25]

Another example of a potential hazard that a child can stumble upon is pornography.  Porn’s online proliferation has been well documented; Pornhub, the world’s largest porn site, has 100 billion video views per year[26] and 130 million unique viewers per day.[27]  Twenty-five percent of those users are between the ages of eighteen and twenty-four.[28]  In contrast, only four percent of users are over the age of sixty-five.[29]  Its user traffic exceeds that of both Netflix and Yahoo.[30]  Eighty percent of that traffic comes from mobile devices.[31]  This pervasive medium can be accessed with as few as two clicks from Google’s homepage or an errant link from social media.[32]

While the effects of easily accessible porn on children are still being studied, early research suggests that heavy porn consumption can lead to body shaming, eating disorders, and low self-esteem.[33]  Beyond the mental health effects on children, many other issues with porn access are actively being debated, including Pornhub’s lack of adequate age screening for its users and its blatantly illegal practice of profiting off child pornography.[34]  Big Tech is also finally getting the hint that it has skin in the game, as platforms begrudgingly begin to put age verification safeguards of their own in place.[35]

Reevaluating the factors employed in Pacifica makes clear that the two-prong test originally applied to radio broadcasts is now satisfied by the internet.[36]  The ubiquitous access children have to the internet via smartphones demonstrates that the medium is pervasive.[37]  Children are susceptible to exposure to indecent content because of the ease of access through two quick clicks from Google,[38] automatic video recommendations on social media,[39] and the sheer popularity of porn content amongst their peers who are just a few years older than they are.[40]  Reno’s reliance on the “series of affirmative steps” needed to access illicit content on the internet[41] is outdated because of the content that loads automatically on apps like TikTok and Instagram.[42]  Similarly, most children as young as seven years old have both smartphones and the sophistication to seamlessly access the internet, even though they may not fully understand the ramifications of some of their content choices.[43]

Balancing the government’s interest in limiting children’s exposure to indecency and profanity with the right to express ideas freely online is no easy task.[44]  However, other countries have found ways to regulate the extreme ends of the porn industry and children’s access to such content.[45]  No matter where one stands on the issue, it is abundantly clear that the traditional view of mundane curse words encountered on broadcast television is not compatible with the endless explicit content that is so easily displayed on smartphones.  Both are uniquely pervasive and are accessible to children with minimal effort or “steps.”[46]  One of the two doctrines should evolve. 


[1] See Most Children Own Mobile Phone by Age of Seven, Study Finds, The Guardian (Jan. 29, 2020, 19:01 EST), https://www.theguardian.com/society/2020/jan/30/most-children-own-mobile-phone-by-age-of-seven-study-finds.

[2] See Obscene, Indecent and Profane Broadcasts, FCC, https://www.fcc.gov/consumers/guides/obscene-indecent-and-profane-broadcasts (Jan. 13, 2021) [hereinafter Obscene, Indecent and Profane Broadcasts].

[3] See Rebecca Jakubcin, Comment, Reno v. ACLU: Establishing a First Amendment Level of Protection for the Internet, 9 U. Fla. J.L. & Pub. Pol’y 287, 292 (1998).

[4] See id.; U.S. Const. amend. I.

[5] See Jakubcin, supra note 3, at 288.

[6] See Cohen v. California, 403 U.S. 15, 20 (1971); Chaplinsky v. New Hampshire, 315 U.S. 568, 572, 574 (1942).

[7] See Brandenburg v. Ohio, 395 U.S. 444, 447, 449 (1969); Schenck v. United States, 249 U.S. 47, 52 (1919).

[8] See Miller v. California, 413 U.S. 15, 24 (1973).

[9] See 18 U.S.C. § 1464.

[10] 438 U.S. 726 (1978).

[11] Id. at 729–30.

[12] See id. at 748–50.

[13] Obscene, Indecent and Profane Broadcasts, supra note 2.

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] See Am. C.L. Union v. Reno, 929 F. Supp. 824, 850 (E.D. Pa. 1996), aff’d, Reno v. Am. C.L. Union, 521 U.S. 844, 849 (1997).

[19] Reno, 521 U.S. at 854.

[20] Id. (emphasis added) (quoting Am. C.L. Union, 929 F. Supp. at 845).

[21] Id. at 869.

[22] Id. at 854.

[23] See Byrin Romney, Screens, Teens, and Porn Scenes: Legislative Approaches to Protecting Youth from Exposure to Pornography, 45 Vt. L. Rev. 43, 49 (2020).

[24] See generally Inside TikTok’s Algorithm: A WSJ Video Investigation, Wall St. J. (July 21, 2021, 10:26 AM), https://www.wsj.com/articles/tiktok-algorithm-video-investigation-11626877477 (demonstrating how TikTok’s algorithm pushes users towards more extreme content with recommendations that load automatically without any additional clicks).

[25] Id.

[26] Pornhub, https://www.pornhub.com/press (last visited Nov. 16, 2021).

[27] The Pornhub Tech Review, Pornhub: Insights (Apr. 8, 2021), https://www.pornhub.com/insights/tech-review.

[28] The 2019 Year in Review, Pornhub: Insights (Dec. 11, 2019), https://www.pornhub.com/insights/2019-year-in-review.

[29] Id.

[30] Joel Khalili, These Are the Most Popular Websites Right Now – And They Might Just Surprise You, TechRadar (July 13, 2021), https://www.techradar.com/news/porn-sites-attract-more-visitors-than-netflix-and-amazon-youll-never-guess-how-many.

[31] The Pornhub Tech Review, supra note 27.

[32] See Gail Dines, What Kids Aren’t Telling Parents About Porn on Social Media, Thrive Global (July 15, 2019), https://thriveglobal.com/stories/what-kids-arent-telling-parents-about-porn-on-social-media/.

[33] Id.

[34] Nicholas Kristof, The Children of Pornhub, N.Y. Times (Dec. 4, 2020), https://www.nytimes.com/2020/12/04/opinion/sunday/pornhub-rape-trafficking.html.

[35] See David McCabe, Anonymity No More? Age Checks Come to the Web, N.Y. Times (Oct. 27, 2021), https://www.nytimes.com/2021/10/27/technology/internet-age-check-proof.html.

[36] See FCC v. Pacifica Found., 438 U.S. 726, 748–50 (1978).

[37] See Most Children Own Mobile Phone by Age of Seven, Study Finds, supra note 1.

[38] Dines, supra note 32.

[39] See Inside TikTok’s Algorithm: A WSJ Video Investigation, supra note 24.

[40] See, e.g., The 2019 Year in Review, supra note 28.

[41] See Reno v. Am. C.L. Union, 521 U.S. 844, 854 (1997).

[42] See Inside TikTok’s Algorithm: A WSJ Video Investigation, supra note 24.

[43] See Most Children Own Mobile Phone by Age of Seven, Study Finds, supra note 1.

[44] See Romney, supra note 23, at 97.

[45] See Raphael Tsavkko Garcia, Anti-Porn Laws in Europe Bring Serious Privacy Issues, Yet They’re Fashionable As Ever, CyberNews (Nov. 30, 2020), https://cybernews.com/editorial/anti-porn-laws-in-europe-bring-serious-privacy-issues-yet-theyre-fashionable-as-ever/.

[46] Cf. Reno, 521 U.S. at 854; FCC v. Pacifica Found., 438 U.S. 726, 749–50 (1978).



Weekly Roundup: 2/26-3/2

By: Cara Katrinak & Raquel Macgregor

Carlton & Harris Chiropractic, Inc. v. PDR Network, LLC

In this civil case, Carlton & Harris Chiropractic appealed the district court’s dismissal of its claim against PDR Network for violating the Telephone Consumer Protection Act (TCPA) by sending an unsolicited advertisement via fax. Carlton & Harris argued that the district court erred by failing to defer to a 2006 rule promulgated by the Federal Communications Commission (FCC) interpreting provisions of the TCPA, specifically the term “unsolicited advertisement.” Carlton & Harris further argued that the Hobbs Act required the district court to defer to the FCC’s rule. The Fourth Circuit vacated and remanded the case, holding both that the Hobbs Act deprived the district court of jurisdiction to consider the validity of the FCC rule and that the district court’s reading of the FCC rule conflicted with the plain meaning of the rule’s text.

Singer v. Reali

This appeal and cross-appeal arose from the district court’s dismissal of a securities fraud class action complaint related to the healthcare provider reimbursement practices of defendant TranS1 and four of its officers in connection with TranS1’s AxiaLIF system (the “System”). Named plaintiff Phillip J. Singer alleged that TranS1 and its officers, through the System, enabled surgeons to secure fraudulent reimbursements from health insurers and government-funded healthcare programs. Singer initiated this class action against TranS1 and its officers pursuant to Section 10(b) of the Securities Exchange Act, claiming that TranS1 and its officers concealed the fraudulent reimbursement scheme from the market through false and misleading statements and omissions and that TranS1’s stock price plummeted when the scheme was revealed.

Here, Singer appealed (No. 15-2579) the district court’s dismissal of his complaint for failure to sufficiently plead the material misrepresentation element or the scienter element of his Section 10(b) claim. TranS1 and its officers cross-appealed (No. 16-1019), contending that the district court erred in dismissing their challenge to the loss causation element of Singer’s claim. In reviewing the complaint, the Fourth Circuit held that Singer sufficiently pleaded the misrepresentation and scienter elements because the complaint specified statements made by TranS1 and its officers about its reimbursement practices that support Singer’s claim. In addition, the Court held that Singer also sufficiently pleaded the loss causation element because the complaint alleged losses resulting from “the relevant truth . . . leak[ing] out” about TranS1’s previously concealed fraudulent reimbursement scheme. Accordingly, the Fourth Circuit vacated and remanded No. 15-2579 and affirmed No. 16-1019.

Norfolk Southern Railway Co. v. Sprint Communications Co. L.P.

In this civil case, Sprint Communications appealed the district court’s order granting Norfolk Southern Railway’s motion to confirm an arbitration award. The arbitration arose from a disputed license agreement between the parties. The agreement granted use of Norfolk Southern’s railroad rights of way for Sprint’s fiber optic telecommunications system. The parties disagreed over the amount Sprint owed Norfolk Southern for such continued use and, pursuant to their agreement, hired three appraisers to determine an appropriate amount. On appeal, the parties disputed whether the final decision of the appraisers constituted a “final” arbitration award under the Federal Arbitration Act (FAA). Because the text of the appraisers’ final decision reserved the right to withdraw assent in the future, the award could not be considered “final.” Accordingly, the Fourth Circuit reversed and remanded the case, holding that the arbitration award was not “mutual, final, and definite” as required by the FAA.        

U.S. v. Phillips

In this civil case, claimant Damian Phillips appealed the district court’s holding that he lacked standing to intervene in his brother Byron Phillips’ forfeiture case. Damian sought to intervene after the United States claimed that $200,000 in cash found in a storage unit leased by Byron was subject to forfeiture under 21 U.S.C. § 881(a)(6) for being connected to the “exchange [of] a controlled substance.” Damian claimed that the cash was his life savings and, therefore, was not connected with drugs in violation of the statute. The Fourth Circuit affirmed the district court, holding that, based on the record, Damian lacked the necessary colorable interest in the $200,000 to establish standing.

Janvey v. Romero

The Fourth Circuit affirmed the District Court of Maryland’s decision denying a motion to dismiss a bankruptcy petition. Appellee, Romero, had originally filed a Chapter 7 bankruptcy petition after he was found liable for $1.275 million in connection with a Ponzi scheme. The receiver, Janvey, moved to dismiss the bankruptcy petition for bad faith under 11 U.S.C. § 707(a). The Fourth Circuit was tasked with assessing whether the district court abused its discretion in deciding that Romero’s decision to file for bankruptcy had not risen to the level of “bad faith.” The Fourth Circuit emphasized that the purpose of the Bankruptcy Code is to “grant a fresh start to the honest but unfortunate debtor,” and that dismissing a bankruptcy petition for cause based on bad faith is warranted only “in those egregious cases that entail concealed or misrepresented assets . . . excessive and continued expenditures, [and] lavish life-style.” The Court rejected Appellant’s arguments that filing for bankruptcy in response to a single debt, or despite the ability to pay that debt, constitutes bad faith per se. The Court noted that although Romero had $5.348 million in assets, most of these assets were statutorily exempt. Moreover, Romero was paying his wife’s medical costs, which averaged $12,000 a month for a bacterial brain infection that had left her incapacitated. The Court noted that Romero filed for bankruptcy in part for legitimate reasons, such as his inability to pay his wife’s medical expenses and his inability to find work after the Ponzi scheme was made public. Thus, the Court found that the district court had not abused its discretion in concluding that Romero’s bankruptcy petition did not rise to the level of bad faith.

Hickerson v. Yamaha Motor Corp.

In this case, the Fourth Circuit affirmed the District Court of South Carolina’s decision to exclude the Plaintiff’s expert testimony and enter summary judgment for the Defendant. The Plaintiff had filed suit against Yamaha, alleging that a WaveRunner (a jet ski-style personal watercraft) carried inadequate warnings and had a defective design that resulted in serious internal injuries during a watercraft accident. The WaveRunner itself contained several warnings instructing riders to wear a wetsuit bottom and to carry no more than three passengers at a time. When the accident occurred, a ten-year-old was driving, the Plaintiff was wearing only a bikini bottom, and she was the fourth passenger. The district court excluded the Plaintiff’s expert testimony regarding potential warnings because the expert’s proposals were scientifically untested and thus unreliable under the Daubert standard. The Fourth Circuit offered little independent analysis regarding the exclusion of the expert testimony, but the Court agreed with the district court’s reasoning under the abuse of discretion standard of review. Moreover, regarding the Plaintiff’s defective design claims, the Court noted that in South Carolina, design defects can be “cured” by adequate product warnings. The Court found that the warnings were adequate as a matter of law, and thus the district court did not err in granting summary judgment on the Plaintiff’s design defect claims.

Elliott v. American States Insurance Co.

This appeal arose from Plaintiff Elliott’s claim against her automobile insurer. In 2013, Elliott was in an automobile accident that left her with serious bodily injuries. Because the Plaintiff’s insurance coverage through the Defendant was capped at $100,000 and the Plaintiff claimed more than $200,000 in damages, her recovery was insufficient to cover her expenses. The Plaintiff then initiated an action to recover damages, first against Jones, the other driver in the accident, and then against her insurer. The District Court for the Middle District of North Carolina ultimately denied the Plaintiff’s motion to remand the case back to the Superior Court (where she originally filed the case) and granted the Defendant’s 12(b)(6) motion to dismiss for failure to state a claim. On appeal to the Fourth Circuit, the Plaintiff raised three claims: (1) that the Defendant’s filing for removal to the district court was untimely; (2) that the district court erred in determining the parties were diverse, and thus that subject matter jurisdiction did not exist in federal court; and (3) that the district court erred in granting the Defendant’s motion to dismiss for failure to state a claim. On the Plaintiff’s first claim, the Court concluded that the original service of process was made on a “statutory agent,” not an agent appointed by the Defendant. Thus, the thirty-day period to file the notice of removal did not start until the Defendant actually received a copy of the complaint, rather than when process was delivered to the statutory agent. Consequently, the Defendant filed its notice of removal within the allotted time period. As to the second claim, the Court held that the “direct action” variation on diversity jurisdiction under § 1332(c)(1) does not include an insured’s suit against his or her own insurer for breach of the insurance policy terms; thus, the parties were diverse. Lastly, the Court rejected the Plaintiff’s claims regarding the Defendant’s motion to dismiss on multiple grounds, including that the Defendant had no obligation to settle Elliott’s claims until after a judgment was entered against the other motorist, Jones.