By Kyle Brantley

It’s that time of day.  Your child is positioning the antenna just right in order to catch their favorite broadcast TV show.  No, that doesn’t sound quite right.  They are actually dialing up the old FM radio for their favorite weekly jamboree!  No, that’s definitely not happening.  Instead, kids today consume their entertainment through mobile devices—a recent study estimates that 90 percent of children have cell phones by the age of eleven and that on average they spend over three hours on that device per day.[1]

Given the realities of how today’s children access content, one would think that the legal doctrine for policing explicit TV and radio content would morph to accommodate the internet.  However, a double standard is currently in place.  A high bar for obscene, indecent, and profane content exists on the broadcast airwaves.[2]  In contrast, there is no discernible regulation of expression on the internet.[3]

The lack of internet content policing stems from the First Amendment right to freedom of expression.[4]  While the First Amendment sets a broad baseline of protection,[5] the government limits what can be said in a few key areas, including (but not limited to) fighting words,[6] incitement,[7] obscenity,[8] and indecent speech that invades the privacy of the home.[9]  The overarching authority for the latter still has its roots in FCC v. Pacifica Foundation.[10]  In Pacifica, a New York radio station aired a recorded monologue by the comedian George Carlin entitled Filthy Words, in which he ran through all of the curse words that he thought were disallowed on the public airwaves.[11]  The Supreme Court took issue with the segment airing in the middle of the afternoon and homed in on two overriding motivators for censoring the curse words it contained: (1) the uniquely pervasive presence of the broadcast airwaves, and (2) the susceptibility of children to exposure to the content.[12]

Those overarching reasons delineated in Pacifica still form the basis for FCC guidance that broadcast providers must follow.[13]  The FCC currently prohibits indecent content that “portrays sexual or excretory organs” and profane content like “‘grossly offensive’ language that is considered a public nuisance.”[14]  Notably, these rules only apply to the major broadcast TV stations (e.g., ABC, NBC, FOX, and PBS)[15] and FM/AM radio from 6:00 a.m. to 10:00 p.m.[16]  Cable and satellite TV are excluded since those are pay-for-service options.[17]

Nearly two decades later, the federal government saw a need to implement baseline measures for explicit content that children could access on the internet when it included specific protection provisions for “indecent and patently offensive communications” within the Communications Decency Act.[18]  The Supreme Court struck down that portion of the act in Reno v. ACLU[19] when it reasoned that, “[u]nlike communications received by radio or television, ‘the receipt of information on the Internet requires a series of affirmative steps more deliberate and directed than merely turning a dial.  A child requires some sophistication and some ability to read to retrieve material and thereby to use the Internet unattended.’”[20]  The Court then dug in its heels by saying “the Internet is not as ‘invasive’ as radio or television”[21] and that “users seldom encounter [sexually explicit] content accidentally.”[22]

Times have changed since the Court decided Reno in 1997.  Today, internet access is often unrestricted, and one can easily stumble upon far more sexually explicit material than could be fathomed on the traditional broadcast airwaves.[23]  How many deliberate and affirmative steps does it take for a TikTok video to pop up in front of your face?[24]  How about an Instagram post as you scroll down your home page?  What about a tailored ad on the side of an otherwise mundane web page?  Apps like TikTok and Instagram automatically present an endless stream of new revealing and potentially vulgar images and sounds—a new video simply appears after the previous one ends.[25]

Another example of a potential hazard that a child can stumble upon is pornography.  Porn’s online proliferation has been well documented; Pornhub, the world’s largest porn site, has 100 billion video views per year[26] and 130 million unique viewers per day.[27]  Twenty-five percent of those users are between the ages of eighteen and twenty-four.[28]  In contrast, only 4 percent of users are over the age of sixty-five.[29]  Its user traffic exceeds that of both Netflix and Yahoo.[30]  Eighty percent of that traffic comes from mobile devices.[31]  This pervasive medium can be accessed with as few as two clicks from Google’s homepage or an errant link from social media.[32]

While the effects of easily accessible porn on children are still being studied, early research suggests that heavy porn consumption can lead to body shaming, eating disorders, and low self-esteem.[33]  Beyond the mental effects on children, many other issues with porn access are actively being debated, including Pornhub’s lack of adequate age screening for its users and its blatantly illegal profiting off child pornography.[34]  Big Tech is also finally getting the hint that it has skin in the game, as platforms begrudgingly begin to implement age-verification safeguards of their own.[35]

When reevaluating the factors employed in Pacifica, it becomes clear that the two-prong test originally applied to radio broadcasts is now satisfied on the internet.[36]  The ubiquitous access children have to the internet via smartphones demonstrates that the medium is pervasive.[37]  Children are susceptible to exposure to indecent content because of the ease of access through two quick clicks from Google,[38] automatic video recommendations on social media,[39] and the sheer popularity of porn content among peers just a few years older than they are.[40]  Reno’s reliance on the “series of affirmative steps” needed to access illicit content on the internet[41] is outdated because of the content that loads automatically on apps like TikTok and Instagram.[42]  Similarly, most children as young as seven years old have both smartphones and the sophistication to seamlessly access the internet, even though they may not fully understand the ramifications of some of their content choices.[43]

Balancing the government’s interest in limiting children’s exposure to indecency and profanity with the right to express ideas freely online is no easy task.[44]  However, other countries have found ways to regulate the extreme ends of the porn industry and children’s access to such content.[45]  No matter where one stands on the issue, it is abundantly clear that the traditional view of mundane curse words encountered on broadcast television is not compatible with the endless explicit content that is so easily displayed on smartphones.  Both are uniquely pervasive and are accessible to children with minimal effort or “steps.”[46]  One of the two doctrines should evolve. 


[1] See Most Children Own Mobile Phone by Age of Seven, Study Finds, The Guardian (Jan. 29, 2020, 19:01 EST), https://www.theguardian.com/society/2020/jan/30/most-children-own-mobile-phone-by-age-of-seven-study-finds.

[2] See Obscene, Indecent and Profane Broadcasts, FCC, https://www.fcc.gov/consumers/guides/obscene-indecent-and-profane-broadcasts (Jan. 13, 2021) [hereinafter Obscene, Indecent and Profane Broadcasts].

[3] See Rebecca Jakubcin, Comment, Reno v. ACLU: Establishing a First Amendment Level of Protection for the Internet, 9 U. Fla. J.L. & Pub. Pol’y 287, 292 (1998).

[4] See id.; U.S. Const. amend. I.

[5] See Jakubcin, supra note 3, at 288.

[6] See Cohen v. California, 403 U.S. 15, 20 (1971); Chaplinsky v. New Hampshire, 315 U.S. 568, 572, 574 (1942).

[7] See Brandenburg v. Ohio, 395 U.S. 444, 447, 449 (1969); Schenck v. United States, 249 U.S. 47, 52 (1919).

[8] See Miller v. California, 413 U.S. 15, 24 (1973).

[9] See 18 U.S.C. § 1464.

[10] 438 U.S. 726 (1978).

[11] Id. at 729–30.

[12] See id. at 748–50.

[13] Obscene, Indecent and Profane Broadcasts, supra note 2.

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] See Am. C.L. Union v. Reno, 929 F. Supp. 824, 850 (E.D. Pa. 1996), aff’d sub nom. Reno v. Am. C.L. Union, 521 U.S. 844, 849 (1997).

[19] Reno, 521 U.S. at 854.

[20] Id. (emphasis added) (quoting Am. C.L. Union, 929 F. Supp. at 845).

[21] Id. at 869.

[22] Id. at 854.

[23] See Byrin Romney, Screens, Teens, and Porn Scenes: Legislative Approaches to Protecting Youth from Exposure to Pornography, 45 Vt. L. Rev. 43, 49 (2020).

[24] See generally Inside TikTok’s Algorithm: A WSJ Video Investigation, Wall St. J. (July 21, 2021, 10:26 AM), https://www.wsj.com/articles/tiktok-algorithm-video-investigation-11626877477 (demonstrating how TikTok’s algorithm pushes users towards more extreme content with recommendations that load automatically without any additional clicks).

[25] Id.

[26] Pornhub, https://www.pornhub.com/press (last visited Nov. 16, 2021).

[27] The Pornhub Tech Review, Pornhub: Insights (Apr. 8, 2021), https://www.pornhub.com/insights/tech-review.

[28] The 2019 Year in Review, Pornhub: Insights (Dec. 11, 2019), https://www.pornhub.com/insights/2019-year-in-review.

[29] Id.

[30] Joel Khalili, These Are the Most Popular Websites Right Now – And They Might Just Surprise You, TechRadar (July 13, 2021), https://www.techradar.com/news/porn-sites-attract-more-visitors-than-netflix-and-amazon-youll-never-guess-how-many.

[31] The Pornhub Tech Review, supra note 27.

[32] See Gail Dines, What Kids Aren’t Telling Parents About Porn on Social Media, Thrive Global (July 15, 2019), https://thriveglobal.com/stories/what-kids-arent-telling-parents-about-porn-on-social-media/.

[33] Id.

[34] Nicholas Kristof, The Children of Pornhub, N.Y. Times (Dec. 4, 2020), https://www.nytimes.com/2020/12/04/opinion/sunday/pornhub-rape-trafficking.html.

[35] See David McCabe, Anonymity No More? Age Checks Come to the Web, N.Y. Times (Oct. 27, 2021), https://www.nytimes.com/2021/10/27/technology/internet-age-check-proof.html.

[36] See FCC v. Pacifica Found., 438 U.S. 726, 748–50 (1978).

[37] See Most Children Own Mobile Phone by Age of Seven, Study Finds, supra note 1.

[38] Dines, supra note 32.

[39] See Inside TikTok’s Algorithm: A WSJ Video Investigation, supra note 24.

[40] See, e.g., The 2019 Year in Review, supra note 28.

[41] See Reno v. Am. C.L. Union, 521 U.S. 844, 854 (1997).

[42] See Inside TikTok’s Algorithm: A WSJ Video Investigation, supra note 24.

[43] See Most Children Own Mobile Phone by Age of Seven, Study Finds, supra note 1.

[44] See Romney, supra note 23, at 97.

[45] See Raphael Tsavkko Garcia, Anti-Porn Laws in Europe Bring Serious Privacy Issues, Yet They’re Fashionable As Ever, CyberNews (Nov. 30, 2020), https://cybernews.com/editorial/anti-porn-laws-in-europe-bring-serious-privacy-issues-yet-theyre-fashionable-as-ever/.

[46] Cf. Reno, 521 U.S. at 854; FCC v. Pacifica Found., 438 U.S. 726, 749–50 (1978).


Post image by ExpectGrain on Flickr.

By Sarah Walton

On January 29, 2016, the Fourth Circuit issued a published opinion in the civil case of Central Radio Company, Inc. v. City of Norfolk. The Fourth Circuit dismissed the appeal in part, affirmed in part, reversed in part, and remanded the case to the district court for a determination regarding damages.

The Origins of the Dispute

Central Radio Company (“CRC”) is a radio manufacturing and repair business located in Norfolk, Virginia. In April 2010, the Norfolk Regional Housing Authority (“NRHA”) initiated a condemnation proceeding against CRC and other adjacent landowners. The NRHA initiated the proceeding so that it could transfer the land to Old Dominion University. The action was initially dismissed, but the NRHA appealed. While the appeal was pending, CRC put up a sign on the side of its building that said, “50 YEARS ON THIS STREET / 78 YEARS IN NORFOLK / 100 WORKERS / THREATENED BY / EMINENT DOMAIN!” An employee of Old Dominion University complained about the sign, and a local zoning official for the City of Norfolk (“the City”) informed CRC that the banner violated Norfolk’s former sign code. The official cited CRC for displaying a sign that was too large and for failing to obtain a certificate before installing the sign. CRC subsequently filed an action to enjoin the City from enforcing the sign code.

The City’s Former Sign Code

The former sign code, which has since been amended, applied to “any sign within the city which is visible from any street, sidewalk or public or private common open space,” but did not include any “flag or emblem of any nation, organization of nations, state, city, or any religious organization,” or any “works of art which in no way identify or specifically relate to a product or service.” The code also limited the size of any sign falling within that definition; the permissible size depended on whether the sign was classified as “temporary” or “freestanding.”

Further, a sign that fell within the code’s definition had to be approved before being displayed. The individual seeking to display the sign had to apply for a certificate from the city, which in turn would review the application for compliance with the code. If the city determined that the sign complied with the code, it would issue a certificate to the individual seeking to display the sign.

The Supreme Court Vacates the Fourth Circuit’s Holding and Remands for Further Consideration

CRC alleged that the sign code was unconstitutional, arguing that it exempted certain flags or emblems, but not others. CRC also argued that the former sign code was a content-based regulation that warranted strict scrutiny. The district court disagreed. It held that the sign code was content neutral and applied intermediate scrutiny. In doing so, the district court concluded that the City’s justification for the regulation, which was to ensure that drivers and pedestrians would not be distracted when they viewed the signs from the road, satisfied intermediate scrutiny. CRC appealed, and the Fourth Circuit affirmed the district court’s holding. CRC then appealed to the Supreme Court, which vacated the Fourth Circuit’s ruling in light of its decision in Reed v. Town of Gilbert and remanded the case.

The Fourth Circuit Holds that the Sign was Content-Based and Applies Strict Scrutiny

On remand, CRC argued that the sign code was a content-based restriction on speech and the Fourth Circuit agreed. The court reasoned that the regulation applied to secular flags and banners, but did not apply to religious flags and banners, thereby making it a content-based restriction. Next, the court applied strict scrutiny and tested whether the City’s justification was compelling enough to restrict CRC’s speech. The court pointed out that neither it nor the Supreme Court had ever held that traffic safety was compelling enough to restrict speech. As a result, the Fourth Circuit reversed the district court on this issue.

The Fourth Circuit Dismisses CRC’s Claim for Prospective Relief and Affirms the District Court’s Holding Regarding CRC’s Discrimination Claim

CRC raised two other issues on appeal. First, CRC requested prospective relief based upon allegations of unconstitutional restrictions on speech. The court recognized that because the legislature amended the sign code and was unlikely to change it back to its prior form, the claim was moot and should be dismissed.

Second, CRC argued that the City applied the sign code in a discriminatory manner. The Fourth Circuit disagreed, holding that there was not enough evidence to show that the City acted with a discriminatory intent. As a result, the Fourth Circuit affirmed the district court’s holding on this issue.

The Fourth Circuit Dismisses in Part, Affirms in Part, Reverses in Part, and Remands for Further Proceedings

As a result, the Fourth Circuit dismissed CRC’s claim for prospective relief, affirmed the district court’s ruling on CRC’s discrimination claim, reversed the district court on the content-based versus content-neutral issue, and remanded the case for a determination of damages.

By Taylor Ey

On August 6, 2015, the Fourth Circuit issued its unanimous, published opinion in the civil case of Cahaly v. LaRosa.  This case involves Mr. Robert Cahaly’s (“Plaintiff”) constitutional challenge to South Carolina’s anti-robocall statute (S.C. Code Ann. § 16-17-446(A)), asserting that the statute violates the First Amendment.  After applying the Supreme Court’s 2015 test from Reed v. Town of Gilbert, the Fourth Circuit decided that South Carolina’s statute did not survive strict scrutiny.  However, it also decided that Cahaly lacked standing to bring his other constitutional challenges.  Cahaly also sought damages from the law enforcement officials who arrested him, Paul C. LaRosa, III, and Reginald I. Lloyd (“Defendants”).  The Court affirmed the district court’s grant of summary judgment in favor of the Defendants.

Applying Reed to Determine Content Neutrality

The Fourth Circuit applied the test in Reed to determine whether the statute’s restrictions on speech were content based or content neutral.  The statute prohibits robocalls that are “for the purpose of making an unsolicited consumer telephone call” or are “of a political nature including, but not limited to, calls relating to political campaigns.”

Under Reed, as a threshold inquiry, courts assess whether the law is content neutral on its face.  Next, if the law is facially neutral, courts ask whether it cannot be justified without reference to the content of the regulated speech or was adopted by the government because of disagreement with the message the speech conveys.

Applying Reed, the Fourth Circuit found that the statute is content based on its face: it forbids calls with a consumer or political message but does not apply to calls made for any other purpose.

Because the Regulation Is Content Based, the Court Applied a Strict Scrutiny Analysis  

To survive strict scrutiny, the government must prove that the restriction furthers a compelling government interest and is narrowly tailored to further that interest.  In this case, South Carolina asserted that its interest was to “protect residential privacy and tranquility from unwanted and intrusive robocalls.”  The Fourth Circuit assumed that this interest was compelling.  However, it held that the statute was not narrowly tailored because less restrictive means could serve that interest, and thus the statute was unconstitutional.  The Court further stated that the statute was underinclusive, as it restricted only consumer and political robocalls while permitting those made for any other purpose.

Plaintiff Lacked Standing to Assert Compelled-Speech Challenge

Additionally, Cahaly raised a compelled-speech challenge, and the Defendants appealed the district court’s ruling on it.  Defendants argued that Cahaly did not suffer an “injury in fact” and therefore did not have standing to challenge the exceptions to the statute.  The Fourth Circuit agreed because Cahaly was never charged with a violation of the statute.  Because the district court had ruled for Cahaly, holding that the exceptions were unconstitutional, the Fourth Circuit vacated the district court’s judgment on this issue.

The Fourth Circuit Affirmed the District Court’s Grant of Summary Judgment

The arresting officer had probable cause to arrest Cahaly for violating the anti-robocall statute.  Officer LaRosa had six witnesses who described the robocalls, a recording of a phone call, and an investigation that connected the phone number to Cahaly.  This evidence was sufficient to establish probable cause, and the Court therefore affirmed the grant of summary judgment.