Wake Forest Law Review

By Dan Menken

Today, in the civil case of Covey v. Assessor of Ohio County, a published opinion, the Fourth Circuit reversed the district court’s dismissal of Christopher and Lela Covey’s suit against government officials for entering the curtilage of their house without a search warrant.

Question of Fourth Amendment Protection From Unreasonable Government Intrusion

The Court was asked to decide whether government officials violated the Coveys’ Fourth Amendment right to protection from unreasonable government intrusion when they entered the curtilage of the Coveys’ home in search of marijuana without a warrant.

Government Tax Assessor Relayed Information to Police Regarding Marijuana Plants

On October 21, 2009, a field deputy for the tax assessor of Ohio County, West Virginia, entered the Coveys’ property to collect data to assess the value of the property for tax purposes. The tax assessor entered the Coveys’ property despite seeing “No Trespassing” signs, in violation of West Virginia law. When searching the property, the tax assessor found marijuana on the Coveys’ walk-out basement patio. The tax assessor then contacted the police.

When the police arrived, they entered the curtilage of the Coveys’ residence and proceeded to the area where the marijuana was located. As they were searching the property, they encountered Mr. Covey. The officers detained Mr. Covey and continued their search. The officers then waited several hours to obtain a warrant to search the house. During that time, Mrs. Covey returned home and was warned that she would be arrested if she entered the house, after which she left the premises. Upon returning an hour later, Mrs. Covey was seized and interrogated. After the police received the search warrant, the Coveys were arrested and jailed overnight.

On March 30, 2010, Mr. Covey pleaded guilty in state court to manufacturing marijuana in exchange for the government’s promise not to initiate prosecution against Mrs. Covey. He was sentenced to home confinement for a period of not less than one year and not more than five years. On October 20, 2011, the Coveys brought this suit pro se. The claims, brought under 42 U.S.C. § 1983 and Bivens, alleged that several defendants violated the Coveys’ Fourth Amendment rights by conducting an unreasonable search. The district court dismissed the Coveys’ claims, concluding that none of the defendants violated the Fourth Amendment. This appeal followed.

Fourth Amendment Protects Curtilage of Home

The Court reviewed the district court’s grant of a motion to dismiss de novo. To survive a motion to dismiss, a plaintiff must “state a claim to relief that is plausible on its face.” Ashcroft v. Iqbal. A claim is plausible if “the plaintiff pleads factual content that allows the court to draw the reasonable inference that the defendant is liable for the misconduct alleged.” Id.

According to Oliver v. United States (1984), the Fourth Amendment protects homes and the “land immediately surrounding and associated” with homes, known as curtilage, from unreasonable government intrusions. Probable cause is the appropriate standard for searches of the curtilage, and warrantless searches of curtilage are unreasonable. The knock-and-talk exception to the Fourth Amendment’s warrant requirement allows an officer, without a warrant, to approach a home and knock on the door, just as any ordinary citizen could do. An officer may bypass the front door when circumstances reasonably indicate the officer might find the homeowner elsewhere on the property. The right to knock and talk, however, does not entail a right to conduct a general investigation on a home’s curtilage.

The Complaint Presented Plausible Claims For Violations of the Fourth Amendment

Properly construed in the Coveys’ favor, the complaint alleges that the officers saw Mr. Covey only after they entered the curtilage. Thus, applying the Rule 12(b)(6) standard, the Court found that the Coveys plausibly alleged that the officers violated their Fourth Amendment rights by entering and searching the curtilage of their home without a warrant. The district court erred by accepting the officers’ account of events, in which they stated that they saw Mr. Covey prior to entering the curtilage.

Turning to the tax assessor, the Court believed that his entry onto the property, although illegal, was not a per se violation of the Fourth Amendment. In this case, the Court believed that the governmental interest in the search for tax purposes was minimal, while the Coveys’ privacy interest was significant. Therefore, the Fourth Circuit held that the Coveys pleaded a plausible claim that the tax assessor conducted an unreasonable search of their home and curtilage.

Defendants’ Affirmative Defenses

According to Ashcroft v. al-Kidd (2011), qualified immunity “shields federal and state officials from money damages unless a plaintiff pleads facts showing (1) that the official violated a statutory or constitutional right, and (2) that the right was ‘clearly established’ at the time of the challenged conduct.” As to the police officers, the Court stated that they should have been aware that a warrantless search of the home, absent consent or exigency, is presumptively unconstitutional. Additionally, the Court noted that the Fourth Circuit has, for over a decade, recognized that the curtilage of the home is entitled to Fourth Amendment protection. The Court felt that the tax assessor presented a closer case. Although no case law spoke to a similar set of facts, the tax assessor should have been aware that he was violating a constitutional right by searching the property, and the Court therefore ruled that he was not entitled to qualified immunity.

Finally, the defendants claimed that the Coveys’ § 1983 and Bivens claims are barred by Heck v. Humphrey (1994). There are two requirements for Heck to bar the Coveys’ claims. First, “a judgment in favor of the plaintiff [must] necessarily imply the invalidity of [a plaintiff’s] conviction or sentence.” Second, the claim must be brought by a claimant who is either (i) currently in custody or (ii) no longer in custody because the sentence has been served, but nevertheless could have practicably sought habeas relief while in custody. The Court concluded that Mr. Covey’s claims did not necessarily imply the invalidity of his conviction and thus were not necessarily barred by Heck. The Court remanded to the district court for further analysis under Heck.

Reversed and Remanded

Thus, the Fourth Circuit reversed the district court’s grant of dismissal and remanded the case for further proceedings.

By Michael Mitchell

Today, in Lynn v. Monarch Recovery Management, Inc., the Fourth Circuit affirmed the summary judgment granted to Kevin Lynn for Monarch Recovery Management’s violation of the Telephone Consumer Protection Act (“TCPA”). On appeal, the Court rejected Monarch’s argument that, as a debt collector, it was exempt from the TCPA.

Under 47 U.S.C. § 227, the TCPA prohibits “making any call . . . [using] an artificial or prerecorded voice . . . to any telephone number assigned to a . . . cellular telephone service . . . or any service for which the called party is charged for the call.”   The United States District Court for the District of Maryland, at Baltimore, found that Monarch’s calls to Lynn violated the TCPA because Lynn was individually charged for each call.  Monarch made these calls to Lynn in its capacity as a debt collection company.

Affirming the district court’s grant of summary judgment to Lynn in an unpublished per curiam decision, the Fourth Circuit rejected Monarch’s attempt to avoid liability under the call-charged provision of the TCPA.  Specifically, Monarch argued that the FCC’s regulation excepted debt collectors from the TCPA’s prohibition on “call[s] to any residential telephone line using an artificial or prerecorded voice to deliver a message.”

The Court relied on its review of legislative intent in denying Monarch’s assertion that it was exempt from the call-charged provision of the TCPA.  Citing Clodfelter v. Republic of Sudan, 720 F.3d 199, the Court held that Congress did not intend for companies like Monarch to use the TCPA to limit their liability.  Thus, the Fourth Circuit has maintained civil liability for debt collectors under the call-charged provision of the TCPA.

By Joshua P. Bussen

Today, in United States v. Mitchell, the Fourth Circuit, in a per curiam opinion, affirmed the conviction of Sidney Mitchell for unlawful possession of a firearm by a felon. Mitchell entered a conditional plea of guilty in the Middle District of North Carolina, reserving his right to appeal the judgment of the district court. Mitchell contended that the district court erred in denying his motion to suppress evidence of a firearm that was found while police were conducting a search of his vehicle. Mitchell was sentenced to twenty-six months in prison.

In the waning hours of sunlight on November 20, 2012, a North Carolina police officer stopped Mitchell’s car on a suspicion that the tint on the vehicle’s windows was darker than allowed under North Carolina law. While performing a test to gauge the level of tint on Mitchell’s windows—a process that involves placing a device on the inside of the vehicle—the officer claimed he noticed the smell of “burnt marijuana.” Though Mitchell denied smoking marijuana, he consented to a search of his person. After searching Mitchell, the officer turned to the vehicle, discovering a small amount of marijuana resin and a gun on the driver’s side floorboard.

In the Middle District of North Carolina, Mitchell moved to suppress the firearm as the product of an improper search and seizure. The district court found that the tint on Mitchell’s windows gave the officer reasonable cause to pull the car over, and that the smell of burnt marijuana subsequently provided probable cause to search the vehicle. On appeal, Mitchell did not question the officer’s motivation for detaining the vehicle, but disputed the “lawfulness of the subsequent search of the [inside of the] car.”

The Fourth Circuit, relying on United States v. Scheetz, 293 F.3d 175, 184 (4th Cir. 2002), held that the odor of marijuana emanating from a car provided sufficient probable cause to search the vehicle. Mitchell’s final argument that the officer’s credibility should be questioned fell on deaf ears; the circuit judges were not willing to disturb the factual findings of the district court because “the district court is so much better situated to evaluate these matters.”

By Steven I. Friedland

“The world isn’t run by weapons anymore, or energy, or money.  It’s run by little ones and zeroes, little bits of data.  It’s all just electrons.”[1]

We live in an era of mass surveillance. Advertisers, corporations and the government engage in widespread data collection and analysis, using such avenues as cell phone location information, the Internet, camera observations, and drones.  As technology and analytics advance, mass surveillance opportunities continue to grow.[2]

The growing surveillance society is not necessarily harmful[3] or unconstitutional.  The United States must track people and gather data to defend against enemies and malevolent actors.  Defenses range from stopping attempts to breach government computers and software programs,[4] to identifying and thwarting potential terroristic conduct and threats at an embryonic stage.

Yet, without lines drawn to limit mass data gathering, especially in secret, unchecked government snooping likely will continue to expand.  John Kerry, the sitting Secretary of State, even recently acknowledged that the government has “sometimes reached too far” with its surveillance.[5] The stakes for drawing lines demarcating privacy rights and the government’s security efforts have never been higher or more uncertain.

This Article argues that the forgotten Third Amendment, long in desuetude, should be considered to harmonize and intersect with the Fourth Amendment to potentially limit at least some mass government surveillance.  While the Fourth Amendment has been the sole source of search and seizure limitations, the Third Amendment should be added to the privacy calculus,[6] because it provides a clear allocation of power between military and civil authorities and creates a realm of privacy governed by civil law.

Consequently, in today’s digital world it would be improper to read the words of the Third Amendment literally, merely as surplusage. Instead, the Amendment’s check on government tyranny should be viewed as restricting cybersoldiers from focusing surveillance instrumentalities[7] on and around private residences or businesses in an intrusive way—or using proxies to do so—that would serve as the functional equivalent of military quartering in the civil community.

I.  Mass Surveillance

Imagine an America with continual domestic drones collecting camera and cell phone surveillance of every person in a particular residential subdivision, business headquarters, or city high-rise building.  The surveillance would be mostly secret but “in public,” capturing people sitting on rocking chairs on their front porches, unloading bags of groceries from their cars, opening their wallets to pay bills, and anything visible through windows in private residences and businesses.  People who go to sporting events or the supermarket would have their faces matched to an existing database. The metadata from Internet use, cell phone location data and other sources, including hyper-local observations, would be fed into computers for complex analysis and combined with other surveillance information.[8]  This information, all gathered and utilized outside the private space protected by the physical walls and doors of houses, would present a fairly intimate picture of these individuals over time, creating in essence a virtual window to what is occurring within the house or building, as well as without.[9]

Such a day is not far off. Drones and robots are currently being employed domestically in the skies,[10] on land, and in the seas[11] for various purposes, although apparently not yet on a continual and widespread basis. Yet, expansion of their use seems inevitable.[12] While most unmanned aircraft systems fly high overhead, out of sight, as more information is released and people look more carefully, we will know they are there. The government also is developing the Biometric Optical Surveillance System (“BOSS”), which will have tremendous capabilities for identifying people from distances of up to 100 meters.  This system was scheduled for testing at a public hockey game in the State of Washington in 2013.[13]  To supplement the information acquired directly, the government obtains considerable amounts of information through the consent of third parties.[14]

While surveillance is not overly intrusive when deployed in public places, where being watched can be expected, it still can be dangerous.[15] Surveillance, when taken as a whole with information and data gathering, can form a mosaic of intrusion in a manner similar to that described by Justices Alito and Sotomayor in their concurrences in the GPS tracking device case United States v. Jones.[16]  Pursuant to this “mosaic theory,” a privacy violation does not require a physical trespass.  One commentator noted the following:

Today’s police have to follow hunches, cultivate informants, subpoena ATM camera footage. . . . Tomorrow’s police . . . might sit in an office or vehicle as their metal agents methodically search for interesting behavior to record and relay. Americans can visualize and experience this activity as a physical violation of their privacy.[17]

Significantly, surveillance also is an expression of power—an accumulation of data that can be used against persons, even creating that intimate picture of what occurs inside a house when the cybersleuth never actually sets foot in it. As another commentator has observed about possible power abuses, “We cannot have a system, or even the appearance of a system, where surveillance is secret, or where decisions are made about individuals by a Kafkaesque system of opaque and unreviewable decision makers.”[18]

II.  The Third Amendment’s Place In Constitutional Orthodoxy

“[N]o Soldier shall, in time of peace be quartered in any house, without the consent of the Owner, nor in time of war, but in a manner to be prescribed by law.”[19]

A.     Origins and Interpretations

The Third Amendment might have an obscure[20] and obsolete[21] place in constitutional law orthodoxy, yet it draws on a rich history. The bright-line Amendment[22] traces its origins to pre-Revolutionary War England, where multiple abuses by the king in quartering soldiers, the royal entourage, and their horses in private residences led to laws prohibiting quartering in England.[23] These laws were enacted in part to avoid maintaining a standing army, especially during peacetime.[24]  For example, in 1689, the British Parliament enacted the Mutiny Act, which outlawed the quartering of troops in private homes without the owner’s consent.[25] A standing army was thought to provide a slippery slope to tyranny, and it was the confluence of military with civil authority that was the real problem, not simply the taking of private resources by the King.

Continued quartering abuses in the colonies led to the adoption of the Third Amendment. Patrick Henry argued for the amendment because it offered rule by civil authority, not military force,[26] as did Samuel Adams, who objected to soldiers quartered “in the body of a city” and not just in houses.[27]

Perhaps the amendment’s desuetude is attributable in part to the fact that the Supreme Court has addressed it only in passing, such as in Griswold v. Connecticut,[28] and that it has received just one significant direct judicial interpretation, Engblom v. Carey,[29] a 1982 Second Circuit Court of Appeals case. In Engblom, the court was confronted with a claim by two correction officers that their Third Amendment rights were infringed when the State of New York quartered national guardsmen in their dormitory-style residences during a strike by the guards at an upstate New York prison.[30]  The guards were renting their rooms from the State.[31]

The court first applied the Third Amendment to the State of New York through the incorporation doctrine of the Fourteenth Amendment.[32]  Significantly, the court viewed several of the key terms in the amendment expansively. The court considered the national guardsmen to be “soldiers” and held that the Third Amendment applied to the guardsmen as “tenants,” even if they did not own their quarters, despite the express language in the amendment.[33]

B.     The Relationship Between the Third and Fourth Amendments

The Second Circuit in Engblom also used an analysis borrowed from the Fourth Amendment, setting forth a standard of a “legitimate expectation of privacy” to determine if Third Amendment rights were triggered.[34] It noted that the amendment’s objective was to protect the fundamental right to privacy in conjunction with the use and enjoyment of property rights.[35]

The Engblom analysis at least implicitly recognized the interlocking nature of the Third and Fourth Amendments and the primary role of the Fourth Amendment as the privacy standard bearer. As one noted commentator observed, “If the Fourth Amendment had never been enacted, the Third Amendment might have provided the raw material for generating something like an anti-search and seizure principle.”[36]

Constitutionally, courts have used the Fourth Amendment to protect against government snooping on others, but the Fourth Amendment has been strapped with textual limits, given its language protecting only against unreasonable, not all, searches and seizures, and interpretive limits authored by a reticent Supreme Court that has stuck by rules created in predigital cases.[37] Also, while the Fourth Amendment protects against United States government spying, it does not apply to such conduct by foreign governments, which can and do swap data with the United States,[38] or to apparent data swaps with thousands of technology, finance, and manufacturing companies.[39]

Preoccupation with Fourth Amendment doctrine, combined with a Gresham’s Law style of constitutional application in which general principles end up marginalizing specific provisions, helps explain the Third Amendment’s disuse.  A contextual interpretation of the amendment in the digital era could offer a significant link in a system of digital checks and balances.

III.  Interpretation

The Third Amendment’s relevancy to surveillance privacy depends on its interpretation,[40] both in terms of its themes and words. The amendment’s broad themes resonate in the world of “Big Data” and the Internet. The amendment provides a bright line allocation of power, with a clear distinction that limits the military and protects homes from intrusion without consent.  As evidenced by Due Process, Equal Protection, and other constitutional doctrines such as the Eighth Amendment, the Court often takes into account evolving facts and cultural transformations over time.  A more specific analysis of each component of the Amendment follows.

A.     War and Peace

The wartime/peacetime distinction in the amendment provides a useful contrast about the expansiveness of government power at different times. Compared to the Fourth Amendment, the Third Amendment’s framers provided a clear line as to what is reasonable in times of war or peace.

B.     Soldiers

History is instructive. Early English case law reflects the concern over forced accommodations and board not only by soldiers, but also by the royal court and its entourage.[41] The prohibition extended to the soldiers’ instrumentalities, namely their horses.[42]  In the late 1700s, soldiers honorably fought in uniform generally within full view of the enemy. Times have changed. In Engblom, national guardsmen were considered soldiers, even though they were defending a domestic prison.  Today, the definition would certainly include cyber agents, military personnel who are paid to hack and disrupt another country’s software and hardware and to protect our own. Instead of horses, these cyber soldiers use codes or metal instrumentalities to invade others’ cyber spaces.[43] Using stealth and remote access to obtain and crunch data is the new face of warfare; these soldiers disrupt and disable various aspects of a country to keep it off balance and vulnerable. For example, deployment of the Stuxnet worm, placed on computers in Iran to disrupt its quest for nuclear weapons, is but one illustration of the new military.

C.         Quartering

Quartering historically came to mean an “act of a government in billeting or assigning soldiers to private houses, without the consent of the owners of such houses, and requiring such owners to supply them with board or lodging or both.”[44] Billeting can mean a letter ordering the assignment or the assignment itself. This definition yields some insights.  Significantly, it is a military intrusion into home life—civilian life—by soldiers, which is why early English analysis incorporated the forced provision of board and the tethering of horses as part of quartering. Thus, it is the intrusion and diminishment of civil authority and life that matter, even if it is through remote access rather than the physical presence of the soldiers. An unmanned drone is the equivalent of a piloted plane. Would military personnel stationed regularly at businesses, or operating cameras on the rooftops of private residences or businesses, or even on all public mailboxes generate intimidation or intrusion into daily life? Would the intrusions still be significant if the soldiers were outside of the houses and businesses, in the curtilages, peering inside or the equivalent? Especially if seen or heard, electronic surveillance devices could significantly interfere with civilian community life and intrude on civilian authority. As one commentator has noted, “[G]overnment or industry surveillance of the populace with drones would be visible and highly salient. People would feel observed, regardless of how or whether the information was actually used.”[45]

Quartering today also can involve proxies, where the U.S. government knows and promotes the equivalent of private or foreign quartering for its own gain.  One illustration of proxy quartering might involve an agreement between countries to swap sensitive data on each other’s citizens, revealing the intricacies of civil life inside the cities and their residences or businesses.[46]

D.    Any Houses

The term “any houses” on its face appears highly restrictive.[47] Yet, at least in Engblom, it also reaches tenancies. While “tenancies” refers to residences, today there is a proliferation of buildings housing businesses, which fall within the types of civil occupancies where sensitive and confidential civil life occurs.  Invasions of these buildings without physical entry can occur regularly in the digital world, which is the context in which the term should be judged, in keeping with the intent of the framers.

While the term “any houses” could be more broadly construed to mean all private chattel or real property, including electronic devices,[48] this likely would expand the meaning of the amendment to become a version of the Fifth Amendment Takings Clause, not likely intended for the Third Amendment’s “houses” distinction, particularly when the Fourth Amendment protects not only houses, but also “persons, papers, and effects.”

E.     Without Consent

Although the amendment permits quartering in peacetime with consent, if quartering extends to businesses, the government-private business partnerships create questions about the voluntariness of the relationships.  This is especially the case if the government inserts employees into the private business locations.  This type of relationship might not generate adequate voluntary consent.[49]

Conclusion

The Third Amendment no longer will be the forgotten amendment if it is considered to interlock with the Fourth Amendment to provide a check on some domestic mass surveillance intruding on civil life, particularly within the home, business or curtilage of each.  In the digital era, the dual purposes of the Amendment should be understood to potentially limit the reach of cyber soldiers and protect the enjoyment of a private tenancy without governmental incursion.



        [1].   Sneakers (Universal Pictures 1992).
        [2].   See, e.g., Quentin Hardy, Big Data’s Little Brother, N.Y. Times, Nov. 12, 2013, at B1 (“Collecting data from all sorts of odd places and analyzing it much faster than was possible even a couple of years ago has become one of the hottest areas of the technology industry. . . . Now Big Data is evolving, becoming more ‘hyper’ and including all sorts of sources.”).
        [3].   Contra Neil Richards, The Dangers of Surveillance, 126 Harv. L. Rev. 1934 passim (2013) (arguing that surveillance is a direct threat to “intellectual privacy,” or the notion that ideas develop best in private).
        [4].   China allegedly attempts to hack U.S. computers on a daily basis. See Keith Bradsher, China Blasts Hacking Claim by Pentagon, N.Y. Times (May 7, 2013), http://www.nytimes.com/2013/05/08/world/asia/china-criticizes-pentagon-report-on-cyberattacks.html.
        [5].   Mark Memmott, U.S. Spying Efforts Sometimes ‘Reached Too Far,’ Kerry Says, The Two Way, Nat’l Pub. Radio (Nov. 1, 2013), http://www.npr.org/blogs/thetwo-way/2013/11/01/242288704/u-s-spying-efforts-sometimes-reached-too-far-kerry-says (quoting John Kerry as saying that “some of the electronic surveillance programs of the National Security Agency have been on ‘automatic pilot’ in recent years and have inappropriately ‘reached too far’”).  Google’s Executive Chairman, Eric Schmidt, was less restrained about secret government spying, calling reports of National Security Agency (“NSA”) interception of the main communication links used by Google and Yahoo to connect to their data centers “outrageous.”  See Eyder Peralta, Google’s Eric Schmidt Says Reports of NSA Spying ‘Outrageous,’ The Two Way, Nat’l Pub. Radio (Nov. 4, 2013), http://www.npr.org/blogs/thetwo-way/2013/11/04/242960648/googles-eric-schmidt-says-reports-of-nsa-spying-are-outrageous (“There clearly are cases where evil people exist, but you don’t have to violate the privacy of every single citizen of America to find them.”).
        [6].   For the dual rationales of the Amendment, see Geoffrey M. Wyatt, The Third Amendment in the Twenty-First Century: Military Recruiting on Private Campuses, 40 New Eng. L. Rev. 113, 122–24 (2005).
        [7].   Instrumentalities include malware such as the “Stuxnet” computer worm, tracking devices, cookies and more. The Stuxnet worm was allegedly used by several countries to infiltrate and infect Iran’s nuclear facilities. See Alan Butler, When Cyberweapons End Up on Private Networks: Third Amendment Implications for Cybersecurity Policy, 62 Am. U. L. Rev. 1203, 1204–05 (2013).
        [8].   Indeed, the NSA alone gathers 20 billion “record events” per day. James Risen & Laura Poitras, N.S.A. Examines Social Networks of U.S. Citizens, N.Y. Times, Sep. 29, 2013, at A1.
        [9].   This off-the-wall versus through-the-wall distinction was advanced in Kyllo v. United States, 533 U.S. 27 (2001), where the Court found that the police unconstitutionally used an infrared heat detection device to determine whether heat lamps were being used in the house to grow marijuana. Id. at 40.
      [10].   In fact, Robert Mueller, the current F.B.I. Director, recently conceded at a Senate hearing that drones indeed have been used for some “very minimal” domestic surveillance operations. Phil Mattingly, FBI Uses Drones in Domestic Surveillance, Mueller Says, Bloomberg (June 19, 2013), http://www.bloomberg.com/news/2013-06-19/fbi-uses-drones-in-domestic-sureillance-mueller-says.html.
      [11].   William Herkewitz, Ocean Drones Plumb New Depths, N.Y. Times, Nov. 12, 2013, at D1.
      [12].   M. Ryan Calo, The Drone As Privacy Catalyst, 64 Stan. L. Rev. Online 29, 30–31 (2011). Calo notes that there are several counties where drone use is occurring; however, there are also several restrictions that limit use of drones. See Operation and Certification of Small Unmanned Aircraft Systems (SUAS), 76 Fed. Reg. 40,107, 40,107–08 (July 7, 2011), available at http://www.gpo.gov/fdsys/pkg/FR-2011-07-07/pdf/2011-15494.pdf#page=16.
      [13].   Eddie Keogh, DHS to Test Facial Recognition Software at Hockey Game, Reuters (Sept. 18, 2013), http://rt.com/usa/dhs-hockey-washington-face-033/.
      [14].   Another way the government obtains information is through warrants and requests under FISA.  See Foreign Intelligence Surveillance Act, 50 U.S.C. §§ 1801-1885 (2010).
      [15].   See Neil Richards, supra note 3, at 1952–58. Professor Richards organizes his argument as follows: “Part II shows how surveillance menaces our intellectual privacy and threatens the development of individual beliefs in ways that are inconsistent with the basic commitments of democratic societies. Part III explores how surveillance distorts the power relationships between the watcher and the watched, enhancing the watcher’s ability to blackmail, coerce, and discriminate against the people under its scrutiny.” Id. at 1936.
      [16].   132 S. Ct. 945, 956 (2012) (Sotomayor, J., concurring); Id. at 961 (Alito, J., concurring).  The case involved the placement of a GPS device on a private individual’s car.  Id. at 948 (majority opinion).  Writing for the majority, Justice Scalia found that the installation of the device was a search within the meaning of the Fourth Amendment.  Id. at 952.
      [17].   Calo, supra note 12, at 32.
      [18].   Neil M. Richards & Jonathan H. King, Three Paradoxes of Big Data, 66 Stan. L. Rev. Online 41, 43 (2013).  The authors discuss the paradox of power associated with Big Data, stating that “[b]ig data will create winners and losers, and it is likely to benefit the institutions that wield its tools over the individuals being mined, analyzed, and sorted. Not knowing the appropriate legal or technical boundaries, each side is left guessing. Individuals succumb to denial while governments and corporations get away with what they can by default, until they are left reeling from scandal after shock of disclosure.”  Id. at 45.
      [19].   U.S. Const. amend. III.
      [20].   William S. Fields & David T. Hardy, The Third Amendment and the Issue of the Maintenance of Standing Armies: A Legal History, 35 Am. J. Legal Hist. 393, 429 (1991).
      [21].   Morton Horwitz, Is the Third Amendment Obsolete?, 26 Val. U. L. Rev. 209 passim (1991).
      [22].   This provision firmly states its singular prohibition.  Interestingly, it still arguably has been violated on multiple occasions. See, e.g., B. Carmon Hardy, A Free People’s Intolerable Grievance, in The Bill of Rights, A Lively Heritage 67, 69 (1987); Tom W. Bell, “Property” in the Constitution: A View From the Third Amendment, 20 Wm. & Mary Bill Rts. J. 1243, 1276 (2012).
      [23].   See, e.g., B. Carmon Hardy, A Free People’s Intolerable Grievance – The Quartering of Troops and the Third Amendment, 33 Va. Cavalcade 126 (1984); J. Alan Rogers, Colonial Opposition to the Quartering of Troops During the French and Indian War, 34 Mil. Aff. 7, 7–11 (1970).
      [24].   Fields & Hardy, supra note 20, at 395; Hardy, supra note 23; Rogers, supra note 23.
      [25].   Horwitz, supra note 21, at 210.
      [26].   Patrick Henry, Patrick Henry’s Objections to a National Army and James Madison’s Reply, Virginia Convention (June 16, 1788), in 2 The Debate on the Constitution 695, 696–97 (Bernard Bailyn ed., 1993).
      [27].   Samuel Adams, Letter to the Editor, Bos. Gazette, Oct. 17, 1768, reprinted in 5 The Founders’ Constitution 215, 215 (Philip B. Kurland & Ralph Lerner eds., 1987) (“No man can pretend to say that the peace and good order of the community is so secure with soldiers quartered in the body of a city as without them.”).
      [28].   381 U.S. 479, 484 (1965) (discussing the Third Amendment as a part of the penumbras forming a constitutional privacy right).
      [29].   677 F.2d 957 (2d Cir. 1982).
      [30].   Id. at 958–59.
      [31].   Id. at 959–60.
      [32].   Id. at 961.
      [33].   Id. at 961–62.
      [34].   See William Sutton Fields, The Third Amendment: Constitutional Protection from the Involuntary Quartering of Soldiers, 124 Mil. L. Rev. 195, 207 & n.108 (1989); Ann Marie C. Petrey, Comment, The Third Amendment’s Protection Against Unwanted Military Intrusion, 49 Brook. L. Rev. 857, 857–64 (1983).
      [35].   Engblom, 677 F.2d at 962.
      [36].   See Horwitz, supra note 21, at 214.
      [37].   See, e.g., the physical trespass test used in United States v. Jones, 132 S. Ct. 945, 950–52 (2012); id. at 955 (Sotomayor, J., concurring) (“[T]he trespassory test applied in the majority’s opinion reflects an irreducible constitutional minimum.”). The case involved the placement of a GPS device on a private individual’s car. Id. at 948 (majority opinion).  Justice Scalia found that doing so without a warrant unconstitutionally violated Mr. Jones’s property rights. Id. at 949.
      [38].   A prime illustration is the relationship between England and the United States.  They have swapped sensitive data on each other’s citizens, doing indirectly what is not permitted directly.  British Spy Agency Taps Cables, Shares with U.S. NSA – Guardian, Reuters (June 21, 2013), http://uk.reuters.com/article/2013/06/21/uk-usa-security-britain-idUKBRE95K10620130621.
      [39].   Michael Riley, U.S. Agencies Said to Swap Data with Thousands of Firms, Bloomberg (June 15, 2013), http://www.bloomberg.com/news/2013-06-14/u-s-agencies-said-to-swap-data-with-thousands-of-firms.html.
      [40].   Most scholars believe that words in the Constitution require interpretation.  Originalism, for example, looks to ground the meaning of the words based on the era and its sources.  Construction can have varying levels of strictness. For example, Justice Scalia believes that “[w]ords have meaning. And their meaning doesn’t change.” Jennifer Senior, In Conversation: Antonin Scalia, N.Y. Mag. (Oct. 6, 2013), http://nymag.com/news/features/antonin-scalia-2013-10/.
      [41].   Tom W. Bell, The Third Amendment: Forgotten but Not Gone, 2 Wm. & Mary Bill Rts. J. 117, 121 (1993).
      [42].   Id. at 123 n.46 (citing Coram Rege Roll, no. 564 (Easter 1402), m. 28d, at Westminster in Middlesex, reprinted in VII Select Cases in the Court of King’s Bench 121–23 (G.O. Sayles ed., 1971)).
      [43].   See Butler, supra note 7, at 1231–33, for an argument that it does trigger the Third Amendment.
      [44].   Quartering Soldiers, The Law Dictionary, http://thelawdictionary.org/quartering-soldiers/ (last visited Jan. 13, 2013).
      [45].   Calo, supra note 12, at 33.
      [46].   See supra note 38.
      [47].   Given the rejection of an alternative Amendment that would have limited it only to private and not public houses, the Framers opted for a broader approach.  Compare Bell, supra note 41, at 129 n.105, with U.S. Const. amend. III.
      [48].   A recent commentator has provided the Amendment with a similar construction. See Butler, supra note 7, at 1230.
      [49].   The government also pays and partners with companies to produce and swap data. Riley, supra note 39.

By Beverly Cohen*

Introduction

On June 23, 2011, the United States Supreme Court, in Sorrell v. IMS Health Inc.,[1] determined that Vermont’s law prohibiting pharmacies from selling prescription data to “data-mining companies” violated the Free Speech Clause of the First Amendment.[2]  Data miners purchased the prescription data to aggregate and resell it to pharmaceutical manufacturers for marketing purposes.[3]  Drug manufacturers used the information to target physicians for face-to-face visits (“detailing”) by salesmen to convince the physicians to prescribe more of the manufacturers’ costly brand-name drugs.[4]  The prescription information purchased from the data miners enabled the manufacturers to target particular physicians who were not prescribing their brand-name drugs or who were prescribing competing drugs.[5]

Several states objected to drug manufacturers’ use of prescription information for detailing, contending that it increased sales of brand-name drugs and drove up healthcare costs.[6]  When these states passed laws preventing the pharmacies’ sale of the prescription information to data-mining companies and the use of this information by drug manufacturers,[7] the data miners and drug manufacturers sued.[8]

When the challenge to Vermont’s data-mining law reached the Supreme Court, the Court invalidated it on the grounds that it violated the Free Speech Clause.[9]  The Court held that the law did not survive strict scrutiny.  It prohibited the use of prescription information with a particular content (prescriber histories) by particular speakers (data miners and detailers)[10] and did not advance Vermont’s asserted goals of ensuring physician privacy, improving the public health, and containing healthcare costs in a permissible way.[11]

The Federal Privacy Rule,[12] implementing the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”),[13] is similar to the data-mining laws in its restrictions on the disclosure of private health information.[14]  This Article applies the HIPAA Privacy Rule to the practice of data mining and, surprisingly, finds that HIPAA restricts it.[15]  The Privacy Rule flatly prohibits any unauthorized use or disclosure of protected health information for marketing purposes.[16]  Nevertheless, the practice of data mining continues despite HIPAA.  In fact, at least one court has recently declared that nothing in HIPAA restricts data mining.[17]

The question post-Sorrell is whether the marketing provisions of the Privacy Rule, like Vermont’s data-mining law, also violate freedom of speech.[18]  Although there are obvious similarities between HIPAA’s marketing provisions and the marketing restrictions of Vermont’s data-mining law, there are also substantial differences.[19]  The structure of the Privacy Rule is quite unlike the data-mining law in that the discriminatory intent and impact that the Supreme Court found objectionable in Sorrell are largely absent in HIPAA.[20]  Unlike Vermont’s data-mining law, the Privacy Rule does not target disclosures with particular content or by particular speakers.[21]  Therefore, this Article concludes that it is likely that application of the Sorrell analysis to the Privacy Rule would yield a different answer.[22]

This Article explains the practices of data mining and detailing[23] and describes the state laws that sprang up to prohibit them.[24]  It next discusses the various judicial outcomes of the data miners’ challenges to those laws,[25] culminating in the Supreme Court’s invalidation of Vermont’s data-mining law in Sorrell.[26]  The Article then applies the HIPAA Privacy Rule to the Sorrell facts and finds that the marketing provisions of HIPAA disallow the unauthorized use of such information for sale to data miners.[27]  Finally, the Article compares the HIPAA Privacy Rule to the data-mining law invalidated in Sorrell and finds that their structures considerably differ.[28]  Based on these differences, the Article opines that HIPAA presents a substantially different question from that considered in Sorrell and likely yields a different answer.[29]

I.  The Targeted Practices: Data Mining and Detailing[30]

Every time a pharmaceutical prescription is filled, the pharmacy retains information describing the transaction.[31]  These records generally include the identification of the patient; identification of the prescribing physician, including his name, address, and phone number; the drug prescribed, its dosage, and its refill information; price; and insurance information.[32]  In many cases, state law requires this information to be collected and maintained by the pharmacies[33] so that the state can monitor cases of illicit prescriptions and fraudulent prescribing practices by physicians.[34]

Companies, such as IMS Health Inc. and Verispan, LLC,[35] are in the business of “mining” this pharmacy data.[36]  They purchase prescription data from the pharmacies that the pharmacies’ computer software has collected and encrypted so that individual patients cannot be identified by name.[37]  The prescription information that data miners purchase is estimated to encompass several billion prescriptions per year.[38]  The data miners then aggregate the entries,[39] group the information by prescriber, and cross-reference the prescribing history with information on each prescriber available through publicly accessible databases, such as the American Medical Association’s database of physician specialists.[40]  The ultimate reports that the data miners produce show each prescriber’s identity, medical specialty, and a complete history of the drugs he or she prescribed over a given period of time.

The data miners’ customers for these reports are the pharmaceutical manufacturers because the reports are useful in facilitating the drug manufacturers’ practice of “detailing.”  This practice consists of drug-sales representatives visiting physicians and their staffs in a particular region where specific drugs are being marketed.[41]  At these face-to-face meetings, the sales representatives give the physicians “details” about their drugs (use, side effects, and risks) to convince the physicians that they are a better choice for their patients.[42]  Described as a “valuable tool,”[43] the data-mining reports allow the drug representatives to pinpoint prescribers who might be persuaded to switch to the manufacturer’s drugs or to prescribe the manufacturer’s drugs more frequently.[44]  The data-mining reports also enable the representatives to tailor their presentations based on the particular physician’s prescribing practices to maximize the effectiveness of their sales efforts:

That [data-mining] information enables the detailer to zero in on physicians who regularly prescribe competitors’ drugs, physicians who are prescribing large quantities of drugs for particular conditions, and “early adopters” (physicians with a demonstrated openness to prescribing drugs that have just come onto the market).  The information also allows the detailer to tailor her promotional message in light of the physician’s prescribing history.[45]

Merck’s use of data mining to market Vioxx provides an example of the usefulness of data mining to sell a particular drug:

When Merck marketed Vioxx, for example, it used a wealth of prescriber-identifying data to create monthly reports on individual prescribers in each detailer’s assigned territory.  The reports showed how many Merck versus non-Merck drugs the prescriber prescribed and estimated how many of these prescriptions could be substituted for Merck products.  Merck then tracked its detailers’ progress in converting prescribers in their territories to the Merck brand and gave detailers bonuses based on Merck’s sales volume and market share in the detailer’s territory.[46]

Detailing has been described as “a massive and expensive undertaking for pharmaceutical manufacturers.”[47]  Manufacturers reportedly spent $4 billion in 2000 for detailing,[48] employing some 90,000 sales representatives to make the physician office visits.[49]  The detailers often arrive with small gifts for the physicians and their staffs and drop off free drug samples for the physicians to try with their patients.[50]  It has been estimated that a single physician is visited by an average of twenty-eight detailers a week, and a specialist is visited by an average of fourteen detailers.[51]  Because of the time involved and high cost of detailing, drug manufacturers usually reserve it for marketing high-cost, brand-name drugs,[52] as opposed to lower-cost, generic drugs.[53]  Sales representatives try to convince physicians to switch from generic drugs to their brand-name drug, to utilize it instead of a competing brand-name drug, or to remain loyal to the brand-name drug when the patent expires and generic versions become available.[54]

II.  States’ Objections to Drug Manufacturers’ Use of Data Mining for Detailing

Some states, including New Hampshire, Maine, and Vermont, perceived that pharmaceutical manufacturers’ use of pharmacy data to enhance their detailing efforts increased the cost of prescription drugs with no concomitant improvement to the public health.[55]  These perceptions emanated from several factors.

First, the states became convinced that data mining improved the success of detailing.[56]  These states perceived that “detailers armed with prescribing histories enjoyed a significant marketing advantage, resulting in greater leverage, [and] increased sales of brand-name drugs.”[57]  This “leverage” refers to the detailer’s ability to target physicians who prescribe large quantities of generics, the ability to “zero in” on a physician’s particular prescribing choices, and the ability to “punish” physicians who abandon their loyalty to certain brand-name drugs.[58]  Thus, “prescribing histories helped the detailer to become more adversarial in her presentation and to focus on the weakness of the physician’s erstwhile drug of choice as opposed to the clinical virtues of the detailed drug.”[59]

Second, the states believed that the success of detailing often resulted from less than accurate and balanced information.  Vermont negatively characterized the detailers’ provision of information to physicians on pharmaceutical safety and efficacy as “frequently one-sided,” “incomplete,” and “biased.”[60]  The Vermont legislature found that the “[p]ublic health is ill served by the massive imbalance in information presented to doctors and other prescribers.”[61]  Vermont held detailers’ use of data mining responsible for creating “an unbalanced marketplace of ideas that undermines the state’s interests in promoting public health, protecting prescriber privacy, and reducing healthcare costs.”[62]

Third, the states perceived that detailing improperly influenced physicians’ prescription choices and unnecessarily raised the cost of prescription drugs.  New Hampshire viewed detailing as having a “pernicious effect” upon drug prescribing.[63]

The states’ “common sense” conclusion was that detailing worked to induce physicians to prescribe larger quantities of more expensive brand-name drugs.[64]  The fact “that the pharmaceutical industry spends over $4 billion annually on detailing bears loud witness to its efficacy.”[65]  Despite the much higher cost of detailed drugs, New Hampshire concluded that, based upon “competent evidence,” drugs that were aggressively marketed through detailing “provide no benefit vis-à-vis their far cheaper generic counterparts.”[66]  The State maintained that “detailers armed with prescribing histories encouraged the overzealous prescription of more costly brand-name drugs regardless of both the public health consequences and the probable outcome of a sensible cost/benefit analysis.”[67]

Finally, doctors themselves voiced “a predominantly negative view of detailing.”[68]  A 2006 survey by the Maine Medical Association reported that “a majority of Maine physicians did not want pharmaceutical manufacturers to be able to use their individual prescribing histories for marketing purposes.”[69]

III.  State Laws Regulating Data Mining[70]

In the interests of protecting prescriber privacy, safeguarding the public health, and containing healthcare costs,[71] New Hampshire in 2006 became the first state to enact a law limiting drug prescription data mining, known as the Prescription Information Law.[72]  The law prohibited the sale, transfer, use, or licensing of prescription records by pharmacies and insurance companies for any commercial purpose,[73] except for listed health-related purposes, such as pharmacy reimbursement, healthcare management, utilization review by a healthcare provider, and healthcare research.[74]  The statute did not prohibit the transfer of prescription information to fill patients’ prescriptions[75] and placed no restrictions upon prescription information that did not identify the patient or the prescriber.[76]

Shortly thereafter, Vermont followed suit, enacting Act 80, section 17 of the Vermont General Statutes to restrict the use of pharmacy records for drug marketing.[77]  Vermont’s policy goals, compatible with those of New Hampshire, were:

[T]o advance the state’s interest in protecting the public health of Vermonters, protecting the privacy of prescribers and prescribing information, and to ensure costs are contained in the private health care sector, as well as for state purchasers of prescription drugs, through the promotion of less costly drugs and ensuring prescribers receive unbiased information.[78]

Unlike the flat prohibition of New Hampshire’s statute, however, Vermont’s law adopted an “opt-out” approach, prohibiting insurers and pharmacies from selling or transferring prescription data for marketing purposes unless the prescriber opted out of the prohibition by consenting to the use.[79]  The law also prohibited pharmaceutical manufacturers from using the data for marketing absent prescribers’ consent.[80]  The law defined “marketing” as advertising or any activity that influenced the sale of a drug or influenced prescribing behavior.[81]  The statute contained a number of exceptions to the prohibition, most of which facilitated healthcare treatment and reimbursement, such as dispensing prescriptions, pharmacy reimbursement, patient care management, utilization review by healthcare professionals, healthcare research, and communicating treatment options to patients.[82]  The law also created a program to educate healthcare professionals on therapeutic and cost-effective drug prescribing.[83]

In 2008, Maine enacted similar legislation.  Its goals, like those of New Hampshire and Vermont, were “to improve the public health, to limit annual increases in the cost of healthcare and to protect the privacy of . . . prescribers in the healthcare system of this State.”[84]  Unlike Vermont’s “opt-out” approach, Maine passed an “opt-in” version, making it unlawful for a pharmacy to use, sell, or transfer prescription drug information for any marketing[85] purpose when the information identified the prescriber and the prescriber had opted in by registering for the statute’s protection.[86]  The law included a number of health-related exceptions to the definition of “marketing,” including pharmacy reimbursement, patient care management, utilization review by a healthcare provider, and healthcare research.[87]

IV.  Challenges to the Data-Mining Laws

With a number of other states considering enactment of similar laws,[88] and facing the loss of billions of dollars in business annually, several data-mining companies[89] and an association of pharmaceutical manufacturers[90] challenged the constitutionality of New Hampshire’s, Vermont’s, and Maine’s data-mining laws.[91]  The claims that survived to appeal included that the statutory prohibition violated the Free Speech Clause of the First Amendment, was unconstitutionally vague and overbroad under the First and Fourteenth Amendments, and offended the Commerce Clause.[92]

In 2008, the United States Court of Appeals for the First Circuit ruled in IMS Health Inc. v. Ayotte[93] that the New Hampshire statute regulated conduct, not speech, and therefore did not abridge the First Amendment rights of the data miners.[94]  Alternatively, the First Circuit ruled that even if New Hampshire’s law amounted to a regulation of protected speech, New Hampshire’s action to protect cost-effective healthcare passed constitutional muster.[95]  Utilizing the Central Hudson test,[96] the court found that healthcare cost containment was a substantial governmental interest,[97] data mining increased the success of detailing,[98] detailing increased the cost of prescription drugs,[99] and the statute was sufficiently tailored to achieve its objectives.[100]  The court summarily disposed of the remaining claims for vagueness[101] and violation of the Commerce Clause.[102]

When the challenge to the Maine statute reached the First Circuit in IMS Health Inc. v. Mills[103] approximately two years later, the court unsurprisingly relied upon its prior New Hampshire Ayotte ruling.[104]  The court rejected the First Amendment claim,[105] the vagueness claim,[106] and the Commerce Clause challenge[107] for the same reasons stated in Ayotte.

Four months after Mills, the United States Court of Appeals for the Second Circuit ruled on the same issues with regard to the Vermont statute in IMS Health Inc. v. Sorrell.[108]  In Sorrell, the Second Circuit disagreed with nearly every basis for the First Circuit’s two prior decisions.  Applying the Central Hudson test,[109] the court found that although Vermont did have a substantial interest in lowering healthcare costs and protecting the public health,[110] the statute did not directly advance those interests.[111]  Rather, the court characterized Vermont’s law as an attempt “to bring about indirectly some social good or alter some conduct by restricting the information available to those whose conduct the government seeks to influence.”[112]  Moreover, the court found that Vermont had “more direct, less speech-restrictive means available” to accomplish its goals.[113]  As less restrictive alternatives, the court suggested that the State could have assessed the results of its campaign to encourage the use of generics or could have mandated the use of generic drugs as a first course of treatment.[114]  Failing these critical prongs of the Central Hudson test, the court ruled that Vermont’s law unconstitutionally restricted freedom of speech.[115]

With the First Circuit and Second Circuit Courts of Appeal thus directly at odds on the constitutionality of the data-mining laws, the United States Supreme Court granted certiorari to consider Vermont’s appeal in Sorrell v. IMS Health Inc.[116]

V.  The Supreme Court’s Decision

In June 2011, the Supreme Court, in a six to three ruling,[117] held that Vermont’s drug prescription data-mining law violated the First Amendment.[118]  While conceding that Vermont’s asserted policy goals of containing pharmacy prescription costs and protecting public health were legitimate concerns,[119] the Court held that the statute was a broad, content-based rule[120] that did not satisfy strict scrutiny.[121]

Initially, the Court unequivocally held that the Vermont law was content and speaker based, as it prohibited the sale of pharmaceutical prescription data only for marketing purposes[122] and only to pharmaceutical manufacturers.[123]  Because the law “impose[d] burdens that are based on the content of speech and that are aimed at a particular viewpoint,” the Court ruled that it must apply strict scrutiny.[124]

The Court flatly rejected Vermont’s argument that the law regulated conduct as opposed to speech.[125]  Instead, the Court ruled that “[f]acts, after all, are the beginning point for much of the speech that is most essential to advance human knowledge and to conduct human affairs.  There is thus a strong argument that prescriber-identifying information is speech for First Amendment purposes.”[126]

Applying the Central Hudson test, whereby “[t]here must be a ‘fit between the legislature’s ends and the means chosen to accomplish those ends,’”[127] the Court held that none of the State’s asserted justifications—prescriber privacy, protecting public health, and reducing healthcare costs—withstood scrutiny.[128]  First, because the law permitted disclosure of prescription information for a number of other purposes and applied the ban only to marketing, the Court rejected the privacy justification.[129]  The Court ruled that Vermont’s statute “permits extensive use of prescriber-identifying information and so does not advance the State’s asserted interest in physician confidentiality.”[130]  In particular, the Court objected to the State’s own ability to use the same prescription information to engage in “counter-detailing” efforts to promote generic drugs.[131]  Moreover, the Court observed that privacy remedies less restrictive of speech were available.[132]  For example, prescribers could simply decline to meet with detailers.[133]  Even though physicians might find the use of their prescription histories by detailers to be “underhanded” or tantamount to “spying,”[134] the Court declared that “[s]peech remains protected even when it may . . . ‘inflict great pain.’”[135]

In similar fashion, the Court declared that Vermont’s stated policy goals of improving public health and reducing healthcare costs did not withstand scrutiny under the Central Hudson test,[136] because the law “does not advance them in a permissible way.”[137]  The law sought to protect patients’ health and cost-effectiveness only indirectly, aimed at the fear that physicians, admittedly sophisticated consumers,[138] would make poor purchasing decisions if given truthful information by detailers.[139]

In short, the Court viewed the statute as a means for the State to advance its own views over those of pharmaceutical manufacturers by stifling protected speech.[140]  The Court stated that if the statute had provided for only a few narrowly tailored exceptions to its ban on the sale or disclosure of prescription information, then its position that it was not targeting a disfavored speaker and disfavored content might be stronger.[141]  But, here, the law permitted disclosure of the same information to countless others and even to the State itself to persuade physicians to prescribe generic drugs.[142]  The Court declared that free access to and use of privately held information is “a right too essential to freedom to allow its manipulation to support just those ideas the government prefers.”[143]  The Court concluded that “the State has left unburdened those speakers whose messages are in accord with its own views.  This the State cannot do.”[144]

Several days after the Sorrell decision was issued, the Supreme Court vacated the First Circuit’s finding that the Maine data-mining laws were valid and remanded the case to the court for further consideration in light of Sorrell.[145]  Three months later, the New Hampshire District Court issued an order declaring that New Hampshire’s data-mining laws were invalid in light of Sorrell.[146]

VI.  Applying the HIPAA Privacy Rule to Data Mining

Since New Hampshire, Vermont, and Maine each enacted state laws that prohibited pharmacies from selling prescription information to data miners for use in detailing, presumably these states perceived that such laws were necessary to ban the practice.  This necessity apparently stemmed from the states’ belief that nothing in the HIPAA Privacy Rule prohibited these data sales by the pharmacies.  This Part of the Article explains why that belief is not supported by the text of HIPAA.

A.     Provisions of the HIPAA Privacy Rule

The HIPAA Privacy Rule[147] regulates covered entities’ use and disclosure of protected health information.[148]  The covered entities regulated by HIPAA include most health plans and healthcare providers.[149]  The term “provider” is defined by the Rule as “a provider of medical or health services . . . and any other person or organization who furnishes, bills, or is paid for health care in the normal course of business.”[150]

Under HIPAA, any time a covered entity uses or discloses protected health information, the use or disclosure must comply with HIPAA’s privacy provisions.[151]  The term “use” is broadly defined as “the sharing, employment, application, utilization, examination, or analysis” of health information protected by HIPAA.[152]  “Disclosure” is also broadly defined as “the release, transfer, provision of, access to, or divulging in any other manner of information outside the entity holding the information.”[153]

The health information protected by the Privacy Rule includes any information relating to healthcare treatment or payment[154] that has a potential to identify the patient to whom the information applies.[155]  Identifiers that can render health information protected include, inter alia, the patient’s name, address, social security number, phone number, photograph, zip code, treatment date, employer, and names of spouse and children.[156]  Furthermore, any identifier that is not specifically named in the Privacy Rule but, due to its uniqueness, has a potential to identify the subject of the information also renders the information protected.[157]

Under the Privacy Rule, any use or disclosure of protected health information by a covered entity must be explicitly permitted or required by HIPAA.[158]  The Privacy Rule requires disclosure in only two instances: (1) when the subject of the protected health information (“the individual”)[159] requests access to his own healthcare information,[160] and (2) when the Secretary of the Department of Health and Human Services (“HHS”) requests access in order to enforce HIPAA.[161]  All other uses and disclosures authorized by the Privacy Rule are permissive.[162]

Most of the permissive uses and disclosures under the Privacy Rule fall into two broad categories.[163]  First, covered entities may use and disclose protected health information for “treatment, payment, or healthcare operations.”[164]  “Treatment” is defined as the rendering of healthcare services to individuals or managing their care.[165]  “Payment” comprises paying insurance premiums and reimbursing providers.[166]  “Health care operations” broadly encompasses operating the business of healthcare entities, including such activities as business management and administrative activities, quality assessment, evaluating the credentials of providers, customer service, and obtaining legal and auditing services.[167]  Thus, treatment, payment, and healthcare operations cover the myriad activities that allow the healthcare industry to function.

The second broad category of permissive uses allows covered entities to use and disclose protected health information for twelve public-interest activities.[168]  These include, inter alia, participating in public-health activities to prevent or control disease; reporting abuse, neglect, or domestic violence; complying with healthcare audits and investigations; assisting law enforcement activities; engaging in healthcare research; and assisting national security and intelligence activities.[169]

If a covered entity’s use or disclosure of protected health information does not fit within one of the Privacy Rule’s enumerated required or permitted uses and disclosures, then the use or disclosure may not occur[170] unless the individual authorizes the use or disclosure in writing.[171]

As the primary goal of HIPAA is to protect the privacy of individuals’ healthcare information,[172] HIPAA grants individuals rights of access to their own information and rights to control its uses and disclosures by covered entities.  These rights include the following:
(1) A right of individuals to access upon request their own protected health information,[173] along with a right to appeal denials of access;[174]

(2) A right of individuals to seek to amend their protected health information possessed by covered entities,[175] as well as a right to submit a written statement disagreeing with a denial of an amendment;[176]

(3) A right of individuals to receive an accounting of certain disclosures of their protected health information made by covered entities;[177]

(4) A right of individuals to request covered entities to restrict certain permissible uses and disclosures of their protected health information;[178]

(5) A right of individuals to request confidential communications of protected health information from providers and health plans,[179] which providers must accommodate[180] and which health plans must accommodate if the individuals state that they will be in danger unless accommodation is made;[181]

(6) A right of individuals to agree or object before covered entities make certain disclosures;[182]

(7) A right of individuals to authorize disclosures to third parties;[183] and

(8) A right of individuals to receive a Notice of Privacy Practices from covered entities, describing the covered entities’ uses and disclosures of their protected health information and the individuals’ rights thereunder.[184]

B.     HIPAA’s De-identification and Marketing Provisions

HIPAA’s de-identification and marketing provisions are especially relevant to data mining.  In Sorrell, the data mining involved de-identification because, when the pharmacies’ computer software collected the raw prescription data, the software encrypted or stripped out the patients’ identifying information.[185]  Therefore, when the pharmacies sold the information to the data miners, it had been de-identified because the patients’ names could no longer be identified.[186]

The Privacy Rule provides that once protected health information is de-identified, it is no longer protected by HIPAA and thus is not subject to HIPAA’s use and disclosure restrictions.[187]  HIPAA gives explicit instructions on what information must be removed from protected health information to render it de-identified.[188]  Further, HIPAA specifically permits covered entities to de-identify protected health information.[189]  Moreover, HIPAA defines “health care operations,” one of the permissive uses and disclosures of protected health information under the Privacy Rule,[190] to include a covered entity’s creation of de-identified information when the de-identification relates to a “covered function.”[191]

HIPAA’s marketing provisions are also particularly relevant to data mining.  The data miners purchased the prescription information to market their aggregations and reports to pharmaceutical manufacturers.[192]  The drug manufacturers, in turn, purchased the prescription information to more effectively market their brand-name drugs to prescribers.[193]

HIPAA expressly provides that covered entities’ uses and disclosures of protected health information for the purpose of “marketing” are subject to heightened restrictions.[194]  HIPAA defines “marketing” in two ways.  First, marketing includes “a communication about a product or service that encourages recipients of the communication to purchase or use the product or service.”[195]  However, this definition excludes communications made for description of plan benefits, for treatment of the individual, or for case management or care coordination of the individual.[196]  Second, marketing includes a covered entity’s sale of protected health information to a third party to assist that party in marketing its products.[197]

HIPAA’s primary marketing restriction is that whenever a covered entity uses or discloses protected health information for marketing purposes, the individual must expressly authorize the use or disclosure.[198]  This mandate is stated emphatically: “Notwithstanding any provision of this subpart,[199] other than the transition provisions in § 164.532,[200] a covered entity must obtain an authorization for any use or disclosure of protected health information for marketing . . . .”[201]

The Rule states only two exceptions to this requirement that the individual must authorize the marketing uses and disclosures.  First, an authorization is not needed if the marketing consists of a face-to-face communication between the covered entity and the individual.[202]  Second, an authorization is not needed if the marketing consists of a promotional gift of nominal value provided by the covered entity.[203]  The affected individual must authorize all other marketing uses and disclosures.[204]

C.     How HIPAA’s Marketing and De-identification Rules Impact Data Mining

While Vermont and other states apparently believed that it was necessary to enact a law to prohibit pharmacies from selling prescription information to data miners, surprisingly, such laws were probably not necessary.  HIPAA already appears to have prohibited those sales, rendering the state laws inconsequential.

As explained above, HIPAA requires an authorization from every affected individual before his protected health information can be used or disclosed by covered entities for marketing purposes.[205]  Each Vermont pharmacy qualifies as “a provider of medical or health services” and as an entity that “furnishes, bills, or is paid for health care in the normal course of business.”[206]  Thus, the pharmacies are covered entity providers under HIPAA.[207]  The prescription information collected and retained by the pharmacies constitutes “protected health information,” as it includes the patients’ names and addresses, as well as other identifying information.[208]

Moreover, the pharmacies’ disclosures of prescription information to the data miners appear to have been “for marketing.”[209]  The pharmacies made the disclosures to data miners to enable the data miners to sell their aggregations and reports of pharmacy data to their customers, including drug manufacturers.[210]  Further, the data miners disclosed the prescription information to drug manufacturers to use in marketing their brand-name drugs to physicians.[211]  Selling prescription information for these purposes appears to qualify as marketing under the Privacy Rule’s broad definition: “a communication about a product or service that encourages recipients of the communication to purchase or use the product or service.”[212]

The “rub” with this analysis, however, is that according to the facts in Sorrell, the pharmacies did not disclose “protected health information” to the data miners because the information had been de-identified by the pharmacies’ computer software prior to the sale.[213]  As stated above,[214] HIPAA expressly permits covered entities to de-identify protected health information, thereby removing it from any constraints that HIPAA imposes.[215]  Therefore, the pharmacies’ de-identification of their prescription information may have removed the information from the category of “protected health information”[216] and thereby enabled the pharmacies to make whatever use they wished of the information without violating HIPAA.[217]

But HIPAA’s marketing restrictions do not prohibit only unauthorized disclosures of protected health information.  The restrictions also prohibit the pharmacies from even using protected health information for marketing purposes.[218]  Creating de-identified information from protected health information appears to constitute a use of the protected health information because the pharmacies must “employ” and “utilize” the protected information in order to de-identify it.[219]  In fact, the Privacy Rule itself refers to the “use” of protected health information to create de-identified information.[220]  Here, the purpose of the pharmacies’ de-identification of the prescription information was to facilitate sales of the information to the data miners and to enable sales of the data miners’ aggregations and reports to the drug manufacturers, all for the purpose of marketing brand-name drugs to prescribers.[221]  Therefore, the de-identification itself appears to qualify as a marketing use,[222] so that the pharmacies would be prohibited from such use without the individuals’ express authorization under the Privacy Rule.[223]

Further, the Privacy Rule’s explicit statement that covered entities may de-identify protected health information[224] does not negate the authorization requirement.  The provision requiring covered entities to obtain an authorization before any use or disclosure related to marketing expressly states that it applies “[n]otwithstanding any provision of this subpart.”[225]  HIPAA’s de-identification provisions, on the other hand, lack this vital “notwithstanding” language.[226]  Therefore, the requirement to obtain individuals’ authorizations for any use or disclosure related to marketing trumps the de-identification provisions.  Although HIPAA expressly permits covered entities and their business associates to de-identify protected health information,[227] it appears that any such use (i.e., de-identification) of protected health information for marketing purposes may not occur without written authorizations from the affected individuals.[228]

Under this reading of HIPAA, whenever the purpose of the de-identification is marketing, a pharmacy must first obtain a written authorization from every individual whose protected health information is being so used before it de-identifies its prescription information.[229]  Granted, the requirement to obtain authorizations is not an outright prohibition against the de-identification, subsequent sale, or ultimate use of the information for marketing.  However, the requirement to obtain an authorization from every individual—where billions of prescriptions are being disclosed[230]—places such an enormous burden on the pharmacies that, for all practical purposes, it quashes use of the information for data mining.[231]  Not only will pharmacies need to obtain written authorizations from every individual before his information may be de-identified or disclosed, but it is likely that most of these patients will either refuse to furnish the authorizations or not bother to execute them.[232]  Consequently, HIPAA’s authorization requirement adds so much additional effort and cost to data mining that drug companies will probably no longer find it a cost-effective tool for detailing.[233]

D.     Continuing Failure to Use HIPAA to Restrict Data Mining

It is apparent that parties continue to fail to apply the marketing provisions of the Privacy Rule to restrict data mining.  In a recent case, Steinberg v. CVS Caremark Corp.,[234] prescription drug purchasers sued a pharmacy chain for, inter alia, its disclosures of the purchasers’ prescription information to data miners.  Plaintiffs challenged the pharmacies for accepting remuneration from drug manufacturers for (1) sending letters to the consumers’ physicians suggesting that they prescribe alternate drugs, and (2) selling de-identified prescription information directly to the drug manufacturers and data companies.[235]  The plaintiffs brought state law claims for violation of Pennsylvania’s Unfair Trade Practices and Consumer Protection Law, unjust enrichment, and invasion of privacy.[236]

The United States District Court for the Eastern District of Pennsylvania dismissed the complaint for failure to state a claim.[237]  In so doing, the court made erroneous findings that nothing in the Privacy Rule restricted the defendants’ activities.[238]  First, the court declared that the pharmacies’ sale of de-identified drug prescription information to pharmaceutical manufacturers and data companies for marketing purposes did not offend the HIPAA Privacy Rule because the information had been de-identified prior to sale.[239]  Second, the court stated that the pharmacies’ use of the plaintiffs’ protected health information to send marketing notices to the plaintiffs’ physicians did not violate HIPAA because this constituted permissible healthcare operations.[240]

In fact, the Privacy Rule does not permit either activity.  Without authorizations from the affected individuals, the pharmacies could not use the plaintiffs’ protected health information (even when such use is de-identification) for marketing purposes,[241] thereby rendering the unauthorized de-identification itself illicit.  Further, without the appropriate authorizations, the pharmacies could not disclose protected health information to the plaintiffs’ physicians for marketing purposes,[242] even under the guise of suggesting “treatment alternatives.”[243]  While the court correctly observed that HIPAA does not provide a private right of action, thereby precluding the plaintiffs from bringing a claim directly under HIPAA,[244] the HIPAA violations could arguably have served as bases for the plaintiffs’ state law claims.

VII.  Applying the Sorrell Analysis to the HIPAA Privacy Rule

Both the data-mining laws and HIPAA impose restrictions on the use of health information for marketing.[245]  Both restrict pharmacies from selling de-identified prescription information to data miners.[246]  Although the Supreme Court invalidated the Vermont data-mining law in Sorrell,[247] HIPAA still effectively prevents pharmacies from using protected health information for marketing purposes—that is, de-identifying it for sale to data miners.[248]  Thus, the Sorrell holding raises the question of whether the marketing restrictions in the HIPAA Privacy Rule, like the data-mining law in Sorrell, violate the First Amendment rights of the data miners and drug manufacturers to obtain access to prescription information for marketing purposes.

At first glance, aspects of the data-mining laws and HIPAA’s marketing provisions appear quite similar.  Both bodies of law were motivated by substantial governmental interests—prescriber privacy, public health, and healthcare cost containment for the data-mining laws,[249] and privacy of patients’ medical information for the HIPAA Privacy Rule.[250]  Both laws seek to restrain the use of health information for marketing purposes.[251]  Both define marketing in similar ways.[252]  And both list a number of healthcare-related exceptions to their marketing restrictions.[253]

Despite these similarities, there are substantial grounds to argue that important distinctions between the data-mining laws and the HIPAA Privacy Rule predominate in any comparison.  First, the parties who sought protection are quite different.  The data-mining laws aimed to maintain the privacy of prescribers,[254] many of whom had complained that allowing drug manufacturers access to their prescribing history allowed the detailers “to target them for unwelcome marketing calls.”[255]  The Sorrell Court observed, however, that physicians are hardly hapless victims of detailing.  The Court noted, for instance, that “many listeners find detailing instructive,”[256] and physicians could “simply decline to meet with detailers.”[257]  In fact, the Court characterized prescribing physicians as “sophisticated and experienced consumers.”[258]

Quite unlike the physicians in Sorrell,[259] the patients whom the HIPAA Privacy Rule seeks to protect face unwarranted invasions into their most private medical information.[260]  In contrast to physicians, private healthcare patients are typically much more in need of protection.  The medical records amassed on their behalf are compiled involuntarily, a necessary byproduct of seeking medical treatment.[261]  Not only are many individual patients undoubtedly less sophisticated than physicians, but they may also be unable to monitor illicit uses of their medical records, particularly if they are ill or aged.  In fact, most patients are probably unaware of the many uses and disclosures of their medical information by covered entities that HIPAA permits.[262]  To address these concerns, HIPAA sets clear limits on covered entities’ uses and disclosures of individuals’ protected health information.[263]  It provides a means whereby covered entities must obtain individuals’ authorization for uses and disclosures that are not expressly permitted by HIPAA,[264] and whereby patients can prevent certain uses and disclosures before they occur.[265]  As a result, a strong argument can be made that the HIPAA Privacy Rule, unlike the data-mining laws, is a reasoned response to the critical need to protect patients’ medical privacy.

Second, the Supreme Court criticized Vermont’s data-mining law for attempting to advance its goals in too indirect a way.[266]  The State restricted access to prescription information in order to restrict data mining, which in turn would impair detailing, which in turn would result in physicians writing fewer prescriptions for brand-name drugs, which in turn would contain healthcare costs and avoid unnecessary health risks.[267]  HIPAA, on the other hand, directly accomplishes its goal of protecting individuals’ medical privacy by conferring upon the individuals themselves the ability to control, within certain limits,[268] the uses and disclosures of their own protected health information by covered entities.[269]

Third, there were readily available less restrictive alternatives to the data-mining laws that could have accomplished the asserted purposes of achieving prescriber privacy, protecting the public health, and containing pharmaceutical costs.  The Sorrell Court observed that physicians could easily refuse to meet with detailers, thereby preventing the detailers from using the physicians’ prescriber histories to pressure them into purchasing expensive brand-name drugs.[270]  Further, Vermont’s law authorized funds for a drug education program to provide physicians with information on “cost-effective utilization of prescription drugs.”[271]  Accordingly, before prohibiting data mining of pharmacy prescriptions, the State could have waited to see if that program was successful in limiting sales of nongeneric drugs.[272]

In contrast, with regard to HIPAA, there is no readily ascertainable less restrictive means to protect the privacy of patients’ medical records other than to permit limited uses and disclosures and to require patients’ consent for everything else.[273]  Congress, with limited exceptions,[274] conferred upon individuals the ability to control uses and disclosures of their own protected health information by covered entities.[275]  Requiring individuals to authorize uses and disclosures that are not otherwise needed to allow the healthcare industry to operate[276] and enable critical public interest activities[277] is therefore a direct means of achieving that control.  As a reasonable exercise of that control, HIPAA requires individuals to authorize any uses or disclosures of their protected health information to sell items or services that are not related to the individuals’ own healthcare management.[278]  As marketing third-party items and services is not critical either to providing and paying for individuals’ treatment or to enabling public interest activities, the authorization requirement for marketing uses and disclosures is necessary to achieve HIPAA’s privacy goal.

Fourth, the discriminatory impact of the data-mining laws that offended the Supreme Court in Sorrell[279] is largely absent in HIPAA.  The Sorrell Court characterized Vermont’s data-mining law as pointedly aimed at “diminish[ing] the effectiveness of marketing by manufacturers of brand-name drugs.”[280]  Convinced that detailing increased prescriptions for expensive brand-name drugs over equally effective but cheaper generic alternatives, the State sought to discourage detailing:

“In its practical operation,” Vermont’s law “goes even beyond mere content discrimination, to actual viewpoint discrimination.”  Given the legislature’s expressed statement of purpose, it is apparent that [the Vermont law] imposes burdens that are based on the content of speech and that are aimed at a particular viewpoint.[281]

The Sorrell Court found the State’s eradication of pharmacy data mining to be value based because “the State . . . engage[d] in content-based discrimination to advance its own side of a debate.”[282]  The law prohibited the communication of accurate information by detailers even though some prescribers found the information to be helpful.[283]  Also, the Court found that some brand-name drugs may be better for patients than their generic equivalents.[284]  Nevertheless, the State restricted access to prescription information to suppress speech with which it did not agree, while allowing access for itself and others to promote generics.[285]

This pointedly discriminatory goal and impact of Vermont’s data-mining law is absent with the HIPAA Privacy Rule.  Although marketing is not included in HIPAA’s list of permitted uses and disclosures,[286] it falls within a very broad category of all nonpermissive uses and disclosures for which an authorization is required.[287]  Admittedly, HIPAA singles out marketing for special restrictions,[288] as it comprises one of only two uses specified in HIPAA where protected health information may not even be de-identified absent the individual’s authorization.[289]  Here, however, it is all marketing that is so treated, not the more pointed restriction of a particular use by a particular speaker that was present in Sorrell.[290]

Consequently, the overall structure of the data-mining laws and the HIPAA Privacy Rule is markedly different.  Amid the thousands of uses and disclosures to which medical information is subject,[291] Vermont’s data-mining law pointedly prohibited only one—pharmacies’ disclosure of prescription information for marketing and the use of that information by drug manufacturers to market their drugs.[292]  Thus, any nonmarketing use of prescription information was permitted.[293]  Even with regard to marketing uses, exceptions allowed the information to be utilized for “health care research,” to enforce “compliance” with health insurance preferred drug lists, for “care management educational communications” provided to patients on treatment options, for law enforcement operations, and as “otherwise provided by law.”[294]  Pharmacies could sell the information to insurers, researchers, journalists, the State, and others.[295]  The State itself could use the information for “counterdetailing” activities.[296]  Accordingly, the Court concluded that while the law “permits extensive use of prescriber-identifying information,”[297] it targeted only one use (marketing) and one user (drug manufacturers) for its prohibition.[298]

In contrast, the HIPAA Privacy Rule regulates from the reverse vantage point.  It declares at the outset that no use or disclosure of protected health information may occur unless it is specifically permitted by the Rule.[299]  Therefore, opposite to the structure of the data-mining laws, the prohibitions are virtually limitless, while the allowable uses are distinctly limited.[300]  Generally, the Privacy Rule permits uses and disclosures that fall within two broad categories[301]: (1) those that are related to healthcare treatment, payment, and business operations of the covered entities[302] and (2) those that are related to public interest activities that are so critical to society’s well-being that Congress deemed they should not be hindered by medical privacy concerns.[303]  All nonpermitted uses must be authorized.[304]  While, like the data-mining laws, HIPAA earmarks marketing for special restrictions,[305] even those limitations are more broadly drawn in HIPAA, applying to all types of marketing, not just marketing of brand-name drugs by pharmaceutical manufacturers.[306]  This is quite different from Vermont’s prohibition applying solely to pharmacies’ and insurers’ sales of prescription information for drug marketing.[307]  In fact, the Sorrell Court itself pointed out the marked differences between the structure of Vermont’s data-mining law and the HIPAA Privacy Rule:

[T]he State might have advanced its asserted privacy interest by allowing the information’s sale or disclosure in only a few narrow and well-justified circumstances.  See, e.g., Health Insurance Portability and Accountability Act of 1996, 42 U.S.C. §1320d-2; 45 CFR pts. 160 and 164 (2010).  A statute of that type would present quite a different case than the one presented here.[308]

Conclusion

While several states found it necessary to pass laws prohibiting pharmacies from selling de-identified prescription information to data miners for use by drug manufacturers to market their brand-name drugs, a solid argument can be made that the HIPAA Privacy Rule already restricted such sales.  HIPAA prohibits covered entities, including pharmacies, from using protected health information for marketing purposes without the individuals’ authorization.  As a result, it appears that the Privacy Rule restricts pharmacies from even de-identifying protected health information for marketing purposes unless the affected individuals authorize such use.

The recent Sorrell holding, invalidating Vermont’s data-mining law on the ground that it violates the Free Speech Clause of the First Amendment, raises the question of whether the marketing provisions of the HIPAA Privacy Rule could be deemed invalid for similar reasons.  Both the data-mining laws and the HIPAA Privacy Rule restrict pharmacies from selling de-identified prescription information to data miners for marketing purposes.

However, it is evident that there are fundamental distinctions between the data-mining laws and HIPAA’s marketing restrictions.  The two laws protect different parties and are structured very differently.  Most significantly, the discriminatory intent and effect of the data-mining laws are largely absent in HIPAA.  These distinctions present a substantially different question regarding HIPAA from that considered in Sorrell and likely would yield a different answer.[309]


          *     Professor of Law, Albany Law School.  I would like to express my sincere gratitude to Robert Emery, who recently retired as Associate Director and Head of Reference from the Albany Law School Schaffer Law Library, for the outstanding research assistance he has given me over the years.  He has been my research “go-to” person ever since I came to Albany Law School as a student in 1984.  He provided invaluable research expertise throughout my fifteen years of private law practice in Albany and during my past eleven years as a professor at the school.  Each of the articles I have produced while at Albany Law bears his imprint.  I do not believe there is a finer, or more patient and helpful, research expert to be found than Bob Emery.
         [1].   131 S. Ct. 2653 (2011).
         [2].   Id. at 2659.
         [3].   See infra Part I.
         [4].   See infra Part I.
         [5].   See infra Part I.
         [6].   See infra Part II.
         [7].   See infra Part III.
         [8].   See infra Part IV.
         [9].   See infra Part V.
       [10].   See infra Part V.
       [11].   See infra Part V.
       [12].   See 45 C.F.R. pts. 160, 164 (2010).
       [13].   42 U.S.C. § 1320d-2 (Supp. IV 2011).  Hereinafter, the Privacy Rule will be referred to as “the Privacy Rule,” “the Rule,” or “HIPAA” interchangeably.
       [14].   See infra Parts VI.A–B.
       [15].   See infra Part VI.C.
       [16].   See infra Part VI.B–C.
       [17].   See infra Part VI.D.
       [18].   See infra Part VII.
       [19].   See infra Part VII.
       [20].   See infra Part VII.
       [21].   See infra Part VII.
       [22].   See infra Part VII.
       [23].   See infra Part I.
       [24].   See infra Part III.
       [25].   See infra Part IV.
       [26].   See infra Part V.
       [27].   See infra Part VI.
       [28].   See infra Part VII.
       [29].   See infra Part VII.
       [30].   See Marcia M. Boumil et al., Prescription Data Mining, Medical Privacy and the First Amendment: The U.S. Supreme Court in Sorrell v. IMS Health Inc., 21 Annals Health L. 447, 449–51 (2012) (describing the practices of data mining and detailing).
       [31].   See, e.g., Brief for the United States as Amicus Curiae Supporting Petitioners at 4–5, Sorrell v. IMS Health, Inc., 131 S. Ct. 2653 (2011) (No. 10-779) (stating that Vermont requires each pharmacy to maintain a “patient record system” that records the patient’s name, address, telephone number, age or date of birth, gender, name and strength of each drug prescribed, quantity, date received, prescription number, and name of the prescriber).
       [32].   IMS Health Inc. v. Sorrell, 630 F.3d 263, 267 (2d Cir. 2010) (“When filling prescriptions, pharmacies in Vermont collect information including the prescriber’s name and address, the name, dosage, and quantity of the drug, the date and place the prescription is filled, and the patient’s age and gender.”), aff’d, 131 S. Ct. 2653 (2011); IMS Health Inc. v. Ayotte, 550 F.3d 42, 45 (1st Cir. 2008) (describing the “potpourri” of prescription information retained by pharmacies, including “the name of the patient, the identity of the prescribing physician, the drug, its dosage, and the quantity dispensed”), abrogated by Sorrell, 131 S. Ct. 2653.
       [33].   See, e.g., N.Y. Educ. Law § 6810(5) (McKinney 2010) (“Records of all prescriptions filled or refilled shall be maintained for a period of at least five years and upon request made available for inspection and copying by a representative of the department.  Such records shall indicate date of filling or refilling, doctor’s name, patient’s name and address and the name or initials of the pharmacist who prepared, compounded, or dispensed the prescription.  Records of prescriptions for controlled substances shall be maintained pursuant to requirements of article thirty-three of the public health law.”).
       [34].   See, e.g., Al Baker & Joseph Goldstein, Focus on Prescription Records Leads to Arrest in 4 Killings, N.Y. Times, June 23, 2011, at A18 (reporting arrests stemming from information derived from prescription records: “A prosecutor in the Office of the Special Narcotics Prosecutor for New York City re-examined prescription records that the office had in its possession, another law enforcement official said.  Those records are part of continuing long-term investigations into prescription drug diversion, the official said”); see also Questions and Answers for Practitioners Regarding the New Official Prescription Program, N.Y. St. Dep’t Health, http://www.health.ny.gov/professionals/narcotic/official_prescription_program/questions_and_answers_for_practitioners.htm (last visited Aug. 28, 2012) (discussing section 21 of the New York Public Health Law, requiring prescriptions written in New York to be issued on official New York State prescription forms, to “combat the growing problem of prescription fraud.  Official prescriptions contain security features specifically designed to prevent alterations and forgeries that divert drugs for sale on the black market.  Some of these contaminated drugs end up in patients’ medicine cabinets.  By preventing fraudulent claims, the law will also save New York’s Medicaid program and private insurers many millions of dollars every year”).
       [35].   IMS and Verispan were plaintiffs in the Vermont, Maine, and New Hampshire data-mining cases.  See Sorrell, 630 F.3d at 263; IMS Health Inc. v. Mills, 616 F.3d 7 (1st Cir. 2010), vacated, IMS Health Inc. v. Schneider, 131 S. Ct. 3091 (2011); Ayotte, 550 F.3d at 42.
       [36].   Data miners have been described as “prescription drug information intermediaries that mine [purchase and process] specialized data.” Mills, 616 F.3d at 15–16.
       [37].   Sorrell, 630 F.3d at 267 (“The PI [prescriber-identifiable] data sold by the data-mining appellants is stripped of patient information, to protect patient privacy.”); Mills, 616 F.3d at 16 (stating that pharmacies’ computer software collects prescription data, encrypts the patient identifiers so that patients cannot be identified by name, and sends the information to the data miners who have purchased the information); Ayotte, 550 F.3d at 45 (stating that patients’ names are encrypted, “effectively eliminating the ability to match particular prescriptions with particular patients”).
       [38].   Ayotte, 550 F.3d at 45 (stating that “[t]he scope of the [data-mining] enterprise is mind-boggling” and noting that IMS and Verispan organize several billion prescriptions each year).
       [39].   Id.
       [40].   Mills, 616 F.3d at 16 (“They [data miners] assemble a complete picture of individual prescribers’ prescribing histories by cross-referencing prescriber names with publicly available databases, including the AMA’s database of medical doctors’ specialties.”); Ayotte, 550 F.3d at 45 (“[Data miners] group [the data] by prescriber, and cross-reference each physician’s prescribing history with physician-specific information available through the American Medical Association.”).
       [41].   Sorrell, 630 F.3d at 267 (“‘Detailing’ refers to visits by pharmaceutical representatives, called detailers, to individual physicians to provide information on specific prescription drugs.”); Ayotte, 550 F.3d at 46 (“Detailing involves tailored one-on-one visits by pharmaceutical sales representatives with physicians and their staffs.”).
       [42].   Sorrell, 630 F.3d at 267 (explaining that detailers provide information to physicians “including the use, side effects, and risks of drug interactions”); Mills, 616 F.3d at 14 (stating that detailers distribute “promotional materials and pamphlets about the different conditions their particular products can be used to treat”); Ayotte, 550 F.3d at 46 (“The detailer comes to the physician’s office armed with handouts and offers to educate the physician and his staff about the latest pharmacological developments . . . [thereby] holding out the promise of a convenient and efficient means for receiving practice-related updates.”).  The Maine drug prescription data-mining law defines “‘detailing’ as ‘one-to-one contact with a prescriber or employees or agents of a prescriber for the purpose of increasing or reinforcing the prescribing of a certain drug by the prescriber.’”  See Mills, 616 F.3d at 14 (citing Me. Rev. Stat. tit. 22, § 1711-E(1)(A-2) (2005)).
       [43].   Mills, 616 F.3d at 14 (“Prescriber-identifying data is a valuable tool in a detailer’s arsenal of sales techniques.”).
       [44].   Sorrell, 630 F.3d at 267 (“Pharmaceutical manufacturers use [the mined] data to identify audiences for their marketing efforts, to focus marketing messages for individual prescribers, [and] to direct scientific and safety messages to physicians most in need of that information.”); Mills, 616 F.3d at 14 (“With [data-mining reports], pharmaceutical manufacturers can pinpoint the prescribing habits of individual prescribers in a region and target prescribers who might be persuaded to switch brands or prescribe more of a detailer’s brand of products.”); Ayotte, 550 F.3d at 44–45 (explaining that data-mining reports enable “detailers . . . to target particular physicians and shape their sales pitches accordingly”).
       [45].   Ayotte, 550 F.3d at 47; see also Mills, 616 F.3d at 14 (“Detailers use prescriber-identifying data to [market their drugs] more effectively; every sales pitch can be tailored to what the detailer knows of the prescriber based on her prescribing history.”).
       [46].   Mills, 616 F.3d at 14 n.3.
       [47].   Id. at 14; see also Sorrell, 630 F.3d at 267 (“[P]harmaceutical industry spending on detailing has increased exponentially along with the rise of data mining.”).
       [48].   Ayotte, 550 F.3d at 46.
       [49].   Mills, 616 F.3d at 14 (“[Pharmaceutical manufacturers] have some 90,000 pharmaceutical sales representatives make weekly or monthly one-on-one visits to prescribers nationwide.”).  Data mining is lucrative for the miners as well.  IMS alone reported revenues of $1.75 billion in 2005.  Id. at 16.
       [50].   Id. at 14 (“[D]etailers distribute upwards of $1 million worth of free product samples per year.”); Ayotte, 550 F.3d at 46 (“[D]etailers typically distribute an array of small gifts to physicians and their staffs. . . . [I]n the year 2000, an estimated $1,000,000,000 in free drug samples flowed from detailers to physicians.”).
       [51].   Mills, 616 F.3d at 14 (“A single prescriber is visited by an average of twenty-eight detailers a week; an average of fourteen detailers a week call on a single specialist.”); Ayotte, 550 F.3d at 47 (“[T]he average primary care physician interacts with no fewer than twenty-eight detailers each week and the average specialist interacts with fourteen.”).
       [52].   Sorrell, 630 F.3d at 268 (“[W]hile a brand-name drug is not necessarily better than its generic version, the brand-name drug is typically more expensive.”).
       [53].   Ayotte, 550 F.3d at 46 (“[Detailing] is time-consuming and expensive work, not suited to the marketing of lower-priced bioequivalent generic drugs.”).  Generic drugs are described as “drugs that are pharmacologically indistinguishable from their brand-name counterparts save for potential differences in rates of absorption.”  Id.
       [54].   Id. (“[D]etailing is employed where a manufacturer seeks to encourage prescription of a patented brand-name drug as against generic drugs, or as against a competitor’s patented brand-name drug, or as a means of maintaining a physician’s brand loyalty after its patent on a brand-name drug has expired.”).
       [55].   See Boumil et al., supra note 30, at 450–53 (describing criticisms of data mining and detailing).
       [56].   See, e.g., Ayotte, 550 F.3d at 56–57 (discussing the effectiveness of data mining as a marketing tool by detailers).
       [57].   Id. at 56.
       [58].   Id.
       [59].   Id.  Indeed, promotional literature from IMS marketed its data reports for efficacy in detailing.  Id.
       [60].   IMS Health Inc. v. Sorrell, 630 F.3d 263, 270 (2d Cir. 2010) (internal quotation marks omitted), aff’d, 131 S. Ct. 2653 (2011); see also Ayotte, 550 F.3d at 57 (discussing a study finding that eleven percent of detailers’ statements to physicians were “demonstrably inaccurate” (citing Michael G. Ziegler et al., The Accuracy of Drug Information from Pharmaceutical Sales Representatives, 273 J. Am. Med. Ass’n 1296, 1297 (1995))).
       [61].   Sorrell, 630 F.3d at 270 (internal quotation marks omitted).
       [62].   Id.
       [63].   Ayotte, 550 F.3d at 47.
       [64].   Id. at 56 (stating that the “common sense conclusion[]” is that “detailing substantially increases physicians’ rates of prescribing brand-name drugs”).
       [65].   Id.
       [66].   Id. at 57–58.
       [67].   Id. at 58.
       [68].   IMS Health Inc. v. Mills, 616 F.3d 7, 15 (1st Cir. 2010) (quoting Robert A. Musacchio & Robert J. Hunkler, More Than a Game of Keep-Away, Pharmaceutical Executive, May 2006, at 150) (“[P]hysicians ‘complain bitterly’ about detailers ‘who wave data in their faces’ and challenge them with their own prescribing stories when they fail to prescribe more of the product the detailer has been advertising.”), vacated, IMS Health, Inc. v. Schneider, 131 S. Ct. 3091 (2011); see also Boumil et al., supra note 30, at 452–53 (describing doctors’ dissatisfaction with detailing).
       [69].   Mills, 616 F.3d at 15.
       [70].   See generally Boumil et al., supra note 30, at 453–57 (describing states’ legislative responses to pharmacy data mining).
       [71].   Ayotte, 550 F.3d at 47.
       [72].   N.H. Rev. Stat. Ann. § 318:47-f (2011); Boumil et al., supra note 30, at 453 (explaining New Hampshire was the first state to enact legislation to limit the use of prescription information for commercial or marketing purposes, followed closely by Vermont and Maine).
       [73].   In relevant part, the statute provides:

Records relative to prescription information containing patient-identifiable and prescriber-identifiable data shall not be licensed, transferred, used, or sold by any pharmacy benefits manager, insurance company, electronic transmission intermediary, retail, mail order, or Internet pharmacy or other similar entity, for any commercial purpose, except for the limited purposes of pharmacy reimbursement; formulary compliance; care management; utilization review by a health care provider, the patient’s insurance provider or the agent of either; health care research; or as otherwise provided by law.  Commercial purpose includes, but is not limited to, advertising, marketing, promotion, or any activity that could be used to influence sales or market share of a pharmaceutical product, influence or evaluate the prescribing behavior of an individual health care professional, or evaluate the effectiveness of a professional pharmaceutical detailing sales force.

§ 318:47-f.
       [74].   Ayotte, 550 F.3d at 47 (quoting § 318:47-f).
       [75].   Id.
       [76].   Id.
       [77].   Vt. Stat. Ann. tit. 18, § 4631(a) (2011); IMS Health Inc. v. Sorrell, 630 F.3d 263, 269 (2d Cir. 2010), aff’d, 131 S. Ct. 2653 (2011).
       [78].   Sorrell, 630 F.3d at 269 (quoting tit. 18, § 4631(a) (2011)).
       [79].   Id.; see Boumil et al., supra note 30, at 455–56 (explaining that the Vermont “opt-in” approach differed from approaches used by other states).  The statute reads in relevant part as follows:

A health insurer, a self-insured employer, an electronic transmission intermediary, a pharmacy, or other similar entity shall not sell, license, or exchange for value regulated records containing prescriber-identifiable information, nor permit the use of regulated records containing prescriber-identifiable information for marketing or promoting a prescription drug, unless the prescriber consents as provided in subsection (c) of this section.  Pharmaceutical manufacturers and pharmaceutical marketers shall not use prescriber-identifiable information for marketing or promoting a prescription drug unless the prescriber consents as provided in subsection (c) of this section.

tit. 18, § 4631(d).
       [80].   Sorrell, 630 F.3d at 269–70.
       [81].   Id. at 270 (quoting tit. 18, § 4631(b)(5)) (“The law defines ‘marketing’ to include ‘advertising, promotion, or any activity that is intended to be used or is used to influence sales or the market share of a prescription drug, influence or evaluate the prescribing behavior of an individual health care professional to promote a prescription drug, market prescription drugs to patients, or to evaluate the effectiveness of a professional pharmaceutical detailing sales force.’”).
       [82].   Id. at 270 (citing tit. 18, § 4631(e)(1)–(7)) (“The statute expressly permits the sale, transfer, or use of PI [prescriber-identifiable] data for multiple other purposes, including the limited purposes of pharmacy reimbursement; prescription drug formulary compliance; patient care management; utilization review by a health care professional, the patient’s health insurer, or the agent of either; health care research; dispensing prescription medications; the transmission of prescription data from prescriber to pharmacy; care management; educational communications provided to a patient, including treatment options, recall or safety notices, or clinical trials; and for certain law enforcement purposes as otherwise authorized by law.”).
       [83].   Id. at 271 n.3 (citing Vt. Stat. Ann. tit. 33, §§ 2004, 2466a (2011)).
       [84].   IMS Health Inc. v. Mills, 616 F.3d 7, 17 (1st Cir. 2010) (citing Me. Rev. Stat. tit. 22, § 1711-E(1-A) (2010)) (internal quotation marks omitted), vacated, IMS Health Inc. v. Schneider, 131 S. Ct. 3091 (2011).
       [85].   The statute defines marketing to include “‘advertising, publicizing, promoting or selling a prescription drug;’ ‘activities undertaken for the purpose of influencing the market share of a prescription drug or the prescribing patterns of a prescriber, a detailing visit or a personal appearance;’ ‘[a]ctivities undertaken to evaluate or improve the effectiveness of a professional detailing sales force;’ or ‘[a] brochure, media advertisement or announcement, poster or free sample of a prescription drug.’”  Id. at 16 n.6 (quoting tit. 22, § 1711-E(1)(F-1)).
       [86].   Id. at 16 (citing tit. 22, § 1711-E(2-A)) (“[A] carrier, pharmacy or prescription drug information intermediary . . . may not license, use, sell, transfer, or exchange for value, for any marketing purpose, prescription drug information that identifies a prescriber who has filed for confidentiality protection.”).
       [87].   Id. at 16 n.6 (quoting tit. 22, § 1711-E(1)(F-1)) (“‘Marketing’ does not include pharmacy reimbursement, formulary compliance, pharmacy file transfers in response to a patient request or as a result of the sale or purchase of a pharmacy, patient care management, utilization review by a health care provider or agent of a health care provider or the patient’s health plan or an agent of the patient’s health plan, and health care research.”).
       [88].   Tom Ramstack, Drug Companies Seek Supreme Court Permission for “Data Mining,” GantDaily.com (Apr. 26, 2011, 11:41 AM), http://gantdaily.com/2011/04/26/drug-companies-seek-supreme-court-permission-for-data-mining (“Twenty-five states are considering similar laws[.]”); James Vicini, Supreme Court Strikes Down State Drug Data-Mining Law, Reuters (June 23, 2011, 1:48 PM), http://www.reuters.com/article/2011/06/23/us-usa-healthcare-privacy-idUSTRE75M3T720110623 (“[S]imilar measures have been proposed in about 25 states in the last three years[.]”).
       [89].   The data miners include IMS, Verispan, and Source Healthcare Analytics, Inc.  See, e.g., IMS Health Inc. v. Sorrell, 630 F.3d 263, 269 (2d Cir. 2010), aff’d, 131 S. Ct. 2653 (2011).
       [90].   The association was Pharmaceutical Research and Manufacturers of America.  See, e.g., id.
       [91].   See IMS Health Inc. v. Sorrell, 631 F. Supp. 2d 434, 440 (D. Vt. 2009), rev’d, 630 F.3d 263 (2d Cir. 2010); IMS Health Corp. v. Rowe, No. CV-07-127-B-W, 2007 U.S. Dist. LEXIS 94268, at *27 (D. Me. Dec. 21, 2007), rev’d, IMS Health Inc. v. Mills, 616 F.3d 7 (1st Cir. 2010), vacated, IMS Health, Inc. v. Schneider, 131 S. Ct. 3091 (2011); IMS Health Inc. v. Ayotte, 490 F. Supp. 2d 163, 174 (D.N.H. 2007), rev’d and vacated, 550 F.3d 42 (1st Cir. 2008), abrogated by Sorrell, 131 S. Ct. 2653.
       [92].   See Sorrell, 630 F.3d at 266; Mills, 616 F.3d at 13; Ayotte, 550 F.3d at 47–48.
       [93].   Ayotte, 550 F.3d at 42.
       [94].   Id. at 45.
       [95].   Id.
       [96].   The First Circuit described the Central Hudson test as follows:

Under Central Hudson—so long as the speech in question concerns an otherwise lawful activity and is not misleading—statutory regulation of that speech is constitutionally permissible only if the statute is enacted in the service of a substantial governmental interest, directly advances that interest, and restricts speech no more than is necessary to further that interest.

Id. at 55 (citing Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 566 (1980)).
       [97].   Id. (“Fiscal problems have caused entire civilizations to crumble, so cost containment is most assuredly a substantial governmental interest.”).
       [98].   Id. at 56–57 (discussing evidence showing that “prescribing histories made detailing more efficacious”).
       [99].   Id. at 56 (finding that it was a “common-sense conclusion[]” that detailing increases the prescriptions of brand-name drugs).
      [100].   Id. at 58 (“[W]hile a state legislature does not have unfettered discretion ‘to suppress truthful, nonmisleading information for paternalistic purposes’ . . . there is in this area ‘some room for the exercise of legislative judgment.’”) (internal quotation marks omitted) (citation omitted).
     [101].   Id. at 60–61 (ruling that the voidness claim “need not detain us,” as it was “sufficiently clear to withstand the plaintiffs’ vagueness challenge”).
     [102].   Id. at 64 (ruling that the plaintiffs’ Commerce Clause argument was unavailing, as the court was “confident that the New Hampshire Supreme Court would interpret the Prescription Information Law to affect only domestic transactions”).
     [103].   616 F.3d 7 (1st Cir. 2010), vacated, IMS Health Inc. v. Schneider, 131 S. Ct. 3091 (2011).
     [104].   See id. at 13.
     [105].   Id. at 18–19 (“Plaintiffs’ [First Amendment] claims fail for the same reasons we rejected their nearly identical First Amendment challenge to New Hampshire’s similar statute in Ayotte. . . . Even assuming arguendo that the Maine law restricts protected commercial speech and not conduct, we hold that it directly advances the substantial purpose of protecting opted-in prescribers from having their identifying data used in unwanted solicitations by detailers, and thus Maine’s interests in lowering health care costs.”).
     [106].   Id. at 23 (“Even if there were possible ambiguity in [the statute’s] terms, the law is still not void for vagueness . . . [as it] surely provides enough of a benchmark to satisfy due process.”).
     [107].   Id. at 24–25 (“[T]he statute applies to plaintiffs’ out-of-state use or sale of opted-in Maine prescribers’ identifying data and that the statute does so constitutionally. . . . Plaintiffs have not shown any disproportionate burden on interstate commerce, and the law creates substantial in-state benefits for those Maine prescribers who have affirmatively asked Maine to protect their identifying data and for Maine in its efforts to lower health care costs.”).
     [108].   630 F.3d 263 (2d Cir. 2010), aff’d, 131 S. Ct. 2653 (2011).
      [109].   The Second Circuit described the Central Hudson test as follows:

[T]he government may regulate commercial speech when (1) “the communication is neither misleading nor related to unlawful activity;” (2) the government “assert[s] a substantial interest to be achieved” by the regulation; (3) the restriction “must directly advance the state interest;” and finally (4) “if the governmental interest could be served as well by a more limited restriction on commercial speech, the excessive restrictions cannot survive.”

Id. at 275 (quoting Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 564 (1980)).
     [110].   Id. at 276 (“[W]e agree with the district court that Vermont does have a substantial interest in both lowering health care costs and protecting public health.  However, the state’s asserted interest in ‘medical privacy’ is too speculative to satisfy the second prong of Central Hudson.”).
     [111].   Id. at 277 (“The Vermont statute cannot be said to advance the state’s interests in public health and reducing costs in a direct and material way.”).
     [112].   Id.
     [113].   Id. at 280.
     [114].   Id.
     [115].   Id. at 282.
     [116].   131 S. Ct. 2653 (2011).
      [117].   Id. at 2658–59.  Justice Kennedy delivered the opinion of the Court, joined by Chief Justice Roberts and Justices Scalia, Thomas, Alito, and Sotomayor.  Justice Breyer filed a dissenting opinion, in which Justices Ginsburg and Kagan joined.
     [118].   Id. at 2659.
     [119].   Id. (“Vermont argues that its prohibitions safeguard medical privacy and diminish the likelihood that marketing will lead to prescription decisions not in the best interests of patients or the State.  It can be assumed that these interests are significant.”).  The Court noted, however, that, at oral argument, the State declined to affirm that its purpose in enacting the law was to discourage detailing and influence drug prescribing.  Id. at 2670.  The Court concluded that “[t]he State’s reluctance to embrace its own legislature’s rationale reflects the vulnerability of its position.”  Id.  Nevertheless, the Court held that “[t]he text of § 4631(d), associated legislative findings, and the record developed in the District Court establish that Vermont enacted its law” to inhibit drug marketing schemes that increase the prescriptions for expensive brand-name drugs.  Id. at 2672.
     [120].   Id. at 2663 (“On its face, Vermont’s law enacts content- and speaker-based restrictions on the sale, disclosure, and use of prescriber-identifying information.”).
     [121].   Id. at 2659 (“Vermont’s statute must be subjected to heightened judicial scrutiny.  The law cannot satisfy that standard.”).
     [122].   Id. at 2656 (“The statute thus disfavors marketing, i.e., speech with a particular content.”); see also id. at 2668 (“Under Vermont’s law, pharmacies may share prescriber-identifying information with anyone for any reason save one: They must not allow the information to be used for marketing.”).
     [123].   Id. at 2663 (“[T]he statute disfavors specific speakers, namely pharmaceutical manufacturers. . . . Detailers are . . . barred from using the information for marketing, even though the information may be used by a wide range of other speakers.”).
     [124].   Id. at 2664 (“Act 80 [Vermont’s data-mining law] is designed to impose a specific, content-based burden on protected expression.  It follows that heightened judicial scrutiny is warranted.” (citation omitted)).
     [125].   Id. at 2666 (“The State also contends that heightened judicial scrutiny is unwarranted in this case because sales, transfer, and use of prescriber-identifying information are conduct, not speech.”).
     [126].   Id. at 2667.
     [127].   Id. at 2668 (citation omitted).
     [128].   See id. at 2668–72.
     [129].   Id. at 2668 (“The explicit structure of the statute allows the information to be studied and used by all but a narrow class of disfavored speakers.”).
     [130].   Id. at 2669.
     [131].   Id. at 2663 (“[I]t appears that Vermont could supply academic organizations with prescriber-identifying information to use in countering the messages of brand-name pharmaceutical manufacturers and in promoting the prescription of generic drugs.”); see also id. at 2660–61 (discussing the counter-detailing provisions in the Vermont law).
     [132].   See id. at 2670–71.
     [133].   Id. at 2669 (“Physicians can, and often do, simply decline to meet with detailers, including detailers who use prescriber-identifying information.”).
     [134].   Id. at 2670.
     [135].   Id.
     [136].   Id. at 2667–68 (citing, inter alia, Cent. Hudson Gas & Elec. Corp. v. Pub. Serv. Comm’n, 447 U.S. 557, 566 (1980)) (“[T]he State must show at least that the statute directly advances a substantial governmental interest and that the measure is drawn to achieve that interest.”).
     [137].   Id. at 2670.
     [138].   Id. at 2671 (characterizing physicians as “‘sophisticated and experienced’ consumers” (citation omitted)).
     [139].   Id. at 2670–71 (“The State seeks to achieve its policy objectives through the indirect means of restraining certain speech by certain speakers. . . . [T]he ‘fear that people would make bad decisions if given truthful information’ cannot justify content-based burdens on speech . . . when the audience, in this case prescribing physicians, consists of ‘sophisticated and experienced’ consumers.” (citations omitted)).
     [140].   Id. at 2672 (“[T]he State cannot engage in content-based discrimination to advance its own side of a debate.”).
     [141].   Id. (“If Vermont’s statute provided that prescriber-identifying information could not be sold or disclosed except in narrow circumstances then the State might have a stronger position.”); see also id. at 2668 (“[T]he State might have advanced its asserted privacy interest by allowing the information’s sale or disclosure in only a few narrow and well-justified circumstances.” (citations omitted)).
     [142].   Id. at 2672 (“[T]he State itself can use the information to counter the speech it seeks to suppress.”).
     [143].   Id.
     [144].   Id.
     [145].   IMS Health, Inc. v. Schneider, 131 S. Ct. 3091, 3091 (2011).
     [146].   IMS Health Inc. v. Ayotte, No. 06-cv-280-PB, 2011 U.S. Dist. LEXIS 116595, at *2–3 (D.N.H. Oct. 7, 2011) (“The parties agree that the Supreme Court’s recent decision in Sorrell v. IMS Health, Inc., 131 S. Ct. 2653, 180 L. Ed. 2d 544 (2011) requires ‘invalidation of N.H. Rev. Stat. Ann. §§ 318:47-f and 318-B:12 to the extent that they prohibit the transfer, use, sale, or licensing of prescriber-identifiable data.’  Accordingly, they have asked me to reinstate the court’s May 7, 2007 judgment for the plaintiffs.  I have reviewed Sorrell and agree that it requires the invalidation of the above-referenced statutes because they improperly restrict speech protected by the First Amendment.”).
     [147].   45 C.F.R. pts. 160, 164 (2010).
     [148].   See, e.g., 45 C.F.R. § 164.502(a) (2011) (regulating covered entities’ use and disclosure of protected health information); see also id. § 160.103 (defining “protected health information” to mean “individually identifiable health information . . . that is: (i) Transmitted by electronic media; (ii) Maintained in electronic media; or (iii) Transmitted or maintained in any other form or medium”).
     [149].   See 42 U.S.C. §§ 1320d(5), 1320d-1(a) (2006) (applying the Act to most health plans, healthcare providers, and other covered entities).  The Rule’s definition of a “covered entity” includes, inter alia, “[a] health plan” and “[a] health care provider who transmits any health information in electronic form in connection with a transaction covered by this subchapter.”  45 C.F.R. § 160.103.
     [150].   45 C.F.R. § 160.103.
     [151].   Id. § 164.502(a).
     [152].   Id. § 160.103.
     [153].   Id.
     [154].   See 42 U.S.C. § 1320d(4)(B) (defining “health information” as any information that “relates to the past, present, or future physical or mental health or condition of an individual, the provision of health care to an individual, or the past, present, or future payment for the provision of health care to an individual”).
     [155].   See id. § 1320d(6) (defining “individually identifiable health information” as “any information, including demographic information collected from an individual, that . . . relates to the past, present, or future physical or mental health or condition of an individual, the provision of health care to an individual, or the past, present, or future payment for the provision of health care to an individual, and . . . (i) identifies the individual; or (ii) with respect to which there is a reasonable basis to believe that the information can be used to identify the individual”); see also 45 C.F.R. § 160.103 (defining “[i]ndividually identifiable health information” as “information that is a subset of health information, including demographic information collected from an individual, and: (1) Is created or received by a health care provider, health plan, employer, or health care clearinghouse; and (2) Relates to the past, present, or future physical or mental health or condition of an individual; the provision of health care to an individual; or the past, present, or future payment for the provision of health care to an individual; and (i) That identifies the individual; or (ii) With respect to which there is a reasonable basis to believe the information can be used to identify the individual”).
     [156].   See 45 C.F.R. § 164.514(b)(2)(i) (listing the elements of health information that must be removed to de-identify the information).
     [157].   Id. § 164.514(b)(2)(i)(R) (requiring the removal of “any other unique identifying number, characteristic, or code” for de-identification of protected health information).
     [158].   Id. § 164.502(a).
     [159].   An “individual” is a “person who is the subject of protected health information.”  Id. § 160.103.
     [160].   Id. § 164.502(a)(2)(i).
     [161].   Id. § 164.502(a)(2)(ii).
     [162].   Id. § 164.502(a)(1) (providing a number of permitted uses and disclosures under HIPAA).
      [163].   In addition to the two broad categories of permissive uses described herein, there are also several minor categories of permissive uses.  Covered entities are permitted to disclose protected health information to the individual (who is the subject of the information) even when the individual does not specifically request disclosure.  See id. § 164.502(a)(1)(i).  Covered entities are permitted to inadvertently disclose protected health information when the disclosure occurs during another required or permitted use or disclosure (an “incident to” disclosure).  See id. § 164.502(a)(1)(iii).  Finally, once covered entities have obtained the agreement of the individual, they are permitted to use and disclose protected health information to list the individual as a patient in a healthcare facility directory, to inform the individual’s visitors and members of the clergy that the individual is a patient in the facility, and to disclose protected health information to family and friends of the individual who are involved in the individual’s care or payment.  See id. §§ 164.502(a)(1)(v), 164.510.
     [164].   Id. § 164.502(a)(1)(ii).
     [165].   Id. § 164.501.
     [166].   Id.
     [167].   Id.
     [168].   See id. § 164.512.
     [169].   Id. (permitting covered entities to disclose protected health information “without the written authorization of the individual . . . or the opportunity for the individual to agree or object” for disclosures that are (a) required by law, (b) for public health activities, (c) about victims of abuse, neglect, or domestic violence, (d) for health oversight activities, (e) for judicial and administrative proceedings, (f) for law enforcement purposes, (g) about decedents, (h) for cadaveric organ, eye, or tissue donation purposes, (i) for research purposes, (j) to avert a serious threat to health or safety, (k) for specialized government functions, and (l) for workers’ compensation).
     [170].   Id. § 164.502(a).
     [171].   Id. § 164.502(a)(1)(iv) (allowing covered entities to disclose protected health information “[p]ursuant to and in compliance with a valid authorization”).  The subsection of the Privacy Rule that describes the twelve permitted public interest activities specifically provides that “[a] covered entity may use or disclose protected health information without the written authorization of the individual.”  Id. § 164.512.
     [172].   See, e.g., Prot. & Advocacy Sys., Inc. v. Freudenthal, 412 F. Supp. 2d 1211, 1220 (D. Wyo. 2006) (“The primary purpose of HIPAA’s Privacy Rule is to safeguard the privacy of medical protected health information.”).
     [173].   45 C.F.R. § 164.524(a)(1).
     [174].   Id. § 164.524(a)(3).
     [175].   Id. § 164.526(a)(1).
     [176].   Id. § 164.526(d)(2).
     [177].   Id. § 164.528(a)(1) (providing individuals a right “to receive an accounting of disclosures of protected health information made by a covered entity in the six years prior to the date on which the accounting is requested”).
     [178].   Id. § 164.522(a)(1)(i)(A)–(B) (granting individuals a right to request that the covered entity restrict uses or disclosures related to “treatment, payment, or health care operations” or disclosures to which individuals have a right to agree or object under 45 C.F.R. § 164.510(b)).  Covered entities need not comply with all of these requests.  See id. § 164.522(a)(1)(ii) (“A covered entity is not required to agree to a restriction.”).  However, where the request relates to health care operations and not treatment, and the protected health information pertains solely to a health care item or service for which the provider has already been fully reimbursed, then the covered entity must comply with the request.  See 42 U.S.C. § 17935(a) (Supp. IV 2010).
     [179].   45 C.F.R. § 164.522(b)(1).
     [180].   Id. § 164.522(b)(1)(i) (requiring providers to “accommodate reasonable requests”).
     [181].   Id. § 164.522(b)(1)(ii).
     [182].   Id. § 164.510(a)–(b) (giving an individual the right to agree or object before a covered entity lists the individual’s name in a facility directory, gives information to the individual’s visitors or members of the clergy, or discloses information to friends or family members who are concerned with the individual’s treatment or payment).
     [183].   Id. § 164.502(a)(1)(iv).
     [184].   Id. § 164.520(a)(1).
     [185].   IMS Health Inc. v. Sorrell, 630 F.3d 263, 267 (2d Cir. 2010) (“The [prescription] data sold by the data-mining appellants is stripped of patient information, to protect patient privacy.”), aff’d, 131 S. Ct. 2653 (2011).
     [186].   Id.; see also, e.g., IMS Health Inc. v. Mills, 616 F.3d 7, 16 (1st Cir. 2010) (“The [pharmacies’] software encrypts patient-identifying data so that plaintiffs cannot identify individual patients by name . . . .”), vacated, IMS Health Inc. v. Schneider, 131 S. Ct. 3091 (2011); IMS Health Inc. v. Ayotte, 550 F.3d 42, 45 (1st Cir. 2008) (“To protect patient privacy, prescribees’ names are encrypted, effectively eliminating the ability to match particular prescriptions with particular patients.”), abrogated by Sorrell v. IMS Health Inc., 131 S. Ct. 2653 (2011).
     [187].   45 C.F.R. § 164.502(d)(2) (“The requirements of this subpart do not apply to information that has been de-identified in accordance with the applicable requirements of § 164.514 . . . .”).
     [188].   See id. § 164.502(d)(2) (“Health information that meets the standard and implementation specifications for de-identification under § 164.514(a) and (b) is considered not to be individually identifiable health information, i.e., de-identified.”); id. § 164.514(b)(2)(i)(A)–(R) (stating the identifiers that must be removed from protected health information for de-identification).
     [189].   Id. § 164.502(d)(1) (“A covered entity may use protected health information to create information that is not individually identifiable health information . . . whether or not the de-identified information is to be used by the covered entity.”).
     [190].   Id. § 164.502(a)(1)(ii) (listing “health care operations” as a permitted use).
     [191].   Id. § 164.501 (“Health care operations means any of the following activities of the covered entity to the extent that the activities are related to covered functions: . . . (6) Business management and general administrative activities of the entity, including, but not limited to: . . . (v) Consistent with the applicable requirements of § 164.514, creating de-identified health information or a limited data set, and fundraising for the benefit of the covered entity.”).  The term “covered function” is not explicitly defined in HIPAA, but presumably refers to the treatment, payment, and health care operations functions of covered entities.  See id. § 164.502(a)(1)(ii) (permitting a covered entity to use or disclose protected health information for “treatment, payment, or health care operations”).
     [192].   See, e.g., IMS Health Inc. v. Sorrell, 630 F.3d 263, 267 (2d Cir. 2010) (“These data mining companies . . . aggregate the data to reveal individual physician prescribing patterns and sell it . . . primarily to pharmaceutical manufacturers.”), aff’d, 131 S. Ct. 2653 (2011).
     [193].   See, e.g., Sorrell, 131 S. Ct. at 2660 (“Detailers, who represent the [drug] manufacturers . . . use the [data-mining] reports to refine their marketing tactics and increase sales.”).
     [194].   See generally 45 C.F.R. § 164.508(a)(3).
     [195].   Id. § 164.501(1).
     [196].   See id. § 164.501(1)(i)–(iii) (describing exclusions from the definition of marketing).
     [197].   See id. § 164.501(1)(i) (defining “marketing” as “a communication about a product or service that encourages recipients of the communication to purchase or use the product or service,” excluding communications made for the purpose of describing an individual’s benefits in a health plan or relating to the individual’s treatment or case management); id. § 164.501(2) (defining “marketing” to include “[a]n arrangement between a covered entity and any other entity whereby the covered entity discloses protected health information to the other entity, in exchange for direct or indirect remuneration, for the other entity or its affiliate to make a communication about its own product or service that encourages recipients of the communication to purchase or use that product or service”).
     [198].   See id. § 164.508(a)(3).
     [199].   Subpart E of the Privacy Rule encompasses 45 C.F.R. §§ 164.500–164.534.  For Subpart E’s table of contents, see id. § 164.102.
     [200].   The transition provisions in 45 C.F.R. § 164.532 refer to the effect of authorizations and contracts that existed prior to the effective date of the Privacy Rule.  For example, authorizations executed prior to HIPAA are deemed to be effective post-HIPAA as long as the authorization specifically permits the use or disclosure and there is no agreement between the covered entity and the individual restricting the use or disclosure.  See id. § 164.532 (“Effect of prior authorization for purposes other than research.  Notwithstanding any provisions in § 164.508, a covered entity may use or disclose protected health information that it created or received prior to the applicable compliance date of this subpart pursuant to an authorization or other express legal permission obtained from an individual prior to the applicable compliance date of this subpart, provided that the authorization or other express legal permission specifically permits such use or disclosure and there is no agreed-to restriction in accordance with § 164.522(a).”).
     [201].   Id. § 164.508(a)(3)(i).
     [202].   Id. § 164.508(a)(3)(i)(A).
     [203].   Id. § 164.508(a)(3)(i)(B).
     [204].   See generally id. § 164.508(a)(3)(i).
     [205].   See id.
     [206].   Id. § 160.103.  It should be noted that not every provider is a covered entity under HIPAA.  The Privacy Rule provides that a covered entity includes only those providers “who transmit[] any health information in electronic form in connection with a transaction covered by this subchapter.”  Id.  However, because virtually all pharmacies currently send health care claims and other covered transactions electronically, they qualify as covered entities under HIPAA.
     [207].   See id.
     [208].   IMS Health Inc. v. Sorrell, 630 F.3d 263, 267 (2d Cir. 2010) (“When filling prescriptions, pharmacies in Vermont collect information including the prescriber’s name and address, the name, dosage, and quantity of the drug, the date and place the prescription is filled, and the patient’s age and gender.”), aff’d, 131 S. Ct. 2653 (2011).
     [209].   See 45 C.F.R. § 164.508(a)(3)(i) (imposing requirements on uses and disclosures of protected health information “for marketing”).
     [210].   See, e.g., Sorrell, 630 F.3d at 267 (“Pharmacies sell this PI [prescriber-identifiable] data to the data mining appellants. . . . These data mining companies . . . aggregate the data to reveal individual physician prescribing patterns and sell it . . . primarily to pharmaceutical manufacturers.”).
     [211].   See, e.g., Sorrell, 131 S. Ct. at 2660 (“Detailers, who represent the [drug] manufacturers . . . use the [data-mining] reports to refine their marketing tactics and increase sales.”).
     [212].   45 C.F.R. § 164.501.  This marketing definition does not specify who must make the communication, or who must be the recipient of the communication.  Therefore, on its face, the definition does not require the covered entity making the use or disclosure to be either the communicator or the marketer, or that the recipient of the communication be the individual whose protected health information is being used or disclosed.  However, later additions to the Privacy Rule, enacted by Congress on February 17, 2009, appear to equate “recipient” with “individual.”  American Recovery and Reinvestment Act of 2009 (“ARRA”), Pub. L. No. 111–5, § 13406, 123 Stat. 115, 266–70 (2010).  In ARRA provisions relating to marketing, the law states that “the covered entity making such communication obtains from the recipient of the communication . . . a valid authorization . . . with respect to such communication.”  42 U.S.C. § 17936(a)(2)(B)(ii) (Supp. IV 2010).  In the context of the Privacy Rule, authorizations are obtained only from individuals. See 45 C.F.R. § 164.508(c)(1)(vi) (requiring an authorization to be signed by the “individual”).  Further, in proposed rules to implement ARRA, the HHS also appears to assume that the recipient of marketing communications is the individual.  See Modifications to the HIPAA Privacy, Security, and Enforcement Rules Under the Health Information Technology for Economic and Clinical Health Act, 75 Fed. Reg. 40,868, 40,884 (July 14, 2010) (to be codified at 45 C.F.R. pt. 160) (“The Privacy Rule requires covered entities to obtain a valid authorization from individuals before using or disclosing protected health information to market a product or service to them.” (emphasis added) (citation omitted)).  Nevertheless, HIPAA does not explicitly state that the recipient of a marketing communication must be the individual.  See 45 C.F.R. 
§ 164.501 (defining marketing as “mak[ing] a communication about a product or service that encourages recipients of the communication to purchase or use the product or service”); id. § 164.508(a)(3) (providing that “a covered entity must obtain an authorization for any use or disclosure of protected health information for marketing”).  Moreover, the disclosures of prescription information described in Sorrell ultimately resulted in sales of brand-name drugs to individuals whose privacy information may have been used to market the drugs.  Even if this were not the case, there is nothing unreasonable about reading the Privacy Rule precisely as it is written—requiring individuals to authorize any use of their protected health information to sell items or services, no matter the product, no matter the seller, and no matter the buyer.
     [213].   See, e.g., IMS Health Inc. v. Mills, 616 F.3d 7, 16 (1st Cir. 2010) (stating that pharmacies’ computer software collects prescription data, encrypts the patient identifiers so that patients cannot be identified by name, and sends the information to the data miners who have purchased the information), vacated, IMS Health Inc. v. Schneider, 131 S. Ct. 3091 (2011).
     [214].   See supra Part VI.B.
     [215].   See 45 C.F.R. § 164.502(d) (permitting a covered entity to use protected health information to create de-identified information, and providing that the Privacy Rule does not apply to de-identified information).
     [216].   See id. § 160.103 (defining “protected health information” to mean “individually identifiable health information”).
     [217].   See id. § 164.502(d)(2) (providing that the Privacy Rule does not apply to de-identified information).
     [218].   See id. § 164.508(a)(3)(i) (“[A] covered entity must obtain an authorization for any use or disclosure of protected health information for marketing . . . .”).
     [219].   See id. § 160.103 (broadly defining “use” of protected health information as “the sharing, employment, application, utilization, examination, or analysis of such information within an entity that maintains such information”).
     [220].   See id. § 164.502(d)(1) (permitting a covered entity to “use protected health information to create information that is not individually identifiable health information”).
     [221].   See, e.g., IMS Health Inc. v. Sorrell, 630 F.3d 263, 267 (2d Cir. 2010) (“Pharmacies sell this PI [prescriber-identifiable] data to the data mining appellants. . . . These data mining companies . . . aggregate the data to reveal individual physician prescribing patterns and sell it . . . primarily to pharmaceutical manufacturers.”), aff’d, 131 S. Ct. 2653 (2011).
     [222].   See 45 C.F.R. § 160.103 (defining “covered entity” to include “[a] health care provider who transmits any health information in electronic form in connection with a transaction covered by this subchapter”).  A “health care provider” is defined as “a provider of medical or health services . . . and any other person or organization who furnishes, bills, or is paid for health care in the normal course of business.”  Id.
     [223].   See id. § 164.508(a)(3)(i) (“[A] covered entity must obtain an authorization for any use or disclosure of protected health information for marketing . . . .”).
     [224].   Id. § 164.502(d)(1) (“A covered entity may use protected health information to create information that is not individually identifiable health information . . . whether or not the de-identified information is to be used by the covered entity.”).
     [225].   Id. § 164.508(a)(3)(i).
     [226].   See id.  There is no reason to believe that the HHS, the drafter of the Privacy Rule, meant anything by this “notwithstanding” language other than what the language unambiguously states.  The Agency used similar language in another provision of the Privacy Rule to require an authorization before any use or disclosure of psychotherapy notes, subject to limited exceptions.  See id. § 164.508(a)(2).  However, where the Agency intended a more limited impact of its use of the term “notwithstanding,” it clearly restricted its reach to particular provisions within the Privacy Rule.  See, e.g., id. § 164.502(g)(3)(ii) (“Notwithstanding the provisions of paragraph (g)(3)(i) of this section”); id. § 164.502(g)(5) (“Notwithstanding a State law or any requirement of this paragraph to the contrary”); id. § 164.532(b) (“Notwithstanding any provisions in § 164.508”); id. § 164.532(c) (“Notwithstanding any provisions in §§ 164.508 and 164.512(i)”).
     [227].   See id. § 164.502(d)(1) (“A covered entity may use protected health information to create information that is not individually identifiable health information or disclose protected health information only to a business associate for such purpose, whether or not the de-identified information is to be used by the covered entity.”).
     [228].   See id. § 164.508(a)(3)(i) (“Notwithstanding any provision of this subpart . . . a covered entity must obtain an authorization for any use or disclosure of protected health information for marketing . . . .”).
     [229].   See id.
     [230].   See IMS Health Inc. v. Ayotte, 550 F.3d 42, 45 (1st Cir. 2008) (stating that IMS and Verispan organize several billion prescriptions each year), abrogated by Sorrell v. IMS Health Inc., 131 S. Ct. 2653 (2011).
     [231].   See Modifications to the HIPAA Privacy, Security, and Enforcement Rules Under the Health Information Technology for Economic and Clinical Health Act, 75 Fed. Reg. 40,868, 40,907 (July 14, 2010) (to be codified at 45 C.F.R. pts. 160, 164) (opining on the effect of proposed HIPAA privacy rules that would expand the requirement of covered entities to obtain written authorizations prior to marketing disclosures and sales of protected health information: “Even if covered entities attempted to obtain authorizations in compliance with the proposed modifications, we believe most individuals would not authorize these types of disclosures.  It would not be worthwhile for covered entities to continue to attempt to obtain such authorizations, and as a result, we believe covered entities would simply discontinue making such disclosures.”).
     [232].   See id.
     [233].   See American Recovery and Reinvestment Act of 2009 (“ARRA”), Pub. L. No. 111–5, § 17935(d)(1), 123 Stat. 115 (2010) (adding the Privacy Rule restrictions on covered entities’ sale of protected health information and requiring covered entities to obtain an authorization from the affected individuals prior to selling their protected health information for any purpose).  Exceptions to the authorization requirement apply for activities such as public health activities, research, treatment, and healthcare operations.  See id. § 17935(d)(2)(A)–(G).  In addition, ARRA provides that communications by a covered entity encouraging the recipients to purchase or use a product or service may not be considered a health care operation, which would avoid the authorization requirement.  See id. § 17936(a)(1).  Rules proposed to implement ARRA underscore the Agency’s continuing concerns about covered entities’ use of protected health information for marketing purposes.  See Modifications to the HIPAA Privacy, Security, and Enforcement Rules Under the Health Information Technology for Economic and Clinical Health Act, 75 Fed. Reg. at 40,868.  The HHS declared: “We believe Congress intended with these provisions [marketing and sale] to curtail a covered entity’s ability to use the exceptions to the definition of ‘marketing’ in the Privacy Rule to send communications to the individual that were motivated more by commercial gain or other commercial purpose rather than for the purpose of the individual’s health care, despite the communication’s being about a health-related product or service.”  Id. at 40,884.  While ARRA restricts sales of protected health information, it does not prohibit sales of de-identified information; while it restricts marketing-related disclosures, it does not restrict marketing-related uses.  
Therefore, there is nothing in the text of ARRA that explicitly prohibits a covered entity from first de-identifying protected health information and then selling it to a third party for any purpose without obtaining authorizations from the affected individuals.  Nevertheless, ARRA leaves unaltered HIPAA’s preexisting marketing requirement that covered entities must obtain authorizations from individuals before engaging in any marketing-related use or disclosure of their protected health information.  See 45 C.F.R. § 164.508(a)(3)(i) (“[A] covered entity must obtain an authorization for any use or disclosure of protected health information for marketing . . . .”).
     [234].   No. 11-2428, 2012 U.S. Dist. LEXIS 19372 (E.D. Pa. Feb. 15, 2012).
     [235].   Id. at *1–2.
     [236].   Id. at *1.
     [237].   Id. at *12.
     [238].   Id. at *14.
     [239].   Id. at *17 (“Under the Privacy Rule, healthcare providers are permitted to ‘de-identify’ Protected Health Information.  Once information is de-identified, it is no longer considered Protected Health Information.”).
     [240].   Id. (“[F]ederal regulations permit the disclosure of protected Health Information under certain circumstances, including for ‘treatment, payment, or health care operations.’  The term ‘health care operations’ is defined to include ‘contacting of health care providers and patients with information about treatment alternatives.’”).
     [241].   See 45 C.F.R. § 164.508(a)(3)(i) (2011).
     [242].   Id.
     [243].   The defendants’ letters to the plaintiffs’ physicians suggesting drug prescription alternatives should be characterized as marketing rather than treatment.  The drug manufacturers paid the pharmacies for sending the letters.  Steinberg, 2012 U.S. Dist. LEXIS 19372, at *6.  While the manufacturers stood to benefit when the physicians prescribed the suggested alternative drugs, the pharmacies had no motivation to send the communications other than their remuneration from the manufacturers.  In fact, the American Recovery and Reinvestment Act of 2009 (“ARRA”), Pub. L. No. 111–5, § 17936(a)(1), 123 Stat. 115 (2010), provides that communications by a covered entity encouraging the recipients to purchase or use a product or service may not be considered a health care operation, thereby avoiding the authorization requirement.  The HHS, in proposed rules to implement the ARRA, declared its intent “to curtail a covered entity’s ability to use the exceptions to the definition of ‘marketing’ in the Privacy Rule to send communications to the individual that were motivated more by commercial gain . . . rather than for the purpose of the individual’s health care, despite the communication’s being about a health-related product or service.”  Modifications to the HIPAA Privacy, Security, and Enforcement Rules Under the Health Information Technology for Economic and Clinical Health Act, 75 Fed. Reg. 40,868, 40,884 (July 14, 2010) (to be codified at 45 C.F.R. pt. 160).
     [244].   Steinberg, 2012 U.S. Dist. LEXIS 19372, at *13.
     [245].   See discussion of the states’ data-mining laws supra Part III, and discussion of the marketing provisions of the Privacy Rule supra Part VI.B; see also Brief for the United States as Amicus Curiae Supporting Petitioners, supra note 31, at 33 (“There are a number of federal statutory and regulatory provisions that regulate the dissemination or use of information by private parties for various reasons, including to protect individual privacy. . . . For instance, the Health Insurance Portability and Accountability Act of 1996 (‘HIPAA’) and its implementing regulations limit the nonconsensual dissemination and use of patient-identifiable health information by health plans . . . and most health care providers.”).
     [246].   See supra Part VI.C.
     [247].   See supra Part V.
     [248].   See supra Part VI.C.
     [249].   See, e.g., Sorrell v. IMS Health Inc., 131 S. Ct. 2653, 2668 (2011) (“[T]he State contends that its law is necessary to protect medical privacy, including physician confidentiality, avoidance of harassment, and the integrity of the doctor-patient relationship . . . [and] improved public health and reduced healthcare costs.”); IMS Health Inc. v. Sorrell, 630 F.3d 263, 267 (2d Cir. 2010) (“The Vermont legislature passed Act 80 in 2007, intending to protect public health, to protect prescriber privacy, and to reduce health care costs.”), aff’d, 131 S. Ct. 2653 (2011).
     [250].   See, e.g., Prot. & Advocacy Sys. v. Freudenthal, 412 F. Supp. 2d 1211, 1220 (D. Wyo. 2006) (“The primary purpose of HIPAA’s Privacy Rule is to safeguard the privacy of medical protected health information.”); Brief for the United States as Amicus Curiae Supporting Petitioners, supra note 31, at 34 (“The governmental interest in protecting patient privacy is clearly a substantial one.”).
     [251].   Compare Me. Rev. Stat. Ann. tit. 22, § 1711-E (2009) (making it unlawful for a pharmacy to use, sell, or transfer prescription information where the prescriber had registered for confidentiality protection), and N.H. Rev. Stat. Ann. § 318:47-f (2006) (prohibiting pharmacies and insurance companies from selling or licensing prescription data for any commercial purpose), and Vt. Stat. Ann. tit. 18, § 4631 (2010) (prohibiting the sale or disclosure of pharmacy records for marketing purposes and prohibiting drug manufacturers from using the records for marketing unless the prescribers consented), with 45 C.F.R. § 164.508(a)(3)(i) (2011) (prohibiting covered entities from using or disclosing protected health information for marketing purposes without the individual’s authorization).
     [252].   Compare Me. Rev. Stat. Ann. tit. 22, § 1711-E(1)(F-1) (defining marketing as advertising, publicizing, promoting, or selling a prescription drug), and N.H. Rev. Stat. Ann. § 318:47-f (defining commercial purpose as advertising, marketing, or any activity that influences sales), and Vt. Stat. Ann. tit. 18, § 4631(b)(5) (defining marketing as advertising or any activity that influences the sale of a drug or influences prescribing behavior), with 45 C.F.R. § 164.501 (defining marketing as a communication that encourages the listener to purchase or use the item or service).
     [253].   Compare Me. Rev. Stat. Ann. tit. 22, § 1711-E(1)(F-1) (excluding a number of health-related activities from the definition of “marketing,” including pharmacy reimbursement, patient care management, utilization review by a healthcare provider, and healthcare research), and N.H. Rev. Stat. Ann. § 318:47-f (exempting from the marketing prohibition disclosures of prescription information for health-related purposes, such as pharmacy reimbursement, care management, utilization review by a healthcare provider, or healthcare research), and Vt. Stat. Ann. tit. 18, § 4631 (excluding from the definition of marketing certain health-related purposes, including pharmacy reimbursement, healthcare management, utilization review by a healthcare provider, and healthcare research), with 45 C.F.R. § 164.501 (exempting from the definition of marketing communications to describe the benefits in a health plan, uses and disclosures for treatment, and case management).
     [254].   See Brief for Respondent Pharmaceutical Research and Manufacturers of America at 48, Sorrell v. IMS Health Inc., 131 S. Ct. 2653 (2011) (No. 10-779) (“[T]he State did not defend its law below on the basis of patient privacy.”).
     [255].   IMS Health Inc. v. Mills, 616 F.3d 7, 15 (1st Cir. 2010) (“[P]hysicians ‘complain bitterly’ about detailers ‘who wave data in their faces’ and challenge them with their own prescribing histories when they fail to prescribe more of the product the detailer has been advertising.” (citations omitted)), vacated, IMS Health Inc. v. Schneider, 131 S. Ct. 3091 (2011).
     [256].   Sorrell, 131 S. Ct. at 2671.
     [257].   Id. at 2669.
     [258].   Id. at 2671.
     [259].   The Second Circuit in Sorrell found that the privacy of patients’ medical information was not at issue.  IMS Health Inc. v. Sorrell, 630 F.3d 263, 276 (2d Cir. 2010) (“[T]he state’s asserted interest in medical privacy is too speculative to qualify as a substantial state interest. . . . Vermont has not shown any effect on the integrity of the prescribing process or the trust patients have in their doctors from the use of PI [prescriber-identifiable] data in marketing.”), aff’d, 131 S. Ct. 2653 (2011).
     [260].   See 45 C.F.R. § 164.502(a) (2011) (regulating the uses and disclosures of protected health information by covered entities).
     [261].   See, e.g., Brief for Petitioners at 23, Sorrell, 131 S. Ct. 2653 (No. 10-779) (characterizing pharmacies’ prescription information as nonpublic, “particularly where the information has been produced involuntarily”); Reply Brief for Petitioners at 3, Sorrell, 131 S. Ct. 2653 (No. 10-779) (“Doctors and patients do not voluntarily provide prescriptions to pharmacies; by law, they must provide this sensitive information to obtain medicine.”).
     [262].   See, e.g., 45 C.F.R. § 164.502(a)(1)(ii) (permitting a covered entity to use or disclose protected health information for “treatment, payment, or health care operations”).  Treatment includes coordination of healthcare, managing healthcare, consultations among providers, and referrals.  Payment includes insurers’ collection of insurance premiums, providers’ obtaining reimbursement for providing healthcare, determining eligibility for coverage, adjudicating health benefit claims, risk adjusting, billing and collections, reviewing healthcare services to determine medical necessity, utilization review, and making disclosures to consumer reporting agencies.  Healthcare operations include quality assessment; reviewing the competence or qualifications of healthcare professionals; underwriting; conducting or arranging for medical review, legal services, and auditing, including fraud and abuse detection and compliance; business planning, business management and administrative activities; customer service; resolution of internal grievances; sale, transfer, merger, or consolidation of the covered entity with another entity; and fundraising.  See id. § 164.501.
     [263].   See id. § 164.502 (providing the permitted and required uses and disclosures of protected health information by covered entities).
     [264].   See id. § 164.502(a) (prohibiting covered entities from using or disclosing protected health information “except as permitted or required by [the Privacy Rule]”); id. § 164.502(a)(1)(iv) (allowing covered entities to disclose protected health information “[p]ursuant to and in compliance with a valid authorization”).
     [265].   See, e.g., id. § 164.510 (requiring a covered entity, prior to certain uses and disclosures of an individual’s protected health information, to inform the individual in advance of the use or disclosure and provide the individual an opportunity to agree, or to prohibit, or to restrict the use or disclosure).
     [266].   See Sorrell, 131 S. Ct. at 2670–71 (2011) (“[T]he ‘state’s own explanation of how [the data-mining law] advances its interests cannot be said to be direct.’  The State seeks to achieve its policy objectives through the indirect means of restraining certain speech by certain speakers—that is, by diminishing detailers’ ability to influence prescription decisions.  Those who seek to censor or burden free expression often assert that disfavored speech has adverse effects.  But the ‘fear that people would make bad decisions if given truthful information’ cannot justify content-based burdens on speech.” (citations omitted)).
     [267].   See id. at 2661 (discussing the impact of pharmacies’ sales of prescription information upon cost containment and the public health).
     [268].   See 45 C.F.R. § 164.502 (listing permitted uses and disclosures of protected health information by covered entities that do not require an authorization from the affected individuals).
     [269].   See id. (providing individuals with rights of access and rights to control certain uses and disclosures of their protected health information by covered entities); see also supra Part VI.A (explaining individuals’ rights of access and control over their protected health information under the Privacy Rule).
     [270].   Sorrell, 131 S. Ct. at 2669 (“Physicians can, and often do, simply decline to meet with detailers, including detailers who use prescriber-identifying information.”).
     [271].   Id. at 2660–61.  But see id. at 2681 (Breyer, J., dissenting) (noting that the education program funded by Vermont’s data-mining law “does not make use of prescriber-identifying data”).
     [272].   IMS Health Inc. v. Sorrell, 630 F.3d 263, 280 (2d Cir. 2010) (“The state could wait to assess what the impact of its newly funded counter-speech program will be.”), aff’d, 131 S. Ct. 2653 (2011).
     [273].   See 45 C.F.R. § 164.502(a)(1)(ii), (iv) (listing permitted and required uses, and permitting any other use or disclosure “[p]ursuant to and in compliance with a valid authorization”); see also Brief for the United States as Amicus Curiae Supporting Petitioners, supra note 31, at 34 (“HIPAA and other such federal statutes directly advance substantial federal interests in a narrowly and reasonably tailored way.”).
     [274].   See 45 C.F.R. § 164.502(a)(1)(i)–(iii), (v)–(vi), (2)(i)–(ii) (listing permitted and required uses and disclosures for which an authorization is not required).
     [275].   See id. § 164.508(a)(1) (“Except as otherwise permitted or required by this subchapter, a covered entity may not use or disclose protected health information without an authorization that is valid under this section.”).
     [276].   See id. § 164.502(a)(1)(ii) (listing “treatment, payment, or health care operations” as a permitted use and disclosure).
     [277].   See id. § 164.502(a)(1)(vi) (listing permitted uses and disclosures “[a]s permitted by and in compliance with . . . § 164.512,” which, in turn, describes twelve public interest activities pursuant to which covered entities may use or disclose protected health information without obtaining an authorization from the affected individuals).
     [278].   See id. § 164.508(a)(3) (restricting marketing uses and disclosures); id. § 164.501 (providing health related exceptions to the definition of marketing); see also Reply Brief for Petitioners, supra note 261, at 21–22 (“Doctors and patients expect and intend these [health-related] uses of healthcare information, but they do not expect (or even know) that third parties purchase the information and use it as a marketing tool.”).
     [279].   Sorrell v. IMS Health Inc., 131 S. Ct. 2653, 2663–67 (2011).
     [280].   Id. at 2663.
     [281].   Id. at 2663–64 (citation omitted).
     [282].   Id. at 2672.
     [283].   Id. at 2671 (“[S]ome Vermont doctors view targeted detailing based on prescriber-identifying information as ‘very helpful’ because it allows detailers to shape their messages to each doctor’s practice.”).
     [284].   Id. (“[T]he United States, which appeared here in support of Vermont, took care to dispute the State’s ‘unwarranted view that the danger of [n]ew drugs outweigh their benefits to patients.’”); IMS Health Inc. v. Sorrell, 630 F.3d 263, 280 (2d Cir. 2010) (observing that the state law precludes the use of pharmacy information for marketing brand-name drugs “no matter how efficacious and no matter how beneficial those drugs may be compared to generic alternatives”), aff’d, 131 S. Ct. 2653 (2011).
     [285].   Sorrell, 131 S. Ct. at 2672 (concluding that the State “restrict[ed] the information’s use by some speakers and for some purposes, even while the State itself can use the information to counter the speech it seeks to suppress”).
     [286].   See 45 C.F.R. § 164.502(a)(1)–(2) (2011) (listing the permitted and required uses and disclosures of protected health information by covered entities).
     [287].   See id. § 164.508(a)(1) (“Except as otherwise permitted or required by this subchapter, a covered entity may not use or disclose protected health information without an authorization.”).
     [288].   See id. § 164.508(a)(3) (requiring an individual’s authorization for marketing uses and disclosures of protected health information by covered entities).
     [289].   The Privacy Rule provides that, notwithstanding any other provision in the Rule, a covered entity may not use or disclose protected health information for marketing and may not use or disclose protected health information in psychotherapy notes.  See id. § 164.508(a)(2)–(3).
     [290].   Sorrell, 131 S. Ct. at 2658 (“The State seeks to achieve its policy objectives through the indirect means of restraining certain speech by certain speakers.”).  Moreover, HIPAA similarly restricts all uses and disclosures of psychotherapy notes unless authorized by the individual.  45 C.F.R. § 164.508(a)(2) (“Notwithstanding any provision of this subpart . . . a covered entity must obtain an authorization for any use or disclosure of psychotherapy notes,” subject to limited exceptions, including, inter alia, uses for treatment by the psychotherapist, uses and disclosures for the psychotherapist’s training programs, and uses or disclosures to allow the psychotherapist to defend himself in a legal action brought by the individual).
     [291].   See 45 C.F.R. § 164.501 (describing the myriad permissible uses and disclosures of protected health information that comprise treatment, payment, and healthcare operations); id. § 164.502(a)(1)(ii) (indicating that permitted uses and disclosures of protected health information include treatment, payment, or healthcare operations).
     [292].   See Sorrell, 131 S. Ct. at 2660 (quoting the law as providing that pharmacies may not disclose pharmacy information for marketing and that drug manufacturers may not use the information for marketing, unless the prescriber consents).
     [293].   See id. at 2668 (“Under Vermont’s law, pharmacies may share prescriber-identifying information with anyone for any reason save one: They must not allow the information to be used for marketing.”).
     [294].   Id. at 2660.
     [295].   Id. at 2668 (“Exceptions further allow pharmacies to sell prescriber-identifying information for certain purposes, including ‘health care research.’  And the measure permits insurers, researchers, journalists, the State itself, and others to use the information.” (citations omitted)).
     [296].   See id. at 2663 (“[I]t appears that Vermont could supply academic organizations with prescriber-identifying information to use in countering the messages of brand-name pharmaceutical manufacturers and in promoting the prescription of generic drugs.”).  But see id. at 2680 (Breyer, J., dissenting) (noting that the record “contains no evidentiary basis for the conclusion that any such individualized counterdetailing is widespread, or exists at all, in Vermont”).
     [297].   Id. at 2669 (majority opinion); see also Brief of Respondents IMS Health Inc., Verispan, LLC, & Source Healthcare Analytics, Inc. at 9, Sorrell, 131 S. Ct. 2653 (No. 10-779) (“In stark contrast to HIPAA and other federal statutes and regulatory regimes that protect important personal privacy interests, Act 80 [Vermont’s data-mining law] contains numerous exceptions that freely permit the wide distribution of prescribers’ commercial prescription history information.”).
     [298].   See Sorrell, 131 S. Ct. at 2663 (“The statute thus disfavors marketing, that is, speech with a particular content.  More than that, the statute disfavors specific speakers, namely pharmaceutical manufacturers.”).
     [299].   See 45 C.F.R. § 164.502(a) (2011) (“A covered entity may not use or disclose protected health information, except as permitted or required by this subpart. . . .”).
     [300].   See id. § 164.502(a)(1)–(2) (listing the permissible and required uses and disclosures under the Privacy Rule).
     [301].   The Privacy Rule also permits uses and disclosures in several other areas and requires disclosures in two instances.  See supra Part VI.A.
     [302].   See 45 C.F.R. § 164.502(a)(1)(ii) (listing “treatment, payment or health care operations” as a permissible basis for covered entities to use or disclose protected health information); id. § 164.501 (defining the activities that comprise treatment, payment, and healthcare operations).
     [303].   See id. § 164.502(a)(1)(vi) (listing as permissible uses and disclosure of protected health information by covered entities those that are “permitted by and in compliance with . . . § 164.512”); id. § 164.512 (listing twelve public interest activities that comprise permissible uses of protected health information by covered entities, including uses and disclosures required by law; uses and disclosures for public health activities; disclosures about victims of abuse, neglect, or domestic violence; uses and disclosures for health oversight activities; disclosures for judicial and administrative proceedings; disclosures for law enforcement purposes; uses and disclosures about decedents; uses and disclosures for cadaveric organ, eye, or tissue donation purposes; uses and disclosures for research purposes; uses and disclosures to avert a serious threat to health or safety; uses and disclosures for specialized government functions; and disclosures for workers’ compensation); see also Brief for the United States as Amicus Curiae Supporting Petitioners, supra note 31, at 34 (“HIPAA’s regulations directly advance that interest, because they permit the nonconsensual disclosure or use of patient-identifiable information only in limited circumstances such as ‘treatment, payment, or health care operations,’ or national ‘public health activities . . . .’” (citations omitted)).
     [304].   See 45 C.F.R. § 164.502(a)(1) (listing permissible uses and disclosure of protected health information by covered entities); id. § 164.502(a)(1)(iv) (permitting disclosures pursuant to an authorization).
     [305].   See id. § 164.508(a)(3) (imposing special restrictions upon marketing uses and disclosures); see also discussion of marketing restrictions supra Part VI.B.
     [306].   See § 164.508(a)(3)(i) (providing generally that “a covered entity must obtain an authorization for any use or disclosure of protected health information for marketing”).  Under the American Recovery and Reinvestment Act of 2009 (“ARRA”), Pub. L. No. 111-5, § 17935(d)(1), 123 Stat. 115 (2009), sales of protected health information must be authorized, but this limit is broadly framed to apply to all sales of health information, both marketing and nonmarketing.
     [307].   See Sorrell v. IMS Health Inc., 131 S. Ct. 2653, 2660 (2011) (quoting the law as providing that pharmacies may not disclose pharmacy information for marketing and drug manufacturers may not use the information for marketing, unless the prescriber consents).
     [308].   Id. at 2668.
     [309].   See, e.g., Brief for Petitioners, supra note 261, at 36 (“The protection of free speech should not restrict reasonable consumer privacy protections that give consumers control over nonconsensual uses of their information.”); Brief of Respondents IMS Health Inc., Verispan, LLC, & Source Healthcare Analytics, Inc., supra note 297, at 32, 44 (“There is no dispute that, although genuine privacy measures restrict free speech by prohibiting the disclosure of factual information, they satisfy First Amendment scrutiny because they are tailored to further a substantial interest in protecting an important expectation of privacy. . . . Vermont errs in relying on several statutes and regulatory regimes that prohibit private parties from disclosing information.  All those measures satisfy constitutional scrutiny because they are not intended to restrict speech but instead consistently protect an important privacy interest.  The Solicitor General all but acknowledges that, in light of all the contradictions in Vermont law, Act 80 does not function as a genuine privacy statute.”); see also Brief for the United States as Amicus Curiae Supporting Petitioners, supra note 31, at 34–35 (“[T]his Court’s analysis of the ‘fit’ between the Vermont statute and the State’s legislative objectives should not affect those federal provisions [like HIPAA].”); Reply Brief for Petitioners, supra note 261, at 8 (“If respondents were correct, then privacy laws generally would be subject to strict scrutiny. . . . This position is plainly untenable . . . .”).

By Derek E. Bambauer

Cyberlaw is plagued by the myth of perfection.

Consider three examples: censorship, privacy, and intellectual property.  In each, the rhetoric and pursuit of perfection have proved harmful, in ways this Essay will explore.  And yet the myth persists—not only because it serves as a potent metaphor, but because it disguises the policy preferences of the mythmaker.  Scholars should cast out the myth of perfection, as Lucifer was cast out of heaven.  In its place, we should adopt the more realistic, and helpful, conclusion that often good enough is . . . good enough.

Start with Internet censorship. Countries such as China, Iran, and Vietnam use information technology to block their citizens from accessing online material that each government dislikes.  Democracies, too, filter content: Britain blocks child pornography using the Cleanfeed system,{{1}} and South Korea prevents users from reaching sites that support North Korea’s government.{{2}}  This filtering can be highly effective: China censors opposition political content pervasively,{{3}} and Iran blocks nearly all pornographic sites (along with political dissent).{{4}}  However, even technologically sophisticated systems, like China’s Golden Shield, are vulnerable to circumvention.  Users can employ proxy servers or specialized software, such as Tor, to access proscribed sites.{{5}}  This permeability has led many observers to conclude that effective censorship is impossible, because censorship is inevitably imperfect.{{6}}  Filtering is either trivially easy to bypass, or doomed to failure in the arms race between censors and readers.  The only meaningful censorship is perfect blocking, which is unattainable.

And yet, leaky Internet censorship works.  Even in authoritarian countries, few users employ circumvention tools.{{7}} Governments such as China’s capably block access to most content about taboo subjects, such as the Falun Gong movement{{8}} or coverage of the Arab Spring uprisings.{{9}}  Those who see imperfect censorship as useless make three errors.  First, they ignore offline pressures that users face.  Employing circumvention tools is like using a flashlight: it helps find what you seek, but it draws attention to you.  China has become adept at detecting and interfering with Tor,{{10}} and Iran recently purchased a sophisticated surveillance system for monitoring Internet communications.{{11}}  Bypassing censorship in cyberspace may have adverse consequences in realspace.  Second, most Internet users are not technologically sophisticated.  They use standard software, and the need to install and update specialized circumvention tools may be onerous.{{12}}  Finally, governments do not need perfect censorship to attain their goals.  They seek to prevent most people from obtaining prohibited content, not to banish it entirely.  Censorship that constrains the average user’s ordinary web browsing generally suffices.

Privacy discourse too is obsessed with perfection.  The reidentification wars have pitted researchers who assert that anonymizing data is impossible{{13}} against those who argue the risk of breaching properly sanitized datasets is vanishingly small.{{14}}  While the arguments are dauntingly technical (for those unfamiliar with advanced statistics), the empirical evidence points toward the less threatening conclusions.  The only rigorous study demonstrating an attack on a properly de-identified dataset under realistic circumstances revealed but 2 out of 15,000 (.013%) participants’ identities.{{15}}  Moreover, critics of anonymized data overlook the effects of incorrect matches. Attackers will have to weed out false matches from true ones, complicating their task.

Opponents make three mistakes by focusing on the theoretical risk of re-identification attacks on properly sanitized data.  First, the empirical evidence for their worries is slight, as the data above demonstrates.  There are no reports of such attacks in practice, and the only robust test demonstrated minimal risk.  Second, anonymized data is highly useful for socially beneficial purposes, such as predicting flu trends, spotting discrimination, and analyzing the effectiveness of medical and legal interventions.{{16}} Finally, the most significant privacy risk is from imperfectly sanitized data: organizations routinely release, deliberately or inadvertently, information that directly identifies people, or that enables an attacker to do so without advanced statistical knowledge.  Examples are legion, from the California firm Biofilm releasing the names and addresses of 200,000 customers who asked for free Astroglide samples{{17}} to AOL’s disclosure of user queries that allowed researchers to link people to their searches.{{18}}  Concentrating on whether perfect anonymization is possible distracts from far more potent privacy threats emanating from data.

Intellectual property (“IP”) in the digital age is similarly obsessed with perfection.  IP owners argue that with the advent of perfect digital copies, high-speed networks, and distributed dissemination technologies, such as peer-to-peer file-sharing software, any infringing copy of a protected work will spread without limit, undermining incentives to create.  This rhetoric of explosive peril has resulted in a perpetual increase in the protections for copyrighted works and in the penalties for violating them.{{19}}

The quest for perfect safeguards for IP predates the growth of the commercial Internet.  In September 1995, President Clinton’s administration released its White Paper, which argued that expanded copyright entitlements were necessary for content owners to feel secure in developing material for the nascent Information Superhighway.{{20}}  Without greater protection, the Paper argued, the Superhighway would be empty of content, as copyright owners would simply refuse to make material available via the new medium.

This prediction proved unfounded, but still persuasive.  In the last fifteen years, Congress has reinforced technological protection measures such as Digital Rights Management with stringent legal sanctions;{{21}} has augmented penalties for copyright infringement, including criminal punishments;{{22}} has pressed intermediaries, such as search engines, to take down allegedly infringing works upon notification by the copyright owner;{{23}} and has dedicated executive branch resources to fighting infringement.{{24}}  And yet, pressures from content owners for ever-greater protections continue unrelentingly.  In the current Congress, legislation introduced in both the House of Representatives and the Senate would, for the first time in American history, have authorized filtering of sites with a primary purpose of aiding infringement{{25}} and would have enabled rights owners to terminate payment processing and Internet advertising services for such sites.{{26}}  These proposals advanced against a backdrop of relatively robust financial health for the American movie and music industries.{{27}}

Thus, the pursuit of perfection in IP also contradicts empirical evidence.  Content industries have sought to prohibit, or at least hobble, new technologies that reduce the cost of reproduction and dissemination of works for over a century—from the player piano{{28}} to the VCR{{29}} to the MP3 player{{30}} to peer-to-peer file-sharing software.{{31}}  And yet each of these advances has opened new revenue horizons for copyright owners.  The growth in digital music sales is buoying the record industry,{{32}} and the VCR proved to be a critical profit source for movies.{{33}}  New copying and consumption technologies destabilize prevailing business models, but not the production of content itself.{{34}}

Moreover, perfect control over IP-protected works would threaten both innovation and important normative commitments.  The music industry crippled Digital Audio Tapes{{35}} and failed to provide a viable Internet-based distribution mechanism until Apple introduced the iTunes Music Store.{{36}}  The movie industry has sought to cut off supply of films to firms such as Redbox that undercut its rental revenue model,{{37}} and Apple itself has successfully used copyright law to freeze out companies that sold generic PCs running MacOS.{{38}}  And, the breathing room afforded by the fair use and de minimis doctrines, along with exceptions to copyright entitlements, such as cover licenses, enables a thriving participatory culture of remixes, fan fiction, parody, criticism, and mash-ups.  Under a system of perfect control, copyright owners could withhold consent to derivative creators who produced works of which they disapproved, such as critical retellings of beloved classics, for example Gone With The Wind,{{39}} or could price licenses to use materials beyond the reach of amateur artists.{{40}}  Perfection in control over intellectual property is unattainable, and undesirable.

The myth of perfection persists because it is potent.  It advances policy goals for important groups—even, perhaps, groups on both sides of a debate.  For censorship, the specter of perfect filtering bolsters the perceived power of China’s security services.  It makes evasion appear futile.  For those who seek to hack the Great Firewall, claiming to offer the technological equivalent of David’s slingshot is an effective way to attract funding from Goliath’s opponents.  Technological optimism is a resilient, seductive philosophical belief among hackers and other elites{{41}} (though one that is increasingly questioned).{{42}}

Similarly, privacy scholars and advocates fear the advent of Big Data: the aggregation, analysis, and use of disparate strands of information to make decisions—whether by government or by private firms—with profound impacts on individuals’ lives.{{43}}  Their objections to disclosure of anonymized data are one component of a broader campaign of resistance to changes they see as threatening to obviate personal privacy.  If even perfectly anonymized data poses risks, then restrictions on data collection and concomitant use gain greater salience and appeal.

Finally, concentrating on the constant threat to incentives for cultural production in the digital ecosystem helps content owners, who seek desperately to adapt business models before they are displaced by newer, more nimble competitors.  They argue that greatly strengthened protections are necessary before they can innovate.  Evidence suggests, though, that enhanced entitlements enable content owners to resist innovation, rather than embracing it.  The pursuit of perfection turns IP law into a one-way ratchet: protections perpetually increase, and are forever insufficient.

We should abandon the ideal of the sublime in cyberlaw.  Good enough is, generally, good enough.  Patchy censorship bolsters authoritarian governments.  Imperfectly anonymized data generates socially valuable research at little risk.  And a leaky IP system still supports a thriving, diverse artistic scene.  Pursuing perfection distracts us from the tradeoffs inherent in information control, by reifying a perspective that downplays countervailing considerations.  Perfection is not an end; it is a means—a political tactic that advances one particular agenda.  This Essay argues that the imperfect—the flawed—is often both effective and even desirable as an outcome of legal regulation.


*    Associate Professor of Law, Brooklyn Law School (through spring 2012); Associate Professor of Law, University of Arizona James E. Rogers College of Law (beginning fall 2012).  Thanks for helpful suggestions and discussion are owed to Jane Yakowitz Bambauer, Dan Hunter, Thinh Nguyen, Derek Slater, and Chris Soghoian.  The author welcomes comments at derek.bambauer@brooklaw.edu.

[[1]]   Richard Clayton, Failures in a Hybrid Content Blocking System, in Privacy Enhancing Technologies: 5th International Workshop PET 2005 78 (George Danezis & David Martin eds., 2006).[[1]]

[[2]]   Eric S. Fish, Is Internet Censorship Compatible With Democracy? Legal Restrictions of Online Speech in South Korea, Asia-Pac. J. Hum. Rts. & the L. (forthcoming 2012), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1489621.[[2]]

[[3]]   China, OpenNet (June 15, 2009), http://opennet.net/research/profiles/china-including-hong-kong.[[3]]

[[4]]   Iran, OpenNet (June 16, 2009), http://opennet.net/research/profiles/iran.[[4]]

[[5]]   See, e.g., James Fallows, “The Connection Has Been Reset”, The Atlantic (Mar. 2008), http://www.theatlantic.com/magazine/archive/2008/03/-ldquo-the-connection-has-been-reset-rdquo/6650/.[[5]]

[[6]]   See, e.g., Oliver August, The Great Firewall: China’s Misguided—and Futile—Attempt to Control What Happens Online, Wired (Oct. 23, 2007), http://www.wired.com/politics/security/magazine/15‑11/ff_chinafirewall?currentPage=all; Troy Hunt, Browsing the Broken Web: A Software Developer Behind the Great Firewall of China, Troy Hunt’s Blog (Mar. 16, 2012), http://www.troyhunt.com/2012/03/browsing‑broken‑web‑software‑developer.html; Weiliang Nie, Chinese Learn to Leap the “Great Firewall”, BBC News (Mar. 19, 2010), http://news.bbc.co.uk/2/hi/8575476.stm.[[6]]

[[7]]   Erica Naone, Censorship Circumvention Tools Aren’t Widely Used, Tech. Rev. (Oct. 18, 2010), http://www.technologyreview.com/web/26574/.[[7]]

[[8]]   China, supra note 3.[[8]]

[[9]]   Richard Fontaine & Will Rogers, China’s Arab Spring Cyber Lessons, The Diplomat (Oct. 3, 2011), http://the-diplomat.com/2011/10/03/china%E2%80%99s-arab-spring-cyber-lessons/.[[9]]

[[10]]   Tim Wilde, Knock Knock Knockin’ on Bridges’ Doors, Tor (Jan. 7, 2012), https://blog.torproject.org/blog/knock-knock-knockin-bridges-doors.[[10]]

[[11]]   Phil Vinter, Chinese Sell Iran £100m Surveillance System Capable of Spying on Dissidents’ Phone Calls and Internet, Daily Mail (Mar. 23, 2012), http://www.dailymail.co.uk/news/article‑2119389/Chinese‑sell‑Iran‑100m‑surveillance-capable-spying-dissidents-phone-calls-internet.html.[[11]]

[[12]]   See generally Nart Villeneuve, Choosing Circumvention: Technical Ways to Get Round Censorship, in Reporters Without Borders, Handbook for Bloggers and Cyberdissidents 63 (2005), available at http://www.rsf.org/IMG/pdf/handbook_bloggers_cyberdissidents-GB.pdf.[[12]]

[[13]]   See, e.g., Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, 57 UCLA L. Rev. 1701, 1752 (2010); Latanya Sweeney, Patient Identifiability in Pharmaceutical Marketing Data (Data Privacy Lab, Working Paper No. 1015, 2011), available at http://dataprivacylab.org/projects/identifiability/pharma1.html.[[13]]

[[14]]   See, e.g., Jane Yakowitz, The Tragedy of the Data Commons, 25 Harv. J.L. & Tech. 1, 52 (2011); Khaled El Emam et al., A Systematic Review of Re-identification Attacks on Health Data, PLoS One (Dec. 2011), http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.002807.[[14]]

[[15]]   Deborah Lafky, Program Officer, Dep’t Health and Human Servs., The Safe Harbor Method of De-Identification: An Empirical Test, ONC Presentation (Oct. 9, 2009), available at http://www.ehcca.com/presentations/HIPAAWest4/lafky_2.pdf.[[15]]

[[16]]   See Yakowitz, supra note 14.[[16]]

[[17]]   Christopher Soghoian, Astroglide Data Loss Could Result in $18 Million Fine, DubFire (July 9, 2007), http://paranoia.dubfire.net/2007/07/astroglide-data-loss-could-result-in-18.html.[[17]]

[[18]]   Katie Hafner, Leaked AOL Search Results Create Ethical Dilemma for Researchers, N.Y. Times (Aug. 23, 2006), http://www.nytimes.com/2006/08/23/technology/23iht-search.2567825.html?pagewanted=all.[[18]]

[[19]]   See generally Robert Levine, Free Ride: How Digital Parasites are Destroying the Culture Business, and How the Culture Business Can Fight Back (2011); Jessica Litman, Digital Copyright (2001); Mike Masnick, Why Is The MPAA’s Top Priority “Fighting Piracy” Rather Than Helping the Film Industry Thrive?, Techdirt (Feb. 22, 2011), http://www.techdirt.com/articles/20110221/15024713194/why‑is‑mpaas‑top‑priority‑fighting‑piracy‑rather-than-helping-film-industry-thrive.shtml.[[19]]

[[20]]   Pamela Samuelson, The Copyright Grab, Wired (Jan. 1996), http://www.wired.com/wired/archive/4.01/white.paper.html.[[20]]

[[21]]   17 U.S.C. § 1201 (2006).[[21]]

[[22]]  17 U.S.C. § 1204 (2006); No Electronic Theft (NET) Act, Pub. L. No. 105-147, 111 Stat. 2678 (1997).[[22]]

[[23]]  17 U.S.C. § 512(c) (2006).[[23]]

[[24]]  Prioritizing Resources and Organization for Intellectual Property (PRO IP) Act, Pub. L. No. 110-403, 122 Stat. 4256 (2008). [[24]]

[[25]]   PROTECT IP Act of 2011, S. 968, 112th Cong. (2012).[[25]]

[[26]]   Stop Online Piracy Act of 2011, H.R. 3261, 112th Cong. (2012).[[26]]

[[27]]   Robert Andrews, Music Industry Can See The Light After “Least Negative” Sales Since 2004, Time (Mar. 26, 2012), http://business.time.com/2012/03/26/music-industry-can-see-the-light-after-least-negative-sales-since-2004/; Brooks Barnes, A Sliver of a Silver Lining for the Movie Industry, N.Y. Times (Mar. 22, 2012), http://mediadecoder.blogs.nytimes.com/2012/03/22/a-sliver-of-a-silver-lining-for-the-movie-industry/#; Bob Lefsetz, Movie Industry Is Making Money from Technologies It Claimed Would KILL Profits, The Big Picture (Jan. 30, 2012, 4:30 PM), http://www.ritholtz.com/blog/2012/01/movie-industry-is-making-money-from-technologies-it-claimed-would-kill-profits/.[[27]]

[[28]]   See White-Smith Music Publ’g Co. v. Apollo Co., 209 U.S. 1, 13–14 (1908) (holding that a piano roll does not infringe composer’s copyright because the perforated sheets are not copies of the sheet music).[[28]]

[[29]]   See Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 442 (1984) (holding that the manufacture of a VCR does not constitute contributory copyright infringement because it “is widely used for legitimate, unobjectionable purposes”).[[29]]

[[30]]   See Recording Indus. Ass’n of Am. v. Diamond Multimedia Sys., 180 F.3d 1072, 1081 (9th Cir. 1999) (upholding a district court denial of preliminary injunction against the manufacture of the Rio MP3 player because the Rio is not subject to the Audio Home Recording Act of 1992).[[30]]

[[31]]   See Metro-Goldwyn-Mayer Studios v. Grokster, 545 U.S. 913, 918 (2005) (holding that the distributor of a peer-to-peer file-sharing network is liable for contributory copyright infringement when it distributes the software with “the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement”).[[31]]

[[32]]   Andrews, supra note 27.[[32]]

[[33]]   Michelle Schusterman, Infographic: Why the Movie Industry is So Wrong About SOPA, Matador (Jan. 17, 2012), http://matadornetwork.com/change/infographic-why-the-movie-industry-is-so-wrong-about-sopa/.[[33]]

[[34]]   See generally Mark A. Lemley, Is the Sky Falling on the Content Industries?, 9 J. Telecomm. & High Tech. L. 125 (2011) (explaining that while the introduction of new technologies in the past may have disrupted certain industries, the new technology did not stop the creation of new content).[[34]]

[[35]]   See generally Tia Hall, Music Piracy and the Audio Home Recording Act, 2002 Duke L. & Tech. Rev. 0023 (2002).[[35]]

[[36]]   Derek Slater et al., Content and Control: Assessing the Impact of Policy Choices on Potential Online Business Models in the Music and Film Industries (Berkman Center for Internet & Society at Harvard Law School, Research Publication No. 2005-10, 2005), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=654602.[[36]]

[[37]]   Paul Bond, Warner Bros., Redbox Divided on DVD Terms, The Hollywood Reporter (Feb. 29, 2012), http://www.hollywoodreporter.com/news/warner-bros-redbox-dvd-ultraviolet-flixster-kevin-tsujihara-296071.[[37]]

[[38]]   See Apple Inc. v. Psystar Corp., 658 F.3d 1150, 1162 (9th Cir. 2011).[[38]]

[[39]]   See SunTrust Bank v. Houghton Mifflin Co., 268 F.3d 1257, 1275 (11th Cir. 2001) (denying a preliminary injunction because a fair use defense would prevent the plaintiff, owner of the copyright of Gone With the Wind, from preventing the defendant from publishing a novel that critiques Gone With the Wind).[[39]]

[[40]]   See generally Derek E. Bambauer, Faulty Math: The Economics of Legalizing The Grey Album, 59 Ala. L. Rev. 345 (2007) (contending the economics of the derivative works right prevents the creation of new works and stifles the re-mix culture).[[40]]

[[41]]   John Gilmore averred that “[t]he Net interprets censorship as damage and routes around it.”  Philip Elmer-Dewitt, First Nation in Cyberspace, Time, Dec. 6, 1993, at 62.[[41]]

[[42]]   See generally Evgeny Morozov, The Net Delusion (2011) (arguing that the Internet makes it easier for dictators to prevent democratic uprisings).[[42]]

[[43]]   See generally Julie Cohen, Configuring the Networked Self (2011) (making the case that flows of private information are not restricted and proposing legal reforms to address the problem); Jessica Litman, Information Privacy / Information Property, 52 Stan. L. Rev. 1283 (2000) (contending that industry’s self-regulation of information privacy has failed and proposing that torts may be the best available avenue to improve privacy rights); danah boyd & Kate Crawford, Six Provocations for Big Data, Symposium, A Decade in Internet Time: Symposium on the Dynamics of Internet and Society, Oxford Internet Inst. (Sept. 2011), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1926431 (proposing six questions about the potential negative effects of Big Data).[[43]]

By Margot Kaminski

My friends, who are generally well educated and intelligent, read a lot of garbage.  I know this because since September 2011, their taste in news about Justin Bieber, Snooki, and the Kardashians has been shared with me through “social readers” on Facebook.{{1}}  Social readers instantaneously list what you are reading on another website, without asking for your approval before disclosing each individual article you read.  They are an example of what Facebook calls “frictionless sharing,” where Facebook users ostensibly influence each other’s behavior by making their consumption of content on other websites instantly visible to their friends.{{2}}  Many people do not think twice about using these applications, and numerous publications have made them available, including the Washington Post, Wall Street Journal, and Guardian.{{3}}

I intend to prompt conversation about social readers on three fronts.  First, social readers are part of a shift toward real name policies online, and, for a number of reasons, should remain opt-in rather than becoming the default setting.  Second, if people do choose to use these applications, they should know that they are making that choice against a backdrop of related battles in privacy law concerning the right to consume content without a third party sharing your activity more broadly.  And third, when individuals choose to use these applications, they may be sharing their habits more widely than they think.

I.  Social Readers and Online Real-Name Policies

Social readers are part of a larger trend toward linking online activity to Internet users’ real identities.  Unlike America Online’s use of invented screen names, the two major social networks, Facebook and Google+, require users to register with their real names or verified pseudonyms.{{4}}  Both Google and Facebook aim to link user activity outside of the social network to one identifiable, real-name profile, although Google’s aspirations currently appear limited to other Google services, while Facebook’s ambitions are broader.{{5}}  This real-name model is desirable to online companies and their supporting advertisers because it is easier to advertise to someone if you know who he or she is and know all of his or her online behavior.

Business concerns are not the only factor motivating the shift toward real-name policies.  There is also an argument that real-name policies on comment forums may make people behave more civilly toward each other, because commenters become part of a social community that imports accountability into the online context.{{6}}  This argument has been compelling to some newspapers.  The Huffington Post, for example, has a Social News feature that encourages readers to log in through their Facebook accounts and comment on articles under their real identities.{{7}}  Shifting to a real-name policy creates other problems, however, such as preventing pseudonymous or anonymous whistleblowing by commenters and chilling more controversial or critical speech.{{8}}

Social readers are part of this potential collapse of anonymous or pseudonymous online activity.  It used to be the case that reading an article on the New York Times website was a separate activity from communicating with your friends on a social network.  Social readers, however, import your reading activity from the newspaper website into your social network and broadcast it instantaneously under your real name.  Your presence on the other website is no longer anonymous.

Despite the potential benefits to companies, the decision to allow instantaneous sharing of all content consumed elsewhere connected to a user’s real identity should remain firmly in the hands of Internet users.  As individuals, we construct discrete identities for different circumstances: one for work, one for home, one for our closest friends.{{9}}  This is in fact the idea behind Google+’s “Circles” feature, which allows a user to tailor the parts of his or her identity that are visible to each “Circle,” whether it be friends, co-workers, or family.{{10}}  For a social network to retain value by mirroring reality, it needs to allow us to retain these distinctions.  Before social readers, one’s decision to read US Weekly at the gym would not be broadcast to one’s coworkers.  If one is forced to sign up for an US Weekly social reader, however, one’s network would see every article read.  This first point is about one’s relationship as an individual to other individuals: we should each be able to control the parts of our identity we want shown to other people.  We do this in real life; we should be able to do this online.  There may also be a benefit to media companies of allowing individuals to be pickier in their sharing: friends might take recommendations more seriously if they are deliberate and limited, rather than a list of everything their mutual friend haphazardly read.{{11}}

Already, some companies have experienced a backlash from making such pervasive sharing the default option for their software.  For example, in September 2011, the music service Spotify announced a partnership with Facebook that would allow new users to sign up only if they have a Facebook account.{{12}}  Users started seeing their music playlists automatically shared on Facebook; they could opt out of the sharing, but only by manually disabling the feature.{{13}}  In response to a strong negative reaction, Spotify rolled out a more visible new privacy feature to allow users to “hide their guilty pleasures,” according to the Spotify CEO.{{14}}  The strong reaction to Spotify’s automatic frictionless sharing, and the fact that many newspapers have decided not to create social reader applications at all, show that if users’ interests are kept in mind, frictionless sharing should remain an option, not the default.

II.  Social Readers and Other Privacy Law Battles

Coincidentally or consequentially, the legal debate over privacy and media consumption has taken on new dimensions at the same time that companies move toward frictionless sharing.  As people on Facebook allow the Washington Post to broadly share every article they have ever read, others are fighting to protect reader records from third parties.

First, it’s important to address whether, and why, reader privacy is important.  Librarians are adamant about the importance of reader privacy.{{15}}  The American Library Association has affirmed a right to privacy for readers since 1939,{{16}} and states that “one cannot exercise the right to read if the possible consequences include damage to one’s reputation, ostracism from the community or workplace, or criminal penalties.  Choice requires both a varied selection and the assurance that one’s choice is not monitored.”{{17}}  This concern comes in part from a historical awareness of how the government might abuse knowledge of citizens’ reading material.  Reading material can be used by the government to track dissidence.  Famously, Joseph McCarthy released a list of allegedly pro-communist authors, and the State Department ordered overseas librarians to remove such books from their shelves.{{18}}  Imagine if social readers had existed during the McCarthy era—the government would have been able to check each person’s virtual bookshelf for blacklisted material.  With the advent of data mining, the reading choices that seem innocuous to you can cumulatively be indicative of patterns, intent, or allegiances to others, including law enforcement.{{19}}

The United States has surprisingly scattered law on the question of readers’ privacy.  There is no federal statute explicitly protecting it.  This means that companies are not specifically prohibited on a federal level from sharing your reading history with others.  In practice, librarians usually require a court order for the government to obtain reader records, and most states make that requirement explicit.{{20}} The PATRIOT Act famously raised ire from librarians by permitting the government under certain circumstances to request library patron records secretly and without judicial oversight.{{21}}

Although there is no federal reader privacy statute, related laws concerning library patrons exist in forty-eight states.{{22}}  Recently, there has been a push at the state level to expand protections for reader privacy beyond libraries.  The California Reader Privacy Act, which was signed into law in October 2011 and took effect in January 2012, extends the type of protections traditionally afforded to library patrons to all books and e-books, although it does not extend to other types of reading online.{{23}}  Government entities must obtain a warrant before accessing reader records, and booksellers or providers must be afforded an opportunity to contest the request.  Booksellers must report the number and type of requests that they receive.{{24}}  In civil actions, requesting parties must obtain a court order and show that they are using the “least intrusive means” and have a “compelling interest” in the records.

The First Amendment could arguably protect readers from the discovery of their reading history by the government or by third parties using the court system to obtain the information.  A series of cases have given rise to a standard protecting the anonymity of online speakers.{{25}}  Julie E. Cohen has suggested that the First Amendment should extend its protections to a similar right to read anonymously.{{26}}  However, there has not yet been a case where a litigant has successfully made this argument to protect digital reader records under the First Amendment.

We do have one federal law protecting user privacy during content consumption: the Video Privacy Protection Act (“VPPA”),{{27}} which prohibits the disclosure of personally identifiable video rental information to third parties without a user’s specific consent, and prohibits disclosure of the same to police officers without a warrant.{{28}}  This strangely precise piece of law arose after Supreme Court nominee Robert Bork had his video rental records disclosed in a newspaper.{{29}}

Companies have realized, however, that VPPA is a hurdle to their business models.  In December 2011, the House of Representatives passed H.R. 2471, amending VPPA to allow the disclosure of video rental records with consent given in advance and until that consent is withdrawn by the consumer.{{30}}  This change would allow companies such as Netflix to get a one-time blanket consent to disclose user records through frictionless sharing on Facebook.  The Senate Judiciary Committee held a hearing on H.R. 2471 on January 31, 2012, at which many privacy concerns were raised.{{31}}

III.  Oversharing

Those who currently use social readers may be sharing their reading activity far more broadly than they expect.  Your close friends are not the only ones who can see your Facebook profile.  A Freedom of Information Act (“FOIA”) lawsuit by the Electronic Frontier Foundation revealed that law enforcement agencies use social media to obtain information about people by going undercover on social media sites to gain access to nonpublic information.{{32}}  And even if no police officer or other informant has posed as a friend of yours, using a social network to broadcast your reading records means you have shared those records with a third party—the social network itself—which under United States v. Miller means the police may not need a warrant to obtain those records from the social network.{{33}}

Perhaps more significantly, even if we get rid of the Miller doctrine, as Justice Sotomayor recently suggested, the wholesale sharing of your reading history with Facebook friends may ultimately impact the Supreme Court’s understanding of what constitutes a “reasonable expectation of privacy.”{{34}}  In Katz v. United States, the seminal 1967 Supreme Court case on wiretapping, Katz placed a phone call in a public phone booth with the door closed, and was found to have a reasonable expectation of privacy in the phone call, so a warrant was required for wiretapping the phone.{{35}}  Justice Alito recently contemplated that we may be moving toward a world in which so many people share information with so many friends that social norms no longer indicate a reasonable expectation of privacy in that information.{{36}}  Without a reasonable expectation of privacy, there will be no warrant requirement for law enforcement to obtain that information.  This analysis is troubling; sharing information with your friends should not mean that you expect it to be shared with law enforcement.  This would be like saying that just because you sent wedding invitations to 500 of your closest friends, the government is justified in opening the envelope.  The size of the audience for private communication should not change the fact that it is private.

The recent trend toward social readers and other types of frictionless sharing may at first glance seem innocuous, if inane.  But it has occurred just as privacy advocates are pushing to create more privacy protections for readers through state laws, and may result in the loss of VPPA, the one federal law that protects privacy in content consumption.  And users may not understand that sharing what they read with friends may mean sharing what they read with the government, as well.  That is a whole lot more serious than just annoying your friends with your taste for celebrity gossip.  Indeed, it may be another step toward the death of the Fourth Amendment by a thousand cuts.{{37}}



* Research Scholar in Law and Lecturer in Law at Yale Law School, and Executive Director of the Information Society Project at Yale Law School. She thanks Kevin Bankston of the Center for Democracy and Technology for his review and helpful comments.

[[1]] See, e.g., Ian Paul, Wall Street Journal Social on Facebook: A First Look, Today @PCWorld Blog (Sept. 20, 2011, 7:02 AM), http://www.pcworld.com/article/240274/wall_street_journal_social_on_facebook
_a_first_look.html.[[1]]

[[2]] Jason Gilbert, Facebook Frictionless App Frenzy Will Make Your Life More Open, Huffington Post (Jan. 18, 2012),  http://www.huffingtonpost.com
/2012/01/18/facebook‑actions‑arrive‑major‑changes_n_1213183.html.[[2]]

[[3]] See The Washington Post Social Reader, Wash. Post, http://www.washingtonpost.com/socialreader (last visited Feb. 26, 2012); Press Release, The Guardian, Guardian Announces New App on Facebook to Make News More Social (Sept. 23, 2011), available at http://www.guardian.co.uk/gnm
-press-office/guardian-launches-facebook-app; Paul, supra note 1.[[3]]

[[4]] Facebook requires real names as user names, allowing its users to sign into other sites and comment there—although it has just recently started allowing celebrities to use pseudonyms. See Somini Sengupta, Rushdie Runs Afoul of Web’s Real-Name Police, N.Y. Times (Nov. 14, 2011), http://www.nytimes.com/2011/11/15/technology/hiding‑or‑using‑your‑name‑online-and-who-decides.html; see also Nathan Olivarez-Giles, Facebook Verifying Celebrity Accounts, Allowing Pseudonyms, L.A. Times (Feb. 16, 2012), http://www.latimes.com/business/technology/la‑fi‑tn‑facebook‑verified‑accounts
‑nicknames-pseudonyms-20120216,0,3899048.story.  Google’s social network, Google+, uses real names and now pseudonyms, but only if you can prove to Google that you are in fact known by that name elsewhere.  See Claire Cain Miller, In a Switch, Google Plus Now Allows Pseudonyms, N.Y. Times Bits Blog (Jan. 23, 2012, 4:08 PM), http://bits.blogs.nytimes.com/2012/01/23/in-a-switch
-google-plus-now-allows-pseudonyms/.[[4]]

[[5]] Google’s new privacy policy is an example of this. The new privacy policy states that “[w]e may use the name you provide for your Google Profile across all of the services we offer that require a Google Account. In addition, we may replace past names associated with your Google Account so that you are represented consistently across all our services. If other users already have your email, or other information that identifies you, we may show them your publicly visible Google Profile information, such as your name and photo.” Preview: Privacy Policy, Google, http://www.google.com/policies/privacy/preview/ (last visited Feb. 29, 2012).[[5]]

[[6]] See, e.g., Lawrence Lessig, Code and Other Laws of Cyberspace 80 (1999) (“Just as anonymity might give you the strength to state an unpopular view, it can also shield you if you post an irresponsible view. Or a slanderous view. Or a hurtful view.”).[[6]]

[[7]] See Frequently Asked Questions, Huffington Post, http://www.huffingtonpost.com/p/frequently-asked-question.html (last visited Feb. 26, 2012).[[7]]

[[8]] Stone v. Paddock Publications, Electronic Frontier Found., https://www.eff.org/cases/stone-v-paddock (last visited Feb. 26, 2012) (noting that the Illinois Court of Appeals recognized the potential harms in the “chilling effect on the many citizens who choose to post anonymously on the countless comment boards for newspapers, magazines, websites and other information portals”).[[8]]

[[9]] See, e.g., Jan E. Stets & Michael M. Harrod, Verification Across Multiple Identities: The Role of Status, 67 Soc. Psych. Quart. 155 (2004) (investigating status verification across three identities: the worker identity, academic identity, and friend identity).[[9]]

[[10]] See, e.g., Google+ Overview, Google, http://www.google.com/
+/learnmore/ (last visited Feb. 29, 2012) (“You share different things with different people. But sharing the right stuff with the right people shouldn’t be a hassle. Circles make it easy to put your friends from Saturday night in one circle, your parents in another, and your boss in a circle by himself, just like real life.”).[[10]]

[[11]] Jeff Sonderman, With ‘Frictionless Sharing,’ Facebook and News Orgs Push Boundaries of Online Privacy, Poynter (Sept. 29, 2011), http://www.poynter.org/latest‑news/media‑lab/social‑media/147638/with‑frictionless-sharing-facebook-and-news-orgs-push-boundaries-of-reader-privacy/ (noting that “[i]f everything is shared automatically, nothing has significance”).[[11]]

[[12]] See Sarah Jacobsson Purewal, Spotify Adds Facebook Requirement, Angering Users, Today @PCWorld Blog (Sept. 27, 2011), http://www.pcworld.com/article/240646/spotify_adds_facebook_requirement_angering_users.html.[[12]]

[[13]] See Zack Whittaker, Spotify’s ‘Frictionless Sharing’ Bows to Facebook Privacy Pressure, ZD Net Between the Lines Blog (Sept. 30, 2011), http://www.zdnet.com/blog/btl/spotifys‑frictionless‑sharing‑bows‑to‑facebook‑privacy-pressure/59408.[[13]]

[[14]] Id.[[14]]

[[15]] See, e.g., An Interpretation of the Library Bill of Rights, Am. Library Ass’n, http://www.ala.org/Template.cfm?Section=interpretations&Template=
/ContentManagement/ContentDisplay.cfm&ContentID=88625 (last visited Feb. 26, 2012).[[15]]

[[16]] Id.[[16]]

[[17]] Privacy and Confidentiality, Am. Library Ass’n, http://www.ala.org
/offices/oif/ifissues/privacyconfidentiality (last visited Feb. 26, 2012).[[17]]

[[18]] Robert Griffith, The Politics of Fear: Joseph R. McCarthy and the Senate 215–16 (1970).[[18]]

[[19]] See, e.g., Stephen L. Baker, The Numerati (2008).[[19]]

[[20]] See State Privacy Laws Regarding Library Records, Am. Library
Ass’n, http://www.ala.org/offices/oif/ifgroups/stateifcchairs/stateifcinaction
/stateprivacy (last visited Feb. 26, 2012) (stating that “[l]ibraries should have in place procedures for working with law enforcement officers when a subpoena or other legal order for records is made. Libraries will cooperate expeditiously with law enforcement within the framework of state law.”).[[20]]

[[21]] The USA Patriot Act, Am. Library Ass’n, http://www.ala.org/advocacy
/advleg/federallegislation/theusapatriotact (last visited Feb. 26, 2012) (observing that “[l]ibraries cooperate with law enforcement when presented with a lawful court order to obtain specific information about specific patrons; however, the library profession is concerned some provisions in the USA PATRIOT Act go beyond the traditional methods of seeking information from libraries.”); see also Resolution on the USA PATRIOT Act and Libraries, Am. Library Ass’n (June 29, 2005), http://www.ala.org/offices/files/wo/reference
/colresolutions/PDFs/062905-CD20.6.pdf (explaining that “Section 215 of the USA PATRIOT Act allows the government to secretly request and obtain library records for large numbers of individuals without any reason to believe they are involved in illegal activity” and “Section 505 of the USA PATRIOT Act permits the FBI to obtain electronic records from libraries with a National Security Letter without prior judicial oversight”).[[21]]

[[22]] State Privacy Laws Regarding Library Records, Am. Library
Ass’n, http://www.ala.org/offices/oif/ifgroups/stateifcchairs/stateifcinaction
/stateprivacy (last visited Feb. 28, 2012).[[22]]

[[23]] See Joe Brockmeier, California Gets Reader Privacy Act: Still Not Enough, ReadWrite Enterprise (Oct. 3, 2011), http://www.readwriteweb.com
/enterprise/2011/10/california-gets-reader-privacy.php.[[23]]

[[24]] See Rebecca Jeschke, Reader Privacy Bill Passes California Senate—Moves on to State Assembly, Electronic Frontier Found. (May 9, 2011), https://www.eff.org/deeplinks/2011/05/reader‑privacy‑bill‑passes‑california‑senate-moves.[[24]]

[[25]] See, e.g., Dendrite Int’l, Inc. v. John Doe No. 3, 775 A.2d 756 (N.J. Super Ct. App. Div. 2001).[[25]]

[[26]] Julie E. Cohen, A Right to Read Anonymously: A Closer Look at “Copyright Management” In Cyberspace, 28 Conn. L. Rev. 981 (1996).[[26]]

[[27]] 18 U.S.C. § 2710 (2006).[[27]]

[[28]] Id.; see also Video Privacy Protection Act, Electronic Privacy Info. Center, http://epic.org/privacy/vppa/ (last visited Feb. 28, 2012) (providing an overview of the VPPA).[[28]]

[[29]] See Video Privacy Protection Act, Electronic Privacy Info. Center, http://epic.org/privacy/vppa/ (last visited Feb. 28, 2012).[[29]]

[[30]] See H.R. 2471, 112th Cong. (1st Sess. 2011).[[30]]

[[31]] The Senate Judiciary Committee held a hearing on VPPA in January.  See The Video Privacy Protection Act: Protecting Viewer Privacy in the 21st Century: Hearing Before the Senate Committee on the Judiciary, Subcommittee on Privacy, Technology, and the Law, 112th Cong. (2nd Sess. 2012), available at http://www.judiciary.senate.gov/hearings/hearing.cfm?id=f14e6e2889a80b6b53be6d4e412d460f; see also Grant Gross, Lawmakers Question Proposed Change to Video Privacy Law, PCWorld (Jan. 31, 2012), http://www.pcworld.com
/businesscenter/article/249058/lawmakers_question_proposed_change_to_video_privacy_law.html.[[31]]

[[32]] Jaikumar Vijayan, IRS, DOJ Use Social Media Sites to Track Deadbeats, Criminal Activity, Computerworld (Mar. 16, 2010), http://www.computerworld.com/s/article/9171639/IRS_DOJ_use_social_media_sites_to_track_deadbeats_criminal_activity_.[[32]]

[[33]] 425 U.S. 435, 443 (1976).[[33]]

[[34]] United States v. Jones, No. 10–1259, slip op. at 3–6 (U.S. Jan. 23, 2012) (Sotomayor, J., concurring).[[34]]

[[35]] 389 U.S. 347, 348, 352 (1967); see also id. at 361 (Harlan, J., concurring) (developing the reasonable expectation of privacy test).  Later Courts would adopt the reasonable expectation of privacy test.  See Smith v. Maryland, 442 U.S. 735, 740 (1979).[[35]]

[[36]] Jones, slip op. at 10 (Alito, J., concurring in judgment). Alito in the concurrence in Jones noted that “even if the public does not welcome the diminution of privacy that new technology entails, they may eventually reconcile themselves to this development as inevitable.”  Id.  At oral argument, Alito remarked that “[t]echnology is changing people’s expectations of privacy. Suppose we look forward 10 years, and maybe 10 years from now 90 percent of the population will be using social networking sites and they will have on average 500 friends and they will have allowed their friends to monitor their location 24 hours a day, 365 days a year, through the use of their cell phones. Then—what would the expectation of privacy be then?”  Transcript of Oral Argument at 44, United States v. Jones 565 U.S. ___ (2012) (No.10–1259).[[36]]

[[37]] See Alex Kozinski & Stephanie Grace, Pulling the Plug on Privacy: How Technology Helped Make the 4th Amendment Obsolete, The Daily (June 22, 2011), http://www.thedaily.com/page/2011/06/22/062211-opinions-oped-privacy
-kozinski-grace-1-2/.[[37]]

By: M. Ryan Calo

Professor Patricia Sánchez Abril opens her article, Private Ordering: A Contractual Approach to Online Interpersonal Privacy, with a profound insight: online interpersonal privacy suffers from a case of broken windows.[1] By “broken windows,” Professor Abril refers to the well-evidenced phenomenon that instances of minor disrepair can promote an overall environment of antisocial behavior.[2] Just as a building with one broken window will almost certainly have many more, so could other contexts degenerate if small infractions go visibly unaddressed.


Professor Abril invokes the metaphor of broken windows in the context of online interpersonal privacy to illustrate “the role of norms vis-à-vis legal rules in shaping human behavior.”[3] Specifically, she believes that some combination of four factors—the dominance of sharing-culture, the lack of close-knit groups, the dearth of opportunities for user control over data, and the misapplication of contract law—“conspire to create a public perception of ambivalence toward breaches of interpersonal privacy, perpetuating social disorder.”[4] Professor Abril ultimately “calls on the power of contract to create context and thereby address many online interpersonal privacy concerns.”[5]

I agree with much of Professor Abril’s sophisticated reframing of the problem.  I also see promise in her proposal to leverage contracts to combat a perception that “anything goes” on the Internet.[6] In particular, I appreciate the role Professor Abril has in mind for contract law: not just to create enforceable rights, but more importantly, to signal the solemnity of the transaction.  In her words: “Even when not readily enforceable by legal means, the mere existence of a contract serves the important role of expressing and establishing social norms.”[7]

Indeed, one way to challenge Professor Abril’s argument is to question whether contracts formed in the way she describes will carry any legal water at all.  Arguably, they will not.  Creative Commons gains force from copyright law, which provides an affirmative right that the right holder may then pare back or renounce.[8] The same is not true where, as in interpersonal privacy, there is no underlying statutory right.[9] Even if we agree that opening an e-mail or accepting a friend request can constitute both consideration and assent for purposes of contract formation, it is hard to imagine how damages might be calculated.  Professor Abril concedes at length that damages may prove an insurmountably high hurdle to recovery on a successful claim.[10]

In many ways, observing that a system of interpersonal contracts may in practice be unenforceable misses the point.  What Professor Abril seems to be after is “a new set of norms;”[11] she wants to leverage the formality of contract law to “create context.”[12] Regardless of whether a court would permit recovery for a breach of Professor Abril’s protocol, the very use of that protocol—its mere existence—tends to combat the prevailing cavalier attitude toward interpersonal privacy that we see today.[13] It helps mend the broken windows.

But this observation raises a second question: if all we are doing is signaling, ought we not to prefer a nonbinding, norms-based approach to online communication such as that championed by Jonathan Zittrain and Lauren Gelman?[14] These authors eschew the use of law per se in favor of a model based outright on principles of neighborliness.  There may be times when upholding an agreement of confidentiality is not in the public interest and even the mantle of law is overkill.  For instance, what if a student’s use of a social network reveals that she is a danger to herself or others, but her peers have all contracted not to say anything?[15]

Both models suffer, incidentally, from a common limitation.  No matter how user-friendly the signal is, when faced with too much signaling, users may begin to tune it out.  What effect will a sea of icons, or the need to click assent for every bit of content, have on the average user?  Judging by the literature on information overload generally, and “wear out” specifically, there is a danger users will become inured to even a standardized system of online communication.[16]

This brings me to a final point about how best to improve a social environment.  What is interesting about the broken windows theory is not necessarily that vandalism influences norms; presumably there are many phenomena that influence norms.[17] It is that broken windows are features of the physical environment—they are a form of architecture.[18]

Professor Abril is hardly unaware of the importance of design, as evidenced by her condition that user-to-user contracts be “user-friendly” and “standardized.”[19] Importantly, she is also aware of the existence of literature in psychology suggesting that the form of social interactions helps dictate their content.[20] She nevertheless underemphasizes what I consider to be a crucial point: the very design of a website has a powerful effect on user experience.  Many of the problems we face online result directly from design decisions that we could—and in some cases, ought to—revisit.

Consider the broken window of oversharing.  Design is instrumental both to promote and to combat this ostensible problem.  Social networks in particular are built to make sharing as attractive as possible.  Status-update fields loom large at the top of the screen, beckoning participation.  Comment fields are prepopulated with the user’s picture as though she has already begun to comment (might as well do so!).  These design decisions are not accidental.[21] Meanwhile, sharing content online has immediate, positive effects, whereas the downsides to sharing are not immediately felt.[22] The undergraduate deciding to share pictures of last night’s party probably has his fraternity brothers, not prospective employers, in mind.  He experiences positive feedback in the form of comments and “likes”; he may never know why he did not get that job.

Or consider the insight that people are more likely to disclose personal details to websites that are casual in design, as opposed to formal.  In a study by Leslie John and her colleagues, subjects were more likely to admit to controversial conduct when the study was presented in a silly, playful format.[23] This insight has policy repercussions.  We are ostensibly most concerned with the online disclosure behavior of children, for instance, so much so that we have a special law around it.[24] And yet what are the most casual websites on the Internet, including in the online forms they use to collect information?  Those aimed at children.  The kids don’t stand a chance.

A more direct and potentially more effective way to address online privacy’s broken windows is to examine the design of websites themselves—windows, in a sense, onto the Internet.  What we need in privacy, I believe, is a set of architectural values—both aesthetic and systematic—capable of transforming the web experience in ways that promote public policy goals such as privacy and security.  In the 1970s, architect Oscar Newman revolutionized public housing by introducing the concept of defensible space.[25] We need an Oscar Newman for online privacy.

How might the design of websites, phones, energy meters, and other products help provide the user with an accurate mental model of data practice?  How might we empower users to frame their content in ways that limit abuse without recourse to contracts or even words?  These are the challenges that Acquisti, Nancy Kim,[26] Woodrow Hartzog,[27] and others have started to address in their work (and which underpin my own notion of nonlinguistic or “visceral” notice).[28] In a sense, this Article is not a response to Professor Abril’s thought-provoking and well-argued article.  It is an invitation.  Professor Abril ought to take her own metaphor more literally.


[1]. Patricia Sánchez Abril, Private Ordering: A Contractual Approach to Online Interpersonal Privacy, 45 Wake Forest L. Rev. 689, 690 (2010).

[2]. Id.; see also id. at 690 n.11 (citing evidence of the broken windows phenomenon).

[3]. Id. at 691.

[4]. Id. at 694.

[5]. Id.

[6]. Id. at 695, 719, 726.

[7]. Id. at 707.

[8]. 17 U.S.C. § 106 (2006) (granting various exclusive rights in copyrighted works to the copyright owner).

[9]. Federal and state law protects privacy not through a single, baseline statute, but piecemeal through a series of sector- or activity-specific statutes, common law torts, and constitutional doctrines.  See, e.g., Privacy Laws, California Office of Privacy Protection, http://www.privacy.ca.gov/privacy_laws.htm (providing a list of some of the state and federal privacy laws of the United States).

[10]. Abril, supra note 1, at 716–19.

[11]. Id. at 723.

[12]. Id. at 691, 694.

[13]. Id. at 719 (“Although the ideal would be a legally enforceable contract, not all promises of confidentiality must be formal contracts in order to effectively safeguard privacy and counteract an ‘anything goes’ attitude toward online privacy.  Sociolegal scholarship indicates that the very existence of a promise or obligation can change social norms.”).

[14]. See Lauren Gelman, Privacy, Free Speech, and “Blurry Edged” Social Networks, 50 B.C. L. Rev. 1315, 1342 (2009) (suggesting “a tool for users to express and exercise privacy preferences over uploaded content.  It would permit users to express their intentions by tagging any uploaded content with an icon that immediately conveys privacy preferences to third parties.”); Jonathan Zittrain, Privacy 2.0, 2008 U. Chi. Legal F. 65, 106–09 (discussing the application of “code-based norms” to privacy).

[15]. Professor Abril assures us that her system, though it restricts information flow, “will not chill speech.”  Abril, supra note 1, at 722.  It may promote interpersonal intimacy, but it will also invoke the force and solemnity of law to limit sharing.  Id.

[16]. See, e.g., Christine Jolls & Cass R. Sunstein, Debiasing through Law, 35 J. Legal Stud. 199, 212 (2006) (describing “wear out” as the phenomenon “in which consumers learn to tune out messages that are repeated too often”).

[17]. For a detailed discussion, see Lawrence Lessig, The New Chicago School, 27 J. Legal Stud. 661 (1998).

[18]. Id. at 663.

[19]. Abril, supra note 1, at 720.

[20]. Id. at 699 n.74.

[21]. Nor are they intrinsically harmful.  The point of the service is, after all, to communicate.

[22]. Carnegie Mellon Professor Alessandro Acquisti evidences this phenomenon in forthcoming work.

[23]. Leslie K. John, Alessandro Acquisti & George Loewenstein, Strangers on a Plane: Context-Dependent Willingness to Divulge Sensitive Information, 37 J. Consumer Res. 858, 868–69 (2011).

[24]. See Children’s Online Privacy Protection Act of 1998, 15 U.S.C. §§ 6501–6506 (2006).

[25]. Oscar Newman, Defensible Space: Crime Prevention Through Urban Design (1972) (arguing that architecture and urban design influence negative social behavior).

[26]. See, e.g., Nancy Kim, Online Contracts: Form as Function (2010) (unpublished manuscript) (on file with author); cf. Nancy Kim, Website Proprietorship and Online Harassment, 2009 Utah L. Rev. 993, 1014–17 (2009) (describing contractual and architectural techniques to constrain online harassment).

[27]. See, e.g., Woodrow Hartzog, Promises and Privacy: Promissory Estoppel and Confidential Disclosure in Online Communities, 82 Temp. L. Rev. 891, 907–08 (2009); Woodrow Hartzog, Website Design as Contract, 60 Am. U. L. Rev. (forthcoming 2011).

[28]. See, e.g., Steve Lohr, Redrawing the Route to Online Privacy, N.Y. Times, Feb. 28, 2010, at BU4 (“M. Ryan Calo, . . . at the Center for Internet and Society at the Stanford Law School, is exploring technologies that deliver ‘visceral notice.’  His research involves voice and animation technology that emulates humans.”).