Chapter 1: The Nature of Privacy

“Privacy is necessary for an open society in the electronic age.”

Eric Hughes, A Cypherpunk’s Manifesto (1993)^1^

Introduction

“If you have nothing to hide, you have nothing to fear.”

That line appears wherever privacy is discussed, and it works because it smuggles in a false definition before the argument even begins.^2^ It redefines privacy as concealment. Once that move succeeds, resistance to surveillance can be painted as suspicious from the start.

The right response is not embarrassment, nor the weak claim that everyone has private shame he would prefer to keep off display. The right response is definitional: privacy is not the hiding of guilt but control over disclosure. Once that point is fixed, the whole argument changes.

The full answer to “nothing to hide” comes at the end of the book in Chapter 25. For now, the task is to make the terms clear enough that the argument cannot cheat.

1.1 Privacy as Control Over Disclosure

Arguments about privacy usually collapse because the word is allowed to float. One speaker means secrecy, another means anonymity, while a third means legal protection against search. The result is not disagreement so much as collision.

Eric Hughes gave the clean definition this book adopts: privacy is the power to selectively reveal oneself to the world. The load-bearing word is selectively.

Privacy means controlling disclosure to the people who have a proper claim to the information, for the purposes that require it. A patient who speaks openly with a doctor is exercising privacy. So is a merchant who keeps supplier terms from competitors. These cases do not involve absolute concealment. They involve control over who knows what, and that control is active. Privacy is the ability to decide what information moves outward and what stays put, and if that ability disappears, privacy is gone even when no one has yet exploited the exposed data. The cost begins with the loss of control: every later decision the actor makes faces a wider range of possible adversary response.

The definition also keeps the subject clear. Privacy concerns information about persons: plans, preferences, correspondence, health, finances, relationships, political commitments, location, and the thousand other details that make a life legible to those who would exploit it. The book will later argue that information cannot be property in the strict sense. But it does not follow that personal information is free for the taking.

This point dissolves the cheap force of the “nothing to hide” argument. Everyone manages disclosure, and everyone reveals different facts to different people. A person who speaks one way to his accountant and another in public is not acting suspiciously; he is acting like a person.

1.2 Three Related Terms

The next confusion comes from treating three distinct ideas as if they were interchangeable. Privacy concerns controlled disclosure; secrecy concerns concealment; anonymity concerns attribution. They overlap, but they do not collapse into each other.^3^

Privacy is the broadest of the three. It covers the ordinary management of information inside identified relationships. You may be fully known to your doctor and still have privacy if your records remain under proper control. You may be identified to a trading partner and still preserve privacy if only the terms needed for the trade are shared.

Secrecy is narrower and stricter than privacy. A secret is hidden from everyone except the few who are meant to know it, or from everyone altogether. Trade secrets, source protection, intelligence methods, and private keys all rely on secrecy. Some private matters are secret; many are not.

Anonymity concerns the link between action and actor. An anonymous donation or an unlinked payment does not tell the observer who performed the act; a pseudonymous essay does so only under a chosen identity. A person may preserve anonymity in one domain while being fully identified in another. The question is not what is known about the person in general, but whether this act can be tied to that person.

Arguments for surveillance exploit the confusion among these three terms.

If privacy is collapsed into secrecy, the defender is forced into the position of explaining why he hides things.

If anonymity is collapsed into criminal evasion, the defender must explain why any unattributed action should be tolerated.

If secrecy is treated as the whole of privacy, ordinary control over personal information starts to look extreme.

The better view is direct.

Privacy is ordinary and constant.

Secrecy is a narrower case that some activities require.

Anonymity is a tool that becomes necessary when attribution itself creates danger.

Many real situations use several of these tools at once. A journalist protecting a source may need privacy over communications and secrecy about source identity. A buyer using a privacy-preserving payment system may not seek total secrecy at all. He may want only to keep merchants and banks from watching his complete transaction history.

The argument against privacy cheats when the three words slide together.

1.3 Privacy as Strategic Defense

Privacy is a moral good and a strategic defense.

John Boyd’s OODA loop (Observe, Orient, Decide, Act) provides a simple way to see why.^4^ An adversary who wants to control, seize, punish, censor, or preempt must first observe. Without observation the rest of the sequence stalls. A state that cannot see funds cannot freeze them on command, a platform that cannot inspect messages cannot rank or suppress them with the same precision, and a regulator who cannot map the network cannot identify the soft points where pressure will hurt most.
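The dependency can be made concrete with a toy sketch. Every name below is illustrative, not from Boyd's briefings; the point is only that each stage consumes the previous stage's output, so denying observation stalls the entire chain:

```python
from typing import Optional

# Toy model of the OODA dependency: each stage requires the output of
# the stage before it, so an empty observation short-circuits the loop.
# All function names and the string outputs are illustrative only.

def observe(signals: list) -> Optional[list]:
    return signals or None  # encryption/anonymity can empty this stage

def orient(observation):
    return f"model({observation})" if observation else None

def decide(orientation):
    return f"plan({orientation})" if orientation else None

def act(decision):
    return f"targeted-intervention({decision})" if decision else None

def ooda(signals: list):
    return act(decide(orient(observe(signals))))

print(ooda(["payment metadata"]))  # a full chain: intervention is possible
print(ooda([]))                    # observation denied: the chain yields None
```

The sketch is trivial by design: nothing downstream of `observe` has anything to work with once observation fails, which is the structural claim the paragraph above makes.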

Observation is why the fight over privacy begins so early in the chain. It looks passive, but it is not: it is what lets every later intervention become targeted and cheap.

The defender’s advantage lies exactly here. Surveillance requires infrastructure, storage, indexing, legal authority, software, analysts, and time. In many cases the countermeasure is far cheaper: a key pair costs almost nothing, and end-to-end encryption can run on hardware people already own. A privacy-preserving communication tool can deny easy collection at a fraction of the attacker’s cost.^5^
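The shape of that asymmetry can be shown with a toy Diffie-Hellman-style key pair (a deliberately insecure miniature; the prime, generator, and function names are assumptions for the sketch, and real systems use ~2048-bit groups or elliptic curves). The defender pays one modular exponentiation; an attacker who wants the secret back from the public value faces a discrete-logarithm search:

```python
import secrets

# Toy discrete-log asymmetry. NOT a secure construction: the 32-bit
# modulus exists only to make the cost gap visible in a few lines.
P = 4294967311        # a small prime modulus (toy-sized)
G = 5                 # generator, assumed for illustration

def keygen():
    """Defender's cost: pick x, compute g^x mod p -- one cheap operation."""
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def brute_force_dlog(public, limit):
    """Attacker's cost: trial exponentiation until a match, or give up."""
    acc = 1
    for x in range(1, limit):
        acc = (acc * G) % P
        if acc == public:
            return x
    return None

secret, public = keygen()
# The defender performed one pow(); the attacker must search an exponent
# space that, at real key sizes, is computationally infeasible.
```

Nothing here is specific to one protocol; the same one-cheap-operation-versus-expensive-search structure underlies the public-key systems cited in note 5.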

That asymmetry matters more than the slogan that privacy is a right. Rights language is important, but costs still decide much of political reality. A state can maintain control when observation is cheap. It struggles when observation becomes expensive and unreliable.

This book will return to that asymmetry many times. It is one reason privacy tools provoke such hostility from institutions built on surveillance. They do not hide information. They raise the price of domination.^6^

1.4 The Book’s Argument in Brief

This book develops one argument through several layers. Privacy belongs to the structure of action itself. It also supports economic coordination by protecting deliberation and negotiation, along with the signals on which markets depend. Surveillance is therefore not a neutral administrative convenience. It is a method of control used by states and allied institutions to tax, redirect, or suppress. The answer is practical as well as theoretical: privacy can be defended through cryptography, anonymous communication, resistant money, selective disclosure, and decentralized protocols. Those tools allow the growth of a parallel economy that depends less and less on systems built for observation.^7^

The order of the book follows that logic. Part II develops the axioms, Part III applies economic reasoning, Part IV studies the adversary, and Parts V and VI turn to tools, implementation, institutions, and the parallel economy itself.

The book also states its limits.

It does not claim that privacy removes every danger or that technology makes politics irrelevant. Nor does it claim that parallel institutions replace the official order overnight. It claims something narrower and stronger. Privacy is defensible in theory. It is possible in practice and becomes economically potent once enough people know how to use it.

The book also marks one further scope. Privacy is not a separate property right alongside ownership of one’s body and ownership of scarce external resources. What ordinary language calls “privacy” is what self-ownership of the body, property rights in physical devices and premises, and contractual confidentiality produce when respected. These three rights are common ground in the libertarian-Austrian tradition: Locke and Rothbard ground them in self-ownership and original appropriation;^8^ Hoppe grounds them in the presuppositions of argumentation;^9^ Kinsella sharpens them through the scarcity criterion.^10^ The book’s privacy argument rests on what these positions share. Chapter 6 develops the framework in detail and follows Kinsella’s later treatment because it is more precise about what contract can and cannot do, while marking where Rothbard’s older treatment differs and what the differences reach. The chapters in between build the case for protecting the underlying rights and for the engineering that defends them in practice.

Chapter Summary

Privacy is control over disclosure, not concealment: the power to selectively reveal oneself. That definition separates privacy from two adjacent concepts. Secrecy is narrower, and anonymity concerns attribution. The “nothing to hide” argument works only by collapsing these three into one; once they are held apart, it loses its force.

Privacy is also strategic. Observation is the first step of every targeted intervention, and raising its cost disrupts the whole sequence. Surveillance requires infrastructure, analysts, and time; a key pair costs almost nothing. That asymmetry, more than any appeal to rights, explains why privacy tools provoke such hostility from institutions built on surveillance.

What remains for later is the ground beneath these claims. Privacy as a structural feature of action waits for Chapter 3, the normative case that surveillance is unjust for Chapter 4’s argumentation ethics, and the economic consequences of observed exchange for Chapters 7 and 9. This chapter fixes the vocabulary those arguments presuppose.


Endnotes

^1^ Eric Hughes, “A Cypherpunk’s Manifesto” (March 9, 1993), https://www.activism.net/cypherpunk/manifesto.html. The opening declaration of the cypherpunk movement, written as a short manifesto and circulated on the cypherpunks mailing list. Hughes’s working definition of privacy as “the power to selectively reveal oneself to the world” supplies the operational definition this chapter adopts; the line in the epigraph names the structural claim, that privacy is a precondition of an open society in the electronic age, not a freestanding right that competes against other social goods. For the foundational mid-twentieth-century academic treatment of privacy as control over information, see Alan F. Westin, Privacy and Freedom (New York: Atheneum, 1967), which established the standard four-function taxonomy (solitude, intimacy, anonymity, reserve) and is the standard reference for privacy scholarship in the legal-academic tradition.

^2^ The definitive scholarly treatment of the “nothing to hide” argument is Daniel J. Solove, “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy,” San Diego Law Review 44 (2007): 745, available at https://papers.ssrn.com/abstract=998565. Solove catalogs the rhetorical moves that let the slogan succeed (equating privacy with concealment, treating privacy-as-individual-right in conflict with security-as-collective-good, demanding specific harms before granting privacy standing) and shows why each fails. Book-length development in Solove, Nothing to Hide: The False Tradeoff between Privacy and Security (Yale University Press, 2011), which surveys the surveillance-state arguments that became dominant after 2001 and explains why the “I have nothing to hide” response concedes the entire framing. For the lived-experience refutation, Aleksandr Solzhenitsyn, The Gulag Archipelago, trans. Thomas P. Whitney (New York: Harper & Row, 1974), documents the Soviet-era truth that under sufficient state scrutiny any individual can be found to have something to conceal; Solzhenitsyn’s observation is that the “nothing to hide” defense presupposes a benign state whose behavior under pressure is not historically attested.

^3^ On the conceptual taxonomy of privacy, see Westin (cited in note 1 above), whose four-function taxonomy and control-based definition are the modern starting point. Daniel J. Solove, Understanding Privacy (Harvard University Press, 2008), extends Westin’s work and analyzes privacy through a taxonomy of sixteen distinct harms including surveillance, aggregation, identification, and disclosure; it is the standard modern scholarly treatment. Helen Nissenbaum, Privacy in Context: Technology, Policy, and the Integrity of Social Life (Stanford University Press, 2010), develops the contextual-integrity framework that explains why disclosures appropriate in one context (health information to a doctor) become violations in another (the same information to an employer). Anita L. Allen, Unpopular Privacy: What Must We Hide? (Oxford University Press, 2011), treats privacy rights from a legal-philosophical angle and makes the case that some privacy claims are owed regardless of the claimant’s preferences. Julie E. Cohen, “What Privacy Is For,” Harvard Law Review 126 (2013): 1904–1933, adds that privacy shelters the emergent subjectivity of the individual from the efforts of commercial and government actors to render it fixed, transparent, and predictable. For the constitutional-law origin of the American privacy-rights tradition, see Louis D. Brandeis’s dissent in Olmstead v. United States, 277 U.S. 438, 478 (1928), which established “the right to be let alone, the most comprehensive of rights and the right most valued by civilized men” as the standard formulation; and Samuel D. Warren and Louis D. Brandeis, “The Right to Privacy,” Harvard Law Review 4 (1890): 193–220, the founding article of American privacy jurisprudence.

^4^ John Boyd developed the OODA loop through a series of military briefings, especially “Patterns of Conflict” and “The Strategic Game of ? and ?”; his slides are archived at the Defense and the National Interest project (https://www.projectwhitehorse.com/pdfs/boyd/patterns%20of%20conflict.pdf). For a book-length treatment, see Frans P. B. Osinga, Science, Strategy and War: The Strategic Theory of John Boyd (Routledge, 2007), which is the standard academic study of Boyd’s work and traces the OODA loop from fighter-pilot tactics through grand strategy. Chet Richards, Certain to Win: The Strategy of John Boyd, Applied to Business (Xlibris, 2004), is the best non-military application. For the original briefing cycle compiled and edited, see Grant T. Hammond, The Mind of War: John Boyd and American Security (Smithsonian, 2001).

^5^ The economic asymmetry between cheap encryption and expensive attack is treated in Bruce Schneier, Applied Cryptography, 20th anniversary ed. (Wiley, 2015), and in Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno, Cryptography Engineering: Design Principles and Practical Applications (Wiley, 2010). For the policy consequences, see Whitfield Diffie and Susan Landau, Privacy on the Line: The Politics of Wiretapping and Encryption, updated ed. (MIT Press, 2007), and Susan Landau, Listening In: Cybersecurity in an Insecure Age (Yale University Press, 2017). The foundational public-key paper is Whitfield Diffie and Martin Hellman, “New Directions in Cryptography,” IEEE Transactions on Information Theory IT-22, no. 6 (1976): 644–654, which argued that asymmetric cryptography changes the economics of secure communication by removing the key-distribution bottleneck.

^6^ The political economy of state and corporate surveillance is treated at book length in Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019), which frames surveillance as a business model that extracts behavioral data as its primary commodity. Carissa Véliz, Privacy Is Power: Why and How You Should Take Back Control of Your Data (Bantam Press, 2020), makes the shorter popular case. Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (Norton, 2015), surveys the technical and political infrastructure. For the historical frame that underlies the book’s account of why states pursue legibility, James C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (Yale University Press, 1998), documents how legibility itself is the prerequisite for state intervention and how resistant populations have used opacity as a political strategy. On the cryptocurrency-specific surveillance industry, Andy Greenberg, Tracers in the Dark: The Global Hunt for the Crime Lords of Cryptocurrency (Doubleday, 2022), is the definitive journalistic account of blockchain forensics as practiced by the firms this book’s Chapter 20 treats.

^7^ Further reading on privacy as a field: Julia Angwin, Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance (Times Books, 2014), documents one reporter’s practical attempt to disentangle from surveillance infrastructure, and is a useful companion to Chapter 23. Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide for Privacy and Protest (MIT Press, 2015), catalogs the techniques of adding noise to signals an adversary wants clean, and their ethics. Glenn Greenwald, No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State (Metropolitan Books, 2014), remains the best narrative account of the 2013 NSA revelations. Simon Singh, The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography (Anchor, 2000), is the most accessible general introduction to cryptographic history. Andy Greenberg, This Machine Kills Secrets: How WikiLeakers, Cypherpunks, and Hacktivists Aim to Free the World’s Information (Dutton, 2012), traces the cypherpunk lineage that Chapter 2 develops.

^8^ John Locke, Two Treatises of Government, ed. Peter Laslett (Cambridge: Cambridge University Press, 1988; orig. 1689), Second Treatise, Chapters V–IX, https://www.gutenberg.org/ebooks/7370. Locke grounds property rights in self-ownership: each person owns his body and therefore owns the labor of his body, and by mixing that labor with unowned resources acquires rightful title. Murray N. Rothbard extends and radicalizes this framework in The Ethics of Liberty (New York: New York University Press, 1998; orig. 1982), https://mises.org/library/book/ethics-liberty, arguing that all rights reduce to self-ownership and original appropriation, that the state violates both, and that contractual confidentiality is a direct application of property rights over one’s own person and information.

^9^ Hans-Hermann Hoppe, The Economics and Ethics of Private Property: Studies in Political Economy and Philosophy, 2nd ed. (Auburn, AL: Ludwig von Mises Institute, 2006), https://mises.org/library/book/economics-and-ethics-private-property, especially “The Ultimate Justification of the Private Property Ethic.” Hoppe’s argumentation ethics holds that the act of arguing presupposes exclusive control over one’s body and the resources one uses to argue; any attempt to deny property rights is therefore self-refuting in discourse. Chapter 4 of this book applies this framework directly to privacy: the capacity for rational discourse presupposes the ability to control what one discloses to interlocutors.

^10^ Stephan Kinsella, Against Intellectual Property (Auburn, AL: Ludwig von Mises Institute, 2008), https://mises.org/library/book/against-intellectual-property; and Kinsella, “How We Come to Own Ourselves,” Mises Daily (September 7, 2006), https://mises.org/mises-daily/how-we-come-own-ourselves. Kinsella’s contribution is to ground property rights exclusively in the scarcity of physical resources: only scarce goods can generate real conflict over use, and therefore only scarce goods can be the object of property rights. This scarcity criterion excludes information from the category of property in the strict sense (a conclusion this book accepts) while preserving full property rights in the physical media and devices that store or transmit information, which is what the privacy argument requires.


The Praxeology of Privacy – third edition. New chapters publish daily at 1600 UTC.