I have on various occasions heard the term "Security Through Obscurity" used to denigrate a number of security systems, and I feel I must comment on it. Most people seem to define "security through obscurity" as a security system based on keeping some piece of information from an attacker. It's not clear that this is the definition security experts intended, but if we accept it, then there is nothing wrong with security through obscurity.

Most security systems I can think of rely on some subset of three factors: 1) something you know; 2) something you have (e.g. a conventional key, or a SecurID token); 3) something you are (e.g. biometrics). The most popular of these is factor one, because it needs no special interfaces to the computer beyond what you already need to operate one. And any system that relies on factor one can be labelled "security through obscurity". That includes knowing which port to connect to, a password you might enter, or even the private key you use to authenticate cryptographically.

These factors can have fuzzy boundaries. For example, is a public-key system like PGP relying on something you know (the value of the private key), or on both something you know (the passphrase) and something you have (the private key, as stored on disk)? Depending on how we define "you" and "know", it could be either a one-factor or a two-factor authentication mechanism.

To a limited extent (that is, against enemies with the resources to make keys or latex fingerprints), factors two and three are also based on denying your opponent information: in this case, the information he or she needs to manufacture what you have or are. For example, I believe SecurID tokens have secret data inside them (to distinguish one from another). It is irrelevant whether this is stored in NVRAM or a ROM; if your opponent has a token and the facilities to reprogram one, you are back to factor one issues (see the short sketch at the end of this post). Similarly, a physical key can be reproduced easily, given some standardized measurements of the pins in the lock. So on some level, all three methods boil down to factor one.

Since networks cannot currently carry parts of you or physical objects, most if not all distributed security systems involve factor one. You can try to create physically secure, tamper-resistant endpoints that are difficult to replace with subverted devices, but physical security is never absolute. In the end, distributed security systems often boil down to factor one for the simple reason that networks carry information, not people or objects.

Simply put, security through obscurity works. It has been, and continues to be, the most widely used security measure. To say that security through obscurity does not work requires a narrow definition of obscurity. The schematics for a modern military cryptographic device may be obscure, but so are the settings one applies to it. How much luck would a private citizen have getting either? About the same. What counts as obscure (as opposed to secret) varies with the resources the attacker has; obscurity and secrecy are nearly indistinguishable at the low end of the adversary spectrum.

A number of people have attempted to redefine what "security through obscurity" means. Some say it means relying solely on obscurity, but even that definition is flawed by the reasoning above. I find the term misleading and the statements made about it equally imprecise. I feel it's time to stop using this phrase.
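To make the token point concrete, here is a minimal sketch. SecurID's actual algorithm is proprietary, so this stands in the public TOTP scheme (RFC 6238) for it, and the seed value is invented for illustration; the point is only that the token's output is computed entirely from a stored secret, so anyone who copies that secret has cloned the token.

    # Minimal TOTP sketch (RFC 6238), used here as a public stand-in for a
    # proprietary hardware token. The "token" is nothing but a stored seed
    # plus a public algorithm: copy the seed and you have cloned the token.
    import hmac, hashlib, struct, time

    def totp(secret: bytes, now: float, step: int = 30, digits: int = 6) -> str:
        counter = int(now) // step                       # 30-second time window
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                          # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    token_secret = b"seed-burned-into-the-token"         # hypothetical seed
    t = time.time()
    # The legitimate token and an attacker holding a copy of its seed
    # produce identical codes:
    assert totp(token_secret, t) == totp(token_secret, t)

The assert passes trivially, and that is exactly the point: nothing distinguishes the legitimate token from a copy of its seed, so "something you have" reduces to "something (the seed) that must be kept from the attacker", i.e. factor one.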