
public key cryptography

A large random number is used to generate a public-key/private-key pair.
In an encryption scheme anyone can encrypt using the public key, but only the holder of the private key can decrypt. Security depends on the secrecy of the private key.
In a signature scheme the private key is needed to sign a message; but anyone can check the signature using the public key. Validity depends on private key security.
In the Diffie-Hellman key predistribution scheme, each party generates a public/private key pair and distributes the public key. After obtaining an authentic copy of each other's public keys, Alice and Bob can compute a shared secret offline. The shared secret can be used as the key for a symmetric cipher.

Public-key cryptography, also known as asymmetric cryptography, is a form of cryptography in which the key used to encrypt a message differs from the key used to decrypt it. In public key cryptography, a user has a pair of cryptographic keys: a public key and a private key. The private key is kept secret, while the public key may be widely distributed. A message encrypted with the recipient's public key can be decrypted only with the corresponding private key. The keys are related mathematically, but the private key cannot be practically derived from the public key.

Conversely, secret-key cryptography, also known as symmetric cryptography, uses a single secret key for both encryption and decryption. To use symmetric cryptography for communication, both the sender and the receiver must know the key beforehand, or it must be sent along with the message.
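
To make the asymmetry concrete, the following minimal sketch generates a key pair, encrypts a short message with the public key, and decrypts it with the private key. It assumes a recent version of the third-party Python cryptography package is installed; the message and padding choices here are merely illustrative.

<pre>
# Minimal public-key encryption sketch (illustrative only).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The key pair: the private key is kept secret, the public key is distributed.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone holding the public key can encrypt...
ciphertext = public_key.encrypt(b"attack at dawn", oaep)

# ...but only the holder of the private key can decrypt.
assert private_key.decrypt(ciphertext, oaep) == b"attack at dawn"
</pre>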

The two main branches of public key cryptography are:

  • Public key encryption — a message encrypted with a recipient's public key cannot be decrypted by anyone except the recipient possessing the corresponding private key. This is used to ensure confidentiality.
  • Digital signatures — a message signed with a sender's private key can be verified by anyone who has access to the sender's public key, thereby proving that the sender signed it and that the message has not been tampered with. This is used to ensure authenticity.

An analogy for public-key encryption is that of a locked mailbox with a mail slot. The mail slot is exposed and accessible to the public; its location (the street address) is in essence the public key. Anyone knowing the street address can go to the door and drop a written message through the slot; however, only the person who possesses the key can open the mailbox and read the message.

An analogy for digital signatures is the sealing of an envelope with a personal wax seal. The message can be opened by anyone, but the presence of the seal authenticates the sender.

A central problem for the use of public-key cryptography is confidence (ideally proof) that a public key is correct, belongs to the person or entity claimed (i.e., is 'authentic'), and has not been tampered with or replaced by a malicious third party. The usual approach to this problem is to use a public-key infrastructure (PKI), in which one or more third parties, known as certificate authorities, certify ownership of key pairs. Another approach, used by PGP, is the "web of trust" method to ensure the authenticity of key pairs.

So far, public key techniques have been much more computationally intensive than purely symmetric algorithms. The judicious use of these techniques enables a wide variety of applications without incurring a prohibitive computational penalty. In practice, public key cryptography is often used in combination with secret-key methods for efficiency reasons. Such a combination is called a hybrid cryptosystem. For encryption, the sender encrypts the message with a secret-key algorithm using a randomly generated key, and that random key is then encrypted with the recipient's public key. For digital signatures, the sender hashes the message (using a cryptographic hash function) and then signs the resulting "hash value". Before verifying the signature, the recipient also computes the hash of the message, and compares this hash value with the signed hash value to check that the message has not been tampered with.
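
As a sketch of such a hybrid scheme (again assuming the Python cryptography package; the Fernet recipe stands in here for "a secret-key algorithm using a randomly generated key"), the bulk message is encrypted symmetrically, and only the short session key is encrypted under the recipient's public key.

<pre>
# Illustrative hybrid encryption: a random symmetric session key for the message,
# public-key encryption only for that short key. A sketch, not a vetted design.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_public = recipient_private.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the message with a fresh random session key...
session_key = Fernet.generate_key()
bulk_ciphertext = Fernet(session_key).encrypt(b"a long message" * 1000)
# ...and wrap the session key with the recipient's public key.
wrapped_key = recipient_public.encrypt(session_key, oaep)

# Recipient: unwrap the session key, then decrypt the bulk message.
recovered_key = recipient_private.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(bulk_ciphertext) == b"a long message" * 1000
</pre>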

History[edit]

For most of the history of cryptography, a key had to be kept absolutely secret and would be agreed upon beforehand using a secure, but non-cryptographic, method; for example, a face-to-face meeting or a trusted courier. There are a number of significant practical difficulties in this approach to distributing keys. Public-key cryptography was invented to address these drawbacks: with public-key cryptography, users can communicate securely over an insecure channel without having to agree upon a shared key beforehand.

In 1874, a book by William Stanley Jevons <ref>Jevons, William Stanley, The Principles of Science: A Treatise on Logic and Scientific Method, Macmillan & Co., London, 1874, 2nd ed. 1877, 3rd ed. 1879. Reprinted with a foreword by Ernst Nagel, Dover Publications, New York, NY, 1958.</ref> described the relationship of one-way functions to cryptography and went on to discuss specifically the factorization problem used to create the trapdoor function in the RSA system. In July 1996, one observer <ref>Solomon W. Golomb, On Factoring Jevons' Number, CRYPTOLOGIA 243 (July 1996).</ref> commented on the Jevons book in this way:

<blockquote>In his book The Principles of Science: A Treatise on Logic and Scientific Method, written and published in the 1890s <ref>The 1890s date for the publication of Jevons' book in this quotation is incorrect.</ref>, William S. Jevons observed that there are many situations where the 'direct' operation is relatively easy, but the 'inverse' operation is significantly more difficult. One example mentioned briefly is that enciphering (encryption) is easy while deciphering (decryption) is not. In the same section of Chapter 7: Introduction titled 'Induction an Inverse Operation', much more attention is devoted to the principle that multiplication of integers is easy, but finding the (prime) factors of the product is much harder. Thus, Jevons anticipated a key feature of the RSA Algorithm for public key cryptography, though he certainly did not invent the concept of public key cryptography.</blockquote>

The first invention <ref>The NSA has also claimed to have invented public-key cryptography, in the 1960s; however, there is currently (as of 2004) little supporting public evidence for this claim [1].</ref> of asymmetric key algorithms was by James H. Ellis, Clifford Cocks, and Malcolm Williamson at GCHQ in the UK in the early 1970s; these inventions were what later became known as Diffie-Hellman key exchange, and a special case of RSA. The GCHQ cryptographers referred to the technique as "non-secret encryption". These inventions were not publicly disclosed at the time, and the fact that they had been developed was kept secret until 1997.

An asymmetric-key cryptosystem was published in 1976 by Whitfield Diffie and Martin Hellman, who, influenced by Ralph Merkle's work on public-key distribution, disclosed a method of public-key agreement. This method of exponential key exchange, which came to be known as Diffie-Hellman key exchange, was the first published practical method for establishing a shared secret key over an unprotected communications channel without using a prior shared secret. Merkle's public-key-agreement technique became known as Merkle's Puzzles, and was published in 1978.
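
A toy version of this exponential key exchange can be written with nothing more than Python's built-in modular exponentiation; the modulus and base below are placeholders chosen for illustration, not parameters anyone should actually use.

<pre>
# Toy Diffie-Hellman exchange; illustrative only, not secure parameters.
import secrets

p = 2**127 - 1                    # a public prime modulus (placeholder)
g = 3                             # a public base

a = secrets.randbelow(p - 2) + 2  # Alice's private exponent, never transmitted
b = secrets.randbelow(p - 2) + 2  # Bob's private exponent, never transmitted

A = pow(g, a, p)                  # Alice sends A over the open channel
B = pow(g, b, p)                  # Bob sends B over the open channel

# Each side combines the other's public value with its own private exponent.
assert pow(B, a, p) == pow(A, b, p)   # both arrive at g**(a*b) mod p
</pre>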

A generalisation of the Cocks method was reinvented in 1977 by Rivest, Shamir and Adleman, all then at MIT. They published their work in 1978, and the algorithm came to be known as RSA, from their initials. RSA uses exponentiation modulo a product of two large primes to encrypt and decrypt, performing both public key encryption and public key digital signatures, and its security is connected to the presumed difficulty of factoring large integers, a problem for which there is no known efficient (i.e., practicably fast) general technique.
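
A worked toy example with deliberately tiny primes (a common textbook choice, useless for real security) shows the modular exponentiation at the heart of RSA; real implementations also apply padding, which is omitted here.

<pre>
# Textbook RSA with toy parameters; for illustration only.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, chosen coprime to phi
d = pow(e, -1, phi)       # private exponent 2753 (modular inverse, Python 3.8+)

m = 65                    # a message encoded as an integer smaller than n
c = pow(m, e, n)          # encryption: 65**17 mod 3233 == 2790
assert pow(c, d, n) == m  # decryption with the private exponent recovers m
</pre>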

Since the 1970s, a large number and variety of encryption, digital signature, key agreement, and other techniques have been developed in the field of public-key cryptography. The ElGamal cryptosystem (invented by Taher ElGamal) relies on the (similar, and related) difficulty of the discrete logarithm problem, as does the closely related DSA, developed at the NSA and published by NIST as a proposed standard. The introduction of elliptic curve cryptography by Neal Koblitz and Victor Miller in the mid-1980s has yielded a new family of analogous public-key algorithms. Although mathematically more complex, elliptic curves appear to provide a more efficient way to leverage the discrete logarithm problem, particularly with respect to key size for equivalent estimated security.


Security[edit]

Some encryption schemes can be proven secure based on the presumed hardness of a mathematical problem, such as factoring the product of two large primes or computing discrete logarithms (Rabin did so for a scheme that has seen little practical use). Note that "secure" here has a precise mathematical meaning, and there are multiple different (meaningful) definitions of what it means for an encryption scheme to be secure. The "right" definition depends on the context in which the scheme will be deployed.

In contrast to the one-time pad, no public-key encryption scheme has been shown to be secure against eavesdroppers with unlimited computational power. Proofs of security therefore hold with respect to computationally limited adversaries, and can give guarantees (relative to particular mathematical assumptions) of the form "the scheme cannot be broken using a desktop computer in 1000 years", or "this algorithm is secure as long as no improved method of solving the underlying problem (for instance, integer factorization) is found".

The most obvious application of a public key encryption system is confidentiality; a message which a sender encrypts using the recipient's public key can be decrypted only by the recipient's paired private key (assuming, of course, that no flaw is discovered in the basic algorithm used).

Public-key digital signature algorithms can also be used for sender authentication and non-repudiation. For instance, a user can encrypt a message with his own private key and send it. If another user can successfully decrypt it using the corresponding public key, this provides assurance that the first user (and no other) sent it (if, that is, there is not a flaw in the algorithm, if the public key is the correct one, and if the corresponding private key has not been compromised). In practice, a cryptographic hash value of the message is usually calculated, encrypted with the private key and sent along with the message (in essence, a cryptographic signature of the message). The receiver can then verify message integrity and origin by calculating the hash value of the received message and comparing it against the decoded signature (the original hash). If the hash from the sender and the hash on the receiver side do not match, then the received message is not identical to the message which the sender "signed", or the sender's identity is wrong.
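
A minimal sketch of this hash-then-sign flow, again assuming the Python cryptography package (here the library computes the SHA-256 digest and applies PSS padding internally); verification raises an exception if either the message or the signature has been altered.

<pre>
# Illustrative RSA-PSS signing and verification.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

signer_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signer_public = signer_private.public_key()

message = b"I, Alice, agree to the contract."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = signer_private.sign(message, pss, hashes.SHA256())

# Anyone holding the public key can check the signature; tampering is detected.
signer_public.verify(signature, message, pss, hashes.SHA256())   # passes silently
try:
    signer_public.verify(signature, message + b"!", pss, hashes.SHA256())
except InvalidSignature:
    print("altered message rejected")
</pre>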

To achieve authentication, non-repudiation, and confidentiality, the sender first encrypts (signs) the message using his own private key, and then encrypts the result a second time using the recipient's public key.

These characteristics can be used to construct many other, sometimes surprising, cryptographic protocols and applications, like digital cash, password-authenticated key agreement, multi-party key agreement, etc.

Practical considerations[edit]

A postal analogy[edit]

An analogy which can be used to understand the advantages of an asymmetric system is to imagine two people, Alice and Bob, sending a secret message through the public mail. In this example, Alice wants to send a secret message to Bob, and expects a secret reply from Bob.

With a symmetric key system, Alice first puts the secret message in a box, and locks the box using a padlock to which she has a key. She then sends the box to Bob through regular mail. When Bob receives the box, he uses an identical copy of Alice's key (which he has somehow obtained previously, maybe by a face-to-face meeting) to open the box, and reads the message. Bob can then use the same padlock to send his secret reply.

In an asymmetric key system, Bob and Alice have separate padlocks. First, Alice asks Bob to send his open padlock to her through regular mail, keeping his key to himself. When Alice receives it she uses it to lock a box containing her message, and sends the locked box to Bob. Bob can then unlock the box with his key and read the message from Alice. To reply, Bob must similarly get Alice's open padlock to lock the box before sending it back to her.

The critical advantage in an asymmetric key system is that Bob and Alice never need to send a copy of their keys to each other. This prevents a third party (perhaps, in the example, a corrupt postal worker) from copying a key while it is in transit, allowing said third party to spy on all future messages sent between Alice and Bob. So in the public key scenario, Alice and Bob need not trust the postal service as much. In addition, if Bob were careless and allowed someone else to copy his key, Alice's messages to Bob would be compromised, but Alice's messages to other people would remain secret, since the other people would be providing different padlocks for Alice to use.

In another kind of asymmetric key system, Bob and Alice have separate padlocks. First, Alice puts the secret message in a box, and locks the box using a padlock to which only she has a key. She then sends the box to Bob through regular mail. When Bob receives the box, he adds his own padlock to the box, and sends it back to Alice. When Alice receives the box with the two padlocks, she removes her padlock and sends it back to Bob. When Bob receives the box with only his padlock on it, Bob can then unlock the box with his key and read the message from Alice. This three-pass protocol is typically used during key exchange.
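
The double-padlock exchange corresponds to a commutative cipher. A toy sketch in the style of the Shamir three-pass protocol, using modular exponentiation with throwaway parameters, is shown below; it illustrates the message flow and is not a secure construction.

<pre>
# Toy three-pass protocol via commutative modular exponentiation (Shamir-style).
import math
import secrets

p = 2**127 - 1                     # a public prime modulus (placeholder)

def padlock():
    """Pick a locking exponent coprime to p-1, plus its unlocking inverse."""
    while True:
        e = secrets.randbelow(p - 3) + 2
        if math.gcd(e, p - 1) == 1:
            return e, pow(e, -1, p - 1)

m = 123456789                      # Alice's secret message, as an integer < p

a_lock, a_unlock = padlock()       # Alice's padlock and key
b_lock, b_unlock = padlock()       # Bob's padlock and key

c1 = pow(m, a_lock, p)             # 1st pass: Alice locks the box and mails it
c2 = pow(c1, b_lock, p)            # 2nd pass: Bob adds his padlock and returns it
c3 = pow(c2, a_unlock, p)          # 3rd pass: Alice removes her padlock
assert pow(c3, b_unlock, p) == m   # Bob removes his padlock and reads the message
</pre>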

Actual algorithms—two linked keys[edit]

Not all asymmetric key algorithms operate in precisely this fashion. The most common ones have the property that Alice and Bob each own two keys, one for encryption and one for decryption. In a secure asymmetric key encryption scheme, the private key should not be deducible from the public key. This is known as public-key encryption, since an encryption key can be published without compromising the security of messages encrypted with that key.

In the analogy above, Bob might publish instructions on how to make a lock ("public key"), but the lock is such that it is impossible (so far as is known) to deduce from these instructions how to make a key which will open that lock ("private key"). Those wishing to send messages to Bob use the public key to encrypt the message; Bob uses his private key to decrypt it.

The key pair can also be used in reverse; the private key can be used to encrypt messages that only the public key can decrypt.[1] This is useful for applications where the goal is not confidentiality but integrity, authenticity, or transparency, such as digital signing.
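
With the toy textbook-RSA numbers from the worked example earlier, this "reverse" use is just exponentiation with the exponents swapped. The sketch below signs a stand-in integer directly; real signature schemes sign a padded hash of the message instead.

<pre>
# Toy "encrypt with the private key, check with the public key" illustration.
n, e, d = 3233, 17, 2753        # the textbook RSA parameters used earlier

digest = 65                     # stand-in for a hashed message, as an integer < n
signature = pow(digest, d, n)   # only the private-key holder can produce this
assert pow(signature, e, n) == digest   # anyone with (n, e) can verify it
</pre>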

Weaknesses[edit]

Of course, there is a possibility that someone could "pick" Bob's or Alice's lock. Among symmetric key encryption algorithms, only the one-time pad can be proven to be secure against any adversary, no matter how much computing power is available. Unfortunately, there is no public-key scheme with this property, since all public-key schemes are susceptible to a brute-force key search attack. Such attacks are impractical if the amount of computation needed to succeed (termed the 'work factor' by Claude Shannon) is out of reach of potential attackers. The work factor can be increased by simply choosing a longer key. Other attacks may be more efficient, and some are known for particular public key encryption algorithms; both RSA and ElGamal encryption have known attacks which are much faster than the brute-force approach. Estimates of the necessary work factor have changed both with the decreasing cost of computing power and with new mathematical discoveries.

In practice, these insecurities can be avoided by choosing key sizes large enough that the best known attack would take so long that it is not worth any adversary's time and money to break the code. For example, if breaking an encryption scheme is estimated to take a thousand years, and it is used to encrypt credit card details, those details are safe enough, since the time needed to recover them is far longer than their useful life (the card expires after a few years). Typically, the key size needed is much longer for public key algorithms than for symmetric key algorithms.
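
To make the work factor concrete, a back-of-the-envelope sketch follows; the guessing rate is an arbitrary assumption, and the symmetric-to-RSA size comparisons are rough, commonly cited estimates rather than exact results.

<pre>
# Back-of-the-envelope brute-force arithmetic; figures are rough estimates only.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits, guesses_per_second=1e12):
    """Expected years to search half of the key space at the assumed rate."""
    return 2 ** (key_bits - 1) / guesses_per_second / SECONDS_PER_YEAR

print(f"56-bit key:  {brute_force_years(56):.4f} years")   # within reach today
print(f"128-bit key: {brute_force_years(128):.2e} years")  # far out of reach

# Commonly cited approximate equivalences (symmetric key bits -> RSA modulus bits).
comparable_rsa_bits = {80: 1024, 112: 2048, 128: 3072, 192: 7680, 256: 15360}
</pre>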

Besides the raw algorithmic strength of a particular keypair, the security of the certification hierarchy must be considered when deploying public key systems. This hierarchy vouches for the identities assigned to specific private keys. Public key digital certificates are typically valid for several years at a time, so the associated private keys must be held securely over the long term. When a private key higher in the hierarchy is compromised or accidentally disclosed, a man-in-the-middle attack becomes possible.

Major weaknesses have been found for several formerly promising asymmetric key algorithms. The 'knapsack packing' algorithm was found to be insecure after the development of a new attack. More recently, attacks based on careful measurements of the exact amount of time it takes known hardware to encrypt plaintext have been used to simplify the search for likely decryption keys (see side channel attack). Thus, mere use of asymmetric key algorithms does not ensure security; discovering and protecting against new attacks is an area of active research.

Another potential security vulnerability in using asymmetric keys is the possibility of a man-in-the-middle attack, in which the communication of public keys is intercepted by a third party and modified to provide different public keys instead. Encrypted messages and responses must also be intercepted, decrypted, and re-encrypted by the attacker using the correct public keys for the different communication segments, in all instances, to avoid suspicion. This attack may seem difficult to implement in practice, but it is not impossible when using insecure media (e.g., public networks such as the Internet or wireless communications). A malicious staff member at Alice's or Bob's ISP might find it quite easy.
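
Using the same kind of toy Diffie-Hellman arithmetic as in the history section, the attack can be sketched in a few lines: Mallory substitutes her own public value in each direction and ends up sharing one key with Alice and another with Bob, while neither of them notices anything amiss. This illustrates why unauthenticated key exchange is fragile; it is not an attack recipe against any real system.

<pre>
# Toy man-in-the-middle against unauthenticated Diffie-Hellman; illustration only.
import secrets

p, g = 2**127 - 1, 3               # public (placeholder) parameters

a = secrets.randbelow(p - 2) + 2   # Alice's private exponent
b = secrets.randbelow(p - 2) + 2   # Bob's private exponent
m = secrets.randbelow(p - 2) + 2   # Mallory's private exponent

A = pow(g, a, p)                   # Alice sends A, but Mallory intercepts it
B = pow(g, b, p)                   # Bob sends B, but Mallory intercepts it
M = pow(g, m, p)                   # Mallory forwards M to both sides instead

key_with_alice = pow(A, m, p)      # Mallory's key with Alice
key_with_bob = pow(B, m, p)        # Mallory's key with Bob

# Alice and Bob each believe they share a key with the other, but both keys
# are actually shared with Mallory, who can decrypt, read, and re-encrypt.
assert pow(M, a, p) == key_with_alice
assert pow(M, b, p) == key_with_bob
</pre>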

One approach to prevent such attacks is the use of a certificate authority, a trusted third party responsible for verifying the identity of a user of the system and issuing a digital certificate, which is a signed block of data stating that this public key belongs to that person, company, or other entity. This approach also has weaknesses. For example, the certificate authority must be trusted to have properly checked the identity of the key-holder and the correctness of the public key when it issues a certificate, and it must have been correctly set up with the communicating parties before it can be relied upon. An attacker who could subvert the certificate authority into issuing a certificate for a bogus public key could then mount a man-in-the-middle attack as easily as if the certificate scheme were not used at all. Despite its problems, this approach is widely used; examples include SSL and its successor, TLS, which are commonly used to provide security in web browsers, for example, to securely send credit card details to an online store.

Computational cost[edit]

Public key algorithms known thus far are relatively computationally costly compared with most symmetric key algorithms of apparently equivalent security. The difference factor is typically quite large. This has important implications for their practical use. Most are used in hybrid cryptosystems for reasons of efficiency; in such a cryptosystem, a shared secret key ("session key") is generated by one party and this much briefer session key is then encrypted by each recipient's public key. Each recipient uses the corresponding private key to decrypt the session key. Once all parties have obtained the session key they can use a much faster symmetric algorithm to encrypt and decrypt messages. In many of these schemes, the session key is unique to each message exchange, being randomly chosen for each message.

Associating public keys with identities[edit]

The binding between a public key and its 'owner' must be correct, lest the algorithm function perfectly and yet be entirely insecure in practice. As with most cryptography, the protocols used to establish and verify this binding are critically important. Associating a public key with its owner is typically done by protocols implementing a public key infrastructure; these allow the validity of the association to be formally verified by reference to a trusted third party, either in the form of a hierarchical certificate authority (e.g., X.509), a local trust model (e.g., SPKI), or a web of trust scheme (e.g., that originally built into PGP and GPG and still to some extent usable with them). Whatever the cryptographic assurance of the protocols themselves, the association between a public key and its owner is ultimately a matter of subjective judgement on the part of the trusted third party, since the key is a mathematical entity whilst the owner, and the connection between owner and key, are not. For this reason, the formalism of a public key infrastructure must provide for explicit statements of the policy followed when making this judgement. For example, the complex and never fully implemented X.509 standard allows a certificate authority to identify its policy by means of an object identifier which functions as an index into a catalogue of registered policies. Policies may exist for many different purposes, ranging from anonymity to military classification.

Relation to real world events[edit]

A public key will be known to a large and, in practice, unknown set of users. All events requiring revocation or replacement of a public key can take a long time to take full effect with all who must be informed (i.e. all those users who possess that key). For this reason, systems which must react to events in real time (e.g. safety-critical systems or national security systems) should not use public-key encryption without taking great care. There are four issues of interest:

Privilege of key revocation[edit]

A malicious (or erroneous) revocation of some, or all, of the keys in the system is likely (in the case of all keys, certain) to cause a complete failure of the system. If public keys can be revoked individually, this is a possibility. However, there are design approaches which can reduce the practical chance of this occurring. For example, by means of certificates we can create what is called a "compound principal"; one such principal could be "Alice and Bob have Revoke Authority". Now only Alice and Bob (in concert) can revoke a key, and neither Alice nor Bob can revoke keys alone. However, revoking a key now requires both Alice and Bob to be available, and this creates a problem of reliability. In concrete terms, from a security point of view, there is now a single point of failure in the public key revocation system. A successful Denial of Service attack against either Alice or Bob (or both) will block a required revocation. In fact, any partition of authority between Alice and Bob will have this effect, regardless of how it comes about.

Because the principal having revocation authority for keys is very powerful, the mechanisms used to control it should involve as many participants as possible (to guard against malicious attacks of this type) and, at the same time, as few as possible (to ensure that a key can be revoked without dangerous delay). Public key certificates which include an expiry date are unsatisfactory in that the expiry date may not correspond with a real world revocation need, but at least such certificates need not all be tracked down system wide, nor must all users be in constant contact with the system at all times.

Distribution of a new key[edit]

After a key has been revoked, or when a new user is added to a system, a new key must be distributed in some predetermined manner. Assume that Carol's key has been revoked (e.g., automatically, because its use-before date has passed, or, less routinely, because her matching private key has been compromised). Until a new key has been distributed, Carol is effectively out of contact. No one will be able to send her messages without violating system protocols (i.e., without a valid public key, no one can encrypt messages to her), and messages from her cannot be signed, for the same reason. Or, in other words, the "part of the system" controlled by Carol is essentially unavailable. Security requirements have been ranked higher than system availability in such designs.

One could leave the power to create (and certify) keys as well as revoke them in the hands of each user, and the original PGP design did so, but this raises problems of user understanding and operation. For security reasons, this approach has considerable difficulties; if nothing else, some users will be forgetful or inattentive or confused. On one hand, a message revoking a public key certificate should be spread as fast as possible while, on the other hand, (parts of) the system might be rendered inoperable before a new key can be installed. The time window can obviously be reduced to zero by always issuing the new key together with the certificate that revokes the old one, but this requires co-location of both authority to revoke and to generate new keys.

It is most likely a system-wide failure if the (possibly combined) principal that issues new keys fails by issuing keys improperly. This is an instance of a common trade-off: a design can make the reliability of a system high, but only at the cost of system availability, and vice versa.

Spreading the revocation[edit]

Notification of a key certificate revocation must be spread to all those who might potentially hold it, and as rapidly as possible.

There are two means of spreading information (e.g., a key revocation) in a distributed system: either the information is pushed out to users from a central point (or points), or it is pulled by end users from a central point (or points).

Pushing the information is the simplest solution in that a message is sent to all participants. However, there is no way of knowing that all participants will actually receive the message, and if the number of participants is large and some of their physical or network distance great, the probability of complete success (which is, ideally, required for system security) will be rather low. In a partially updated state, the system is particularly vulnerable to denial of service attacks as security has been breached, and a vulnerability window will continue to exist as long as some users have not 'gotten the word'. In other words, pushing certificate revocation messages is neither easy to secure nor very reliable.

The alternative to pushing is pulling. In the extreme, all certificates contain all the keys needed to verify that the public key of interest (i.e. the one belonging to the user to whom one wishes to send a message, or whose signature is to be checked) is still valid. In this case, at least some use of the system will be blocked if a user cannot reach the verification service (i.e. one of those systems which can establish the current validity of another user's key). Again, such a system design can be made as reliable as one wishes, at the cost of lowering security (the more servers to check for the possibility of a key revocation, the longer the window of vulnerability).

Another trade-off is to use a somewhat less reliable, but more secure, verification service but to include an expiry date for each of the verification sources. How long this timeout should be is a decision which embodies a trade-off between availability and security that will have to be decided in advance, at system design time.

Recovery from a leaked key[edit]

Assume that the principal authorized to revoke a key has decided that a certain key must be revoked. In most cases this happens after the fact; for instance, it becomes known that at some time in the past an event occurred that endangered a private key. Let us denote the time at which it is decided that the compromise occurred with T.

Such a compromise has two implications. First, messages encrypted with the matching public key (now or in the past) can no longer be assumed to be secret; one way to avoid this problem is to use a protocol that provides perfect forward secrecy. Second, signatures made after time T with the private key, which can no longer be trusted to have remained private, cannot be assumed to be authentic without additional information about who, where, and when the events leading up to the digital signature took place. Such information will not always be available, and so all such signatures will be less than credible. One way to reduce the impact of a leaked signing key is to use timestamps.

Loss of secrecy and/or authenticity, even for a single user, has system-wide security implications, and a strategy for recovery must thus be established. Such a strategy will determine who has the authority to revoke a public key certificate and under what conditions, how to spread the revocation, and also, ideally, how to deal with all messages signed with the key since time T (which will rarely be known precisely). Messages sent to that user (which require the proper, now compromised, private key to decrypt) must be considered compromised as well, no matter when they were sent.

Such a recovery procedure can be quite complex, and while it is in progress the system will likely be vulnerable to Denial of Service attacks[unverified], among other things.

Examples[edit]

Examples of well-regarded asymmetric key techniques for varied purposes include:

  • Diffie-Hellman key exchange
  • RSA (encryption and digital signatures)
  • ElGamal encryption
  • DSA (the Digital Signature Algorithm)
  • elliptic curve cryptography

Examples of notable yet insecure asymmetric key algorithms include:

  • the Merkle-Hellman 'knapsack' cryptosystem

Examples of protocols using asymmetric key algorithms include:

  • PGP and GPG
  • SSL and its successor TLS
  • SSH

Notes[edit]

  1. Public Key Cryptography

References[edit]

External links[edit]

This article contains content from Wikipedia. Current versions of the GNU FDL article public key cryptography on Wikipedia may contain information useful to the improvement of this article.