Cipher

For more information, see: Cryptography.

Information can be encrypted in two basic ways: with a cipher or with a code. For a discussion of the applications of each, see the cryptography article.

Ciphers apply an algorithm and a cryptographic key to plaintext in the form of digitally encoded information; the process of encryption is unaware of linguistic structure such as words. It would make no difference to a cipher if its inputs were the complete works of William Shakespeare, a digitized image of a toxic waste dump, the closing price of every stock on the Tokyo stock exchange, or an order to invade Vatican City.

Most often, there is a one-to-one correspondence between the elements (bits or bytes) of the plaintext and those of the ciphertext, although some ciphers insert nonsense padding into the ciphertext to lessen the statistical relationship between plaintext and ciphertext. Padding that was mistaken for plaintext has changed the course of battles.

Another technique for hiding the real message content is called masking, which is used on dedicated communications channels. On a channel where transmission costs nothing, essentially random noise, in a form that is superficially indistinguishable from the encrypted messages, is transmitted whenever there is no traffic to send.

Encryption and decryption

Encryption is the process of converting plaintext to ciphertext; decryption reverses the process to produce plaintext.

Classical cipher components

There are two fundamental operations in ciphers, which strong systems combine. Substitution replaces plaintext elements with ciphertext elements. As a trivial example, a substitution cipher could shift each letter one place forward in the alphabet, so ZEBRAS would become AFCSBT.

In this example, the set of rules that substitutes A for Z, T for S, and so on, is the key. The same monoalphabetic substitution system, in which each plaintext letter is always replaced by the same ciphertext letter, could use a different key. That key might substitute Q for Z, B for S, and so on.
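
The shift rule above can be written in a few lines of Python. In this minimal sketch the key is simply the shift amount rather than a full letter-for-letter table, which is enough to reproduce the ZEBRAS example.

```python
# A toy monoalphabetic substitution: shift each letter of an uppercase word
# forward in the alphabet by a fixed amount (the key).
def substitute(text, shift=1):
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
        for c in text
    )

print(substitute("ZEBRAS"))  # AFCSBT
```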

The other operation, transposition, changes the order of the plaintext elements. For example, a trivial transposition exchanges the order of each pair of letters, so ZEBRAS would become EZRBSA.

Transposition also has keys. A general pairwise transposition algorithm switches pairs of letters; the key used above exchanges adjacent letters. A different key might exchange letters three places apart, producing RASZEB.
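
Both transpositions can be sketched directly. The pairwise swap and the three-apart exchange (which, for a six-letter word, amounts to swapping the two halves) reproduce the examples above.

```python
# Swap adjacent pairs of letters: ZE BR AS -> EZ RB SA.
def swap_pairs(text):
    chars = list(text)
    for i in range(0, len(chars) - 1, 2):
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# Exchange letters three places apart; for six letters this swaps the halves.
def swap_halves(text):
    half = len(text) // 2
    return text[half:] + text[:half]

print(swap_pairs("ZEBRAS"))   # EZRBSA
print(swap_halves("ZEBRAS"))  # RASZEB
```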

In real ciphers, the operations are combined. For example, if the above substitution is followed by the pairwise transposition, ZEBRAS becomes FASCTB. With operations this simple the order does not matter: transposition followed by substitution also yields FASCTB, because a fixed per-letter substitution commutes with any reordering of the letters. Real ciphers interleave many rounds of substitution and mixing operations that act on groups of bits, so the combined transformation does not decompose into independent steps.

The example above is a monoalphabetic cipher; the same transformation is applied to each symbol of plaintext. To gain even minimal security, polyalphabetic substitution creates rules such that the same ciphertext is not always produced from the same plaintext. As a trivial example, shift the odd letters one alphabetic place and the even letters two places, so that ZEBRAS becomes AGCTBU.
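
The odd/even shift rule can also be sketched in a few lines; it reproduces the AGCTBU example and shows why the same plaintext letter need not always encrypt to the same ciphertext letter.

```python
# A toy polyalphabetic substitution: letters in odd positions (first, third, ...)
# shift one place, letters in even positions shift two places.
def poly_shift(text):
    out = []
    for i, c in enumerate(text):
        shift = 1 if i % 2 == 0 else 2   # index 0 is the first (odd) position
        out.append(chr((ord(c) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

print(poly_shift("ZEBRAS"))  # AGCTBU
```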

Real systems are more complex. Typically they work on chunks of plaintext far longer than the single word above. A key controls at least some of the operations; for example the amount to shift might be controlled by the key.
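
One classical way a key can control the shift amount is a Vigenère-style scheme, in which each letter of a repeating keyword supplies the shift for the corresponding plaintext letter. The sketch below illustrates that idea; it is not a description of any particular modern cipher.

```python
# Key-controlled shifts: each key letter (A=0, B=1, ...) gives the shift for the
# corresponding plaintext letter, with the key repeated as often as needed.
def keyed_shift(text, key):
    out = []
    for i, c in enumerate(text):
        shift = ord(key[i % len(key)]) - ord("A")
        out.append(chr((ord(c) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

print(keyed_shift("ZEBRAS", "KEY"))  # JIZBEQ
```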

Design criteria

One design principle combines two objectives. The key needs to be long enough to make a brute force attack impractical, and the cipher internals need to be robust enough that no attack faster than brute force is available to the analyst. The first of these is easily achieved; the second is much more difficult.
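
A back-of-the-envelope calculation shows why sufficient key length makes brute force impractical; the attacker speed assumed below is an arbitrary, generous figure chosen only for illustration.

```python
# Exhausting a 128-bit keyspace at an assumed 10**12 key trials per second.
trials_per_second = 10**12                    # assumed attacker speed (illustrative)
seconds_per_year = 60 * 60 * 24 * 365
years = 2**128 / (trials_per_second * seconds_per_year)
print(f"about {years:.1e} years")             # on the order of 10**19 years
```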

Jean-Guillaume-Hubert-Victor-François-Alexandre-Auguste Kerckhoffs von Nieuwenhof, whose full name might make a start at a minimally strong polyalphabetic key, was usually known as Auguste Kerckhoffs.[1] In his 1883 book, La Cryptographie Militaire, he stated six axioms of cryptography.[2] Some are no longer relevant given the ability of computers to perform complex encryption, but the second is the most critical and, perhaps, counterintuitive. It is sometimes called Kerckhoffs' Principle:

If the method of encipherment becomes known to one's adversary, this should not prevent one from continuing to use the cipher as long as the key remains unknown.

Combining the above, we get the objective that no attack is faster than brute force, even if the attacker knows all the internal details of the cipher.

Some of the design objectives are usually described as confusion and diffusion, following a paper by Claude Shannon.[3] Very roughly, substitution provides confusion while transposition or other mixing operations provide diffusion. Some ciphers, called substitution-permutation networks, are explicitly designed around this concept; the substitution step gives confusion and the permutation step gives diffusion.
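
The sketch below shows one round of a toy, insecure substitution-permutation network on a 16-bit block; the 4-bit S-box and the bit permutation are arbitrary illustrative choices, not taken from any real cipher.

```python
# One round of a toy substitution-permutation network on a 16-bit block.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
PERM = [(4 * i) % 15 for i in range(15)] + [15]   # bit i moves to position PERM[i]

def sp_round(block):
    # Substitution (confusion): push each 4-bit nibble through the S-box.
    nibbles = [(block >> (4 * i)) & 0xF for i in range(4)]
    subbed = sum(SBOX[n] << (4 * i) for i, n in enumerate(nibbles))
    # Permutation (diffusion): scatter the 16 bits to new positions.
    out = 0
    for i in range(16):
        if (subbed >> i) & 1:
            out |= 1 << PERM[i]
    return out

print(hex(sp_round(0x1234)))
```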

Types of cipher

Before there were machine ciphers, some manual ciphers were monographic; they worked on individual alphabetic characters. Other manual systems, such as the Playfair cipher, were digraphic, working with pairs of characters. Going beyond digraphs required at least a mechanical encryption device.

Manual or mechanical encryption systems may also be classified as monoalphabetic — only one mapping from plaintext characters to ciphertext is used, so the same input character always encrypts to the same output — or polyalphabetic — multiple mappings are used so a character that is repeated in the input text may produce different encrypted results each time. Monoalphabetic ciphers are highly vulnerable to frequency analysis attacks, so mechanical cryptosystems are generally polyalphabetic.
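
Frequency analysis can be illustrated in a few lines: a monoalphabetic substitution leaves single-letter frequencies intact, so the most common ciphertext letters point at common plaintext letters such as E or T. The sample ciphertext below is a Caesar shift of three, made up for the example.

```python
# Count letter frequencies in a monoalphabetically enciphered message.
from collections import Counter

# "THIS IS A SAMPLE MESSAGE ENCIPHERED WITH A SHIFT OF THREE", shifted by 3.
ciphertext = "WKLV LV D VDPSOH PHVVDJH HQFLSKHUHG ZLWK D VKLIW RI WKUHH"
counts = Counter(c for c in ciphertext if c.isalpha())
print(counts.most_common(3))   # H (the image of E) dominates, betraying the shift
```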

With cryptographic computers, there is no need to maintain the integrity of individual characters. The main types of cipher are the block cipher, which breaks the data into fixed-size blocks and encrypts each block under control of the key, and the stream cipher, which encrypts a stream of input data by combining it with a pseudo-random stream of data generated under control of the encryption key.
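
As a sketch of block-cipher usage, the example below assumes the third-party Python cryptography package is installed (a recent version that does not require an explicit backend). ECB mode is shown only to make the block structure visible; it is not appropriate for real traffic, precisely because identical plaintext blocks produce identical ciphertext blocks.

```python
# AES operates on fixed 16-byte blocks; each block is encrypted under the key.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)                               # 128-bit key
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

plaintext = b"sixteen byte blk" * 2                # two identical 16-byte blocks
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

# In ECB mode the identical blocks encrypt identically, a structural leak.
print(ciphertext[:16] == ciphertext[16:32])        # True
```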

Stream ciphers are not tied to blocks, but apply a continually generated keystream to a sequence of symbols of arbitrary length. The keystream is combined with the plaintext, typically using the Boolean exclusive OR (XOR), to yield the ciphertext. To decrypt, the same key is used to generate the same pseudo-random stream, and the mixing transformation is simply reversed; with XOR, combining the ciphertext with the keystream a second time recovers the plaintext.
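
The structure can be sketched with a keystream drawn from a seeded pseudo-random generator. Python's random module is not cryptographically secure and is used here only to show the shape of the scheme: the same key yields the same keystream, so encryption and decryption are the same operation.

```python
# A toy stream cipher: XOR the data with a keystream seeded by the key.
import random

def keystream_xor(key, data):
    rng = random.Random(key)                   # NOT a secure generator; illustration only
    return bytes(b ^ rng.randrange(256) for b in data)

ciphertext = keystream_xor("shared key", b"attack at dawn")
recovered = keystream_xor("shared key", ciphertext)   # XOR with the same keystream
print(ciphertext.hex(), recovered)
```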

A class of stream ciphers known as autokey ciphers adds an element of randomization that does not come from the key alone: at certain points in the encryption process, information formed from the interaction of the plaintext and the key is fed back into the encryption.
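
The classical autokey cipher illustrates the idea in its simplest form: once the keyword is used up, the plaintext itself extends the keystream, so later shifts depend on earlier plaintext. This is a historical sketch, not a description of modern autokey stream ciphers.

```python
# Classical autokey: the keystream is the keyword followed by the plaintext itself.
def autokey_encrypt(text, key):
    stream = key + text
    out = []
    for i, c in enumerate(text):
        shift = ord(stream[i]) - ord("A")
        out.append(chr((ord(c) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

print(autokey_encrypt("ZEBRAS", "KEY"))  # JIZQET
```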

A one-time pad, which is provably secure against cryptanalysis, uses a totally random key of the same length as the message. It is not secure against theft or copying of the pad. Truly random key material comes from physical phenomena, such as radioactive disintegrations or thermal noise; a presumed random source needs to be verified for randomness.

A one-time pad must absolutely, positively, be used only once. Even two uses, with different plaintext, can provide a break into the messages and even the system, as the Soviet Union learned when VENONA was revealed.[4] The key, of course, need not be on a paper pad; dense storage such as pairs of optical disks, destroyed as soon as used, is a more practical means of one-time key distribution.
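
The pad itself is mechanically trivial; all of the difficulty lies in generating, distributing, and destroying the key. In the sketch below, os.urandom (the operating system's cryptographic random generator) stands in for the physically random source the text calls for.

```python
# One-time pad: a random key as long as the message, XORed in and XORed out.
import os

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))                     # must never be reused
ciphertext = bytes(m ^ p for m, p in zip(message, pad))
recovered = bytes(c ^ p for c, p in zip(ciphertext, pad))
print(ciphertext.hex(), recovered)
```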

Ciphers and networking

There are several ways to use ciphers within communications networks, and they are not mutually exclusive; there may be different encryptions in effect at different layers of the protocol stack in an end-to-end network.

Re-keying

In most applications, a cipher should be re-keyed from time to time. This prevents an enemy from accumulating large amounts of data encrypted with a single key; such a collection both facilitates some attacks and makes the payoff for breaking that key very large. Re-keying also limits the damage if a key is compromised in some other way. Generally, ciphers do not include a re-keying mechanism. In network applications some higher-level protocol manages that and re-keys the cipher using the normal keying mechanism.

Bulk encryption

Bulk encryption is most commonly applied to radio channels or physical lines: an encryptor is placed between the information output and the transmitter, and a decryptor just after the receiver. There are several reasons for doing this, the most important being to make traffic analysis difficult or impossible. Indeed, using the technique of masking, the encryptor may continue to produce random output even when there is no information to send, so an adversary cannot even infer activity levels from listening to the link.[5]

Especially with radio channels, it may be wise to apply bulk encryption in addition to encryption at the packet, message, session, or file levels.

It is more accurate to speak of bulk encryption than link encryption, as "link" may imply, to some, the OSI data link layer. Bulk encryption, usually applied to multiplexed channels and especially to multiplexed wireless channels, is more of a physical-layer technique for aggregates.

Packet encryption

For a router to route packets, it has to be able to see the source and destination fields, but it has no need to see the payload. If there are costs associated with sending traffic, which can make bulk encryption unattractive, routers can encrypt each packet as it leaves an interface, with the next-hop router decrypting the header and forwarding as necessary. The payload is separately encrypted if it is sensitive.

Message encryption

The next level would involve encrypting the message body from endpoint to endpoint. This provides content confidentiality and atomic integrity for individual messages, but it does not provide sequential integrity, so messages could be deleted or duplicated without detection.

Session/file level

In this case, every exchange has a record/message number or timestamp, protected at least by a one-way hash, or perhaps included in the information encrypted at the message level. Having trusted sequence numbers or timestamps protects sequential integrity, and may help ensure nonrepudiation.
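
One common way to protect a sequence number is to bind it to the message with a keyed hash (an HMAC); the record layout below is a hypothetical illustration, not a specific protocol's format.

```python
# Bind a sequence number to a message with an HMAC so reordering, deletion,
# or duplication can be detected by checking tags and sequence continuity.
import hmac, hashlib

def protect(key, seq, payload):
    record = seq.to_bytes(8, "big") + payload
    tag = hmac.new(key, record, hashlib.sha256).digest()
    return record + tag

def verify(key, record_with_tag):
    record, tag = record_with_tag[:-32], record_with_tag[-32:]
    ok = hmac.compare_digest(tag, hmac.new(key, record, hashlib.sha256).digest())
    return ok, int.from_bytes(record[:8], "big")

key = b"shared session key"
rec = protect(key, 7, b"transfer 100 units")
print(verify(key, rec))   # (True, 7); forged records fail the tag check,
                          # and gaps or repeats in sequence numbers reveal tampering
```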

References

  1. Kahn, David (second edition, 1996), The Codebreakers: The Story of Secret Writing, Scribner, p. 235
  2. Savard, John J. G., The Ideal Cipher, A Cryptographic Compendium
  3. Shannon, Claude E., Communication Theory of Secrecy Systems
  4. National Security Agency, VENONA
  5. R. Shirey (August 2007), Internet Security Glossary, version 2