
S-box

Article snapshot taken from Wikipedia, under the Creative Commons Attribution-ShareAlike license.

In cryptography, an S-box (substitution-box) is a basic component of symmetric key algorithms which performs substitution. In block ciphers, they are typically used to obscure the relationship between the key and the ciphertext, thus ensuring Shannon's property of confusion. Mathematically, an S-box is a nonlinear vectorial Boolean function.


In general, an S-box takes some number of input bits, m, and transforms them into some number of output bits, n, where n is not necessarily equal to m. An m×n S-box can be implemented as a lookup table with 2ᵐ words of n bits each. Fixed tables are normally used, as in the Data Encryption Standard (DES), but in some ciphers the tables are generated dynamically from the key (e.g.
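The lookup-table implementation can be sketched as follows; the 3×2 table below is made up purely for illustration (it is not from any real cipher):

```python
# Minimal sketch: an m x n S-box as a lookup table with 2**m entries,
# each entry an n-bit output word. The table values here are invented
# for illustration only, not taken from any real cipher.
M, N = 3, 2                      # 3 input bits, 2 output bits
TABLE = [0b00, 0b10, 0b11, 0b01, 0b11, 0b00, 0b01, 0b10]  # 2**3 = 8 entries

def sbox(x: int) -> int:
    """Substitute an m-bit input with its n-bit table entry."""
    assert 0 <= x < 2**M
    return TABLE[x]

print(f"{sbox(0b101):0{N}b}")   # looks up entry 5 -> prints "00"
```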

A backdoor (a vulnerability known only to its designers) might have been planted in the cipher. As the S-boxes are the only nonlinear part of the cipher, compromising those would compromise the entire cipher. The S-box design criteria were eventually published (in Coppersmith 1994) after the public rediscovery of differential cryptanalysis, showing that they had been carefully tuned to increase resistance against this specific attack such that it

A binit as an arbitrary information unit equivalent to some fixed but unspecified number of bits.

Binary number

A binary number is a number expressed in the base-2 numeral system or binary numeral system, a method for representing numbers that uses only two symbols for the natural numbers: typically "0" (zero) and "1" (one). A binary number may also refer to

A byte or word, is referred to, it is usually specified by a number from 0 upwards corresponding to its position within the byte or word. However, 0 can refer to either the most or least significant bit depending on the context. Similar to torque and energy in physics, information-theoretic information and data storage size have the same dimensionality of units of measurement, but there

A rational number that has a finite representation in the binary numeral system, that is, the quotient of an integer by a power of two. The base-2 numeral system is a positional notation with a radix of 2. Each digit is referred to as a bit, or binary digit. Because of its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used by almost all modern computers and computer-based devices, as

A unit of information, the bit is also known as a shannon, named after Claude E. Shannon. The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte. The encoding of data by discrete bits

A Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit". A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization,

A binary system for describing prosody. He described meters in the form of short and long syllables (the latter equal in length to two short syllables). They were known as laghu (light) and guru (heavy) syllables. Pingala's Hindu classic titled Chandaḥśāstra (8.23) describes the formation of a matrix in order to give a unique value to each meter. "Chandaḥśāstra" literally translates to science of meters in Sanskrit. The binary representations in Pingala's system increase towards

A bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards. In modern semiconductor memory, such as dynamic random-access memory,

A great interval of time, will seem all the more curious." The relation was a central idea to his universal concept of a language or characteristica universalis, a popular idea that would be followed closely by his successors such as Gottlob Frege and George Boole in forming modern symbolic logic. Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as

A missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own religious beliefs as a Christian. Binary numerals were central to Leibniz's theology. He believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo or creation out of nothing. [A concept that] is not easy to impart to the pagans, is the creation ex nihilo through God's almighty power. Now one can say that nothing in


A number of simple basic principles or categories, for which he has been considered a predecessor of computing science and artificial intelligence. In 1605, Francis Bacon discussed a system whereby letters of the alphabet could be reduced to sequences of binary digits, which could then be encoded as scarcely visible variations in the font in any random text. Importantly for the general theory of binary encoding, he added that this method could be used with any objects at all: "provided those objects be capable of

A preferred system of use, over various other human techniques of communication, because of the simplicity of the language and the noise immunity in physical implementation. The modern binary number system was studied in Europe in the 16th and 17th centuries by Thomas Harriot and Gottfried Leibniz. However, systems related to binary numbers have appeared earlier in multiple cultures including ancient Egypt, China, and India. The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions (not related to

A second is performed by a sequence of steps in which a value (initially the first of the two numbers) is either doubled or has the first number added back into it; the order in which these steps are to be performed is given by the binary representation of the second number. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus, which dates to around 1650 BC. The I Ching dates from
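The doubling method described here can be sketched in Python; the doublings that get summed correspond exactly to the 1-bits of the second number:

```python
def egyptian_multiply(a: int, b: int) -> int:
    """Multiply by repeated doubling, as in the Rhind papyrus method.

    The doublings of `a` that are summed correspond exactly to the
    1-bits in the binary representation of `b`.
    """
    total = 0
    while b > 0:
        if b & 1:          # this binary digit of b is 1: add the current doubling
            total += a
        a <<= 1            # double a
        b >>= 1            # move to the next binary digit of b
    return total

print(egyptian_multiply(13, 23))  # prints 299
```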

A time in serial transmission, and by a multiple number of bits in parallel transmission. A bitwise operation optionally processes bits one at a time. Data transfer rates are usually measured in decimal SI multiples of the unit bit per second (bit/s), such as kbit/s. In the earliest non-electronic information processing devices, such as Jacquard's loom or Babbage's Analytical Engine, a bit

A twofold difference only; as by Bells, by Trumpets, by Lights and Torches, by the report of Muskets, and any instruments of like nature". (See Bacon's cipher.) In 1617, John Napier described a system he called location arithmetic for doing binary calculations using a non-positional representation by letters. Thomas Harriot investigated several positional numbering systems, including binary, but did not publish his results; they were found later among his papers. Possibly

Is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying: Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented: This is known as carrying. When

Is based on the simple premise that under the binary system, when given a stretch of digits composed entirely of n ones (where n is any integer length), adding 1 will result in the number 1 followed by a string of n zeros. That concept follows, logically, just as in the decimal system, where adding 1 to a string of n 9s will result in the number 1 followed by a string of n 0s: Such long strings are quite common in

Is in general no meaning to adding, subtracting or otherwise combining the units mathematically, although one may act as a bound on the other. Units of information used in information theory include the shannon (Sh), the natural unit of information (nat) and the hartley (Hart). One shannon is the maximum amount of information needed to specify the state of one bit of storage. These are related by 1 Sh ≈ 0.693 nat ≈ 0.301 Hart. Some authors also define
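The stated conversions follow from a change of logarithm base; a quick numerical check:

```python
import math

# One shannon expressed in natural units and hartleys:
# the same halving of uncertainty, measured with different log bases.
sh_in_nat = math.log(2)        # natural log of 2, about 0.693 nat
sh_in_hart = math.log10(2)     # base-10 log of 2, about 0.301 Hart
print(f"1 Sh = {sh_in_nat:.3f} nat = {sh_in_hart:.3f} Hart")
```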

Is more compressed—the same bucket can hold more. For example, it is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits. However, when this storage space is filled and the corresponding content is optimally compressed, this only represents 295 exabytes of information. When optimally compressed, the resulting carrying capacity approaches Shannon information or information entropy. Certain bitwise computer processor instructions (such as bit set) operate at

Is not necessarily equivalent to the numerical value of one; it depends on the architecture in use. In keeping with the customary representation of numerals using Arabic numerals, binary numbers are commonly written using the symbols 0 and 1. When written, binary numerals are often subscripted, prefixed, or suffixed to indicate their base, or radix. The following notations are equivalent: When spoken, binary numerals are usually read digit-by-digit, to distinguish them from decimal numerals. For example,


Is often called the first digit. When the available symbols for this position are exhausted, the least significant digit is reset to 0, and the next digit of higher significance (one position to the left) is incremented (overflow), and incremental substitution of the low-order digit resumes. This method of reset and overflow is repeated for each digit of significance. Counting progresses as follows: Binary counting follows

Is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Before examining binary counting, it is useful to briefly discuss the more familiar decimal counting system as a frame of reference. Decimal counting uses the ten symbols 0 through 9. Counting begins with the incremental substitution of the least significant digit (rightmost digit) which

Is that 1 ∨ 1 = 1, while 1 + 1 = 10₂. Subtraction works in much the same way: Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column. This is known as borrowing. The principle is the same as for carrying. When

Is translated into English as the "Explanation of Binary Arithmetic, which uses only the characters 1 and 0, with some remarks on its usefulness, and on the light it throws on the ancient Chinese figures of Fu Xi". Leibniz's system uses 0 and 1, like the modern binary numeral system. An example of Leibniz's binary numeral system is as follows: While corresponding with the Jesuit priest Joachim Bouvet in 1700, who had made himself an expert on

The Blowfish and the Twofish encryption algorithms). One good example of a fixed table is the S-box from DES (S5), mapping a 6-bit input into a 4-bit output: Given a 6-bit input, the 4-bit output is found by selecting the row using the outer two bits (the first and last bits), and the column using the inner four bits. For example, an input "011011" has outer bits "01" and inner bits "1101";

The I Ching have also been used in traditional African divination systems, such as Ifá among others, as well as in medieval Western geomancy. The majority of Indigenous Australian languages use a base-2 system. In the late 13th century Ramon Llull had the ambition to account for all wisdom in every branch of human knowledge of the time. For that purpose he developed a general method or "Ars generalis" based on binary combinations of

The I Ching which has 64. The Ifá originated in 15th century West Africa among the Yoruba people. In 2008, UNESCO added Ifá to its list of the "Masterpieces of the Oral and Intangible Heritage of Humanity". The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. Slit drums with binary tones are used to encode messages across Africa and Asia. Sets of binary combinations similar to

The I Ching while a missionary in China, Leibniz explained his binary notation, and Bouvet demonstrated in his 1701 letters that the I Ching was an independent, parallel invention of binary notation. Leibniz and Bouvet concluded that this mapping was evidence of major Chinese accomplishments in the sort of philosophical mathematics he admired. Of this parallel invention, Leibniz wrote in his "Explanation Of Binary Arithmetic" that "this restitution of their meaning, after such

The nonlinearity (bent, almost bent) and differential uniformity (perfectly nonlinear, almost perfectly nonlinear).

Bit

The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, on/off, or +/− are also widely used. The relation between these values and

The yottabit (Ybit). When the information capacity of a storage system or a communication channel is presented in bits or bits per second, this often refers to binary digits, which is a computer hardware capacity to store binary data (0 or 1, up or down, current or not, etc.). Information capacity of a storage system is only an upper bound to the quantity of information stored therein. If


The 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques. In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where

The 9th century BC in China. The binary notation in the I Ching is used to interpret its quaternary divination technique. It is based on taoistic duality of yin and yang. Eight trigrams (Bagua) and a set of 64 hexagrams ("sixty-four" gua), analogous to the three-bit and six-bit binary numerals, were in use at least as early as the Zhou dynasty of ancient China. The Song dynasty scholar Shao Yong (1011–1077) rearranged

The Binary Progression", in 1679, Leibniz introduced conversion between decimal and binary, along with algorithms for performing basic arithmetic operations such as addition, subtraction, multiplication, and division using binary numbers. He also developed a form of binary algebra to calculate the square of a six-digit number and to extract square roots. His best-known work appears in his article Explication de l'Arithmétique Binaire (published in 1703). The full title of Leibniz's article

The ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits. Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In

The average. This principle is the basis of data compression technology. Using an analogy, the hardware binary digits refer to the amount of storage space available (like the number of buckets available to store things), and the information content to the filling, which comes in different levels of granularity (fine or coarse, that is, compressed or uncompressed information). When the granularity is finer—when information

The binary expression for 1/3 = .010101..., this means: 1/3 = 0 × 2⁻¹ + 1 × 2⁻² + 0 × 2⁻³ + 1 × 2⁻⁴ + ... = 0.3125 + ... An exact value cannot be found with a sum of a finite number of inverse powers of two; the zeros and ones in the binary representation of 1/3 alternate forever. Arithmetic in binary is much like arithmetic in other positional notation numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals. The simplest arithmetic operation in binary
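The alternating expansion can be checked numerically; this sketch accumulates the partial sums of 0.010101...₂, which approach 1/3 without ever reaching it:

```python
# Partial sums of the binary expansion 1/3 = 0.010101...₂
partial = 0.0
sums = []
for k in range(1, 9):                 # bits 2**-1 through 2**-8
    bit = 1 if k % 2 == 0 else 0      # expansion pattern 0, 1, 0, 1, ...
    partial += bit * 2**-k
    sums.append(partial)

print(sums[3])   # prints 0.3125, the four-bit partial sum quoted above
```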

The binary fractions 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64. Early forms of this system can be found in documents from the Fifth Dynasty of Egypt, approximately 2400 BC, and its fully developed hieroglyphic form dates to the Nineteenth Dynasty of Egypt, approximately 1200 BC. The method used for ancient Egyptian multiplication is also closely related to binary numbers. In this method, multiplying one number by

The binary number system) and Horus-Eye fractions (so called because many historians of mathematics believe that the symbols used for this system could be arranged to form the eye of Horus, although this has been disputed). Horus-Eye fractions are a binary numbering system for fractional quantities of grain, liquids, or other measures, in which a fraction of a hekat is expressed as a sum of

The binary numeral 100 is pronounced one zero zero, rather than one hundred, to make its binary nature explicit and for purposes of correctness. Since the binary numeral 100 represents the value four, it would be confusing to refer to the numeral as one hundred (a word that represents a completely different value, or amount). Alternatively, the binary numeral 100 can be read out as "four" (the correct value), but this does not make its binary nature explicit. Counting in binary
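In Python the base must be made explicit when parsing the numeral, mirroring the distinction above:

```python
# The numeral "100" denotes four in base 2 but one hundred in base 10.
print(int("100", 2))    # prints 4
print(int("100", 10))   # prints 100
```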

The binary system. From that one finds that large binary numbers can be added using two simple steps, without excessive carry operations. In the following example, two numerals are being added together: 1110111110₂ (958₁₀) and 1010110011₂ (691₁₀), using the traditional carry method on the left, and the long carry method on the right: The top row shows the carry bits used. Instead of


The carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives

The conference who witnessed the demonstration were John von Neumann, John Mauchly and Norbert Wiener, who wrote about it in his memoirs. The Z1 computer, which was designed and built by Konrad Zuse between 1935 and 1938, used Boolean logic and binary floating-point numbers. Any number can be represented by a sequence of bits (binary digits), which in turn may be represented by any mechanism capable of being in two mutually exclusive states. Any of

The corresponding output would be "1001". When DES was first published in 1977, the design criteria of its S-boxes were kept secret to avoid compromising the technique of differential cryptanalysis (which was not yet publicly known). As a result, research in what made good S-boxes was sparse at the time. Rather, the eight S-boxes of DES were the subject of intense study for many years out of a concern that
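The row/column selection for the S5 example can be sketched as follows. The table values are the S5 entries as published in the DES standard (FIPS 46); if in doubt, check them against the standard itself.

```python
# DES S-box S5: 4 rows x 16 columns of 4-bit outputs (per FIPS 46).
S5 = [
    [ 2, 12,  4,  1,  7, 10, 11,  6,  8,  5,  3, 15, 13,  0, 14,  9],
    [14, 11,  2, 12,  4,  7, 13,  1,  5,  0, 15, 10,  3,  9,  8,  6],
    [ 4,  2,  1, 11, 10, 13,  7,  8, 15,  9, 12,  5,  6,  3,  0, 14],
    [11,  8, 12,  7,  1, 14,  2, 13,  6, 15,  0,  9, 10,  4,  5,  3],
]

def s5(x: int) -> int:
    """Map a 6-bit input to a 4-bit output via S5."""
    row = ((x >> 4) & 0b10) | (x & 0b01)   # outer bits: first and last
    col = (x >> 1) & 0b1111                # inner four bits
    return S5[row][col]

print(f"{s5(0b011011):04b}")   # prints 1001, matching the example above
```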

The early 21st century, retail personal or server computers have a word size of 32 or 64 bits. The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte. The prefixes kilo (10³) through yotta (10²⁴) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through

The exact same procedure, and again the incremental substitution begins with the least significant binary digit, or bit (the rightmost one, also called the first bit), except that only the two symbols 0 and 1 are available. Thus, after a bit reaches 1 in binary, an increment resets it to 0 but also causes an increment of the next bit to the left: In the binary system, each bit represents an increasing power of 2, with
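A minimal counting demonstration; Python's format spec prints each value as a 3-bit binary numeral, making the reset-and-carry pattern visible:

```python
# Counting 0..7 in binary: each increment resets trailing 1s to 0
# and carries a 1 into the next bit to the left.
for i in range(8):
    print(f"{i:03b}")
```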

The final answer 100100₂ (36₁₀). When computers must add two numbers, the rule that x xor y = (x + y) mod 2 for any two bits x and y allows for very fast calculation, as well. A simplification for many binary addition problems is the "long carry method" or "Brookhouse Method of Binary Addition". This method is particularly useful when one of the numbers contains a long stretch of ones. It
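The xor rule holds for all four bit pairs, and the worked sum checks out:

```python
# Per-column binary addition: sum bit = x XOR y, carry bit = x AND y.
for x in (0, 1):
    for y in (0, 1):
        assert (x ^ y) == (x + y) % 2     # the rule quoted above
        assert (x & y) == (x + y) // 2    # the carry it generates

# The worked example: 01101₂ (13) + 10111₂ (23) = 100100₂ (36).
print(bin(0b01101 + 0b10111))   # prints 0b100100
```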

The final answer of 11001110001₂ (1649₁₀). In our simple example using small numbers, the traditional carry method required eight carry operations, yet the long carry method required only two, representing a substantial reduction of effort. The binary addition table is similar to, but not the same as, the truth table of the logical disjunction operation ∨. The difference
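The long-carry example's operands and result can be verified directly:

```python
# Check the long carry example: 1110111110₂ + 1010110011₂ = 11001110001₂.
a, b = 0b1110111110, 0b1010110011
print(a, b, a + b)        # prints 958 691 1649
print(bin(a + b))         # prints 0b11001110001
```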

The first publication of the system in Europe was by Juan Caramuel y Lobkowitz, in 1700. Leibniz wrote in excess of a hundred manuscripts on binary, most of them remaining unpublished. Before his first dedicated work in 1679, numerous manuscripts feature early attempts to explore binary concepts, including tables of numbers and basic calculations, often scribbled in the margins of works unrelated to mathematics. His first known work on binary, "On

The first time in history. Entitled A Symbolic Analysis of Relay and Switching Circuits, Shannon's thesis essentially founded practical digital circuit design. In November 1937, George Stibitz, then working at Bell Labs, completed a relay-based computer he dubbed the "Model K" (for "Kitchen", where he had assembled it), which calculated using binary addition. Bell Labs authorized a full research program in late 1938 with Stibitz at

The following rows of symbols can be interpreted as the binary numeric value of 667: The numeric value represented in each case depends on the value assigned to each symbol. In the earlier days of computing, switches, punched holes, and punched paper tapes were used to represent binary values. In a modern computer, the numeric values may be represented by two different voltages; on a magnetic disk, magnetic polarities may be used. A "positive", "yes", or "on" state


The helm. Their Complex Number Computer, completed 8 January 1940, was able to calculate complex numbers. In a demonstration to the American Mathematical Society conference at Dartmouth College on 11 September 1940, Stibitz was able to send the Complex Number Calculator remote commands over telephone lines by a teletype. It was the first computing machine ever used remotely over a phone line. Some participants of

The hexagrams in a format that resembles modern binary numbers, although he did not intend his arrangement to be used mathematically. Viewing the least significant bit on top of single hexagrams in Shao Yong's square, and reading along rows either from bottom right to top left with solid lines as 0 and broken lines as 1, or from top left to bottom right with solid lines as 1 and broken lines as 0, hexagrams can be interpreted as a sequence from 0 to 63. Etruscans divided

The level of manipulating bits rather than manipulating data interpreted as an aggregate of bits. In the 1980s, when bitmapped computer displays became popular, some computers provided specialized bit block transfer instructions to set or copy the bits that corresponded to a given rectangular area on the screen. In most computers and programming languages, when a bit within a group of bits, such as

The orientation of reversible double-stranded DNA, etc. Bits can be implemented in several forms. In most modern computing devices, a bit is usually represented by an electrical voltage or current pulse, or by the electrical state of a flip-flop circuit. For devices using positive logic, a digit value of 1 (or a logical value of true) is represented by a more positive voltage relative to

The outer edge of divination livers into sixteen parts, each inscribed with the name of a divinity and its region of the sky. Each liver region produced a binary reading which was combined into a final binary for divination. Divination at the Ancient Greek Dodona oracle worked by drawing question tablets and "yes" and "no" pellets from separate jars. The result was then combined to make a final prophecy. The Indian scholar Pingala (c. 2nd century BC) developed

The physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device. A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically

The representation of 0. Different logic families require different voltages, and variations are allowed to account for component aging and noise immunity. For example, in transistor–transistor logic (TTL) and compatible circuits, digit values 0 and 1 at the output of a device are represented by no higher than 0.4 V and no lower than 2.6 V, respectively; while TTL inputs are specified to recognize 0.8 V or below as 0 and 2.2 V or above as 1. Bits are transmitted one at

The result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value. Subtracting a positive number is equivalent to adding a negative number of equal absolute value. Computers use signed number representations to handle negative numbers—most commonly
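The sentence above is cut off mid-thought; the signed representation most commonly used in practice is two's complement, and that choice is this sketch's assumption:

```python
# Sketch: reading an n-bit pattern as a signed (two's-complement) value,
# the representation most computers use for negative integers.
def to_signed(x: int, n: int) -> int:
    """Interpret an n-bit pattern as a two's-complement integer."""
    return x - (1 << n) if x & (1 << (n - 1)) else x

print(to_signed(0b11111101, 8))   # prints -3
print(to_signed(0b00000011, 8))   # prints 3
```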

The result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary: In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows

The right, and not to the left like in the binary numbers of the modern positional notation. In Pingala's system, the numbers start from number one, and not zero. Four short syllables "0000" is the first pattern and corresponds to the value one. The numerical value is obtained by adding one to the sum of place values. The Ifá is an African divination system, similar to the I Ching, but with up to 256 binary signs, unlike


The rightmost bit representing 2⁰, the next representing 2¹, then 2², and so on. The value of a binary number is the sum of the powers of 2 represented by each "1" bit. For example, the binary number 100101 is converted to decimal form as follows: Fractions in binary arithmetic terminate only if the denominator is a power of 2. As a result, 1/10 does not have a finite binary representation (10 has prime factors 2 and 5). This causes 10 × 1/10 not to precisely equal 1 in binary floating-point arithmetic. As an example, to interpret
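Both claims can be checked: the positional sum for 100101, and the floating-point effect of 1/10's infinite binary expansion:

```python
# 100101₂ as a sum of powers of two: bit i (from the right) contributes 2**i.
value = sum(int(bit) * 2**i for i, bit in enumerate(reversed("100101")))
print(value)                      # prints 37  (32 + 4 + 1)

# 1/10 has no finite binary expansion, so summing ten copies of the
# nearest double-precision value of 0.1 does not give exactly 1:
print(sum([0.1] * 10))            # prints 0.9999999999999999
```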

The size of the byte is not strictly defined. Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is usually a nibble. In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As

The standard carry from one column to the next, the lowest-ordered "1" with a "1" in the corresponding place value beneath it may be added and a "1" may be carried to one digit past the end of the series. The "used" numbers must be crossed off, since they are already added. Other long strings may likewise be cancelled using the same technique. Then, simply add together any remaining digits normally. Proceeding in this manner gives

The thickness of alternating black and white lines. The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit. However, the lower-case letter 'b' is widely used as well and

The two possible values of one bit of storage are not equally likely, that bit of storage contains less than one bit of information. If the value is completely predictable, then the reading of that value provides no information at all (zero entropic bits, because no resolution of uncertainty occurs and therefore no information is available). If a computer file that uses n bits of storage contains only m < n bits of information, then that information can in principle be encoded in about m bits, at least on

The two values of a bit may be represented by two levels of electric charge stored in a capacitor. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes, bits are encoded as

The world can better present and demonstrate this power than the origin of numbers, as it is presented here through the simple and unadorned presentation of One and Zero or Nothing. In 1854, British mathematician George Boole published a landmark paper detailing an algebraic system of logic that would become known as Boolean algebra. His logical calculus was to become instrumental in the design of digital electronic circuitry. In 1937, Claude Shannon produced his master's thesis at MIT that implemented Boolean algebra and binary arithmetic using electronic relays and switches for

Was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870). Ralph Hartley suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication". He attributed its origin to John W. Tukey, who had written

Was no better than brute force. Biham and Shamir found that even small modifications to an S-box could significantly weaken DES. Any S-box where any linear combination of output bits is produced by a bent function of the input bits is termed a perfect S-box. S-boxes can be analyzed using linear cryptanalysis and differential cryptanalysis in the form of a linear approximation table (LAT) or Walsh transform and a difference distribution table (DDT) or autocorrelation table and spectrum. Its strength may be summarized by
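As a sketch of the DDT analysis mentioned here, using an arbitrary 3-bit S-box chosen only for illustration (not from any real cipher):

```python
# Sketch: building the difference distribution table (DDT) of a small S-box.
SBOX = [0, 1, 3, 6, 7, 4, 5, 2]          # a toy 3-bit permutation
n = len(SBOX)

# ddt[a][b] counts inputs x with SBOX[x] XOR SBOX[x XOR a] == b.
ddt = [[0] * n for _ in range(n)]
for x in range(n):
    for a in range(n):
        b = SBOX[x] ^ SBOX[x ^ a]
        ddt[a][b] += 1

# Differential uniformity: the largest entry outside the trivial row a = 0;
# smaller is better for resistance to differential cryptanalysis.
uniformity = max(max(row) for row in ddt[1:])
print(uniformity)
```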

Was often stored as the position of a mechanical lever or gear, or the presence or absence of a hole at a specific point of a paper card or tape. The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". When relays were replaced by vacuum tubes, starting in

Was recommended by the IEEE 1541 Standard (2002). In contrast, the upper case letter 'B' is the standard and customary symbol for byte. Multiple bits may be expressed and represented in several ways. For convenience of representing commonly reoccurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically

Was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits

Was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer and for this reason it was used as the basic addressable element in many computer architectures. The trend in hardware design converged on the most common implementation of using eight bits per byte, as it is widely used today. However, because of
