
Byte

Article snapshot taken from Wikipedia, licensed under Creative Commons Attribution-ShareAlike.

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer, and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as

A random access memory chip with a capacity of 2²⁸ bytes would be referred to as a 256-megabyte chip. The table below illustrates these differences. In the past, uppercase K has been used instead of lowercase k to indicate 1024 instead of 1000. However, this usage was not consistently applied. On the other hand, for external storage systems (such as optical discs), the SI prefixes are commonly used with their decimal values (powers of 10). Many attempts have sought to resolve

A shift function (like in ITA2), which would allow more than 64 codes to be represented by a six-bit code. In a shifted code, some character codes determine choices between options for the following character codes. It allows compact encoding, but is less reliable for data transmission, as an error in transmitting the shift code typically makes a long part of the transmission unreadable. The standards committee decided against shifting, and so ASCII required at least

A 60-bit word without having to split a byte between one word and the next. If longer bytes were needed, 60 bits would, of course, no longer be ideal. With present applications, 1, 4, and 6 bits are the really important cases. With 64-bit words, it would often be necessary to make some compromises, such as leaving 4 bits unused in a word when dealing with 6-bit bytes at

A 64-bit word length for Stretch. It also supports NSA's requirement for 8-bit bytes. Werner's term "byte" was first popularized in this memo. NB. This timeline erroneously specifies the birth date of the term "byte" as July 1956, while Buchholz actually used the term as early as June 1956. [...] 60 is a multiple of 1, 2, 3, 4, 5, and 6. Hence bytes of length from 1 to 6 bits can be packed efficiently into

A BS (backspace). Instead, there was a key marked RUB OUT that sent code 127 (DEL). The purpose of this key was to erase mistakes in a manually-input paper tape: the operator had to push a button on the tape punch to back it up, then type the rubout, which punched all holes and replaced the mistake with a character that was intended to be ignored. Teletypes were commonly used with the less-expensive computers from Digital Equipment Corporation (DEC); these systems had to use what keys were available, and thus

A birth certificate. But I am sure that "byte" is coming of age in 1977 with its 21st birthday. Many have assumed that byte, meaning 8 bits, originated with the IBM System/360, which spread such bytes far and wide in the mid-1960s. The editor is correct in pointing out that the term goes back to the earlier Stretch computer (but incorrect in that Stretch was the first, not

A byte, is sometimes called a nibble, nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit has. Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in

A character count followed by the characters of the line and which used EBCDIC rather than ASCII encoding. The Telnet protocol defined an ASCII "Network Virtual Terminal" (NVT), so that connections between hosts with different line-ending conventions and character sets could be supported by transmitting a standard text format over the network. Telnet used ASCII along with CR-LF line endings, and software using other conventions would translate between

A conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates. The lowercase letter o for octet

A convenience, because 1024 is approximately 1000. This definition was popular in early decades of personal computing, with products like the Tandon 5¼-inch DD floppy format (holding 368 640 bytes) being advertised as "360 KB", following the 1024-byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted, and


A full transmission unit usually additionally includes a start bit, 1 or 2 stop bits, and possibly a parity bit, and thus its size may vary from seven to twelve bits for five to eight bits of actual data. For synchronous communication the error checking usually uses bytes at the end of a frame. Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below. Byte denotes
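
The framing arithmetic above is easy to make concrete. Below is a minimal C sketch (my own illustration, not from the article) of one common asynchronous layout, 7 data bits with even parity and one stop bit ("7E1"), showing how the parity bit is chosen and how large the resulting frame is:

    #include <stdint.h>
    #include <stdio.h>

    /* Even-parity bit over 7 data bits: chosen so that the total number
       of 1 bits in data plus parity comes out even. */
    static int even_parity7(uint8_t data) {
        int ones = 0;
        for (int i = 0; i < 7; i++)
            ones += (data >> i) & 1;
        return ones & 1;            /* 1 if the data holds an odd number of 1s */
    }

    int main(void) {
        uint8_t ch = 'A';           /* ASCII 100 0001: two 1 bits */
        printf("parity bit for 'A': %d\n", even_parity7(ch));   /* 0 */
        /* 1 start + 7 data + 1 parity + 1 stop = 10 bits on the wire
           for 7 bits of actual data, within the 7-to-12-bit range above. */
        printf("frame size: %d bits\n", 1 + 7 + 1 + 1);
        return 0;
    }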

A fundamental storage principle, which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of possible states of that system, denoted log_b N. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c N = (log_c b) log_b N. Therefore,
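
To make the base-change rule concrete, here is a minimal C sketch (my own illustration; the function name is made up) that measures the capacity of an N-state system in different units:

    #include <math.h>
    #include <stdio.h>

    /* Information capacity of a system with n distinct states, in base-b
       units: log_b(n) = ln(n) / ln(b) by the change-of-base rule above. */
    static double capacity(double n, double b) {
        return log(n) / log(b);
    }

    int main(void) {
        printf("%.1f\n", capacity(8.0, 2.0));       /* 3.0 bits (shannons) */
        printf("%.1f\n", capacity(256.0, 2.0));     /* 8.0 bits: one octet */
        printf("%.3f\n", capacity(8.0, exp(1.0)));  /* 2.079 nats = ln 8   */
        return 0;
    }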

A group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term

A line terminator. The tty driver would handle the LF to CRLF conversion on output so that files could be printed directly to a terminal, and NL (newline) is often used to refer to CRLF in UNIX documents. Unix and Unix-like systems, and Amiga systems, adopted this convention from Multics. On the other hand, the original Macintosh OS, Apple DOS, and ProDOS used carriage return (CR) alone as

A line terminator; however, since Apple later replaced these obsolete operating systems with their Unix-based macOS (formerly named OS X) operating system, they now use line feed (LF) as well. The Radio Shack TRS-80 also used a lone CR to terminate lines. Computers attached to the ARPANET included machines running operating systems such as TOPS-10 and TENEX using CR-LF line endings; machines running operating systems such as Multics using LF line endings; and machines running operating systems such as OS/360 that represented lines as
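
The line-ending conventions listed here are mechanical enough to normalize in a few lines. The following C filter is a hedged sketch of my own (not part of the article) that translates CR-LF pairs and lone CRs to the LF convention:

    #include <stdio.h>

    /* Copy stdin to stdout, translating CR-LF pairs and lone CRs to LF. */
    int main(void) {
        int c, prev = 0;
        while ((c = getchar()) != EOF) {
            if (prev == '\r' && c != '\n')
                putchar('\n');      /* lone CR: classic Mac OS / TRS-80 style */
            if (c == '\r') {
                prev = c;           /* defer: the next character decides */
                continue;
            }
            putchar(c);             /* an LF that followed a CR collapses to one LF */
            prev = c;
        }
        if (prev == '\r')
            putchar('\n');          /* input ended with a bare CR */
        return 0;
    }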

A number of bits, treated as a unit, and usually representing a character or a part of a character. NOTES: 1. The number of bits in a byte is fixed for a given data processing system. 2. The number of bits in a byte is usually 8. We received the following from W Buchholz, one of the individuals who

A reserved device control (DC0), synchronous idle (SYNC), and acknowledge (ACK). These were positioned to maximize the Hamming distance between their bit patterns. ASCII-code order is also called ASCIIbetical order. Collation of data is sometimes done in this order rather than "standard" alphabetical order (collating sequence). The main deviations in ASCII order are: An intermediate order converts uppercase letters to lowercase before comparing ASCII values. ASCII reserves

A reserved meaning. Over time this interpretation has been co-opted and has eventually been changed. In modern usage, an ESC sent to the terminal usually indicates the start of a command sequence, which can be used to address the cursor, scroll a region, set/query various terminal properties, and more. They are usually in the form of a so-called "ANSI escape code" (often starting with a "Control Sequence Introducer", "CSI", "ESC [") from ECMA-48 (1972) and its successors. Some escape sequences do not have introducers, like

A seven-bit code. The committee considered an eight-bit code, since eight bits (octets) would allow two four-bit patterns to efficiently encode two digits with binary-coded decimal. However, it would require all data transmission to send eight bits when seven could suffice. The committee voted to use a seven-bit code to minimize costs associated with data transmission. Since perforated tape at

A terminal. Some operating systems such as CP/M tracked file length only in units of disk blocks, and used control-Z to mark the end of the actual text in the file. For these reasons, EOF, or end-of-file, was used colloquially and conventionally as a three-letter acronym for control-Z instead of SUBstitute. The end-of-text character (ETX), also known as control-C, was inappropriate for


A unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits". He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits...When we speak of bytes in connection with MIX we shall confine ourselves to

A variety of reasons, while using control-Z as the control character to end a file is analogous to the letter Z's position at the end of the alphabet, and serves as a very convenient mnemonic aid. A historically common and still prevalent convention uses the ETX character convention to interrupt and halt a program via an input data stream, usually from a keyboard. The Unix terminal driver uses

Is 0101 in binary). Many of the non-alphanumeric characters were positioned to correspond to their shifted position on typewriters; an important subtlety is that these were based on mechanical typewriters, not electric typewriters. Mechanical typewriters followed the de facto standard set by the Remington No. 2 (1878), the first typewriter with a shift key, and the shifted values of 23456789- were "#$%_&'() – early typewriters omitted 0 and 1, using O (capital letter o) and l (lowercase letter L) instead, but 1! and 0) pairs became standard once 0 and 1 became common. Thus, in ASCII !"#$% were placed in

Is 1024¹ bytes = 1024 bytes, one mebibyte (1 MiB) is 1024² bytes = 1 048 576 bytes, and so on. In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" (KKB). The IEC adopted the IUPAC proposal and published the standard in January 1999. The IEC prefixes are part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1000 bytes. Lawsuits arising from alleged consumer confusion over

Is a character encoding standard for electronic communication. ASCII codes represent text in computers, telecommunications equipment, and other devices. ASCII has just 128 code points, of which only 95 are printable characters, which severely limits its scope. The set of available punctuation had a significant impact on the syntax of computer languages and text markup. ASCII hugely influenced

Is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in

Is a deliberate respelling of bite to avoid accidental mutation to bit. Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which

Is coined from bite, but respelled to avoid accidental mutation to bit.) A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 computer.) Block refers to

Is defined as eight bits. It is a signed data type, holding values from −128 to 127. .NET programming languages, such as C#, define byte as an unsigned type, and sbyte as a signed data type, holding values from 0 to 255, and −128 to 127, respectively. In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. For asynchronous communication
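
The two ranges described above come from reading the same eight bits as unsigned or as two's-complement signed. A small C sketch (my own illustration, using C's fixed-width types rather than the Java or C# types named in the text):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int8_t  s = -128;   /* signed 8-bit range: -128..127 (cf. Java byte, C# sbyte) */
        uint8_t u = 255;    /* unsigned 8-bit range:  0..255 (cf. C# byte)             */
        printf("%d %u\n", s, (unsigned)u);

        /* The same bit pattern admits two readings (two's complement): */
        uint8_t bits = 0x80;
        printf("0x80 unsigned: %u, signed: %d\n",
               (unsigned)bits, (int)(int8_t)bits);  /* 128, -128 on two's-complement machines */
        return 0;
    }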

Is defined as the symbol for octet in IEC 80000-13 and is commonly used in languages such as French and Romanian, and is also combined with metric prefixes for multiples, for example ko and Mo. More than one system exists to define unit multiples based on the byte. Some systems are based on powers of 10, following the International System of Units (SI), which defines for example the prefix kilo as 1000 (10³); other systems are based on powers of 2. Nomenclature for these systems has led to confusion. Systems based on powers of 10 use standard SI prefixes (kilo, mega, giga, ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes (kibi, mebi, gibi, ...) and their corresponding symbols (Ki, Mi, Gi, ...) or they might use


Is defined by international standard IEC 80000-13 and is supported by national and international standards bodies (BIPM, IEC, NIST). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024⁸ bytes. The natural binary counterparts to ronna- and quetta- were given in a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) as robi- (Ri, 1024⁹) and quebi- (Qi, 1024¹⁰), but have not yet been adopted by

Is defined to equal 1,000 bytes—is recommended by the International Electrotechnical Commission (IEC). The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000⁸ bytes. The additional prefixes ronna- for 1000⁹ and quetta- for 1000¹⁰ were adopted by the International Bureau of Weights and Measures (BIPM) in 2022. This definition is most commonly used for data-rate units in computer networks, internal bus, hard drive and flash media transfer speeds, and for

Is just as easy to use all six bits in alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. All this can be done by pulling the appropriate shift diagonals. An analogous matrix arrangement is used to change from serial to parallel operation at the output of the adder. [...] byte: A string that consists of

Is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit. The term octet unambiguously specifies a size of eight bits. It is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well, at least in Western Europe; however, this usage is no longer common. The exact origin of the term
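
Since each nibble corresponds to exactly one hexadecimal digit, splitting a byte into nibbles is a one-line mask and shift. A minimal C sketch of my own, illustrating the correspondence:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t byte = 0xA7;            /* bit pattern 1010 0111           */
        uint8_t high = byte >> 4;       /* high nibble: 1010 = 0xA         */
        uint8_t low  = byte & 0x0F;     /* low nibble:  0111 = 0x7         */
        printf("%X%X\n", high, low);    /* prints "A7": one hex digit each */
        return 0;
    }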

Is replaced by a second control-S to resume output. The 33 ASR also could be configured to employ control-R (DC2) and control-T (DC4) to start and stop the tape punch; on some units equipped with this function, the corresponding control character lettering on the keycap above the letter was TAPE and TAPE respectively. The Teletype could not move its typehead backwards, so it did not have a key on its keyboard to send

Is the newline problem on various operating systems. Teletype machines required that a line of text be terminated with both "carriage return" (which moves the printhead to the beginning of the line) and "line feed" (which advances the paper one line without moving the printhead). The name "carriage return" comes from the fact that on a manual typewriter the carriage holding the paper moves while

Is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B. In the International System of Quantities (ISQ), B is also the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating

Is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite, but respelled to avoid accidental mutation to bit.) System/360 took over many of

The IRE Transactions on Electronic Computers, June 1959, page 121. The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch), edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows: Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character

The American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and


The Comité Consultatif International Téléphonique et Télégraphique (CCITT) International Telegraph Alphabet No. 2 (ITA2) standard of 1932, FIELDATA (1956), and early EBCDIC (1963), more than 64 codes were required for ASCII. ITA2 was in turn based on Baudot code, the 5-bit telegraph code Émile Baudot invented in 1870 and patented in 1874. The committee debated the possibility of

The International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary). In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024. Thus one kibibyte (1 KiB)

The Internet Protocol (RFC 791) refer to an 8-bit byte as an octet. Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness. The size of the byte has historically been hardware-dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in

The Teletype Model 33, which used the left-shifted layout corresponding to ASCII, differently from traditional mechanical typewriters. Electric typewriters, notably the IBM Selectric (1961), used a somewhat different layout that has become de facto standard on computers – following the IBM PC (1981), especially Model M (1984) – and thus shift values for symbols on modern keyboards do not correspond as closely to

The United States Federal Government support ASCII, stating: I have also approved recommendations of the Secretary of Commerce [Luther H. Hodges] regarding standards for recording the Standard Code for Information Interchange on magnetic tapes and paper tapes when they are used in computer operations. All computers and related equipment configurations brought into the Federal Government inventory on and after July 1, 1969, must have

The carriage return, line feed, and tab codes. For example, lowercase i would be represented in the ASCII encoding by binary 1101001 = hexadecimal 69 (i is the ninth letter) = decimal 105. Despite being an American standard, ASCII does not have a code point for the cent (¢). It also does not support English terms with diacritical marks such as résumé and jalapeño, or proper nouns with diacritical marks such as Beyoncé (although on certain devices characters could be combined with punctuation such as tilde (~) and backtick (`) to approximate such characters). The American Standard Code for Information Interchange (ASCII)

The entropy of random variables. The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed from these with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes). In 1928, Ralph Hartley observed

5684-645: The "Reset to Initial State", "RIS" command " ESC c ". In contrast, an ESC read from the terminal is most often used as an out-of-band character used to terminate an operation or special mode, as in the TECO and vi text editors . In graphical user interface (GUI) and windowing systems, ESC generally causes an application to abort its current operation or to exit (terminate) altogether. The inherent ambiguity of many control characters, combined with their historical usage, created problems when transferring "plain text" files between systems. The best example of this

5800-582: The "help" prefix command in GNU Emacs . Many more of the control characters have been assigned meanings quite different from their original ones. The "escape" character (ESC, code 27), for example, was intended originally to allow sending of other control characters as literals instead of invoking their meaning, an "escape sequence". This is the same meaning of "escape" encountered in URL encodings, C language strings, and other systems where certain characters have

5916-401: The "line feed" function (which causes a printer to advance its paper), and character 8 represents " backspace ". RFC   2822 refers to control characters that do not include carriage return, line feed or white space as non-whitespace control characters. Except for the control characters that prescribe elementary line-oriented formatting, ASCII does not define any mechanism for describing


The 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes, and persisted, in legacy systems, into the twenty-first century. In this era, bit groupings in the instruction stream were often referred to as syllables or slab, before the term byte became common. The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993,

The ASCII chart in this article. Ninety-five of the encoded characters are printable: these include the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation symbols. In addition, the original ASCII specification included 33 non-printing control codes which originated with Teletype models; most of these are now obsolete, although a few are still commonly used, such as

The ASCII table as earlier keyboards did. The /? pair also dates to the No. 2, and the ,< .> pairs were used on some keyboards (others, including the No. 2, did not shift , (comma) or . (full stop) so they could be used in uppercase without unshifting). However, ASCII split the ;: pair (dating to No. 2), and rearranged mathematical symbols (varied conventions, commonly -* =+) to :* ;+ -=. Some then-common typewriter characters were not included, notably ½ ¼ ¢, while ^ ` ~ were included as diacritics for international use, and < > for mathematical use, together with

The Adder. The Adder may accept all or only some of the bits. Assume that it is desired to operate on 4-bit decimal digits, starting at the right. The 0-diagonal is pulsed first, sending out the six bits 0 to 5, of which the Adder accepts only the first four (0-3). Bits 4 and 5 are ignored. Next, the 4-diagonal is pulsed. This sends out bits 4 to 9, of which the last two are again ignored, and so on. It

The DEL character was assigned to erase the previous character. Because of this, DEC video terminals (by default) sent the DEL character for the key marked "Backspace" while the separate key marked "Delete" sent an escape sequence; many other competing terminals sent a BS character for the backspace key. The early Unix tty drivers, unlike some modern implementations, allowed only one character to be set to erase

The IEC and ISO. An alternative system of nomenclature for the same units (referred to here as the customary convention), in which 1 kilobyte (KB) is equal to 1,024 bytes, 1 megabyte (MB) is equal to 1024² bytes and 1 gigabyte (GB) is equal to 1024³ bytes, is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. While confusing and incorrect,

The Shift Matrix to be used to convert a 60-bit word, coming from Memory in parallel, into characters, or 'bytes' as we have called them, to be sent to the Adder serially. The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath. Pulsing any diagonal line will send the six bits stored along that line to

The Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8-bit maximum, and addressing at the bit level was replaced by byte addressing. Since then the term byte has generally meant 8 bits, and it has thus passed into the general vocabulary. Are there any other terms coined especially for

The System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote

The Teletype Model 33 machine assignments for codes 17 (control-Q, DC1, also known as XON), 19 (control-S, DC3, also known as XOFF), and 127 (delete) became de facto standards. The Model 33 was also notable for taking the description of control-G (code 7, BEL, meaning audibly alert the operator) literally, as the unit contained an actual bell which it rang when it received a BEL character. Because


The Teletype Model 35 as a seven-bit teleprinter code promoted by Bell data services. Work on the ASCII standard began in May 1961, with the first meeting of the American Standards Association's (ASA) (now the American National Standards Institute or ANSI) X3.2 subcommittee. The first edition of the standard was published in 1963, underwent a major revision during 1967, and experienced its most recent update during 1986. Compared to earlier telegraph codes,

The binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1 000 000 000 (10⁹) bytes (the decimal definition), rather than the binary definition (2³⁰, i.e., 1 073 741 824). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed

The capacities of computer memories and some storage units are often multiples of some large power of two, such as 2²⁸ = 268 435 456 bytes. To avoid such unwieldy numbers, people have often repurposed the SI prefixes to mean the nearest power of two, e.g., using the prefix kilo for 2¹⁰ = 1024, mega for 2²⁰ = 1 048 576, and giga for 2³⁰ = 1 073 741 824, and so on. For example,

The capacities of most storage media, particularly hard drives, flash-based storage, and DVDs. Operating systems that use this definition include macOS, iOS, Ubuntu, and Debian. It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance. A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2¹⁰) bytes

The change into its draft standard. The X3.2.4 task group voted its approval for the change to ASCII at its May 1963 meeting. Locating the lowercase letters in sticks 6 and 7 caused the characters to differ in bit pattern from the upper case by a single bit, which simplified case-insensitive character matching and the construction of keyboards and printers. The X3 committee made other changes, including other new characters (the brace and vertical bar characters), renaming some control characters (SOM became start of header (SOH)) and moving or removing others (RU
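
The single-bit case difference mentioned above (bit 5, value hex 20) is what makes ASCII case conversion a one-instruction mask. A short C sketch of my own illustrating it:

    #include <stdio.h>

    int main(void) {
        /* ASCII 'A' = hex 41 and 'a' = hex 61 differ only in bit 5 (0x20). */
        printf("%c\n", 'G' | 0x20);     /* g: setting bit 5 lowercases a letter  */
        printf("%c\n", 'q' & ~0x20);    /* Q: clearing bit 5 uppercases a letter */
        /* Case-insensitive match (letters only) by ignoring bit 5: */
        printf("%d\n", ('K' | 0x20) == ('k' | 0x20));   /* 1 */
        return 0;
    }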

The choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states. When b is 2, the unit is the shannon, equal to the information content of one "bit" (a portmanteau of binary digit). A system with 8 possible states, for example, can store up to log_2 8 = 3 bits of information. Other units that have been named include the trit, the ban, and the nat. The trit, ban, and nat are rarely used to measure storage capacity; but

The computer field which have found their way into general dictionaries of English language? 1956 Summer: Gerrit Blaauw, Fred Brooks, Werner Buchholz, John Cocke and Jim Pomerene join the Stretch team. Lloyd Hunter provides transistor leadership. 1956 July [sic]: In a report Werner Buchholz lists the advantages of

The computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 32 bits, but other past and current architectures use words with 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72 or other numbers of bits. Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad"). Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines. Virtual memory systems partition

The computer's main storage into even larger units, traditionally called pages. Terms for large quantities of bits can be formed using the standard range of SI prefixes for powers of 10, e.g., kilo = 10³ = 1000 (as in kilobit or kbit), mega = 10⁶ = 1 000 000 (as in megabit or Mbit) and giga = 10⁹ = 1 000 000 000 (as in gigabit or Gbit). These prefixes are more often used for multiples of bytes, as in kilobyte (1 kB = 8000 bit), megabyte (1 MB = 8 000 000 bit), and gigabyte (1 GB = 8 000 000 000 bit). However, for technical reasons,

The concept of "carriage return" was meaningless. IBM's PC DOS (also marketed as MS-DOS by Microsoft) inherited the convention by virtue of being loosely based on CP/M, and Windows in turn inherited it from MS-DOS. Requiring two characters to mark the end of a line introduces unnecessary complexity and ambiguity as to how to interpret each character when encountered by itself. To simplify matters, plain text data streams, including files, on Multics used line feed (LF) alone as

The confusion by providing alternative notations for power-of-two multiples. The International Electrotechnical Commission (IEC) issued a standard for this purpose by defining a series of binary prefixes that use 1024 instead of 1000 as the main radix. The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. Several other units of information storage have been named; some of these names are jargon, obsolete, or used only in very restricted contexts.

American Standard Code for Information Interchange

ASCII (/ˈæskiː/ ASS-kee), an acronym for American Standard Code for Information Interchange,

The convention was so well established that backward compatibility necessitated continuing to follow it. When Gary Kildall created CP/M, he was inspired by some of the command line interface conventions used in DEC's RT-11 operating system. Until the introduction of PC DOS in 1981, IBM had no influence in this because their 1970s operating systems used EBCDIC encoding instead of ASCII, and they were oriented toward punch-card input and line printer output on which

The customary convention is used by the Microsoft Windows operating system and for random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone, AT&T, Orange and Telstra. For storage capacity, the customary convention was used by macOS and iOS through Mac OS X 10.6 Snow Leopard and iOS 10, after which they switched to units based on powers of 10. Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for

The decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state.'" Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that

The design of character sets used by modern computers, including Unicode which has over a million code points, but the first 128 of these are the same as ASCII. The Internet Assigned Numbers Authority (IANA) prefers the name US-ASCII for this character encoding. ASCII is one of the IEEE milestones. ASCII was developed in part from telegraph code. Its first commercial use was in the Teletype Model 33 and

The earlier five-bit ITA2, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence. His British colleague Hugh McGregor Ross helped to popularize this work – according to Bemer, "so much so that the code that was to become ASCII was first called the Bemer–Ross Code in Europe". Because of his extensive work on ASCII, Bemer has been called "the father of ASCII". On March 11, 1968, US President Lyndon B. Johnson mandated that all computers purchased by

The earlier teleprinter encoding systems. Like other character encodings, ASCII specifies a correspondence between digital bit patterns and character symbols (i.e. graphemes and control characters). This allows digital devices to communicate with each other and to process, store, and communicate character-oriented information such as written language. Before ASCII was developed, the encodings in use included 26 alphabetic characters, 10 numerical digits, and from 11 to 25 special graphic symbols. To include all these, and control characters compatible with

The end-of-transmission character (EOT), also known as control-D, to indicate the end of a data stream. In the C programming language, and in Unix conventions, the null character is used to terminate text strings; such null-terminated strings can be known in abbreviation as ASCIZ or ASCIIZ, where here Z stands for "zero". Other representations might be used by specialist equipment, for example ISO 2047 graphics or hexadecimal numbers. Codes 20 hex to 7E hex, known as

The first 32 code points (numbers 0–31 decimal) and the last one (number 127 decimal) for control characters. These are codes intended to control peripheral devices (such as printers), or to provide meta-information about data streams, such as those stored on magnetic tape. Despite their name, these code points do not represent printable characters (i.e. they are not characters at all, but signals). For debugging purposes, "placeholder" symbols (such as those given in ISO 2047 and its predecessors) are assigned to them. For example, character 0x0A represents

The former sense of the word, harking back to the days when bytes were not yet standardized." The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8080, the direct predecessor of the 8086, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity

The input and output. However, the LINK Computer can be equipped to edit out these gaps and to permit handling of bytes which are split between words. [...] The maximum input-output byte size for serial operation will now be 8 bits, not counting any error detection and correction bits. Thus, the Exchange will operate on an 8-bit byte basis, and any input-output units with less than 8 bits per byte will leave

The integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there are no gaps between two bytes. This means every bit in memory is part of a byte. Java's primitive data type byte
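
These guarantees can be inspected directly through <limits.h>. A minimal C sketch (my own, not article material):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        /* The C standard guarantees CHAR_BIT >= 8, so a C "byte" may be wider
           than an octet on unusual hardware; on mainstream machines it is 8. */
        printf("bits per byte:       %d\n", CHAR_BIT);
        printf("unsigned char range: 0..%d\n", UCHAR_MAX);
        printf("signed char range:   %d..%d\n", SCHAR_MIN, SCHAR_MAX);
        return 0;
    }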

The keytop for the O key also showed a left-arrow symbol (from ASCII-1963, which had this character instead of underscore), a noncompliant use of code 15 (control-O, shift in) interpreted as "delete previous character" was also adopted by many early timesharing systems but eventually became neglected. When a Teletype 33 ASR equipped with the automatic paper tape reader received a control-S (XOFF, an abbreviation for transmit off), it caused

The last, of IBM's second-generation transistorized computers to be developed). The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch. A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use

The local conventions and the NVT. The File Transfer Protocol adopted the Telnet protocol, including use of the Network Virtual Terminal, for use when transmitting commands and transferring data in the default ASCII mode. This adds complexity to implementations of those protocols, and to other network protocols, such as those used for E-mail and the World Wide Web, on systems not using the NVT's CR-LF line-ending convention. The PDP-6 monitor, and its PDP-10 successor TOPS-10, used control-Z (SUB) as an end-of-file indication for input from

The nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases. Several conventional names are used for collections or groups of bits. Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits – that is, an octet. An 8-bit byte can represent 256 (2⁸) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as

The number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. [...] Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long. Figure 2 shows

The physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches. The prominence of

The potential ambiguity of the term "byte". The symbol for octet, 'o', also conveniently eliminates the ambiguity in the symbol 'B' between byte and bel. The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction. It

The prefixes K, M, and G, creating ambiguity when the prefixes M or G are used. While the difference between the decimal and binary interpretations is relatively small for the kilobyte (the kilobyte is about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than a power-of-2-based tebibyte. Definition of prefixes using powers of 10—in which 1 kilobyte (symbol kB)

The previous character in canonical input processing (where a very simple line editor is available); this could be set to BS or DEL, but not both, resulting in recurring situations of ambiguity where users had to decide depending on what terminal they were using (shells that allow line editing, such as ksh, bash, and zsh, understand both). The assumption that no key sent a BS character allowed Ctrl+H to be used for other purposes, such as

The previous section. Code 7F hex corresponds to the non-printable "delete" (DEL) control character and is therefore omitted from this chart; it is covered in the previous section's chart. Earlier versions of ASCII used the up arrow instead of the caret (5E hex) and the left arrow instead of the underscore (5F hex). ASCII was first used commercially during 1963 as a seven-bit teleprinter code for American Telephone & Telegraph's TWX (TeletypeWriter eXchange) network. TWX originally used

The printable characters, represent letters, digits, punctuation marks, and a few miscellaneous symbols. There are 95 printable characters in total. Code 20 hex, the "space" character, denotes the space between words, as produced by the space bar of a keyboard. Since the space character is considered an invisible graphic (rather than a control character) it is listed in the table below instead of in

The proposed Bell code and ASCII were both ordered for more convenient sorting (i.e., alphabetization) of lists and added features for devices other than teleprinters. The use of ASCII format for Network Interchange was described in 1969. That document was formally elevated to an Internet Standard in 2015. Originally based on the (modern) English alphabet, ASCII encodes 128 specified characters into seven-bit integers as shown by

The remaining bits blank. The resultant gaps can be edited out later by programming [...]

Units of information

In digital computing and telecommunications, a unit of information is the capacity of some standard data storage system or communication channel, used to measure the capacities of other systems and channels. In information theory, units of information are also used to measure information contained in messages and

The same reason, many special signs commonly used as separators were placed before digits. The committee decided it was important to support uppercase 64-character alphabets, and chose to pattern ASCII so it could be reduced easily to a usable 64-character set of graphic codes, as was done in the DEC SIXBIT code (1963). Lowercase letters were therefore not interleaved with uppercase. To keep options available for lowercase letters and other graphics,

The same term even within a single vendor. These terms include double word, half word, long word, quad word, slab, superword and syllable. There are also informal terms, e.g., half byte and nybble for 4 bits, and octal K for 1000₈. Contemporary computer memory has a binary architecture, making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as

The second stick, positions 1–5, corresponding to the digits 1–5 in the adjacent stick. The parentheses could not correspond to 9 and 0, however, because the place corresponding to 0 was taken by the space character. This was accommodated by removing _ (underscore) from 6 and shifting the remaining characters, which corresponded to many European typewriters that placed the parentheses with 8 and 9. This discrepancy from typewriters led to bit-paired keyboards, notably

The simple line characters \ | (in addition to common /). The @ symbol was not used in continental Europe and the committee expected it would be replaced by an accented À in the French variation, so the @ was placed in position 40 hex, right before the letter A. The control codes felt essential for data transmission were the start of message (SOM), end of address (EOA), end of message (EOM), end of transmission (EOT), "who are you?" (WRU), "are you?" (RU),

The special and numeric codes were arranged before the letters, and the letter A was placed in position 41 hex to match the draft of the corresponding British standard. The digits 0–9 are prefixed with 011, but the remaining 4 bits correspond to their respective values in binary, making conversion with binary-coded decimal straightforward (for example, 5 is encoded as 011 0101, where 5
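
Because the low four bits of an ASCII digit are its value, digit-to-number conversion is a single mask, as this small C sketch of my own shows:

    #include <stdio.h>

    int main(void) {
        /* ASCII digits are the bits 011 followed by the digit's value,
           so the low four bits of a digit character are its BCD value. */
        char ch = '5';              /* hex 35 = 011 0101             */
        int value = ch & 0x0F;      /* 5: strip the 011 prefix       */
        int back  = value | 0x30;   /* hex 35 = '5': prefix restored */
        printf("%d %c\n", value, back);
        return 0;
    }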

The standard is unclear about the meaning of "delete". Probably the most influential single device affecting the interpretation of these characters was the Teletype Model 33 ASR, which was a printing terminal with an available paper tape reader/punch option. Paper tape was a very popular medium for long-term program storage until the 1980s, less costly and in some ways less fragile than magnetic tape. In particular,

The structure or appearance of text within a document. Other schemes, such as markup languages, address page and document layout and formatting. The original ASCII standard used only short descriptive phrases for each control character. The ambiguity this caused was sometimes intentional, for example where a character would be used slightly differently on a terminal link than on a data stream, and sometimes accidental, for example

The symbol for byte (IEC 80000-13 uses "o" for octet in French, but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits. A group of four bits, or half

The tape reader to stop; receiving control-Q (XON, transmit on) caused the tape reader to resume. This so-called flow control technique became adopted by several early computer operating systems as a "handshaking" signal warning a sender to stop transmission because of impending buffer overflow; it persists to this day in many systems as a manual output control technique. On some systems, control-S retains its meaning, but control-Q

The time could record eight bits in one position, it also allowed for a parity bit for error checking if desired. Eight-bit machines (with octets as the native data type) that did not use parity checking typically set the eighth bit to 0. The code itself was patterned so that most control codes were together and all graphic codes were together, for ease of identification. The first two so-called ASCII sticks (32 positions) were reserved for control characters. The "space" character had to come before graphics to make sorting easier, so it became position 20 hex; for

The typebars that strike the ribbon remain stationary. The entire carriage had to be pushed (returned) to the right in order to position the paper for the next line. DEC operating systems (OS/8, RT-11, RSX-11, RSTS, TOPS-10, etc.) used both characters to mark the end of a line so that the console device (originally Teletype machines) would work. By the time so-called "glass TTYs" (later called CRTs or "dumb terminals") came along,

The ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively. The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE). Internationally, the unit octet explicitly defines a sequence of eight bits, eliminating

The usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled. Many programming languages define the data type byte. The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that

Was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256 256 bytes formatted, and was advertised as "256k". Some devices were advertised using a mixture of the two definitions: most notably, floppy disks advertised as "1.44 MB" have an actual capacity of 1440 KiB, the equivalent of 1.47 MB or 1.41 MiB. In 1995,
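
The three competing readings of "megabyte" in that floppy example can be checked with a few lines of C (a sketch I added; 1 474 560 bytes is the raw capacity behind the "1.44 MB" label):

    #include <stdio.h>

    int main(void) {
        double bytes = 1474560.0;   /* capacity of a "1.44 MB" floppy */
        printf("%.2f MB   (1 MB  = 1000*1000 bytes)\n", bytes / 1000000.0); /* 1.47 */
        printf("%.2f MiB  (1 MiB = 1024*1024 bytes)\n", bytes / 1048576.0); /* 1.41 */
        printf("%.2f \"MB\" (mixed: 1000*1024 bytes)\n", bytes / 1024000.0); /* 1.44 */
        return 0;
    }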

Was developed under the auspices of a committee of the American Standards Association (ASA), called the X3 committee, by its X3.2 (later X3L2) subcommittee, and later by that subcommittee's X3.2.4 working group (now INCITS). The ASA later became the United States of America Standards Institute (USASI) and ultimately became the American National Standards Institute (ANSI). With the other special characters and control codes filled in, ASCII

Was in the context of the input-output equipment of the 1950s, which handled six bits at a time. The possibility of going to 8-bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter. The first published reference to the term occurred in 1959 in a paper "Processing Data in Bits and Pieces" by G A Blaauw, F P Brooks Jr and W Buchholz in

Was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31. Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called

Was published as ASA X3.4-1963, leaving 28 code positions without any assigned meaning, reserved for future standardization, and one unassigned control code. There was some debate at the time whether there should be more control characters rather than the lowercase alphabet. The indecision did not last long: during May 1963 the CCITT Working Party on the New Telegraph Alphabet proposed to assign lowercase characters to sticks 6 and 7, and International Organization for Standardization TC 97 SC 2 voted during October to incorporate

Was removed). ASCII was subsequently updated as USAS X3.4-1967, then USAS X3.4-1968, ANSI X3.4-1977, and finally, ANSI X3.4-1986. In the X3.15 standard, the X3 committee also addressed how ASCII should be transmitted (least significant bit first) and recorded on perforated tape. They proposed a 9-track standard for magnetic tape and attempted to deal with some punched card formats. The X3.2 subcommittee designed ASCII based on

Was working on IBM's Project Stretch in the mid-1950s. His letter tells the story. Not being a regular reader of your magazine, I heard about the question in the November 1976 issue regarding the origin of the term "byte" from a colleague who knew that I had perpetrated this piece of jargon [see page 77 of November 1976 BYTE, "Olde Englishe"]. I searched my files and could not locate
