The megabyte is a multiple of the unit byte for digital information. Its recommended unit symbol is MB. The unit prefix mega is a multiplier of 1 000 000 (10⁶) in the International System of Units (SI). Therefore, one megabyte is one million bytes of information. This definition has been incorporated into the International System of Quantities. In the computer and information technology fields, other definitions have been used that arose for historical reasons of convenience. A common usage has been to designate one megabyte as 1 048 576 bytes (2²⁰ B),
A power of two multiple of the unit of address resolution (byte or word). Converting the index of an item in an array into the memory address offset of the item then requires only a shift operation rather than a multiplication. In some cases this relationship can also avoid the use of division operations. As a result, most modern computer designs have word sizes (and other operand sizes) that are
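Where the element size is a power of two, the multiply can be replaced with a shift. A minimal C sketch, assuming a hypothetical 8-byte element size (not a value taken from the article):

    #include <stdio.h>

    /* With a power-of-two element size, converting an array index to a
       byte offset needs only a shift. */
    enum { ELEM_SIZE = 8, ELEM_SHIFT = 3 };  /* ELEM_SHIFT = log2(ELEM_SIZE) */

    int main(void) {
        unsigned index = 42;
        unsigned off_mul = index * ELEM_SIZE;    /* general case      */
        unsigned off_shl = index << ELEM_SHIFT;  /* power-of-two case */
        printf("%u %u\n", off_mul, off_shl);     /* prints: 336 336   */
        return 0;
    }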
A 60-bit word without having to split a byte between one word and the next. If longer bytes were needed, 60 bits would, of course, no longer be ideal. With present applications, 1, 4, and 6 bits are the really important cases. With 64-bit words, it would often be necessary to make some compromises, such as leaving 4 bits unused in a word when dealing with 6-bit bytes at
A 64-bit word length for Stretch. It also supports NSA's requirement for 8-bit bytes. Werner's term "byte" was first popularized in this memo. NB. This timeline erroneously specifies the birth date of the term "byte" as July 1956, while Buchholz actually used the term as early as June 1956. [...] 60 is a multiple of 1, 2, 3, 4, 5, and 6. Hence bytes of length from 1 to 6 bits can be packed efficiently into
A birth certificate. But I am sure that "byte" is coming of age in 1977 with its 21st birthday. Many have assumed that byte, meaning 8 bits, originated with the IBM System/360, which spread such bytes far and wide in the mid-1960s. The editor is correct in pointing out that the term goes back to the earlier Stretch computer (but incorrect in that Stretch was the first, not
A computer architecture is designed, the choice of a word size is of substantial importance. There are design considerations which encourage particular bit-group sizes for particular uses (e.g. for addresses), and these considerations point to different sizes for different uses. However, considerations of economy in design strongly push for one size, or a very few sizes related by multiples or fractions (submultiples) to
A convenience, because 1024 is approximately 1000. This definition was popular in early decades of personal computing, with products like the Tandon 5¼-inch DD floppy format (holding 368 640 bytes) being advertised as "360 KB", following the 1024-byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted, and
A count field, by a delimiting character, or by an additional bit called, e.g., a flag or word mark. Such machines often use binary-coded decimal in 4-bit digits, or in 6-bit characters, for numbers. This class of machines includes the IBM 702, IBM 705, IBM 7080, IBM 7010, UNIVAC 1050, IBM 1401, IBM 1620, and RCA 301. Most of these machines work on one unit of memory at a time and since each instruction or datum
A disk drive is the product of the sector size, number of sectors per track, number of tracks per side, and the number of disk platters in the drive. Changes in any of these factors would not usually double the size. Depending on compression methods and file format, a megabyte of data can roughly be: The novel The Picture of Dorian Gray, by Oscar Wilde, hosted on Project Gutenberg as an uncompressed plain text file,
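To make the capacity product concrete, here is a small C sketch with invented drive geometry (none of these numbers come from the article); note that the result is not a power of two:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical drive geometry, chosen only for illustration. */
        long sector_size = 512;   /* bytes per sector           */
        long sectors     = 63;    /* sectors per track          */
        long tracks      = 1024;  /* tracks per side            */
        long sides       = 4;     /* 2 platters, 2 sides each   */

        long capacity = sector_size * sectors * tracks * sides;
        printf("capacity: %ld bytes (~%.1f MB)\n",
               capacity, capacity / 1e6);  /* ~132.1 MB, not a power of two */
        return 0;
    }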
A floating point instruction can only address words while an integer arithmetic instruction can specify a field length of 1–64 bits, a byte size of 1–8 bits and an accumulator offset of 0–127 bits. In a byte-addressable machine with storage-to-storage (SS) instructions, there are typically move instructions to copy one or multiple bytes from one arbitrary location to another. In a byte-oriented (byte-addressable) machine without SS instructions, moving
A fresh design has to coexist as an alternative size to the original word size in a backward compatible design. The original word size remains available in future designs, forming the basis of a size family. In the mid-1970s, DEC designed the VAX to be a 32-bit successor of the 16-bit PDP-11. They used word for a 16-bit quantity, while longword referred to a 32-bit quantity; this terminology
A full transmission unit usually additionally includes a start bit, 1 or 2 stop bits, and possibly a parity bit, and thus its size may vary from seven to twelve bits for five to eight bits of actual data. For synchronous communication the error checking usually uses bytes at the end of a frame. Terms used here to describe the structure imposed by the machine design, in addition to bit, are listed below. Byte denotes
A group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term
A location in memory, is typically a hardware word (here, "hardware word" means the full-sized natural word of the processor, as opposed to any other definition used). Documentation for older computers with fixed word size commonly states memory sizes in words rather than bytes or characters. The documentation sometimes uses metric prefixes correctly, sometimes with rounding, e.g., 65 kilowords (kW) meaning 65,536 words, and sometimes uses them incorrectly, with kilowords (kW) meaning 1024 words (2¹⁰) and megawords (MW) meaning 1,048,576 words (2²⁰). With standardization on 8-bit bytes and byte addressability, stating memory sizes in bytes, kilobytes, and megabytes with powers of 1024 rather than 1000 has become
A number of bits, treated as a unit, and usually representing a character or a part of a character. NOTES: 1. The number of bits in a byte is fixed for a given data processing system. 2. The number of bits in a byte is usually 8. We received the following from W Buchholz, one of the individuals who
A power of two times the size of a byte. As computer designs have grown more complex, the central importance of a single word size to an architecture has decreased. Although more capable hardware can use a wider variety of sizes of data, market forces exert pressure to maintain backward compatibility while extending processor capability. As a result, what might have been the central word size in
A primary size. That preferred size becomes the word size of the architecture. Character size was in the past (pre-variable-sized character encoding) one of the influences on unit of address resolution and the choice of word size. Before the mid-1960s, characters were most often stored in six bits; this allowed no more than 64 characters, so the alphabet was limited to upper case. Since it
A quantity that conveniently expresses the binary architecture of digital computer memory. Standards bodies have deprecated this binary usage of the mega- prefix in favor of a new set of binary prefixes, by means of which the quantity 2²⁰ B is named mebibyte (symbol MiB). The unit megabyte is commonly used for 1000² (one million) bytes or 1024² bytes. The interpretation of using base 1024 originated as technical jargon for
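The two interpretations differ by roughly 5% at the mega- scale, which a few lines of C can show (the constant names here are mine):

    #include <stdio.h>

    int main(void) {
        long long mb  = 1000LL * 1000LL;  /* SI megabyte: 1,000,000 bytes */
        long long mib = 1024LL * 1024LL;  /* mebibyte:    1,048,576 bytes */
        printf("1 MB  = %lld bytes\n", mb);
        printf("1 MiB = %lld bytes (%.1f%% larger)\n",
               mib, 100.0 * (mib - mb) / mb);  /* ~4.9% larger */
        return 0;
    }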
A shorter word (16 or 32 bits) may be used in contexts where the range of a wider word is not needed (especially where this can save considerable stack space or cache memory space). For example, Microsoft's Windows API maintains the programming language definition of WORD as 16 bits, despite the fact that the API may be used on a 32- or 64-bit x86 processor, where the standard word size would be 32 or 64 bits, respectively. Data structures containing such different sized words refer to them as: A similar phenomenon has developed in Intel's x86 assembly language – because of
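A sketch of that fixed-size naming convention using C's exact-width types (spelled out with <stdint.h> here rather than taken from <windows.h>):

    #include <stdint.h>
    #include <stdio.h>

    /* WORD stays 16 bits regardless of the processor's native word size. */
    typedef uint16_t WORD;   /* 16 bits, the 8086-era "word" */
    typedef uint32_t DWORD;  /* "double word", 32 bits       */
    typedef uint64_t QWORD;  /* "quad word", 64 bits         */

    int main(void) {
        printf("WORD=%zu DWORD=%zu QWORD=%zu bytes\n",
               sizeof(WORD), sizeof(DWORD), sizeof(QWORD));  /* 2 4 8 */
        return 0;
    }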
A single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol (RFC 791) refer to an 8-bit byte as an octet. Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on
A single byte from one arbitrary location to another is typically: Individual bytes can be accessed on a word-oriented machine in one of two ways. Bytes can be manipulated by a combination of shift and mask operations in registers. Moving a single byte from one arbitrary location to another may require the equivalent of the following: Alternatively many word-oriented machines implement byte operations with instructions using special byte pointers in registers or memory. For example,
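The shift-and-mask technique mentioned above can be sketched in C, assuming a 32-bit word and least-significant-byte-first numbering (both assumptions are mine):

    #include <stdint.h>
    #include <stdio.h>

    /* Extract byte n (0 = least significant) from a 32-bit word. */
    static uint8_t get_byte(uint32_t word, unsigned n) {
        return (uint8_t)((word >> (8 * n)) & 0xFF);
    }

    /* Replace byte n of *word: mask the old byte out, shift the new one in. */
    static void put_byte(uint32_t *word, unsigned n, uint8_t b) {
        uint32_t mask = (uint32_t)0xFF << (8 * n);
        *word = (*word & ~mask) | ((uint32_t)b << (8 * n));
    }

    int main(void) {
        uint32_t src = 0xAABBCCDD, dst = 0;
        put_byte(&dst, 2, get_byte(src, 1));     /* move one byte across words */
        printf("0x%08X\n", (unsigned)dst);       /* prints: 0x00CC0000 */
        return 0;
    }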
A unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates. The lowercase letter o for octet
A unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits". He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits... When we speak of bytes in connection with MIX we shall confine ourselves to
A variable number of cycles, depending on the size of the operands. The memory model of an architecture is strongly influenced by the word size. In particular, the resolution of a memory address, that is, the smallest unit that can be designated by an address, has often been chosen to be the word. In this approach, the word-addressable machine approach, address values which differ by one designate adjacent memory words. This
is 0.429 MB. Great Expectations is 0.994 MB, and Moby Dick is 1.192 MB. The human genome consists of DNA representing 800 MB of data. The parts that differentiate one person from another can be compressed to 4 MB.

Byte

The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode
2210-597: Is 1024 bytes = 1024 bytes, one mebibyte (1 MiB) is 1024 bytes = 1 048 576 bytes, and so on. In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" ( KKB ). The IEC adopted the IUPAC proposal and published the standard in January 1999. The IEC prefixes are part of the International System of Quantities . The IEC further specified that the kilobyte should only be used to refer to 1000 bytes. Lawsuits arising from alleged consumer confusion over
2295-451: Is an important characteristic of any specific processor design or computer architecture . The size of a word is reflected in many aspects of a computer's structure and operation; the majority of the registers in a processor are usually word-sized and the largest datum that can be transferred to and from the working memory in a single operation is a word in many (not all) architectures. The largest possible address size, used to designate
2380-510: Is coined from bite , but respelled to avoid accidental mutation to bit .) A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 [ fr ] computer.) Block refers to
2465-460: Is defined as eight bits. It is a signed data type, holding values from −128 to 127. .NET programming languages, such as C# , define byte as an unsigned type, and the sbyte as a signed data type, holding values from 0 to 255, and −128 to 127 , respectively. In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. For asynchronous communication
2550-455: Is defined as the symbol for octet in IEC ;80000-13 and is commonly used in languages such as French and Romanian , and is also combined with metric prefixes for multiples, for example ko and Mo. More than one system exists to define unit multiples based on the byte. Some systems are based on powers of 10 , following the International System of Units (SI), which defines for example
2635-672: Is defined to equal 1,000 bytes—is recommended by the International Electrotechnical Commission (IEC). The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000 bytes. The additional prefixes ronna- for 1000 and quetta- for 1000 were adopted by the International Bureau of Weights and Measures (BIPM) in 2022. This definition is most commonly used for data-rate units in computer networks , internal bus, hard drive and flash media transfer speeds, and for
2720-483: Is efficient in time and space to have the word size be a multiple of the character size, word sizes in this period were usually multiples of 6 bits (in binary machines). A common choice then was the 36-bit word , which is also a good size for the numeric properties of a floating point format. After the introduction of the IBM System/360 design, which uses eight-bit characters and supports lower-case letters,
2805-618: Is equal to 1,024 (i.e., 2 ) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies ( BIPM , IEC , NIST ). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024 bytes. The natural binary counterparts to ronna- and quetta- were given in a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) as robi- (Ri, 1024 ) and quebi- (Qi, 1024 ), but have not yet been adopted by
2890-433: Is just as easy to use all six bits in alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. All this can be done by pulling the appropriate shift diagonals. An analogous matrix arrangement is used to change from serial to parallel operation at the output of the adder. [...] byte: A string that consists of
2975-405: Is natural in machines which deal almost always in word (or multiple-word) units, and has the advantage of allowing instructions to use minimally sized fields to contain addresses, which can permit a smaller instruction size or a larger variety of instructions. When byte processing is to be a significant part of the workload, it is usually more advantageous to use the byte , rather than the word, as
3060-466: Is often called a nibble , also nybble , which is conveniently represented by a single hexadecimal digit. The term octet unambiguously specifies a size of eight bits. It is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe; however, this usage is no longer common. The exact origin of
3145-461: Is several units long, each instruction takes several cycles just to access memory. These machines are often quite slow because of this. For example, instruction fetches on an IBM 1620 Model I take 8 cycles (160 μs) just to read the 12 digits of the instruction (the Model II reduced this to 6 cycles, or 4 cycles if the instruction did not need both address fields). Instruction execution takes
3230-528: Is the x86 family, of which processors of three different word lengths (16-bit, later 32- and 64-bit) have been released, while word continues to designate a 16-bit quantity. As software is routinely ported from one word-length to the next, some APIs and documentation define or refer to an older (and thus shorter) word-length than the full word length on the CPU that software may be compiled for. Also, similar to how bytes are used for small numbers in many programs,
3315-510: Is the 64-bit member of that architecture family, continues to refer to 16-bit halfword s, 32-bit word s, and 64-bit doubleword s, and additionally features 128-bit quadword s. In general, new processors must use the same data word lengths and virtual address widths as an older processor to have binary compatibility with that older processor. Often carefully written source code – written with source-code compatibility and software portability in mind – can be recompiled to run on
3400-465: Is the same as the terminology used for the PDP-11. This was in contrast to earlier machines, where the natural unit of addressing memory would be called a word , while a quantity that is one half a word would be called a halfword . In fitting with this scheme, a VAX quadword is 64 bits. They continued this 16-bit word/32-bit longword/64-bit quadword terminology with the 64-bit Alpha . Another example
3485-457: Is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (ie, different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite , but respelled to avoid accidental mutation to bit. ) System/360 took over many of
3570-597: The IRE Transactions on Electronic Computers , June 1959, page 121. The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch) , edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows: Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character
3655-649: The American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard , which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and
3740-626: The International Union of Pure and Applied Chemistry 's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary). In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024. Thus one kibibyte (1 KiB)
3825-502: The PDP-10 byte pointer contained the size of the byte in bits (allowing different-sized bytes to be accessed), the bit position of the byte within the word, and the word address of the data. Instructions could automatically adjust the pointer to the next byte on, for example, load and deposit (store) operations. Different amounts of memory are used to store data values with different degrees of precision. The commonly used sizes are usually
3910-526: The bit endianness . The size of the byte has historically been hardware -dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes, and persisted, in legacy systems, into
3995-426: The 36-bit word being especially common on mainframe computers . The introduction of ASCII led to the move to systems with word lengths that were a multiple of 8-bits, with 16-bit machines being popular in the 1970s before the move to modern processors with 32 or 64 bits. Special-purpose designs like digital signal processors , may have any word length from 4 to 80 bits. The size of a word can sometimes differ from
4080-512: The Adder. The Adder may accept all or only some of the bits. Assume that it is desired to operate on 4 bit decimal digits , starting at the right. The 0-diagonal is pulsed first, sending out the six bits 0 to 5, of which the Adder accepts only the first four (0-3). Bits 4 and 5 are ignored. Next, the 4 diagonal is pulsed. This sends out bits 4 to 9, of which the last two are again ignored, and so on. It
4165-515: The IEC and ISO. An alternative system of nomenclature for the same units (referred to here as the customary convention ), in which 1 kilobyte (KB) is equal to 1,024 bytes, 1 megabyte (MB) is equal to 1024 bytes and 1 gigabyte (GB) is equal to 1024 bytes is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. While confusing and incorrect,
4250-512: The Shift Matrix to be used to convert a 60-bit word , coming from Memory in parallel, into characters , or 'bytes' as we have called them, to be sent to the Adder serially. The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath. Pulsing any diagonal line will send the six bits stored along that line to
4335-478: The Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing. Since then the term byte has generally meant 8 bits, and it has thus passed into the general vocabulary. Are there any other terms coined especially for
the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote
the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1 000 000 000 (10⁹) bytes (the decimal definition), rather than the binary definition (2³⁰, i.e., 1 073 741 824). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed
the byte multiples that needed to be expressed by the powers of 2 but lacked a convenient name. As 1024 (2¹⁰) approximates 1000 (10³), roughly corresponding to the SI prefix kilo-, it was a convenient term to denote the binary multiple. In 1999, the International Electrotechnical Commission (IEC) published standards for binary prefixes requiring the use of megabyte to denote 1000² bytes, and mebibyte to denote 1024² bytes. By
4675-452: The capacities of most storage media , particularly hard drives , flash -based storage, and DVDs . Operating systems that use this definition include macOS , iOS , Ubuntu , and Debian . It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance . A system of units based on powers of 2 in which 1 kibibyte (KiB)
4760-453: The computer field which have found their way into general dictionaries of English language? 1956 Summer: Gerrit Blaauw , Fred Brooks , Werner Buchholz , John Cocke and Jim Pomerene join the Stretch team. Lloyd Hunter provides transistor leadership. 1956 July [ sic ]: In a report Werner Buchholz lists the advantages of
4845-635: The customary convention is used by the Microsoft Windows operating system and random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone , AT&T , Orange and Telstra . For storage capacity, the customary convention was used by macOS and iOS through Mac OS X 10.6 Snow Leopard and iOS 10, after which they switched to units based on powers of 10. Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for
4930-454: The decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state. ' " Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital . Western Digital settled the challenge and added explicit disclaimers to products that
5015-603: The end of 2009, the IEC Standard had been adopted by the IEEE , EU , ISO and NIST . Nevertheless, the term megabyte continues to be widely used with different meanings. In this convention, one thousand megabytes (1000 MB) is equal to one gigabyte (1 GB), where 1 GB is one billion bytes. Randomly addressable semiconductor memory doubles in size for each address lane added to an integrated circuit package, which favors counts that are powers of two. The capacity of
5100-416: The expected due to backward compatibility with earlier computers. If multiple compatible variations or a family of processors share a common architecture and instruction set but differ in their word sizes, their documentation and software may become notationally complex to accommodate the difference (see Size families below). Depending on how a computer is organized, word-size units may be used for: When
5185-422: The former sense of the word, harking back to the days when bytes were not yet standardized." The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8080 , the direct predecessor of the 8086 , could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity
the input and output. However, the LINK Computer can be equipped to edit out these gaps and to permit handling of bytes which are split between words. [...] [...] The maximum input-output byte size for serial operation will now be 8 bits, not counting any error detection and correction bits. Thus, the Exchange will operate on an 8-bit byte basis, and any input-output units with less than 8 bits per byte will leave
the instruction. It is a deliberate respelling of bite to avoid accidental mutation to bit. Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which
the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there are no gaps between two bytes. This means every bit in memory is part of a byte. Java's primitive data type byte
5525-465: The last, of IBM's second-generation transistorized computers to be developed). The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch . A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use
5610-402: The norm, although there is some use of the IEC binary prefixes . Several of the earliest computers (and a few modern as well) use binary-coded decimal rather than plain binary , typically having a word size of 10 or 12 decimal digits, and some early decimal computers have no fixed word length at all. Early binary systems tended to use word lengths that were some multiple of 6-bits, with
5695-455: The number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program. [...] Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long. Figure 2 shows
5780-460: The physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches. The prominence of
5865-458: The potential ambiguity of the term "byte". The symbol for octet, 'o', also conveniently eliminates the ambiguity in the symbol 'B' between byte and bel . The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in
5950-453: The prefix kilo as 1000 (10 ); other systems are based on powers of 2 . Nomenclature for these systems has led to confusion. Systems based on powers of 10 use standard SI prefixes ( kilo , mega , giga , ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes ( kibi , mebi , gibi , ...) and their corresponding symbols (Ki, Mi, Gi, ...) or they might use
6035-525: The prefixes K, M, and G, creating ambiguity when the prefixes M or G are used. While the difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than power-of-2-based tebibyte. Definition of prefixes using powers of 10—in which 1 kilobyte (symbol kB)
6120-428: The remaining bits blank. The resultant gaps can be edited out later by programming [...] Word (computer architecture) In computing , a word is the natural unit of data used by a particular processor design. A word is a fixed-sized datum handled as a unit by the instruction set or the hardware of the processor. The number of bits or digits in a word (the word size , word width , or word length )
6205-450: The same term even within a single vendor. These terms include double word , half word , long word , quad word , slab , superword and syllable . There are also informal terms. e.g., half byte and nybble for 4 bits, octal K for 1000 8 . Contemporary computer memory has a binary architecture making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as
6290-426: The standard size of a character (or more accurately, a byte ) becomes eight bits. Word sizes thereafter are naturally multiples of eight bits, with 16, 32, and 64 bits being commonly used. Early machine designs included some that used what is often termed a variable word length . In this type of organization, an operand has no fixed length. Depending on the machine and the instruction, the length might be denoted by
6375-590: The support for various sizes (and backward compatibility) in the instruction set, some instruction mnemonics carry "d" or "q" identifiers denoting "double-", "quad-" or "double-quad-", which are in terms of the architecture's original 16-bit word size. An example with a different word size is the IBM System/360 family. In the System/360 architecture , System/370 architecture and System/390 architecture, there are 8-bit byte s, 16-bit halfword s, 32-bit word s and 64-bit doubleword s. The z/Architecture , which
6460-535: The term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. The unit symbol for the byte is specified in IEC 80000-13 , IEEE 1541 and the Metric Interchange Format as the upper-case character B. In the International System of Quantities (ISQ), B is also the symbol of the bel ,
6545-724: The twenty-first century. In this era, bit groupings in the instruction stream were often referred to as syllables or slab , before the term byte became common. The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in
6630-433: The ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively. The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE). Internationally, the unit octet explicitly defines a sequence of eight bits, eliminating
6715-438: The unit of address resolution. Address values which differ by one designate adjacent bytes in memory. This allows an arbitrary character within a character string to be addressed straightforwardly. A word can still be addressed, but the address to be used requires a few more bits than the word-resolution alternative. The word size needs to be an integer multiple of the character size in this organization. This addressing approach
6800-425: The usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled. Many programming languages define the data type byte . The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that
6885-404: Was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256 256 bytes formatted, and was advertised as "256k". Some devices were advertised using a mixture of the two definitions: most notably, floppy disks advertised as "1.44 MB" have an actual capacity of 1440 KiB , the equivalent of 1.47 MB or 1.41 MiB. In 1995,
6970-523: Was in the context of the input-output equipment of the 1950s, which handled six bits at a time. The possibility of going to 8-bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter . The first published reference to the term occurred in 1959 in a paper ' Processing Data in Bits and Pieces ' by G A Blaauw , F P Brooks Jr and W Buchholz in
7055-530: Was jointly developed by Rand , MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31 . Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army ( FIELDATA ) and Navy . These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called
7140-457: Was used in the IBM 360, and has been the most common approach in machines designed since then. When the workload involves processing fields of different sizes, it can be advantageous to address to the bit. Machines with bit addressing may have some instructions that use a programmer-defined byte size and other instructions that operate on fixed data sizes. As an example, on the IBM 7030 ("Stretch"),
7225-473: Was working on IBM's Project Stretch in the mid 1950s. His letter tells the story. Not being a regular reader of your magazine, I heard about the question in the November 1976 issue regarding the origin of the term "byte" from a colleague who knew that I had perpetrated this piece of jargon [see page 77 of November 1976 BYTE, "Olde Englishe"] . I searched my files and could not locate