
Unicode


Unicode, formally The Unicode Standard, is a text encoding standard maintained by the Unicode Consortium designed to support the use of text in all of the world's writing systems that can be digitized. Version 16.0 of the standard defines 154,998 characters and 168 scripts used in various ordinary, literary, academic, and technical contexts. Many common characters, including numerals, punctuation, and other symbols, are unified within

A byte order mark or escape sequences; compressing schemes try to minimize the number of bytes used per code unit (such as SCSU and BOCU). Although UTF-32BE and UTF-32LE are simpler CESes, most systems working with Unicode use either UTF-8, which is backward compatible with fixed-length ASCII and maps Unicode code points to variable-length sequences of octets, or UTF-16BE, which

A codespace: a sequence of integers called code points in the range from 0 to 1,114,111, notated according to the standard as U+0000–U+10FFFF. The codespace is a systematic, architecture-independent representation of The Unicode Standard; actual text is processed as binary data via one of several Unicode encodings, such as UTF-8. In this normative notation, the two-character prefix U+ always precedes

A string of the letters "ab̲c𐐀"—that is, a string containing a Unicode combining character (U+0332 ̲ COMBINING LOW LINE) as well as a supplementary character (U+10400 𐐀 DESERET CAPITAL LETTER LONG I). This string has several Unicode representations which are logically equivalent, yet each is suited to a different set of circumstances or range of requirements: Note in particular that 𐐀

A block is always a multiple of 16, and is often a multiple of 128, but is otherwise arbitrary. Characters required for a given script may be spread out over several different, potentially disjunct blocks within the codespace. Each code point is assigned a classification, listed as the code point's General Category property. Here, at the uppermost level, code points are categorized as one of Letter, Mark, Number, Punctuation, Symbol, Separator, or Other. Under each category, each code point
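
The General Category property can be inspected directly; a minimal Python sketch using the standard unicodedata module (the sample characters are arbitrary illustrations):

    import unicodedata

    # The first letter of the two-letter category code is the top-level
    # category: L(etter), M(ark), N(umber), P(unctuation), S(ymbol),
    # Z (separator), or C (other).
    for ch in ["A", "5", ",", "\u0332", "\U00010400"]:
        print(f"U+{ord(ch):04X} -> {unicodedata.category(ch)}")
    # U+0041 -> Lu, U+0035 -> Nd, U+002C -> Po, U+0332 -> Mn, U+10400 -> Lu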

A calendar year and with rare cases where the scheduled release had to be postponed. For instance, in April 2020, a month after version 13.0 was published, the Unicode Consortium announced they had changed the intended release date for version 14.0, pushing it back six months to September 2021 due to the COVID-19 pandemic. Unicode 16.0, the latest version, was released on 10 September 2024. It added 5,185 characters and seven new scripts: Garay, Gurung Khema, Kirat Rai, Ol Onal, Sunuwar, Todhri, and Tulu-Tigalari. Thus far,

A code space, a code page, or character map. Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper-case letters, numerals and some punctuation only. The advent of digital computer systems allows more elaborate encodings (such as Unicode) to support hundreds of written languages. The most popular character encoding on

A comprehensive catalog of character properties, including those needed for supporting bidirectional text, as well as visual charts and reference data sets to aid implementers. Previously, The Unicode Standard was sold as a print volume containing the complete core specification, standard annexes, and code charts. However, version 5.0, published in 2006, was the last version printed this way. Starting with version 5.2, only

A document is submitted directly for approval as a draft International Standard (DIS) to the ISO member bodies or as a final draft International Standard (FDIS), if the document was developed by an international standardizing body recognized by the ISO Council. The first step, a proposal of work (New Proposal), is approved at the relevant subcommittee or technical committee (e.g., SC 29 and JTC 1 respectively in

A full semantic duplicate of the Latin alphabet, because legacy CJK encodings contained both "fullwidth" (matching the width of CJK characters) and "halfwidth" (matching ordinary Latin script) characters. The Unicode Bulldog Award is given to people deemed to be influential in Unicode's development, with recipients including Tatsuo Kobayashi, Thomas Milo, Roozbeh Pournader, Ken Lunde, and Michael Everson. The origins of Unicode can be traced back to

A handful of scripts—often primarily between a given script and Latin characters—not between a large number of scripts, and not with all of the scripts supported being treated in a consistent manner. The philosophy that underpins Unicode seeks to encode the underlying characters—graphemes and grapheme-like units—rather than graphical distinctions considered mere variant glyphs thereof, which are instead best handled by


A long process that commonly starts with the proposal of new work within a committee. International Standards are developed by ISO technical committees (TC) and subcommittees (SC) by a process with six steps. The TC/SC may set up working groups (WG) of experts for

A low-surrogate code point forms a surrogate pair in UTF-16 in order to represent code points greater than U+FFFF. In principle, these code points cannot otherwise be used, though in practice this rule is often ignored, especially when not using UTF-16. A small set of code points are guaranteed never to be assigned to characters, although third parties may make independent use of them at their discretion. There are 66 of these noncharacters: U+FDD0–U+FDEF and

A particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points higher than the length of the code unit, such as above 256 for eight-bit units,

A process known as transcoding. Some of these are cited below. The most used character encoding on the web is UTF-8, used in 98.2% of surveyed web sites, as of May 2024. In application programs and operating system tasks, both UTF-8 and UTF-16 are popular options.
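
As a sketch of what such transcoding involves, the following Python snippet (the choice of Windows-1252 as the legacy encoding is an arbitrary example) decodes legacy bytes to code points, then re-encodes them as UTF-8:

    # Transcoding: legacy bytes -> Unicode code points -> UTF-8 bytes.
    legacy = "café".encode("cp1252")   # b'caf\xe9' in Windows-1252
    text = legacy.decode("cp1252")     # back to abstract code points
    utf8 = text.encode("utf-8")        # b'caf\xc3\xa9' in UTF-8
    print(legacy, utf8)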

A project run by Deborah Anderson at the University of California, Berkeley was founded in 2002 with the goal of funding proposals for scripts not yet encoded in the standard. The project has become a major source of proposed additions to the standard in recent years. The Unicode Consortium together with the ISO have developed a shared repertoire following the initial publication of The Unicode Standard: Unicode and

A properly engineered design, 16 bits per character are more than sufficient for this purpose. This design decision was made based on the assumption that only scripts and characters in "modern" use would require encoding: Unicode gives higher priority to ensuring utility for the future than to preserving past antiquities. Unicode aims in the first instance at the characters published in modern text (e.g. in

A proposal to form a new global standards body. In October 1946, ISA and UNSCC delegates from 25 countries met in London and agreed to join forces to create the International Organization for Standardization. The organization officially began operations on 23 February 1947. ISO Standards were originally known as ISO Recommendations (ISO/R), e.g., "ISO 1" was issued in 1951 as "ISO/R 1". ISO

A relatively small number of standards, ISO standards are not available free of charge, but rather for a purchase fee, which has been seen by some as unaffordable for small open-source projects. The process of developing standards within ISO was criticized around 2007 as being too difficult for timely completion of large and complex standards, and some members were failing to respond to ballots, causing problems in completing

A single glyph. The former simplifies the text handling system, but the latter allows any letter/diacritic combination to be used in text. Ligatures pose similar problems. Exactly how to handle glyph variants is a choice that must be made when constructing a particular character encoding. Some writing systems, such as Arabic and Hebrew, need to accommodate things like graphemes that are joined in different ways in different contexts, but represent
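
Unicode normalization mediates between the precomposed and combining approaches; a minimal Python sketch using the standard unicodedata module:

    import unicodedata

    precomposed = "\u00E9"    # é as a single precomposed character
    combining = "e\u0301"     # 'e' followed by COMBINING ACUTE ACCENT
    print(precomposed == combining)                                # False
    print(unicodedata.normalize("NFC", combining) == precomposed)  # True
    print(unicodedata.normalize("NFD", precomposed) == combining)  # True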

A single character per code unit. However, due to the emergence of more sophisticated character encodings, the distinction between these terms has become important. "Code page" is a historical name for a coded character set. Originally, a code page referred to a specific page number in the IBM standard character set manual, which would define a particular character encoding. Other vendors, including Microsoft, SAP, and Oracle Corporation, also published their own sets of code pages;


A stream of octets (bytes). The purpose of this decomposition is to establish a universal set of characters that can be encoded in a variety of ways. To describe this model precisely, Unicode uses its own set of terminology to describe its process: An abstract character repertoire (ACR) is the full set of abstract characters that a system supports. Unicode has an open repertoire, meaning that new characters will be added to

A text. The exclusion of surrogates and noncharacters leaves 1,111,998 code points available for use.

Text encoding

Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using computers. The numerical values that make up a character encoding are known as code points and collectively comprise

A total of 168 scripts are included in the latest version of Unicode (covering alphabets, abugidas and syllabaries), although there are still scripts that are not yet encoded, particularly those mainly used in historical, liturgical, and academic contexts. Further additions of characters to the already encoded scripts, as well as symbols, in particular for mathematics and music (in the form of notes and rhythmic symbols), also occur. The Unicode Roadmap Committee (Michael Everson, Rick McGowan, Ken Whistler, V.S. Umamaheswaran) maintains

A universal encoding than the original Unicode architecture envisioned. Version 1.0 of Microsoft's TrueType specification, published in 1992, used the name "Apple Unicode" instead of "Unicode" for the Platform ID in the naming table. The Unicode Consortium is a nonprofit organization that coordinates Unicode's development. Full members include most of the main computer software and hardware companies (and few others) with any interest in text-processing standards, including Adobe, Apple, Google, IBM, Meta (previously as Facebook), Microsoft, Netflix, and SAP. Over

A well-defined and extensible encoding system, has replaced most earlier character encodings, but the path of code development to the present is fairly well known. The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name "Baudot" has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and

A written code point, and the code points themselves are written as hexadecimal numbers. At least four hexadecimal digits are always written, with leading zeros prepended as needed. For example, the code point U+00F7 ÷ DIVISION SIGN is padded with two leading zeros, but U+13254 𓉔 EGYPTIAN HIEROGLYPH O004 is not padded. There are a total of 2¹⁶ + (2²⁰ − 2¹¹) = 1,112,064 valid code points within
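
Both the padding rule and the code point arithmetic are easy to verify; a short Python sketch:

    # 2**16 BMP code points plus 2**20 supplementary code points,
    # minus the 2**11 surrogates.
    total = 2**16 + (2**20 - 2**11)
    print(total)                # 1112064 valid code points
    print(total - 66)           # 1111998 after excluding the noncharacters
    print(f"U+{0xF7:04X}")      # U+00F7 (padded to four digits)
    print(f"U+{0x13254:04X}")   # U+13254 (already five digits, no padding)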

Is backward compatible with fixed-length UCS-2BE and maps Unicode code points to variable-length sequences of 16-bit words. See comparison of Unicode encodings for a detailed discussion. Finally, there may be a higher-level protocol which supplies additional information to select the particular variant of a Unicode character, particularly where there are regional variants that have been 'unified' in Unicode as

2755-637: Is "to develop worldwide Information and Communication Technology (ICT) standards for business and consumer applications." There was previously also a JTC 2 that was created in 2009 for a joint project to establish common terminology for "standardization in the field of energy efficiency and renewable energy sources". It was later disbanded. As of 2022 , there are 167 national members representing ISO in their country, with each country having only one member. ISO has three membership categories, Participating members are called "P" members, as opposed to observing members, who are called "O" members. ISO

Is a voluntary organization whose members are recognized authorities on standards, each one representing one country. Members meet annually at a General Assembly to discuss the strategic objectives of ISO. The organization is coordinated by a central secretariat based in Geneva. A council with a rotating membership of 20 member bodies provides guidance and governance, including setting the annual budget of

Is abused, ISO should halt the process... ISO is an engineering old boys club and these things are boring so you have to have a lot of passion ... then suddenly you have an investment of a lot of money and lobbying and you get artificial results. The process is not set up to deal with intensive corporate lobbying and so you end up with something being a standard that is not clear. International Workshop Agreements (IWAs) are documents that establish


Is an abbreviation for "International Standardization Organization" or a similar title in another language, the letters do not officially represent an acronym or initialism. The organization provides this explanation of the name: Because 'International Organization for Standardization' would have different acronyms in different languages (IOS in English, OIN in French), our founders decided to give it

Is approved as an International Standard (IS) if a two-thirds majority of the P-members of the TC/SC is in favour and not more than one-quarter of the total number of votes cast are negative. After approval, the document is published by the ISO central secretariat, with only minor editorial changes introduced in the publication process before the publication as an International Standard. Except for

Is defined by a CEF. A character encoding scheme (CES) is the mapping of code units to a sequence of octets to facilitate storage on an octet-based file system or transmission over an octet-based network. Simple character encoding schemes include UTF-8, UTF-16BE, UTF-32BE, UTF-16LE, and UTF-32LE; compound character encoding schemes, such as UTF-16, UTF-32 and ISO/IEC 2022, switch between several simple schemes by using
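
The difference between simple and compound schemes is visible in the serialized bytes; a small Python sketch (the compound codec's byte order is platform-native, so the first result may vary):

    # The compound "utf-16" scheme prepends a byte order mark (BOM),
    # while the simple schemes fix the byte order and omit the BOM.
    print("A".encode("utf-16"))     # e.g. b'\xff\xfeA\x00' (BOM + native order)
    print("A".encode("utf-16-le"))  # b'A\x00'
    print("A".encode("utf-16-be"))  # b'\x00A'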

Is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding: Exactly what constitutes a character varies between character encodings. For example, for letters with diacritics, there are two distinct approaches that can be taken to encode them: they can be encoded either as a single unified character (known as a precomposed character), or as separate characters that combine into
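
A short Python sketch illustrating how the number of code units varies by encoding form (the sample characters are arbitrary):

    # Code units are 1 byte in UTF-8, 2 bytes in UTF-16, 4 bytes in UTF-32.
    for ch in ["A", "é", "€", "\U00010400"]:
        u8 = len(ch.encode("utf-8"))            # count of 8-bit units
        u16 = len(ch.encode("utf-16-le")) // 2  # count of 16-bit units
        u32 = len(ch.encode("utf-32-le")) // 4  # count of 32-bit units
        print(f"U+{ord(ch):04X}: UTF-8={u8} UTF-16={u16} UTF-32={u32}")
    # U+0041: 1/1/1, U+00E9: 2/1/1, U+20AC: 3/1/1, U+10400: 4/2/1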

Is funded by a combination of: International standards are the main products of ISO. It also publishes technical reports, technical specifications, publicly available specifications, technical corrigenda (corrections), and guides. ISO documents have strict copyright restrictions and ISO charges for most copies. As of 2020,

Is intended to suggest a unique, unified, universal encoding". In this document, entitled Unicode 88, Becker outlined a scheme using 16-bit characters: Unicode is intended to address the need for a workable, reliable world text encoding. Unicode could be roughly described as "wide-body ASCII" that has been stretched to 16 bits to encompass the characters of all the world's living languages. In

Is more than just a repertoire within which characters are assigned. To aid developers and designers, the standard also provides charts and reference data, as well as annexes explaining concepts germane to various scripts, providing guidance for their implementation. Topics covered by these annexes include character normalization, character composition and decomposition, collation, and directionality. Unicode text

Is preferred, usually in the larger context of locales. IBM's Character Data Representation Architecture (CDRA) designates entities with coded character set identifiers (CCSIDs), each of which is variously called a "charset", "character set", "code page", or "CHARMAP". The code unit size is equivalent to the bit measurement for the particular encoding: A code point is represented by a sequence of code units. The mapping

Is processed and stored as binary data using one of several encodings, which define how to translate the standard's abstracted codes for characters into sequences of bytes. The Unicode Standard itself defines three encodings: UTF-8, UTF-16, and UTF-32, though several others exist. Of these, UTF-8 is the most widely used by a large margin, in part due to its backwards-compatibility with ASCII. Unicode
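
The backwards-compatibility claim can be checked directly; a one-line Python sketch:

    # Pure ASCII text produces byte-for-byte identical output under
    # both the ASCII and UTF-8 encodings.
    s = "plain ASCII text"
    print(s.encode("ascii") == s.encode("utf-8"))   # True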

Is produced, for example, for audio and video coding standards is called a verification model (VM) (previously also called a "simulation and test model"). When a sufficient confidence in the stability of the standard under development is reached, a working draft (WD) is produced. This is in the form of a standard, but is kept internal to the working group for revision. When a working draft is sufficiently mature and


Is represented with either one 32-bit value (UTF-32), two 16-bit values (UTF-16), or four 8-bit values (UTF-8). Although each of those forms uses the same total number of bits (32) to represent the glyph, it is not obvious how the actual numeric byte values are related. As a result of having many character encoding methods in use (and the need for backward compatibility with archived data), many computer programs have been developed to translate data between character encoding schemes,

Is restricted. The organization that is known today as ISO began in 1926 as the International Federation of the National Standardizing Associations (ISA), which primarily focused on mechanical engineering. The ISA was suspended in 1942 during World War II but, after the war, the ISA was approached by the recently-formed United Nations Standards Coordinating Committee (UNSCC) with

Is the Basic Multilingual Plane (BMP), and contains the most commonly used characters. All code points in the BMP are accessed as a single code unit in UTF-16 encoding and can be encoded in one, two or three bytes in UTF-8. Code points in planes 1 through 16 (the supplementary planes) are accessed as surrogate pairs in UTF-16 and encoded in four bytes in UTF-8. Within each plane, characters are allocated within named blocks of related characters. The size of

Is then further subcategorized. In most cases, other properties must be used to adequately describe all the characteristics of any given code point. The 1024 points in the range U+D800–U+DBFF are known as high-surrogate code points, and code points in the range U+DC00–U+DFFF (1024 code points) are known as low-surrogate code points. A high-surrogate code point followed by
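
The surrogate-pair construction is simple arithmetic; a minimal Python sketch (the function name is illustrative):

    def to_surrogate_pair(cp):
        # Split a supplementary code point (above U+FFFF) into a
        # UTF-16 high/low surrogate pair.
        v = cp - 0x10000                  # remaining 20-bit value
        return 0xD800 + (v >> 10), 0xDC00 + (v & 0x3FF)

    high, low = to_surrogate_pair(0x10400)  # DESERET CAPITAL LETTER LONG I
    print(f"U+{high:04X} U+{low:04X}")      # U+D801 U+DC00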

The International Electrotechnical Commission. It is headquartered in Geneva, Switzerland. The three official languages of ISO are English, French, and Russian. The International Organization for Standardization in French is Organisation internationale de normalisation and in Russian, Международная организация по стандартизации (Mezhdunarodnaya organizatsiya po standartizatsii). Although one might think ISO

The World Wide Web is UTF-8, which is used in 98.2% of surveyed web sites, as of May 2024. In application programs and operating system tasks, both UTF-8 and UTF-16 are popular options. The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and

The typeface, through the use of markup, or by some other means. In particularly complex cases, such as the treatment of orthographical variants in Han characters, there is considerable disagreement regarding which differences justify their own encodings, and which are only graphical variants of other characters. At the most abstract level, Unicode assigns a unique number called a code point to each character. Many issues of visual representation—including size, shape, and style—are intended to be up to

The 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985,

The 1980s, to a group of individuals with connections to Xerox's Character Code Standard (XCCS). In 1987, Xerox employee Joe Becker, along with Apple employees Lee Collins and Mark Davis, started investigating the practicalities of creating a universal character set. With additional input from Peter Fenwick and Dave Opstad, Becker published a draft proposal in August 1988 for an "international/multilingual text character encoding system, tentatively called Unicode". He explained that "the name 'Unicode'

The 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code


The ISO Statutes. ISO was founded on 23 February 1947, and (as of July 2024) it has published over 25,000 international standards covering almost all aspects of technology and manufacturing. It has over 800 technical committees (TCs) and subcommittees (SCs) to take care of standards development. The organization develops and publishes international standards in technical and nontechnical fields, including everything from manufactured products and technology to food safety, transport, IT, agriculture, and healthcare. More specialized topics like electrical and electronic engineering are instead handled by

The ISO's Universal Coded Character Set (UCS) use identical character names and code points. However, the Unicode versions do differ from their ISO equivalents in two significant ways. While the UCS is a simple character map, Unicode specifies the rules, algorithms, and properties necessary to achieve interoperability between different platforms and languages. Thus, The Unicode Standard includes more information, covering in-depth topics such as bitwise encoding, collation, and rendering. It also provides

The International Organization for Standardization (ISO /ˈaɪsoʊ/) is an independent, non-governmental, international standard development organization composed of representatives from the national standards organizations of member countries. Membership requirements are given in Article 3 of

The Unicode standard is U+0000 to U+10FFFF, inclusive, divided into 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains the most commonly used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters. The following table shows examples of code point values: Consider

The average personal computer user's hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail), so it was very important at the time to make every bit count. The compromise solution that was eventually found and developed into Unicode was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to

The case of MPEG, the Moving Picture Experts Group). A working group (WG) of experts is typically set up by the subcommittee for the preparation of a working draft (e.g., MPEG is a collection of seven working groups as of 2023). When the scope of a new work is sufficiently clarified, some of the working groups may make an open request for proposals—known as a "call for proposals". The first document that

The central secretariat. The technical management board is responsible for more than 250 technical committees, who develop the ISO standards. ISO has a joint technical committee (JTC) with the International Electrotechnical Commission (IEC) to develop standards relating to information technology (IT). Known as JTC 1 and entitled "Information technology", it was created in 1987 and its mission

The codespace. (This number arises from the limitations of the UTF-16 character encoding, which can encode the 2¹⁶ code points in the range U+0000 through U+FFFF except for the 2¹¹ code points in the range U+D800 through U+DFFF, which are used as surrogate pairs to encode the 2²⁰ code points in the range U+10000 through U+10FFFF.) The Unicode codespace is divided into 17 planes, numbered 0 to 16. Plane 0

The confidence people have in the standards setting process", and alleged that ISO did not carry out its responsibility. He also said that Microsoft had intensely lobbied many countries that traditionally had not participated in ISO and stacked technical committees with Microsoft employees, solution providers, and resellers sympathetic to Office Open XML: When you have a process built on trust and when that trust

The core specification, published as a print-on-demand paperback, may be purchased. The full text, on the other hand, is published as a free PDF on the Unicode website. A practical reason for this publication method highlights the second significant difference between the UCS and Unicode—the frequency with which updated versions are released and new characters added. The Unicode Standard has regularly released annual expanded versions, occasionally with more than one version released in


The discretion of the software actually rendering the text, such as a web browser or word processor. However, partially with the intent of encouraging rapid adoption, the simplicity of this original model has become somewhat more elaborate over time, and various pragmatic concessions have been made over the course of the standard's development. The first 256 code points mirror the ISO/IEC 8859-1 standard, with

The document, the draft is then approved for submission as a Final Draft International Standard (FDIS) if a two-thirds majority of the P-members of the TC/SC are in favour and if not more than one-quarter of the total number of votes cast are negative. ISO will then hold a ballot among the national bodies where no technical changes are allowed (a yes/no final approval ballot), within a period of two months. It

The era had their own character codes, often six-bit, but usually had the ability to read tapes produced on IBM equipment. These BCD encodings were the precursors of IBM's Extended Binary-Coded Decimal Interchange Code (usually abbreviated as EBCDIC), an eight-bit encoding scheme developed in 1963 for the IBM System/360 that featured a larger character set, including lower case letters. In trying to develop universally interchangeable character encodings, researchers in

The following versions of The Unicode Standard have been published. Update versions, which do not include any changes to character repertoire, are signified by the third number (e.g., "version 4.0.1") and are omitted in the table below. The Unicode Consortium normally releases a new version of The Unicode Standard once a year. Version 17.0, the next major version, is projected to include 4301 new unified CJK characters. The Unicode Standard defines

The group. By the end of 1990, most of the work of remapping existing standards had been completed, and a final review draft of Unicode was ready. The Unicode Consortium was incorporated in California on 3 January 1991, and the first volume of The Unicode Standard was published that October. The second volume, now adding Han ideographs, was published in June 1992. In 1996, a surrogate character mechanism

The intent of trivializing the conversion of text already written in Western European scripts. To preserve the distinctions made by different legacy encodings, therefore allowing for conversion between them and Unicode without any loss of information, many characters nearly identical to others, in both appearance and intended function, were given distinct code points. For example, the Halfwidth and Fullwidth Forms block encompasses

The last two code points in each of the 17 planes (e.g. U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, ..., U+10FFFE, U+10FFFF). The set of noncharacters is stable, and no new noncharacters will ever be defined. Like surrogates, the rule that these cannot be used is often ignored, although the operation of the byte order mark assumes that U+FFFE will never be the first code point in

The list of scripts that are candidates or potential candidates for encoding and their tentative code block assignments on the Unicode Roadmap page of the Unicode Consortium website. For some scripts on the Roadmap, such as Jurchen and Khitan large script, encoding proposals have been made and they are working their way through the approval process. For other scripts, such as Numidian and Rongorongo, no proposal has yet been made, and they await agreement on character repertoire and other details from

The most well-known code page suites are "Windows" (based on Windows-1252) and "IBM"/"DOS" (based on code page 437). Despite no longer referring to specific page numbers in a standard, many character encodings are still referred to by their code page number; likewise, the term "code page" is often still used to refer to character encodings in general. The term "code page" is not used in Unix or Linux, where "charmap"

The necessary steps within the prescribed time limits. In some cases, alternative processes have been used to develop standards outside of ISO and then submit them for its approval. A more rapid "fast-track" approval procedure was used in ISO/IEC JTC 1 for the standardization of Office Open XML (OOXML, ISO/IEC 29500, approved in April 2008), and another rapid alternative "publicly available specification" (PAS) process had been used by OASIS to obtain approval of OpenDocument as an ISO/IEC standard (ISO/IEC 26300, approved in May 2006). As

The next stage, called the "enquiry stage". After a consensus to proceed is established, the subcommittee will produce a draft international standard (DIS), and the text is submitted to national bodies for voting and comment within a period of five months. A document in the DIS stage is available to the public for purchase and may be referred to with its ISO DIS reference number. Following consideration of any comments and revision of

The preparation of working drafts. Subcommittees may have several working groups, which may have several Sub Groups (SG). It is possible to omit certain stages if there is a document with a certain degree of maturity at the start of a standardization project, for example, a standard developed by another organization. ISO/IEC directives also allow the so-called "fast-track procedure". In this procedure,

The previous environment of a myriad of incompatible character sets, each used within different locales and on different computer architectures. Unicode is used to encode the vast majority of text on the Internet, including most web pages, and relevant Unicode support has become a common consideration in contemporary software development. The Unicode character repertoire is synchronized with ISO/IEC 10646, each being code-for-code identical with one another. However, The Unicode Standard

The punched card code then in use only allowed digits, upper-case English letters and a few special characters, six bits were sufficient. These BCD encodings extended existing simple four-bit numeric encoding to include alphabetic and special characters, mapping them easily to punch-card encoding which was already in widespread use. IBM's codes were used primarily with IBM equipment; other computer vendors of

The repertoire over time. A coded character set (CCS) is a function that maps characters to code points (each code point represents one character). For example, in a given repertoire, the capital letter "A" in the Latin alphabet might be represented by the code point 65, the character "B" by 66, and so on. Multiple coded character sets may share the same character repertoire; for example ISO/IEC 8859-1 and IBM code pages 037 and 500 all cover
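
Python's codecs make it easy to see two coded character sets assigning different code points to the same character; a small sketch:

    # 'A' is code point 0x41 in ISO/IEC 8859-1 but 0xC1 in IBM code
    # page 037 (an EBCDIC encoding): same repertoire, different
    # coded character sets.
    print("A".encode("latin-1"))  # b'A' (0x41)
    print("A".encode("cp037"))    # b'\xc1'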

The same character. An example is the XML attribute xml:lang. The Unicode model uses the term "character map" for other systems which directly assign a sequence of characters to a sequence of bytes, covering all of the CCS, CEF and CES layers. In Unicode, a character can be referred to as 'U+' followed by its codepoint value in hexadecimal. The range of valid code points (the codespace) for

The same repertoire but map them to different code points. A character encoding form (CEF) is the mapping of code points to code units to facilitate storage in a system that represents numbers as bit sequences of fixed length (i.e. practically any computer system). For example, a system that stores numeric information in 16-bit units can only directly represent code points 0 to 65,535 in each unit, but larger code points (say, 65,536 to 1.4 million) could be represented by using multiple 16-bit units. This correspondence

The same semantic character. Unicode and its parallel standard, the ISO/IEC 10646 Universal Character Set, together constitute a unified standard for character encoding. Rather than mapping characters directly to bytes, Unicode separately defines a coded character set that maps characters to unique natural numbers (code points), how those code points are mapped to a series of fixed-size natural numbers (code units), and finally how those units are encoded as

The short form ISO. ISO is derived from the Greek word isos (ίσος, meaning "equal"). Whatever the country, whatever the language, the short form of our name is always ISO. During the founding meetings of the new organization, however, the Greek word explanation was not invoked, so this meaning may be a false etymology. Both the name ISO and the ISO logo are registered trademarks and their use

The solution was to implement variable-length encodings where an escape sequence would signal that subsequent bits should be parsed as a higher code point. Informally, the terms "character encoding", "character map", "character set" and "code page" are often used interchangeably. Historically, the same standard would specify a repertoire of characters and how they were to be encoded into a stream of code units — usually with
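
UTF-8 is one such variable-length design: the lead byte signals how many continuation bytes follow. A minimal encoder sketch in Python (for illustration only; real code should use the built-in codec):

    def utf8_encode(cp):
        # Lead-byte prefixes 0b0, 0b110, 0b1110, 0b11110 signal a
        # 1-, 2-, 3-, or 4-byte sequence; continuation bytes start 0b10.
        if cp < 0x80:
            return bytes([cp])
        if cp < 0x800:
            return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
        if cp < 0x10000:
            return bytes([0xE0 | (cp >> 12),
                          0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])
        return bytes([0xF0 | (cp >> 18), 0x80 | ((cp >> 12) & 0x3F),
                      0x80 | ((cp >> 6) & 0x3F), 0x80 | (cp & 0x3F)])

    assert utf8_encode(0x10400) == "\U00010400".encode("utf-8")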

The standard and are not treated as specific to any given writing system. Unicode encodes 3790 emoji, with the continued development thereof conducted by the Consortium as a part of the standard. Moreover, the widespread adoption of Unicode was in large part responsible for the initial popularization of emoji outside of Japan. Unicode is ultimately capable of encoding more than 1.1 million characters. Unicode has largely supplanted

The subcommittee is satisfied that it has developed an appropriate technical document for the problem being addressed, it becomes a committee draft (CD) and is sent to the P-member national bodies of the SC for the collection of formal comments. Revisions may be made in response to the comments, and successive committee drafts may be produced and circulated until consensus is reached to proceed to

The typical cost of a copy of an ISO standard is about US$120 or more (and electronic copies typically have a single-user license, so they cannot be shared among groups of people). Some standards by ISO and its official U.S. representative (and, via the U.S. National Committee, the International Electrotechnical Commission) are made freely available. A standard published by ISO/IEC is the last stage of

The union of all newspapers and magazines printed in the world in 1988), whose number is undoubtedly far below 2¹⁴ = 16,384. Beyond those modern-use characters, all others may be defined to be obsolete or rare; these are better candidates for private-use registration than for congesting the public list of generally useful Unicode. In early 1989, the Unicode working group expanded to include Ken Whistler and Mike Kernaghan of Metaphor, Karen Smith-Yoshimura and Joan Aliprand of Research Libraries Group, and Glenn Wright of Sun Microsystems. In 1990, Michel Suignard and Asmus Freytag of Microsoft and NeXT's Rick McGowan had also joined

The user communities involved. Some modern invented scripts which have not yet been included in Unicode (e.g., Tengwar) or which do not qualify for inclusion in Unicode due to lack of real-world use (e.g., Klingon) are listed in the ConScript Unicode Registry, along with unofficial but widely used Private Use Areas code assignments. There is also a Medieval Unicode Font Initiative focused on special Latin medieval characters. Some of these proposals have already been included in Unicode. The Script Encoding Initiative,

The years several countries or government agencies have been members of the Unicode Consortium. Presently only the Ministry of Endowments and Religious Affairs (Oman) is a full member with voting rights. The Consortium has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments. Unicode currently covers most major writing systems in use today. As of 2024,

Was adopted fairly widely. ASCII67's American-centric nature was somewhat addressed in the European ECMA-6 standard. Herman Hollerith invented punch card data encoding in the late 19th century to analyze census data. Initially, each hole position represented a different data element, but later, numeric information was encoded by numbering the lower rows 0 to 9, with a punch in a column representing its row number. Later alphabetic data

Was encoded by allowing more than one punch per column. Electromechanical tabulating machines represented data internally by the timing of pulses relative to the motion of the cards through the machine. When IBM went to electronic processing, starting with the IBM 603 Electronic Multiplier, it used a variety of binary encoding schemes that were tied to the punch card code. IBM used several Binary Coded Decimal (BCD) six-bit character encoding schemes, starting as early as 1953 in its 702 and 704 computers, and in its later 7000 Series and 1400 series, as well as in associated peripherals. Since

Was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. This increased the Unicode codespace to over a million code points, which allowed for the encoding of many historic scripts, such as Egyptian hieroglyphs, and thousands of rarely used or obsolete characters that had not been anticipated for inclusion in the standard. Among these characters are various rarely used CJK characters—many mainly being used in proper names, making them far more necessary for

Was often improved by many equipment manufacturers, sometimes creating compatibility issues. In 1959 the U.S. military defined its Fieldata code, a six- or seven-bit code, introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII code

Was originally designed with the intent of transcending limitations present in all text encodings designed up to that point: each encoding was relied upon for use in its own context, but with no particular expectation of compatibility with any other. Indeed, any two encodings chosen were often totally unworkable when used together, with text encoded in one interpreted as garbage characters by the other. Most encodings had only been designed to facilitate interoperation between

Was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), which addressed most of the shortcomings of Fieldata, using a simpler code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some "control code" issues) ASCII67

Was suggested at the time by Martin Bryan, the outgoing convenor (chairman) of working group 1 (WG1) of ISO/IEC JTC 1/SC 34, the rules of ISO were eventually tightened so that participating members that fail to respond to votes are demoted to observer status. The computer security entrepreneur and Ubuntu founder, Mark Shuttleworth, was quoted in a ZDNet blog article in 2008 about the process of standardization of OOXML as saying: "I think it de-values

Was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode). Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode,
