Public Domain Mark

Public Domain Mark (PDM) is a symbol used to indicate that a work is free of known copyright restrictions and therefore in the public domain. It is analogous to the copyright symbol, which is commonly used to indicate that a work is copyrighted, often as part of a copyright notice. The Public Domain Mark was developed by Creative Commons and is only an indicator of the public domain status of a work – it does not itself release a copyrighted work into the public domain, as CC0 does.

The symbol is encoded in Unicode as U+1F16E 🅮 CIRCLED C WITH OVERLAID BACKSLASH, which was added in Unicode 13.0 in March 2020. As there is no single definition of public domain and copyright laws differ by jurisdiction, a work can be in the public domain in some countries while still being under copyright in others (so-called hybrid status). It is also difficult to assess
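
As a quick, hedged illustration of that code point (assuming a recent Python 3, e.g. 3.9 or later, whose bundled Unicode character database covers Unicode 13.0):

```python
import unicodedata

# The Public Domain Mark symbol, added in Unicode 13.0 (March 2020).
pdm = chr(0x1F16E)

print(pdm)                    # 🅮
print(unicodedata.name(pdm))  # CIRCLED C WITH OVERLAID BACKSLASH
print(f"U+{ord(pdm):04X}")    # U+1F16E
```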

A text encoding standard maintained by the Unicode Consortium designed to support the use of text in all of the world's writing systems that can be digitized. Version 16.0 of the standard defines 154,998 characters and 168 scripts used in various ordinary, literary, academic, and technical contexts. Many common characters, including numerals, punctuation, and other symbols, are unified within

A block is always a multiple of 16, and is often a multiple of 128, but is otherwise arbitrary. Characters required for a given script may be spread out over several different, potentially disjunct blocks within the codespace. Each code point is assigned a classification, listed as the code point's General Category property. Here, at the uppermost level code points are categorized as one of Letter, Mark, Number, Punctuation, Symbol, Separator, or Other. Under each category, each code point
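
As a brief sketch of how that property can be inspected (using Python's standard unicodedata module; the exact values depend on the Unicode version bundled with the interpreter):

```python
import unicodedata

# The General Category of each code point: a top-level class
# (L, M, N, P, S, Z, C) plus a subcategory letter.
for ch in ["A", "9", "!", "€", " ", "\u0301"]:
    print(f"U+{ord(ch):04X}  {unicodedata.category(ch)}")
# U+0041 Lu (Letter, uppercase)   U+0039 Nd (Number, decimal digit)
# U+0021 Po (Punctuation, other)  U+20AC Sc (Symbol, currency)
# U+0020 Zs (Separator, space)    U+0301 Mn (Mark, nonspacing)
```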

A calendar year and with rare cases where the scheduled release had to be postponed. For instance, in April 2020, a month after version 13.0 was published, the Unicode Consortium announced they had changed the intended release date for version 14.0, pushing it back six months to September 2021 due to the COVID-19 pandemic. Unicode 16.0, the latest version, was released on 10 September 2024. It added 5,185 characters and seven new scripts: Garay, Gurung Khema, Kirat Rai, Ol Onal, Sunuwar, Todhri, and Tulu-Tigalari. Thus far,

A certain amount of deviation from the ideal of exact grapheme–phoneme correspondence. A phoneme may be represented by a multigraph (sequence of more than one grapheme), as the digraph sh represents a single sound in English (and sometimes a single grapheme may represent more than one phoneme, as with the Russian letter я or the Spanish c). Some graphemes may not represent any sound at all (like

A comprehensive catalog of character properties, including those needed for supporting bidirectional text, as well as visual charts and reference data sets to aid implementers. Previously, The Unicode Standard was sold as a print volume containing the complete core specification, standard annexes, and code charts. However, version 5.0, published in 2006, was the last version printed this way. Starting with version 5.2, only

A full semantic duplicate of the Latin alphabet, because legacy CJK encodings contained both "fullwidth" (matching the width of CJK characters) and "halfwidth" (matching ordinary Latin script) characters. The Unicode Bulldog Award is given to people deemed to be influential in Unicode's development, with recipients including Tatsuo Kobayashi, Thomas Milo, Roozbeh Pournader, Ken Lunde, and Michael Everson. The origins of Unicode can be traced back to

A handful of scripts—often primarily between a given script and Latin characters—not between a large number of scripts, and not with all of the scripts supported being treated in a consistent manner. The philosophy that underpins Unicode seeks to encode the underlying characters—graphemes and grapheme-like units—rather than graphical distinctions considered mere variant glyphs thereof, that are instead best handled by

A low-surrogate code point forms a surrogate pair in UTF-16 in order to represent code points greater than U+FFFF. In principle, these code points cannot otherwise be used, though in practice this rule is often ignored, especially when not using UTF-16. A small set of code points are guaranteed never to be assigned to characters, although third parties may make independent use of them at their discretion. There are 66 of these noncharacters: U+FDD0 – U+FDEF and
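
A minimal sketch of that mechanism, assuming Python 3 (the sample character is simply one supplementary-plane code point mentioned elsewhere in the article):

```python
# A supplementary-plane character is stored in UTF-16 as a surrogate pair:
# one high surrogate (U+D800–U+DBFF) followed by one low surrogate
# (U+DC00–U+DFFF).
ch = "\U00013254"                          # EGYPTIAN HIEROGLYPH O004
print(ch.encode("utf-16-be").hex(" ", 2))  # d80c de54

# A lone surrogate code point is not a valid character in any UTF encoding:
try:
    "\ud800".encode("utf-8")
except UnicodeEncodeError as err:
    print("rejected:", err.reason)         # surrogates not allowed
```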

A project run by Deborah Anderson at the University of California, Berkeley, was founded in 2002 with the goal of funding proposals for scripts not yet encoded in the standard. The project has become a major source of proposed additions to the standard in recent years. The Unicode Consortium together with the ISO have developed a shared repertoire following the initial publication of The Unicode Standard: Unicode and

A properly engineered design, 16 bits per character are more than sufficient for this purpose. This design decision was made based on the assumption that only scripts and characters in "modern" use would require encoding: Unicode gives higher priority to ensuring utility for the future than to preserving past antiquities. Unicode aims in the first instance at the characters published in

A text. The exclusion of surrogates and noncharacters leaves 1,111,998 code points available for use.

Grapheme

In linguistics, a grapheme is the smallest functional unit of a writing system. The word grapheme is derived from Ancient Greek gráphō ('write'), and the suffix -eme by analogy with phoneme and other emic units. The study of graphemes is called graphemics. The concept of graphemes

A total of 168 scripts are included in the latest version of Unicode (covering alphabets, abugidas and syllabaries), although there are still scripts that are not yet encoded, particularly those mainly used in historical, liturgical, and academic contexts. Further additions of characters to the already encoded scripts, as well as symbols, in particular for mathematics and music (in the form of notes and rhythmic symbols), also occur. The Unicode Roadmap Committee (Michael Everson, Rick McGowan, Ken Whistler, V.S. Umamaheswaran) maintain

A universal encoding than the original Unicode architecture envisioned. Version 1.0 of Microsoft's TrueType specification, published in 1992, used the name "Apple Unicode" instead of "Unicode" for the Platform ID in the naming table. The Unicode Consortium is a nonprofit organization that coordinates Unicode's development. Full members include most of the main computer software and hardware companies (and a few others) with any interest in text-processing standards, including Adobe, Apple, Google, IBM, Meta (previously Facebook), Microsoft, Netflix, and SAP. Over

Is abstract and similar to the notion in computing of a character. By comparison, a specific shape that represents any particular grapheme in a given typeface is called a glyph. There are two main opposing grapheme concepts. In the so-called referential conception, graphemes are interpreted as the smallest units of writing that correspond with sounds (more accurately phonemes). In this concept,

Is analogous to the slash notation /a/ used for phonemes. Analogous to the square bracket notation [a] used for phones, glyphs are sometimes denoted with vertical lines, e.g. |ɑ|. In the same way that the surface forms of phonemes are speech sounds or phones (and different phones representing the same phoneme are called allophones), the surface forms of graphemes are glyphs (sometimes graphs), namely concrete written representations of symbols (and different glyphs representing

Is associated with the autonomy hypothesis which holds that writing is a system in its own right and should be studied independently from speech. Both concepts have weaknesses. Some models adhere to both concepts simultaneously by including two individual units, which are given names such as graphemic grapheme for the grapheme according to the analogical conception (h in shake), and phonological-fit grapheme for

Is intended to suggest a unique, unified, universal encoding". In this document, entitled Unicode 88, Becker outlined a scheme using 16-bit characters: Unicode is intended to address the need for a workable, reliable world text encoding. Unicode could be roughly described as "wide-body ASCII" that has been stretched to 16 bits to encompass the characters of all the world's living languages. In

Is more than just a repertoire within which characters are assigned. To aid developers and designers, the standard also provides charts and reference data, as well as annexes explaining concepts germane to various scripts, providing guidance for their implementation. Topics covered by these annexes include character normalization, character composition and decomposition, collation, and directionality. Unicode text
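
A small illustration of composition and decomposition, sketched with Python's standard unicodedata module (only the NFC and NFD forms are shown; other normalization forms exist):

```python
import unicodedata

# "é" can be stored as one precomposed code point or as "e" plus a
# combining acute accent; normalization converts between the two.
precomposed = "\u00e9"   # LATIN SMALL LETTER E WITH ACUTE
decomposed = "e\u0301"   # LATIN SMALL LETTER E + COMBINING ACUTE ACCENT

print(precomposed == decomposed)                                 # False
print(unicodedata.normalize("NFC", decomposed) == precomposed)   # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)   # True
```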

Is not padded. There are a total of 2²⁰ + (2¹⁶ − 2¹¹) = 1,112,064 valid code points within the codespace. (This number arises from the limitations of the UTF-16 character encoding, which can encode the 2¹⁶ code points in the range U+0000 through U+FFFF except for the 2¹¹ code points in the range U+D800 through U+DFFF, which are used as surrogate pairs to encode the 2²⁰ code points in
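
The arithmetic can be checked in a couple of lines (a sketch in Python; the constant 66 is the number of noncharacters mentioned elsewhere in the article):

```python
# BMP code points minus surrogates, plus supplementary-plane code points.
valid = (2**16 - 2**11) + 2**20
print(valid)        # 1112064

# Subtracting the 66 noncharacters gives the code points that remain
# available for character assignment.
print(valid - 66)   # 1111998
```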

Is processed and stored as binary data using one of several encodings, which define how to translate the standard's abstracted codes for characters into sequences of bytes. The Unicode Standard itself defines three encodings: UTF-8, UTF-16, and UTF-32, though several others exist. Of these, UTF-8 is the most widely used by a large margin, in part due to its backwards-compatibility with ASCII. Unicode
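
A brief sketch, assuming Python 3, of how the same character is laid out by each of the three standard encodings (big-endian forms are chosen only to keep the byte order explicit):

```python
s = "é"  # U+00E9

print(s.encode("utf-8"))      # b'\xc3\xa9'          (2 bytes)
print(s.encode("utf-16-be"))  # b'\x00\xe9'          (one 16-bit code unit)
print(s.encode("utf-32-be"))  # b'\x00\x00\x00\xe9'  (one 32-bit code unit)

# UTF-8 is backwards compatible with ASCII: pure ASCII text produces
# byte-for-byte identical output.
print("plain ASCII".encode("utf-8") == "plain ASCII".encode("ascii"))  # True
```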

Is projected to include 4,301 new unified CJK characters. The Unicode Standard defines a codespace: a sequence of integers called code points in the range from 0 to 1,114,111, notated according to the standard as U+0000 – U+10FFFF. The codespace is a systematic, architecture-independent representation of The Unicode Standard; actual text is processed as binary data via one of several Unicode encodings, such as UTF-8. In this normative notation,

Is some disagreement as to whether capital and lower case letters are allographs or distinct graphemes. Capitals are generally found in certain triggering contexts that do not change the meaning of a word: a proper name, for example, or at the beginning of a sentence, or all caps in a newspaper headline. In other contexts, capitalization can determine meaning: compare, for example, Polish and polish:

Is then further subcategorized. In most cases, other properties must be used to adequately describe all the characteristics of any given code point. The 1,024 points in the range U+D800 – U+DBFF are known as high-surrogate code points, and code points in the range U+DC00 – U+DFFF (1,024 code points) are known as low-surrogate code points. A high-surrogate code point followed by

The ampersand "&" representing the word and, Arabic numerals); syllabic characters, representing syllables (as in Japanese kana); and alphabetic letters, corresponding roughly to phonemes (see next section). For a full discussion of the different types, see Writing system § Functional classification. There are additional graphemic components used in writing, such as punctuation marks, mathematical symbols, word dividers such as

The b in English debt or the h in all Spanish words containing the said letter), and often the rules of correspondence between graphemes and phonemes become complex or irregular, particularly as a result of historical sound changes that are not necessarily reflected in spelling. "Shallow" orthographies such as those of standard Spanish and Finnish have relatively regular (though not always one-to-one) correspondence between graphemes and phonemes, while those of French and English have much less regular correspondence, and are known as deep orthographies. Multigraphs representing

The sh in the written English word shake would be a grapheme because it represents the phoneme /ʃ/. This referential concept is linked to the dependency hypothesis that claims that writing merely depicts speech. By contrast, the analogical concept defines graphemes analogously to phonemes, i.e. via written minimal pairs such as shake vs. snake. In this example, h and n are graphemes because they distinguish two words. This analogical concept

The typeface, through the use of markup, or by some other means. In particularly complex cases, such as the treatment of orthographical variants in Han characters, there is considerable disagreement regarding which differences justify their own encodings, and which are only graphical variants of other characters. At the most abstract level, Unicode assigns a unique number called a code point to each character. Many issues of visual representation—including size, shape, and style—are intended to be up to

The 1980s, to a group of individuals with connections to Xerox's Character Code Standard (XCCS). In 1987, Xerox employee Joe Becker, along with Apple employees Lee Collins and Mark Davis, started investigating the practicalities of creating a universal character set. With additional input from Peter Fenwick and Dave Opstad, Becker published a draft proposal in August 1988 for an "international/multilingual text character encoding system", tentatively called Unicode. He explained that "the name 'Unicode'

The ISO's Universal Coded Character Set (UCS) use identical character names and code points. However, the Unicode versions do differ from their ISO equivalents in two significant ways. While the UCS is a simple character map, Unicode specifies the rules, algorithms, and properties necessary to achieve interoperability between different platforms and languages. Thus, The Unicode Standard includes more information, covering in-depth topics such as bitwise encoding, collation, and rendering. It also provides

The core specification, published as a print-on-demand paperback, may be purchased. The full text, on the other hand, is published as a free PDF on the Unicode website. A practical reason for this publication method highlights the second significant difference between the UCS and Unicode—the frequency with which updated versions are released and new characters added. The Unicode Standard has regularly released annual expanded versions, occasionally with more than one version released in

The discretion of the software actually rendering the text, such as a web browser or word processor. However, partially with the intent of encouraging rapid adoption, the simplicity of this original model has become somewhat more elaborate over time, and various pragmatic concessions have been made over the course of the standard's development. The first 256 code points mirror the ISO/IEC 8859-1 standard, with
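
A one-line sanity check of that correspondence, sketched in Python (decoding every byte value as ISO/IEC 8859-1 and comparing it with the resulting code point):

```python
# Each byte 0x00–0xFF decoded as ISO/IEC 8859-1 (Latin-1) maps to the
# Unicode code point with the same numeric value.
assert all(ord(bytes([b]).decode("latin-1")) == b for b in range(256))
print("first 256 code points mirror ISO/IEC 8859-1")
```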

The following versions of The Unicode Standard have been published. Update versions, which do not include any changes to character repertoire, are signified by the third number (e.g., "version 4.0.1") and are omitted in the table below. The Unicode Consortium normally releases a new version of The Unicode Standard once a year. Version 17.0, the next major version,

The former is a language, the latter is for shining shoes. Some linguists consider digraphs like the ⟨sh⟩ in ship to be distinct graphemes, but these are generally analyzed as sequences of graphemes. Non-stylistic ligatures, however, such as ⟨æ⟩, are distinct graphemes, as are various letters with distinctive diacritics, such as ⟨ç⟩. Identical glyphs may not always represent

The grapheme according to the referential concept (sh in shake). In newer concepts, in which the grapheme is interpreted semiotically as a dyadic linguistic sign, it is defined as a minimal unit of writing that is both lexically distinctive and corresponds with a linguistic unit (phoneme, syllable, or morpheme). Graphemes are often notated within angle brackets: e.g. ⟨a⟩. This

The group. By the end of 1990, most of the work of remapping existing standards had been completed, and a final review draft of Unicode was ready. The Unicode Consortium was incorporated in California on 3 January 1991, and the first volume of The Unicode Standard was published that October. The second volume, now adding Han ideographs, was published in June 1992. In 1996, a surrogate character mechanism

The intent of trivializing the conversion of text already written in Western European scripts. To preserve the distinctions made by different legacy encodings, thereby allowing for conversion between them and Unicode without any loss of information, many characters nearly identical to others, in both appearance and intended function, were given distinct code points. For example, the Halfwidth and Fullwidth Forms block encompasses
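
A short sketch of one such duplicate and how compatibility normalization folds it back onto ordinary Latin letters (using Python's unicodedata module; NFKC is the compatibility-composition form):

```python
import unicodedata

fullwidth = "Ｕｎｉｃｏｄｅ"   # letters from the Halfwidth and Fullwidth Forms block
print([f"U+{ord(c):04X}" for c in fullwidth[:2]])   # ['U+FF35', 'U+FF4E']

# NFKC normalization replaces the fullwidth duplicates with the ordinary
# Latin letters they mirror.
print(unicodedata.normalize("NFKC", fullwidth))      # Unicode
```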

The last two code points in each of the 17 planes (e.g. U+FFFE, U+FFFF, U+1FFFE, U+1FFFF, ..., U+10FFFE, U+10FFFF). The set of noncharacters is stable, and no new noncharacters will ever be defined. Like surrogates, the rule that these cannot be used is often ignored, although the operation of the byte order mark assumes that U+FFFE will never be the first code point in
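
A small sketch in Python of why that assumption matters: the byte order mark U+FEFF would read as the noncharacter U+FFFE if a decoder guessed the wrong byte order, so the two orders can be told apart.

```python
# The byte order mark U+FEFF in the two UTF-16 byte orders.
print("\ufeff".encode("utf-16-be"))   # b'\xfe\xff'
print("\ufeff".encode("utf-16-le"))   # b'\xff\xfe'

# Python's generic "utf-16" codec prepends a BOM automatically
# (output shown for a little-endian platform).
print("A".encode("utf-16"))           # b'\xff\xfeA\x00'
```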

The legal status of many works. The PDM is recommended to be used only for works that are likely free from any copyright restrictions worldwide.

Unicode

Unicode, formally The Unicode Standard, is

The list of scripts that are candidates or potential candidates for encoding and their tentative code block assignments on the Unicode Roadmap page of the Unicode Consortium website. For some scripts on the Roadmap, such as Jurchen and Khitan large script, encoding proposals have been made and they are working their way through the approval process. For other scripts, such as Numidian and Rongorongo, no proposal has yet been made, and they await agreement on character repertoire and other details from

The modern text (e.g. in the union of all newspapers and magazines printed in the world in 1988), whose number is undoubtedly far below 2¹⁴ = 16,384. Beyond those modern-use characters, all others may be defined to be obsolete or rare; these are better candidates for private-use registration than for congesting the public list of generally useful Unicode. In early 1989, the Unicode working group expanded to include Ken Whistler and Mike Kernaghan of Metaphor, Karen Smith-Yoshimura and Joan Aliprand of Research Libraries Group, and Glenn Wright of Sun Microsystems. In 1990, Michel Suignard and Asmus Freytag of Microsoft and NeXT's Rick McGowan had also joined

The other cannot change the meaning of a word, they are considered to be allographs of the same grapheme, which can be written ⟨a⟩. Similarly, the grapheme corresponding to "Arabic numeral zero" has a unique semantic identity and Unicode value U+0030 but exhibits variation in the form of slashed zero. Italic and bold face forms are also allographic, as is the variation seen in serif (as in Times New Roman) versus sans-serif (as in Helvetica) forms. There

The previous environment of a myriad of incompatible character sets, each used within different locales and on different computer architectures. Unicode is used to encode the vast majority of text on the Internet, including most web pages, and relevant Unicode support has become a common consideration in contemporary software development. The Unicode character repertoire is synchronized with ISO/IEC 10646, each being code-for-code identical with one another. However, The Unicode Standard

The range U+10000 through U+10FFFF.) The Unicode codespace is divided into 17 planes, numbered 0 to 16. Plane 0 is the Basic Multilingual Plane (BMP), and contains the most commonly used characters. All code points in the BMP are accessed as a single code unit in UTF-16 encoding and can be encoded in one, two or three bytes in UTF-8. Code points in planes 1 through 16 (the supplementary planes) are accessed as surrogate pairs in UTF-16 and encoded in four bytes in UTF-8. Within each plane, characters are allocated within named blocks of related characters. The size of
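
A quick sketch of those length rules, assuming Python 3 (the sample characters are arbitrary picks from the BMP and from a supplementary plane):

```python
# UTF-8 byte length and UTF-16 code-unit count for BMP versus
# supplementary-plane characters.
for ch in ["A", "é", "€", "🅮"]:          # U+0041, U+00E9, U+20AC, U+1F16E
    utf8_len = len(ch.encode("utf-8"))
    utf16_units = len(ch.encode("utf-16-be")) // 2
    print(f"U+{ord(ch):04X}  UTF-8: {utf8_len} byte(s)  UTF-16: {utf16_units} code unit(s)")
# U+0041: 1 / 1,  U+00E9: 2 / 1,  U+20AC: 3 / 1,  U+1F16E: 4 / 2
```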

The same grapheme are called allographs). Thus, a grapheme can be regarded as an abstraction of a collection of glyphs that are all functionally equivalent. For example, in written English (or other languages using the Latin alphabet), there are two different physical representations of the lowercase Latin letter "a": "a" and "ɑ". Since, however, the substitution of either of them for

The same grapheme. For example, the three letters ⟨A⟩, ⟨А⟩ and ⟨Α⟩ appear identical but each has a different meaning: in order, they are the Latin letter A, the Cyrillic letter Azǔ/Азъ and the Greek letter Alpha. Each has its own code point in Unicode: U+0041 A LATIN CAPITAL LETTER A, U+0410 А CYRILLIC CAPITAL LETTER A and U+0391 Α GREEK CAPITAL LETTER ALPHA. The principal types of graphemes are logograms (more accurately termed morphograms), which represent words or morphemes (for example Chinese characters,
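
A short Python sketch showing that the three look-alike letters really do carry distinct code points and character names:

```python
import unicodedata

# Visually identical capitals from three different scripts.
for ch in ["A", "А", "Α"]:    # Latin, Cyrillic, Greek
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
# U+0041  LATIN CAPITAL LETTER A
# U+0410  CYRILLIC CAPITAL LETTER A
# U+0391  GREEK CAPITAL LETTER ALPHA
```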

The space, and other typographic symbols. Ancient logographic scripts often used silent determinatives to disambiguate the meaning of a neighboring (non-silent) word. As mentioned in the previous section, in languages that use alphabetic writing systems, many of the graphemes stand in principle for the phonemes (significant sounds) of the language. In practice, however, the orthographies of such languages entail at least

The standard and are not treated as specific to any given writing system. Unicode encodes 3,790 emoji, with the continued development thereof conducted by the Consortium as a part of the standard. Moreover, the widespread adoption of Unicode was in large part responsible for the initial popularization of emoji outside of Japan. Unicode is ultimately capable of encoding more than 1.1 million characters. Unicode has largely supplanted

The two-character prefix U+ always precedes a written code point, and the code points themselves are written as hexadecimal numbers. At least four hexadecimal digits are always written, with leading zeros prepended as needed. For example, the code point U+00F7 ÷ DIVISION SIGN is padded with two leading zeros, but U+13254 𓉔 EGYPTIAN HIEROGLYPH O004
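
A tiny illustration of that notation in Python (the helper name u_plus is made up for this sketch; it is not part of any standard library):

```python
def u_plus(code_point: int) -> str:
    """Format a code point with the U+ prefix and at least four hex digits."""
    return f"U+{code_point:04X}"

print(u_plus(0xF7))      # U+00F7  (padded with two leading zeros)
print(u_plus(0x13254))   # U+13254 (more than four digits, so no padding)
```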

The user communities involved. Some modern invented scripts which have not yet been included in Unicode (e.g., Tengwar) or which do not qualify for inclusion in Unicode due to lack of real-world use (e.g., Klingon) are listed in the ConScript Unicode Registry, along with unofficial but widely used Private Use Areas code assignments. There is also a Medieval Unicode Font Initiative focused on special Latin medieval characters. Some of these proposals have already been included in Unicode. The Script Encoding Initiative,

The years several countries or government agencies have been members of the Unicode Consortium. Presently only the Ministry of Endowments and Religious Affairs (Oman) is a full member with voting rights. The Consortium has the ambitious goal of eventually replacing existing character encoding schemes with Unicode and its standard Unicode Transformation Format (UTF) schemes, as many of the existing schemes are limited in size and scope and are incompatible with multilingual environments. Unicode currently covers most major writing systems in use today. As of 2024,

Was implemented in Unicode 2.0, so that Unicode was no longer restricted to 16 bits. This increased the Unicode codespace to over a million code points, which allowed for the encoding of many historic scripts, such as Egyptian hieroglyphs, and thousands of rarely used or obsolete characters that had not been anticipated for inclusion in the standard. Among these characters are various rarely used CJK characters—many mainly being used in proper names, making them far more necessary for

Was originally designed with the intent of transcending limitations present in all text encodings designed up to that point: each encoding was relied upon for use in its own context, but with no particular expectation of compatibility with any other. Indeed, any two encodings chosen were often totally unworkable when used together, with text encoded in one interpreted as garbage characters by the other. Most encodings had only been designed to facilitate interoperation between
