
P-Grid

Article snapshot taken from Wikipedia, licensed under the Creative Commons Attribution-ShareAlike license.

A distributed data store is a computer network where information is stored on more than one node, often in a replicated fashion. The term usually refers either to a distributed database, where users store information on a number of nodes, or to a computer network in which users store information on a number of peer network nodes.


In distributed data storage, a P-Grid is a self-organizing structured peer-to-peer system which can accommodate arbitrary key distributions (and hence support lexicographic key ordering and range queries) while still providing storage load-balancing and efficient search through randomized routing. P-Grid abstracts a trie and resolves queries by prefix matching; the actual topology has no hierarchy. This also determines
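
To make the prefix-matching idea concrete, here is a minimal, illustrative sketch of P-Grid-style routing (the names and data layout are assumptions for this example, not the reference implementation): each peer is responsible for a binary prefix of the key space and, for every level of that prefix, knows some peer from the complementary subtree.

    class Peer:
        def __init__(self, path):
            self.path = path      # binary prefix this peer is responsible for
            self.routing = {}     # level -> a peer in the complementary subtree

    def route(peer, key):
        """Forward a query until a peer whose path prefixes the key is reached."""
        for level, bit in enumerate(peer.path):
            if key[level] != bit:
                # Mismatch at this level: hand the query to a peer whose
                # path starts with the complementary bit.
                return route(peer.routing[level], key)
        return peer               # peer.path is a prefix of key

    # Tiny two-peer network: peer0 handles keys starting with '0', peer1 with '1'.
    peer0, peer1 = Peer("0"), Peer("1")
    peer0.routing[0] = peer1
    peer1.routing[0] = peer0
    assert route(peer0, "101") is peer1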

A generator polynomial, which is used as the divisor in a polynomial long division over a finite field, taking the input data as the dividend. The remainder becomes the result. A CRC has properties that make it well suited for detecting burst errors. CRCs are particularly easy to implement in hardware and are therefore commonly used in computer networks and storage devices such as hard disk drives. The parity bit can be seen as
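
As a rough sketch of that division, the following computes an 8-bit CRC bit by bit over GF(2); the generator x^8 + x^2 + x + 1 (written as 0x107 with its leading term) is chosen only for illustration.

    def crc8(data: bytes, poly: int = 0x107) -> int:
        rem = 0
        for byte in data:
            rem ^= byte               # bring the next dividend byte in
            for _ in range(8):
                rem <<= 1
                if rem & 0x100:       # high bit set: subtract (XOR) the divisor
                    rem ^= poly
        return rem & 0xFF             # the 8-bit remainder is the CRC

    print(hex(crc8(b"hello")))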

A peer-to-peer network, Amazon's Dynamo, and Microsoft Azure Storage. As the ability to run arbitrary queries is not as important as availability, designers of distributed data stores have increased the latter at the expense of consistency. But the high-speed read/write access comes at the cost of reduced consistency, as it is not possible to guarantee both consistency and availability on a partitioned network, as stated by

A certain probability, and dynamic models where errors occur primarily in bursts. Consequently, error-detecting and -correcting codes can be generally distinguished between random-error-detecting/correcting and burst-error-detecting/correcting. Some codes can also be suitable for a mixture of random errors and burst errors. If the channel characteristics cannot be determined, or are highly variable, an error-detection scheme may be combined with

A checksum (most often CRC32) to detect corruption and truncation, and can employ redundancy or parity files to recover portions of corrupted data. Reed–Solomon codes are used in compact discs to correct errors caused by scratches. Modern hard drives use Reed–Solomon codes to detect and correct minor errors in sector reads, and to recover corrupted data from failing sectors and store that data in
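
For example, with Python's built-in zlib.crc32 (the CRC-32 used by many archive formats), a stored checksum exposes a single corrupted byte:

    import zlib

    payload = b"archive member contents"
    stored = zlib.crc32(payload)             # checksum written alongside the data

    corrupted = b"archive member contentz"
    assert zlib.crc32(corrupted) != stored   # mismatch reveals the corruption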

A lexicographic order-preserving function to generate the keys, and still realize a load-balanced P-Grid network which supports efficient search of exact keys. Moreover, because lexicographic ordering is preserved, range queries can be processed efficiently and precisely on P-Grid. The trie structure of P-Grid allows different range query strategies, processed serially or in parallel, trading off message overhead against query resolution latency. Simple vector-based data storage architectural frameworks are also subject to variable query limitations within
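
A toy example of such an order-preserving key function (an assumed encoding for illustration; P-Grid's actual scheme is more refined) maps strings to fixed-width binary keys so that range predicates carry over:

    def okey(s: str, width: int = 16) -> str:
        # Toy encoding: use only the first two bytes, padded with NUL;
        # strings sharing those bytes are not distinguished.
        b = s.encode("ascii")[:2].ljust(2, b"\x00")
        return format(int.from_bytes(b, "big"), f"0{width}b")

    assert okey("apple") < okey("banana")   # lexicographic order preserved
    # A range query [lo, hi] then reduces to visiting the peers whose
    # key-space prefixes overlap the binary interval [okey(lo), okey(hi)].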

A message is a modular arithmetic sum of message code words of a fixed word length (e.g., byte values). The sum may be negated by means of a ones'-complement operation prior to transmission to detect unintentional all-zero messages. Checksum schemes include parity bits, check digits, and longitudinal redundancy checks. Some checksum schemes, such as the Damm algorithm, the Luhn algorithm, and
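
A minimal sketch of such a sum, modeled on the 16-bit ones'-complement checksum used by the Internet protocols (an illustrative rendering, not a conforming RFC 1071 implementation):

    def checksum16(words):
        total = 0
        for w in words:
            total += w
            total = (total & 0xFFFF) + (total >> 16)  # end-around carry
        return ~total & 0xFFFF                         # ones'-complement of the sum

    msg = [0x1234, 0xABCD, 0x0001]
    c = checksum16(msg)
    # Appending the checksum makes the whole message check to zero at the receiver.
    assert checksum16(msg + [c]) == 0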

A message, which receivers can use to check consistency of the delivered message and to recover data that has been determined to be corrupted. Error detection and correction schemes can be either systematic or non-systematic. In a systematic scheme, the transmitter sends the original (error-free) data and attaches a fixed number of check bits (or parity data), which are derived from the data bits by some encoding algorithm. If error detection

A single error in it will be detected. It will not be known where in the word the error is, however. If, in addition, after each stream of n words a parity sum is sent, each bit of which shows whether there was an odd or even number of ones at that bit position in the most recent group, the exact position of the error can be determined and the error corrected. This method is only guaranteed to be effective, however, if there is no more than one error in every group of n words. With more error correction bits, more errors can be detected and in some cases corrected. There are also other bit-grouping techniques. A checksum of
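
The word-plus-block parity scheme just described fits in a few lines: the intersection of the failing row parity and failing column parity locates a single flipped bit (a sketch with made-up data):

    def parity(bits):
        return sum(bits) % 2

    words = [[1, 0, 1, 1],
             [0, 1, 1, 0],
             [1, 1, 0, 0]]
    row_par = [parity(w) for w in words]
    col_par = [parity(col) for col in zip(*words)]

    words[1][2] ^= 1                 # flip one bit "in transit"

    bad_row = next(i for i, w in enumerate(words) if parity(w) != row_par[i])
    bad_col = next(j for j, c in enumerate(zip(*words)) if parity(c) != col_par[j])
    words[bad_row][bad_col] ^= 1     # the intersection locates the error; flip it back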

A special-case 1-bit CRC. The output of a cryptographic hash function, also known as a message digest, can provide strong assurances about data integrity, whether changes of the data are accidental (e.g., due to transmission errors) or maliciously introduced. Any modification to the data will likely be detected through a mismatching hash value. Furthermore, given some hash value, it is typically infeasible to find some input data (other than
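
For instance, with Python's hashlib, even a one-character change produces a completely different digest, so a stored digest detects the modification:

    import hashlib

    digest = hashlib.sha256(b"important record").hexdigest()
    assert hashlib.sha256(b"important recorD").hexdigest() != digest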

A strict guarantee on the number of detectable errors, but it may not protect against a preimage attack. A repetition code is a coding scheme that repeats the bits across a channel to achieve error-free communication. Given a stream of data to be transmitted, the data are divided into blocks of bits. Each block is transmitted some predetermined number of times. For example, to send the bit pattern 1011,


A strict limit on the minimum number of errors to be detected is desired. Codes with minimum Hamming distance d = 2 are degenerate cases of error-correcting codes and can be used to detect single errors. The parity bit is an example of a single-error-detecting code. Applications that require low latency (such as telephone conversations) cannot use automatic repeat request (ARQ); they must use forward error correction (FEC). By

A system for retransmissions of erroneous data. This is known as automatic repeat request (ARQ), and is most notably used in the Internet. An alternate approach for error control is hybrid automatic repeat request (HARQ), which is a combination of ARQ and error-correction coding. There are three major types of error correction: automatic repeat request, forward error correction, and hybrid schemes. Automatic repeat request (ARQ) is an error control method for data transmission that makes use of error-detection codes, acknowledgment and/or negative acknowledgment messages, and timeouts to achieve reliable data transmission. An acknowledgment

A system that uses a non-systematic code, the original message is transformed into an encoded message carrying the same information and that has at least as many bits as the original message. Good error control performance requires the scheme to be selected based on the characteristics of the communication channel. Common channel models include memoryless models where errors occur randomly and with

Is a combination of ARQ and forward error correction. There are two basic approaches: either messages are always transmitted with FEC parity data, or parity data is transmitted only upon request, when a receiver detects an error. The latter approach is particularly attractive on an erasure channel when using a rateless erasure code. Error detection is most commonly realized using a suitable hash function (or specifically, a checksum, cyclic redundancy check or other algorithm). A hash function adds a fixed-length tag to a message, which enables receivers to verify

Is a message sent by the receiver to indicate that it has correctly received a data frame. Usually, when the transmitter does not receive the acknowledgment before the timeout occurs (i.e., within a reasonable amount of time after sending the data frame), it retransmits the frame until it is either correctly received or the error persists beyond a predetermined number of retransmissions. Three types of ARQ protocols are Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ; a minimal sketch of the first appears below. ARQ
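
A toy model of the stop-and-wait variant (the channel model and retry limit are assumptions for illustration): send a frame, wait for the acknowledgment, and retransmit on timeout up to a fixed number of attempts.

    import random

    random.seed(0)                        # deterministic demo

    def send_frame(frame):
        """Unreliable channel: the frame is delivered and ACKed 70% of the time."""
        return random.random() < 0.7      # True = ACK received before timeout

    def stop_and_wait(frame, max_retries=5):
        for attempt in range(1, max_retries + 1):
            if send_frame(frame):
                return attempt            # delivered; number of tries used
        raise TimeoutError("retransmission limit exceeded")

    print(stop_and_wait(b"frame-0"))      # succeeds on the third attempt here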

Is an important theorem in forward error correction, and describes the maximum information rate at which reliable communication is possible over a channel that has a certain error probability or signal-to-noise ratio (SNR). This strict upper limit is expressed in terms of the channel capacity. More specifically, the theorem says that there exist codes such that, with increasing encoding length,
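
For a band-limited additive-white-Gaussian-noise channel, the capacity bound referred to here takes the familiar Shannon–Hartley form C = B log2(1 + SNR); a quick computation:

    import math

    def capacity(bandwidth_hz, snr_linear):
        return bandwidth_hz * math.log2(1 + snr_linear)

    # A 3 kHz telephone-grade channel at 30 dB SNR (linear SNR = 1000):
    print(capacity(3000, 1000))          # ~29.9 kbit/s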

Is appropriate if the communication channel has varying or unknown capacity, such as is the case on the Internet. However, ARQ requires the availability of a back channel, results in possibly increased latency due to retransmissions, and requires the maintenance of buffers and timers for retransmissions, which in the case of network congestion can put a strain on the server and overall network capacity. For example, ARQ

Is determined by the selected modulation scheme and the proportion of capacity consumed by FEC. Error detection and correction codes are often used to improve the reliability of data storage media. A parity track capable of detecting single-bit errors was present on the first magnetic tape data storage in 1951. The optimal rectangular code used in group coded recording tapes not only detects but also corrects single-bit errors. Some file formats, particularly archive formats, include

Is required, a receiver can simply apply the same algorithm to the received data bits and compare its output with the received check bits; if the values do not match, an error has occurred at some point during the transmission. If error correction is required, a receiver can apply the decoding algorithm to the received data bits and the received check bits to recover the original error-free data. In

Is somewhat blurred in a system such as BitTorrent, where it is possible for the originating node to go offline but the content to continue to be served. Still, this is only the case for individual files requested by the redistributors, as contrasted with networks such as Freenet, Winny, Share and Perfect Dark, where any node may be storing any part of the files on the network. Distributed data stores typically use an error detection and correction technique. Some distributed data stores (such as Parchive over NNTP) use forward error correction techniques to recover


Is that they are extremely simple, and are in fact used in some transmissions of numbers stations. A parity bit is a bit that is added to a group of source bits to ensure that the number of set bits (i.e., bits with value 1) in the outcome is even or odd. It is a very simple scheme that can be used to detect a single error, or any other odd number (i.e., three, five, etc.) of errors, in the output. An even number of flipped bits will make

Is used on shortwave radio data links in the form of ARQ-E, or combined with multiplexing as ARQ-M. Forward error correction (FEC) is a process of adding redundant data such as an error-correcting code (ECC) to a message so that it can be recovered by a receiver even when a number of errors (up to the capability of the code being used) are introduced, either during the process of transmission or on storage. Since

The CAP theorem. In peer network data stores, the user can usually reciprocate and allow other users to use their computer as a storage node as well. Information may or may not be accessible to other users depending on the design of the network. Most peer-to-peer networks do not have distributed data stores in that the user's data is only available when their node is on the network. However, this distinction

The Hebrew Bible were paid for their work according to the number of stichs (lines of verse). As the prose books of the Bible were hardly ever written in stichs, the copyists, in order to estimate the amount of work, had to count the letters. This also helped ensure accuracy in the transmission of the text with the production of subsequent copies. Between the 7th and 10th centuries CE, a group of Jewish scribes formalized and expanded this to create

The Numerical Masorah to ensure accurate reproduction of the sacred text. It included counts of the number of words in a line, section, book and groups of books, noting the middle stich of a book, word use statistics, and commentary. Standards became such that a deviation in even a single letter in a Torah scroll was considered unacceptable. The effectiveness of their error correction method was verified by

The Verhoeff algorithm, are specifically designed to detect errors commonly introduced by humans in writing down or remembering identification numbers. A cyclic redundancy check (CRC) is a non-secure hash function designed to detect accidental changes to digital data in computer networks. It is not suitable for detecting maliciously introduced errors. It is characterized by specification of
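
Of these, the Luhn check is the simplest to state: double every second digit from the right, sum all the digits, and require a multiple of 10 (a compact sketch, using the standard 79927398713 test number):

    def luhn_valid(number: str) -> bool:
        total = 0
        for i, ch in enumerate(reversed(number)):
            d = int(ch)
            if i % 2 == 1:               # every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9               # same as summing the two digits
            total += d
        return total % 10 == 0

    assert luhn_valid("79927398713")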

The noise in the communication channel is different from that which a spacecraft on an interplanetary mission experiences. Additionally, as a spacecraft increases its distance from Earth, the problem of correcting for noise becomes more difficult. The demand for satellite transponder bandwidth continues to grow, fueled by the desire to deliver television (including new channels and high-definition television) and IP data. Transponder availability and bandwidth constraints have limited this growth. Transponder capacity

The P-Grid environment.

Distributed data store

Distributed databases are usually non-relational databases that enable quick access to data over a large number of nodes. Some distributed databases expose rich query abilities while others are limited to a key-value store semantics. Examples of limited distributed databases are Google's Bigtable, which is much more than a distributed file system or

The P-Grid tree. These are called replicas. The replica peers maintain an independent replica sub-network and use gossip-based communication to keep the replica group up to date. The redundancy in both the replication of key-space partitions and the routing network together is called structural replication. The figure above shows how a query is resolved by forwarding it based on prefix matching. P-Grid partitions

The Voyager 2 RSV code as a minimum. Concatenated codes are increasingly falling out of favor with space missions, and are replaced by more powerful codes such as Turbo codes or LDPC codes. The different kinds of deep space and orbital missions that are conducted suggest that trying to find a one-size-fits-all error correction system will be an ongoing problem. For missions close to Earth, the nature of


The accuracy of copying through the centuries demonstrated by discovery of the Dead Sea Scrolls in 1947–1956, dating from c. 150 BCE – 75 CE. The modern development of error correction codes is credited to Richard Hamming in 1947. A description of Hamming's code appeared in Claude Shannon's A Mathematical Theory of Communication and was quickly generalized by Marcel J. E. Golay. All error-detection and correction schemes add some redundancy (i.e., some extra data) to

The changes of the data are accidental or maliciously introduced. Digital signatures are perhaps most notable for being part of the HTTPS protocol for securely browsing the web. Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if
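
A small illustration of the d − 1 bound, using the three-fold repetition code (minimum distance 3) as the code book:

    def hamming(a: str, b: str) -> int:
        return sum(x != y for x, y in zip(a, b))

    codebook = {"000", "111"}           # minimum distance 3
    received = "010"                    # one flipped bit
    assert received not in codebook     # error detected: not a valid code word
    # Up to d - 1 = 2 flips can never turn one code word into another,
    # since the two code words differ in all three positions.
    assert hamming("000", "111") == 3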

The choice of routing table entries. Each peer, for each level of the trie, autonomously maintains routing entries chosen randomly from the complementary sub-trees. In fact, multiple entries are maintained for each level at each peer to provide fault tolerance (as well as, potentially, for query-load management). For diverse reasons including fault tolerance and load balancing, multiple peers are responsible for each leaf node in

The delivered message by recomputing the tag and comparing it with the one provided. There exists a vast variety of different hash function designs. However, some are of particularly widespread use because of either their simplicity or their suitability for detecting certain kinds of errors (e.g., the cyclic redundancy check's performance in detecting burst errors). A random-error-correcting code based on minimum distance coding can provide

The four-bit block can be repeated three times, thus producing 1011 1011 1011. If this twelve-bit pattern was received as 1010 1011 1011 – where the first block is unlike the other two – an error has occurred. A repetition code is very inefficient and can be susceptible to problems if the error occurs in exactly the same place for each group (e.g., 1010 1010 1010 in the previous example would be detected as correct). The advantage of repetition codes
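
Beyond detection, majority voting per bit position also corrects the example above, since two of the three copies are intact (a minimal sketch):

    def decode(blocks):
        # For each bit position, take the value seen in most copies.
        return tuple(max(set(col), key=col.count) for col in zip(*blocks))

    received = [(1, 0, 1, 0),           # first copy corrupted in the last bit
                (1, 0, 1, 1),
                (1, 0, 1, 1)]
    assert decode(received) == (1, 0, 1, 1)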

The key-space at a granularity adaptive to the load at that part of the key-space. Consequently, it is possible to realize a P-Grid overlay network where each peer carries a similar storage load even for non-uniform load distributions. Such a network provably provides key searches as efficiently as traditional distributed hash tables (DHTs) do. Note that, in contrast to P-Grid, DHTs work efficiently only for uniform load distributions. Hence we can use

The limited power availability aboard space probes. Whereas early missions sent their data uncoded, starting in 1968, digital error correction was implemented in the form of (sub-optimally decoded) convolutional codes and Reed–Muller codes. The Reed–Muller code was well suited to the noise the spacecraft was subject to (approximately matching a bell curve), and was implemented for the Mariner spacecraft and used on missions between 1969 and 1977. The Voyager 1 and Voyager 2 missions, which started in 1977, were designed to deliver color imaging and scientific information from Jupiter and Saturn. This resulted in increased coding requirements, and thus,

The one given) that will yield the same hash value. If an attacker can change not only the message but also the hash value, then a keyed hash or message authentication code (MAC) can be used for additional security. Without knowing the key, it is not possible for the attacker to easily or conveniently calculate the correct keyed hash value for a modified message. Digital signatures can provide strong assurances about data integrity, whether
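
A short sketch with Python's hmac module (the key and messages are made up): without the shared key, an attacker cannot produce a tag that matches a tampered message.

    import hmac, hashlib

    key = b"shared-secret"              # illustrative key
    tag = hmac.new(key, b"pay $10", hashlib.sha256).digest()

    forged = hmac.new(key, b"pay $99", hashlib.sha256).digest()
    assert not hmac.compare_digest(tag, forged)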

The original file when parts of that file are damaged or unavailable. Others try again to download that file from a different mirror.

Error detection and correction

In information theory and coding theory, with applications in computer science and telecommunications, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from

The parity bit appear correct even though the data is erroneous. Parity bits added to each word sent are called transverse redundancy checks, while those added at the end of a stream of words are called longitudinal redundancy checks. For example, if each of a series of m-bit words has a parity bit added, showing whether there was an odd or even number of ones in that word, any word with


The possibility of uncorrectable errors with FEC. Reliability and inspection engineering also make use of the theory of error-correcting codes. In a typical TCP/IP stack, error control is performed at multiple levels: CRCs at the link layer, checksums in the IP and TCP/UDP headers, and integrity checks in applications. The development of error-correction codes was tightly coupled with the history of deep-space missions due to the extreme dilution of signal power over interplanetary distances, and

The probability of error on a discrete memoryless channel can be made arbitrarily small, provided that the code rate is smaller than the channel capacity. The code rate is defined as the fraction k/n of k source symbols and n encoded symbols. The actual maximum code rate allowed depends on the error-correcting code used, and may be lower. This is because Shannon's proof was only of existential nature, and did not show how to construct codes that are both optimal and have efficient encoding and decoding algorithms. Hybrid ARQ

The receiver does not have to ask the sender for retransmission of the data, a backchannel is not required in forward error correction. Error-correcting codes are used in lower-layer communication such as cellular network, high-speed fiber-optic communication and Wi-Fi, as well as for reliable storage in media such as flash memory, hard disk and RAM. Error-correcting codes are usually distinguished between convolutional codes and block codes. Shannon's theorem

The source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases. Error detection is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver. Error correction is the detection of errors and reconstruction of the original, error-free data. In classical antiquity, copyists of

The spacecraft were supported by (optimally Viterbi-decoded) convolutional codes that could be concatenated with an outer Golay (24,12,8) code. The Voyager 2 craft additionally supported an implementation of a Reed–Solomon code. The concatenated Reed–Solomon–Viterbi (RSV) code allowed for very powerful error correction, and enabled the spacecraft's extended journey to Uranus and Neptune. After ECC system upgrades in 1989, both crafts used V2 RSV coding. The Consultative Committee for Space Data Systems currently recommends usage of error correction codes with performance similar to

The spare sectors. RAID systems use a variety of error correction techniques to recover data when a hard drive completely fails. Filesystems such as ZFS or Btrfs, as well as some RAID implementations, support data scrubbing and resilvering, which allows bad blocks to be detected and (hopefully) recovered before they are used. The recovered data may be re-written to exactly the same physical location, to spare blocks elsewhere on

The time an ARQ system discovers an error and re-transmits it, the re-sent data will arrive too late to be usable. Applications where the transmitter immediately forgets the information as soon as it is sent (such as most television cameras) cannot use ARQ; they must use FEC because when an error occurs, the original data is no longer available. Applications that use ARQ must have a return channel; applications having no return channel cannot use ARQ. Applications that require extremely low error rates (such as digital money transfers) must use ARQ due to
