
DVB-C

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

Digital Video Broadcasting - Cable (DVB-C) is the DVB European consortium standard for the broadcast transmission of digital television over cable. This system transmits an MPEG-2 or MPEG-4 family digital audio/digital video stream using QAM modulation with channel coding. The standard was first published by ETSI in 1994 and subsequently became the most widely used transmission system for digital cable television in Europe, Asia and South America. It is deployed worldwide, in systems ranging from larger cable television networks (CATV) down to smaller satellite master antenna TV (SMATV) systems.
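
DVB-C systems use square QAM constellations (commonly 16-QAM up to 256-QAM). As a minimal sketch of how a QAM mapper turns bits into complex symbols, the following builds a generic, unit-power square constellation with a plain binary labeling; it is illustrative only and is not the exact bit-to-symbol mapping defined by the DVB-C specification.

```python
import numpy as np

def square_qam_constellation(m):
    """Return a unit-average-power square m-QAM constellation (m must be a power of 4)."""
    k = int(np.sqrt(m))                        # points per axis, e.g. 8 for 64-QAM
    levels = np.arange(-(k - 1), k, 2)         # odd integer levels: -7, -5, ..., +7
    points = np.array([complex(i, q) for q in levels for i in levels])
    return points / np.sqrt(np.mean(np.abs(points) ** 2))   # normalize average power to 1

def map_bits(bits, m):
    """Map a 0/1 array onto m-QAM symbols using natural binary labeling (not DVB-C's mapping)."""
    k = int(np.log2(m))                        # bits per symbol, e.g. 6 for 64-QAM
    const = square_qam_constellation(m)
    idx = bits.reshape(-1, k).dot(1 << np.arange(k)[::-1])  # group k bits into a symbol index
    return const[idx]

bits = np.random.randint(0, 2, 6000)
symbols = map_bits(bits, 64)                   # 64-QAM carries log2(64) = 6 bits per symbol
print(len(symbols), "symbols from", len(bits), "bits")
```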


With reference to the figure, a short description of each processing block follows. The receiving STB adopts techniques that are dual to those used in transmission. On February 18, 2008 it was announced that a new standard, DVB-C2, would be developed during 2008, and a "Call for Technologies" was issued. Proposals, including simulation programs and information on patent rights, could be submitted until June 16, 2008. "The results of

A jointly typical set by the following: We say that two sequences $X_1^n$ and $Y_1^n$ are jointly typical if they lie in the jointly typical set defined above. Steps: The probability of error of this scheme is divided into two parts. Define $E_i = \{(X_1^n(i), Y_1^n) \in A_\varepsilon^{(n)}\}$, $i = 1, 2, \dots, 2^{nR}$, as
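
For reference, a standard form of the jointly typical set $A_\varepsilon^{(n)}$ with respect to a joint distribution $p(x,y)$ (following Cover and Thomas) is:

$$A_\varepsilon^{(n)} = \left\{ (x^n, y^n) : \left|-\tfrac{1}{n}\log p(x^n) - H(X)\right| < \varepsilon,\ \left|-\tfrac{1}{n}\log p(y^n) - H(Y)\right| < \varepsilon,\ \left|-\tfrac{1}{n}\log p(x^n,y^n) - H(X,Y)\right| < \varepsilon \right\}.$$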

A best 2 out of 3 voting scheme if the copies differ" are inefficient error-correction methods, unable to asymptotically guarantee that a block of data can be communicated free of error. Advanced techniques such as Reed–Solomon codes and, more recently, low-density parity-check (LDPC) codes and turbo codes come much closer to reaching the theoretical Shannon limit, but at a cost of high computational complexity. Using these highly efficient codes and with
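
As a rough illustration of why such repetition schemes are inefficient, the sketch below simulates "send each bit three times and take a majority vote" over a binary symmetric channel with an assumed crossover probability; the bit error rate improves, but only by spending three channel uses per information bit, far from what capacity-approaching codes achieve.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.1                       # assumed crossover probability of the binary symmetric channel
n_bits = 100_000

bits = rng.integers(0, 2, n_bits)

tx = np.repeat(bits, 3)                          # repetition-3 encoding: send every bit three times
rx = tx ^ (rng.random(tx.size) < p)              # BSC: each symbol flips independently with probability p
decoded = (rx.reshape(-1, 3).sum(axis=1) >= 2).astype(int)   # majority vote over each group of three

uncoded_ber = np.mean(rng.random(n_bits) < p)    # uncoded reference: each bit is in error with probability p
coded_ber = np.mean(decoded != bits)

print(f"uncoded BER ~ {uncoded_ber:.4f}, repetition-3 BER ~ {coded_ber:.4f} (at code rate 1/3)")
```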

A capacity. But such an errorless channel is an idealization, and if $M$ is chosen small enough to make the noisy channel nearly errorless, the result is necessarily less than the Shannon capacity of the noisy channel of bandwidth $B$, which is the Hartley–Shannon result that followed later. Claude Shannon's development of information theory during World War II provided

A channel at rates beyond the channel capacity. The theorem does not address the rare situation in which rate and capacity are equal. The channel capacity $C$ can be calculated from the physical properties of a channel; for a band-limited channel with Gaussian noise, using the Shannon–Hartley theorem. Simple schemes such as "send the message 3 times and use

A channel of bandwidth $B$ hertz was $2B$ pulses per second, to arrive at his quantitative measure for achievable line rate. Hartley's law is sometimes quoted as just a proportionality between the analog bandwidth, $B$, in hertz and what today is called the digital bandwidth, $R$, in bit/s. Other times it

A codebook whose performance is better than the average, and so satisfies our need for arbitrarily low error probability communicating across the noisy channel. Suppose a code of $2^{nR}$ codewords. Let W be drawn uniformly over this set as an index. Let $X^n$ and $Y^n$ be

A coding technique which allows the probability of error at the receiver to be made arbitrarily small. This means that theoretically, it is possible to transmit information nearly without error up to nearly a limit of $C$ bits per second. The converse is also important. If $R > C$, the probability of error at the receiver increases without bound as the rate is increased, so no useful information can be transmitted beyond

A communication channel refers to the maximum rate of error-free data that can theoretically be transferred over the channel if the link is subject to random data transmission errors, for a particular noise level. It was first described by Shannon (1948), and shortly after published in a book by Shannon and Warren Weaver entitled The Mathematical Theory of Communication (1949). This founded

A communication link, a bound on the maximum amount of error-free information per time unit that can be transmitted with a specified bandwidth in the presence of noise interference, assuming that the signal power is bounded, and that the Gaussian noise process is characterized by a known power or power spectral density. The law is named after Claude Shannon and Ralph Hartley. The Shannon–Hartley theorem states

A known variance. Since the variance of a Gaussian process is equivalent to its power, it is conventional to call this variance the noise power. Such a channel is called the Additive White Gaussian Noise channel, because Gaussian noise is added to the signal; "white" means equal amounts of noise at all frequencies within the channel bandwidth. Such noise can arise both from random sources of energy and also from coding and measurement error at
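
A minimal sketch of such an additive white Gaussian noise channel, assuming a unit-power complex baseband signal and an SNR value chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def awgn(signal, snr_db):
    """Add complex white Gaussian noise so that signal power / noise power equals the requested SNR."""
    signal_power = np.mean(np.abs(signal) ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(signal.shape)
                                        + 1j * rng.standard_normal(signal.shape))
    return signal + noise

# Example: unit-power QPSK symbols through a 20 dB channel (illustrative values).
symbols = (rng.integers(0, 2, 1000) * 2 - 1 + 1j * (rng.integers(0, 2, 1000) * 2 - 1)) / np.sqrt(2)
received = awgn(symbols, snr_db=20)
measured = 10 * np.log10(np.mean(np.abs(symbols) ** 2) / np.mean(np.abs(received - symbols) ** 2))
print(f"measured SNR ~ {measured:.1f} dB")
```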


A noise process consisting of adding a random wave whose amplitude is 1 or −1 at any point in time, and a channel that adds such a wave to the source signal. Such a wave's frequency components are highly dependent. Though such a noise may have a high power, it is fairly easy to transmit a continuous signal with much less power than one would need if the underlying noise was a sum of independent noises in each frequency band. For large or small and constant signal-to-noise ratios,

A noisy channel by using encoding and decoding functions. An encoder maps W into a pre-defined sequence of channel symbols of length n. In its most basic model, the channel distorts each of these symbols independently of the others. The output of the channel, the received sequence, is fed into a decoder which maps the sequence into an estimate of the message. In this setting, the probability of error
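
A toy-sized sketch of this model, with all parameters chosen only for illustration: a message index is mapped to a codeword from a randomly drawn codebook (in the spirit of the random-coding argument discussed below), sent through a binary symmetric channel, and decoded by picking the codeword nearest in Hamming distance (a simple stand-in for joint-typicality decoding).

```python
import numpy as np

rng = np.random.default_rng(2)

n = 32                    # block length (illustrative)
R = 0.25                  # code rate in bits per channel use (illustrative)
p = 0.05                  # assumed crossover probability of the binary symmetric channel
M = 2 ** int(n * R)       # number of messages, 2^(nR)

codebook = rng.integers(0, 2, size=(M, n))   # random encoder: one length-n codeword per message

def channel(x):
    """Binary symmetric channel: flip each symbol independently with probability p."""
    return x ^ (rng.random(x.shape) < p)

def decode(y):
    """Minimum-Hamming-distance decoding over the shared codebook."""
    return int(np.argmin(np.sum(codebook != y, axis=1)))

trials = 2000
errors = sum(decode(channel(codebook[w])) != w
             for w in rng.integers(0, M, trials))   # message W drawn uniformly each trial
print(f"estimated probability of error ~ {errors / trials:.3f} at rate R = {R}")
```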

A noisy channel, and matching serves to show that these bounds are tight. The following outlines are only one set of many different styles available for study in information theory texts. This particular proof of achievability follows the style of proofs that make use of the asymptotic equipartition property (AEP). Another style can be found in information theory texts using error exponents. Both types of proofs make use of

A random coding argument where the codebook used across a channel is randomly constructed; this serves to make the analysis simpler while still proving the existence of a code satisfying a desired low probability of error at any data rate below the channel capacity. By an AEP-related argument, given a channel, length $n$ strings of source symbols $X_1^n$, and length $n$ strings of channel outputs $Y_1^n$, we can define

A signal-to-noise ratio, but achieving reliability through error-correction coding rather than through reliably distinguishable pulse levels. If there were such a thing as a noise-free analog channel, one could transmit unlimited amounts of error-free data over it per unit of time (note that an infinite-bandwidth analog channel could not transmit unlimited amounts of error-free data absent infinite signal power). Real channels, however, are subject to limitations imposed by both finite bandwidth and nonzero noise. Bandwidth and noise affect

A very conservative value of $M$ to achieve a low error rate. The concept of an error-free capacity awaited Claude Shannon, who built on Hartley's observations about a logarithmic measure of information and Nyquist's observations about the effect of bandwidth limitations. Hartley's rate result can be viewed as the capacity of an errorless M-ary channel of $2B$ symbols per second. Some authors refer to it as

A way to quantify information and its line rate (also known as data signalling rate, R, in bits per second). This method, later known as Hartley's law, became an important precursor for Shannon's more sophisticated notion of channel capacity. Hartley argued that the maximum number of distinguishable pulse levels that can be transmitted and received reliably over a communications channel is limited by

Is a limit to the amount of information that can be transferred by a signal of a bounded power, even when sophisticated multi-level encoding techniques are used. In the channel considered by the Shannon–Hartley theorem, noise and signal are combined by addition. That is, the receiver measures a signal that is equal to the sum of the signal encoding the desired information and a continuous random variable that represents

Is attained at the capacity-achieving distributions for each respective channel. That is, $C = \liminf \frac{1}{n}\sum_{i=1}^{n} C_i$, where $C_i$ is the capacity of the i-th channel. The proof runs through in almost

Is bounded away from 0 if R is greater than C; we can get arbitrarily low rates of error only if R is less than C. A strong converse theorem, proven by Wolfowitz in 1957, states that, for some finite positive constant $A$. While the weak converse states that the error probability is bounded away from zero as $n$ goes to infinity,


Is defined as: Theorem (Shannon, 1948): (MacKay (2003), p. 162; cf. Gallager (1968), ch. 5; Cover and Thomas (1991), p. 198; Shannon (1948) thm. 11) As with several other major results in information theory, the proof of the noisy channel coding theorem includes an achievability result and a matching converse result. These two components serve to bound, in this case, the set of possible rates at which one can communicate over

Is essentially as good as the best possible code; the theorem is proved through the statistics of such random codes. Shannon's theorem shows how to compute a channel capacity from a statistical description of a channel, and establishes that given a noisy channel with capacity $C$ and information transmitted at a line rate $R$, then if $R < C$ there exists

Is possible to transmit information nearly without error at any rate below a limiting rate, $C$. The converse is also important. If $R > C$, an arbitrarily small probability of error is not achievable. All codes will have a probability of error greater than a certain positive minimal level, and this level increases as the rate increases. So, information cannot be guaranteed to be transmitted reliably across

Is quoted in this more quantitative form, as an achievable line rate of $R$ bits per second: $R \leq 2B \log_2(M)$. Hartley did not work out exactly how the number $M$ should depend on the noise statistics of the channel, or how the communication could be made reliable even when individual symbol pulses could not be reliably distinguished to $M$ levels; with Gaussian noise statistics, system designers had to choose

Is the bandwidth (in hertz). The quantity $2B$ later came to be called the Nyquist rate, and transmitting at the limiting pulse rate of $2B$ pulses per second as signalling at the Nyquist rate. Nyquist published his results in 1928 as part of his paper "Certain Topics in Telegraph Transmission Theory". During 1928, Hartley formulated

The Shannon–Hartley theorem tells the maximum rate at which information can be transmitted over a communications channel of a specified bandwidth in the presence of noise. It is an application of the noisy-channel coding theorem to the archetypal case of a continuous-time analog communications channel subject to Gaussian noise. The theorem establishes Shannon's channel capacity for such

The channel capacity $C$, meaning the theoretical tightest upper bound on the information rate of data that can be communicated at an arbitrarily low error rate using an average received signal power $S$ through an analog communication channel subject to additive white Gaussian noise (AWGN) of power $N$: $C = B \log_2\!\left(1 + \frac{S}{N}\right)$, where $B$ is the bandwidth of the channel in hertz and $S/N$ is the signal-to-noise ratio. During
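
A small worked example of this formula; the 8 MHz bandwidth matches a typical European cable channel raster, while the SNR values are assumed purely for illustration:

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley capacity C = B * log2(1 + S/N), in bits per second."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

for snr_db in (25, 30, 35):                     # assumed SNR operating points
    c = shannon_capacity(8e6, snr_db)           # 8 MHz channel
    print(f"B = 8 MHz, SNR = {snr_db} dB  ->  C ~ {c / 1e6:.1f} Mbit/s")
```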

The noisy-channel coding theorem (sometimes Shannon's theorem or Shannon's limit) establishes that for any given degree of noise contamination of a communication channel, it is possible (in theory) to communicate discrete data (digital information) nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley. The Shannon limit or Shannon capacity of

The DVB-C2 Study Mission already provided clear indications that technologies are available allowing the performance of the second generation DVB cable transmission system to get so close to the theoretical Shannon Limit that any further improvements in the future would most likely not be able to justify the introduction of a disruptive third generation of cable transmission system." (DVB-C2 CfT) By using state of

The additive noise is not white (or that the $S/N$ is not constant with frequency over the bandwidth) is obtained by treating the channel as many narrow, independent Gaussian channels in parallel: $C = \int_0^B \log_2\!\left(1 + \frac{S(f)}{N(f)}\right) df$, where $S(f)$ and $N(f)$ are the signal and noise power spectra. Note: the theorem only applies to Gaussian stationary process noise. This formula's way of introducing frequency-dependent noise cannot describe all continuous-time noise processes. For example, consider


The art coding and modulation techniques, DVB-C2 should offer greater than 30% higher spectrum efficiency under the same conditions, and the gains in downstream channel capacity will be greater than 60% for optimized HFC networks. The final DVB-C2 specification was approved by the DVB Steering Board in April 2009. DVB-C2 allows bit rates up to 83.1 Mbit/s on an 8 MHz channel bandwidth when using 4096-QAM modulation; future extensions will allow up to 97 Mbit/s and 110.8 Mbit/s per channel using 16384-QAM and 65536-QAM modulation. Modes and features of DVB-C2 in comparison to DVB-C:

Shannon Limit

In information theory,

The capacity formula can be approximated: When the SNR is large ($S/N \gg 1$), the logarithm is approximated by $\log_2\!\left(1 + \frac{S}{N}\right) \approx \log_2\frac{S}{N}$, in which case the capacity is logarithmic in power and approximately linear in bandwidth (not quite linear, since $N$ increases with bandwidth, imparting a logarithmic effect). This is called the bandwidth-limited regime. Similarly, when the SNR is small ($S/N \ll 1$), applying
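
For reference, the standard textbook approximations in the two regimes (the low-SNR case follows from $\ln(1+x)\approx x$) are:

$$C \approx B \log_2\frac{S}{N} \quad \left(\tfrac{S}{N}\gg 1,\ \text{bandwidth-limited}\right), \qquad C \approx \frac{S}{N}\, B \log_2 e \approx 1.44\,\frac{S}{N}\, B \quad \left(\tfrac{S}{N}\ll 1,\ \text{power-limited}\right).$$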

The channel capacity. The theorem does not address the rare situation in which rate and capacity are equal. The Shannon–Hartley theorem establishes what that channel capacity is for a finite-bandwidth continuous-time channel subject to Gaussian noise. It connects Hartley's result with Shannon's channel capacity theorem in a form that is equivalent to specifying the $M$ in Hartley's line rate formula in terms of

The computing power in today's digital signal processors, it is now possible to reach very close to the Shannon limit. In fact, it was shown that LDPC codes can reach within 0.0045 dB of the Shannon limit (for binary additive white Gaussian noise (AWGN) channels, with very long block lengths). The basic mathematical model for a communication system is the following: A message W is transmitted through

The dynamic range of the signal amplitude and the precision with which the receiver can distinguish amplitude levels. Specifically, if the amplitude of the transmitted signal is restricted to the range of $[-A \ldots +A]$ volts, and the precision of the receiver is $\pm\Delta V$ volts, then the maximum number of distinct pulses $M$ is given by $M = 1 + \frac{A}{\Delta V}$. By taking information per pulse in bit/pulse to be the base-2 logarithm of

The event that message i is jointly typical with the sequence received when message 1 is sent. We can observe that as $n$ goes to infinity, if $R < I(X;Y)$ for the channel, the probability of error will go to 0. Finally, given that the average codebook is shown to be "good," we know that there exists
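
That observation follows from the standard joint-AEP bounds (a textbook argument, restated here for reference): for $i \neq 1$ the codeword $X_1^n(i)$ is independent of $Y_1^n$, so $\Pr(E_i) \leq 2^{-n(I(X;Y)-3\varepsilon)}$, while $\Pr(E_1^c) \to 0$ by the joint AEP. The union bound then gives

$$\Pr(\text{error}) \leq \Pr(E_1^c) + \sum_{i=2}^{2^{nR}} \Pr(E_i) \leq \Pr(E_1^c) + 2^{nR}\, 2^{-n(I(X;Y)-3\varepsilon)} \longrightarrow 0 \quad \text{whenever } R < I(X;Y) - 3\varepsilon.$$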

The ideas of Nyquist and Hartley, and then formulated a complete theory of information and its transmission. In 1927, Nyquist determined that the number of independent pulses that could be put through a telegraph channel per unit time is limited to twice the bandwidth of the channel. In symbolic notation, $f_p \leq 2B$, where $f_p$ is the pulse frequency (in pulses per second) and $B$

The late 1920s, Harry Nyquist and Ralph Hartley developed a handful of fundamental ideas related to the transmission of information, particularly in the context of the telegraph as a communications system. At the time, these concepts were powerful breakthroughs individually, but they were not part of a comprehensive theory. In the 1940s, Claude Shannon developed the concept of channel capacity, based in part on

The modern discipline of information theory. Stated by Claude Shannon in 1948, the theorem describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. Shannon's theorem has wide-ranging applications in both communications and data storage. This theorem is of foundational importance to the modern field of information theory. Shannon only gave an outline of

The net data rate that can be approached with coding is equivalent to using that $M$ in Hartley's law. In the simple version above, the signal and noise are fully uncorrelated, in which case $S + N$ is the total power of the received signal and noise together. A generalization of the above equation for the case where


The next big step in understanding how much information could be reliably communicated through noisy channels. Building on Hartley's foundation, Shannon's noisy channel coding theorem (1948) describes the maximum possible efficiency of error-correcting methods versus levels of noise interference and data corruption. The proof of the theorem shows that a randomly constructed error-correcting code

The noise. This addition creates uncertainty as to the original signal's value. If the receiver has some information about the random process that generates the noise, one can in principle recover the information in the original signal by considering all possible states of the noise process. In the case of the Shannon–Hartley theorem, the noise is assumed to be generated by a Gaussian process with

The number of distinct messages $M$ that could be sent, Hartley constructed a measure of the line rate $R$ as $R = f_p \log_2(M)$, where $f_p$ is the pulse rate, also known as the symbol rate, in symbols/second or baud. Hartley then combined the above quantification with Nyquist's observation that the number of independent pulses that could be put through
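
Combining this with Nyquist's limit $f_p \leq 2B$ yields the quantitative form of Hartley's law quoted elsewhere in the article (a standard restatement):

$$R = f_p \log_2(M) \;\leq\; 2B \log_2(M).$$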

The power ratio back to a voltage ratio, so the number of levels is approximately proportional to the ratio of signal RMS amplitude to noise standard deviation. This similarity in form between Shannon's capacity and Hartley's law should not be interpreted to mean that $M$ pulse levels can be literally sent without any confusion. More levels are needed to allow for redundant coding and error correction, but

The proof. The first rigorous proof for the discrete case is given in (Feinstein 1954). The Shannon theorem states that given a noisy channel with channel capacity $C$ and information transmitted at a rate $R$, then if $R < C$ there exist codes that allow the probability of error at the receiver to be made arbitrarily small. This means that, theoretically, it

The rate at which information can be transmitted over an analog channel. Bandwidth limitations alone do not impose a cap on the maximum information rate because it is still possible for the signal to take on an indefinitely large number of different voltage levels on each symbol pulse, with each slightly different level being assigned a different meaning or bit sequence. Taking into account both noise and bandwidth limitations, however, there

The same way as that of the channel coding theorem. Achievability follows from random coding with each symbol chosen randomly from the capacity-achieving distribution for that particular channel. Typicality arguments use the definition of typical sets for non-stationary sources defined in the asymptotic equipartition property article. The technicality of lim inf comes into play when $\frac{1}{n}\sum_{i=1}^{n} C_i$ does not converge.

Shannon–Hartley theorem

In information theory,

The sender and receiver respectively. Since sums of independent Gaussian random variables are themselves Gaussian random variables, this conveniently simplifies analysis, if one assumes that such error sources are also Gaussian and independent. Comparing the channel capacity to the information rate from Hartley's law, we can find the effective number of distinguishable levels $M$: $M = \sqrt{1 + \frac{S}{N}}$. The square root effectively converts

The strong converse states that the error goes to 1. Thus, $C$ is a sharp threshold between perfectly reliable and completely unreliable communication. We assume that the channel is memoryless, but its transition probabilities change with time, in a fashion known at the transmitter as well as the receiver. Then the channel capacity is given by $C = \liminf \max_{p_{X_1}, p_{X_2}, \ldots} \frac{1}{n}\sum_{i=1}^{n} I(X_i; Y_i)$. The maximum

The transmitted codewords and received codewords, respectively. The result of these steps is that $P_e^{(n)} \geq 1 - \frac{1}{nR} - \frac{C}{R}$. As the block length $n$ goes to infinity, we obtain $P_e^{(n)}$
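
For context, the steps referred to are the usual Fano-inequality argument (supplied here from the standard textbook treatment, not recovered from the article): with $W$ uniform over $2^{nR}$ messages,

$$nR = H(W) \leq 1 + P_e^{(n)} nR + I(X^n; Y^n) \leq 1 + P_e^{(n)} nR + nC,$$

which rearranges to the bound above, so that as $n \to \infty$ the error probability stays bounded away from zero whenever $R > C$.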
