Misplaced Pages

VP3

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.

On2 TrueMotion VP3 is a royalty-free lossy video compression format and video codec. It is an incarnation of TrueMotion, a series of video codecs developed by On2 Technologies.


There is no formal specification for the VP3 bitstream format beyond the VP3 source code published by On2 Technologies. In 2003, Mike Melanson created an incomplete description of the VP3 bitstream format and decoding process at a higher level than source code, with some help from On2 and the Xiph.Org Foundation. VP3 was originally a proprietary and patented video codec. On2 TrueMotion VP3.1

A video codec. Some video coding formats are documented by a detailed technical specification document known as a video coding specification. Some such specifications are written and approved by standardization organizations as technical standards, and are thus known as a video coding standard. There are de facto standards and formal standards. Video content encoded using a particular video coding format

An H.264 encoder/decoder a codec shortly thereafter ("open-source our H.264 codec"). A video coding format does not dictate all algorithms used by a codec implementing the format. For example, a large part of how video compression typically works is by finding similarities between video frames (block-matching) and then achieving compression by copying previously-coded similar subimages (such as macroblocks) and adding small differences when necessary. Finding optimal combinations of such predictors and differences
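The block-matching idea described above can be illustrated with a minimal sketch: an exhaustive search for the macroblock in a reference frame that best matches a block of the current frame, scored by sum of absolute differences (SAD). All names and sizes here are illustrative, not taken from any real codec's source.

```python
import numpy as np

def best_match(ref, block, top, left, radius=4):
    """Exhaustive block-matching: find the offset (dy, dx) in `ref` whose
    window best matches `block` by sum of absolute differences (SAD)."""
    h, w = block.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# A synthetic "current" frame shifted by (1, 2) relative to the reference:
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, (1, 2), axis=(0, 1))
block = cur[8:16, 8:16]                 # one 8x8 "macroblock"
motion, sad = best_match(ref, block, 8, 8)
print(motion, sad)                      # (-1, -2) 0
```

An encoder would then transmit only the motion vector plus the (here zero) residual, instead of the raw block; real encoders use much faster search strategies than this brute-force loop.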

A fast DCT algorithm with C.H. Smith and S.C. Fralick in 1977, and founded Compression Labs to commercialize DCT technology. In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression. This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. Motion-compensated DCT later became

A given video coding format from/to uncompressed video are implementations of those specifications. As an analogy, the video coding format H.264 (specification) is to the codec OpenH264 (specific implementation) what the C programming language (specification) is to the compiler GCC (specific implementation). Note that for each specification (e.g., H.264), there can be many codecs implementing that specification (e.g., x264, OpenH264, H.264/MPEG-4 AVC products and implementations). This distinction

A lot more computing power than editing intraframe-compressed video with the same picture quality. However, this type of compression is not effective for audio formats. A video coding format can define optional restrictions to encoded video, called profiles and levels. It is possible to have a decoder which only supports decoding a subset of profiles and levels of a given video format, for example to make

A much more efficient form of compression for video coding. The CCITT received 14 proposals for DCT-based video compression formats, in contrast to a single proposal based on vector quantization (VQ) compression. The H.261 standard was developed based on motion-compensated DCT compression. H.261 was the first practical video coding standard, and uses patents licensed from a number of companies, including Hitachi, PictureTel, NTT, BT, and Toshiba, among others. Since H.261, motion-compensated DCT compression has been adopted by all

A number of companies, primarily Mitsubishi, Hitachi and Panasonic. The most widely used video coding format as of 2019 is H.264/MPEG-4 AVC. It was developed in 2003, and uses patents licensed from a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. In contrast to the standard DCT used by its predecessors, AVC uses the integer DCT. H.264 is one of

A patent lawsuit due to submarine patents. The motivation behind many recently designed video coding formats such as Theora, VP8, and VP9 has been to create a (libre) video coding standard covered only by royalty-free patents. Patent status has also been a major point of contention for the choice of which video formats the mainstream web browsers will support inside the HTML video tag. The current-generation video coding format

A plug-in for RealPlayer was announced in January 2002. Later, AOL licensed VP4 and created the Nullsoft Streaming Video format. The VP4 codec now sees only limited use, but is still used by AOL. Later incarnations of this codec are VP5, VP6, VP7, VP8, and VP9.

Bitstream format

A bitstream format is the format of the data found in a stream of bits used in a digital communication or data storage application. The term typically refers to

Is HEVC (H.265), introduced in 2013. AVC uses the integer DCT with 4x4 and 8x8 block sizes, and HEVC uses integer DCT and DST transforms with varied block sizes between 4x4 and 32x32. HEVC is heavily patented, mostly by Samsung Electronics, GE, NTT, and JVCKenwood. It is challenged by the AV1 format, which is intended to be royalty-free. As of 2019, AVC is by far the most commonly used format for


Is a content representation format of digital video content, such as in a data file or bitstream. It typically uses a standardized video compression algorithm, most commonly based on discrete cosine transform (DCT) coding and motion compensation. A specific software, firmware, or hardware implementation capable of compression or decompression in a specific video coding format is called

Is a form of lossless video used in some circumstances, such as when sending video to a display over an HDMI connection. Some high-end cameras can also capture video directly in this format. Interframe compression complicates editing of an encoded video sequence. One subclass of relatively simple video coding formats are the intra-frame video formats, such as DV, in which each frame of the video stream

Is an NP-hard problem, meaning that it is practically impossible to find an optimal solution. Though the video coding format must support such compression across frames in the bitstream format, by not needlessly mandating specific algorithms for finding such block-matches and other encoding steps, the codecs implementing the video coding specification have some freedom to optimize and innovate in their choice of algorithms. For example, section 0.5 of

Is compressed independently without referring to other frames in the stream, and no attempt is made to take advantage of correlations between successive pictures over time for better compression. One example is Motion JPEG, which is simply a sequence of individually JPEG-compressed images. This approach is quick and simple, at the expense of the encoded video being much larger than with a video coding format supporting inter-frame coding. Because interframe compression copies data from one frame to another, if
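The contrast between intra-frame and inter-frame coding can be sketched with a toy delta-coding scheme: the first frame is stored whole (like an I-frame) and later frames store only their difference from the previous frame (like P-frames). This is an illustrative simplification, not the coding used by any real format.

```python
import numpy as np

def encode(frames):
    """Store frame 0 whole; each later frame as a delta from its predecessor."""
    out, prev = [], None
    for f in frames:
        out.append(f.copy() if prev is None else f - prev)
        prev = f
    return out

def decode(chunks):
    """Rebuild frames by accumulating deltas; losing the first chunk
    (the 'I-frame') would make every following frame unrecoverable."""
    frames, prev = [], None
    for c in chunks:
        prev = c.copy() if prev is None else prev + c
        frames.append(prev)
    return frames

frames = [np.full((4, 4), v, dtype=np.int16) for v in (10, 12, 15)]
chunks = encode(frames)
assert all((d == o).all() for d, o in zip(decode(chunks), frames))
print(int(chunks[1].sum()))   # 32: sixteen cells of +2 – deltas stay small
```

The small deltas are what make inter-frame coding compact; the dependence on earlier frames is exactly why cutting a referenced frame out of the stream breaks reconstruction.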

Is normally bundled with an audio stream (encoded using an audio coding format) inside a multimedia container format such as AVI, MP4, FLV, RealMedia, or Matroska. As such, the user normally does not have an H.264 file, but instead has a video file, which is an MP4 container of H.264-encoded video, normally alongside AAC-encoded audio. Multimedia container formats can contain one of several different video coding formats; for example,

Is not consistently reflected terminologically in the literature. The H.264 specification calls H.261, H.262, H.263, and H.264 video coding standards and does not contain the word codec. The Alliance for Open Media clearly distinguishes between the AV1 video coding format and the accompanying codec they are developing, but calls the video coding format itself a video codec specification. The VP9 specification calls

Is that, with intraframe systems, each frame uses a similar amount of data. In most interframe systems, certain frames (such as I-frames in MPEG-2) are not allowed to copy data from other frames, so they require much more data than other frames nearby. It is possible to build a computer-based video editor that spots problems caused when I-frames are edited out while other frames need them. This has allowed newer formats like HDV to be used for editing. However, this process demands

The source code and open source license for the VP3.2 video compression algorithm at www.vp3.com. The VP3.2 Public License 0.1 granted the right to modify the source code only if the resulting larger work continued to support playback of VP3.2 data. In September 2001 it was donated to the public as open source, and On2 irrevocably disclaimed all rights to it, granting a royalty-free license for any patent claims it might have over

The temporal dimension. DCT coding is a lossy block compression transform coding technique that was first proposed by Nasir Ahmed, who initially intended it for image compression, while he was working at Kansas State University in 1972. It was then developed into a practical image compression algorithm by Ahmed with T. Natarajan and K. R. Rao at the University of Texas in 1973, and

The temporal dimension. In 1967, University of London researchers A.H. Robinson and C. Cherry proposed run-length encoding (RLE), a lossless compression scheme, to reduce the transmission bandwidth of analog television signals. The earliest digital video coding algorithms were either for uncompressed video or used lossless compression, both of which were inefficient and impractical for digital video coding. Digital video
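Run-length encoding, mentioned above, is simple enough to sketch in full: runs of identical symbols are replaced by (value, count) pairs, which is lossless and compact whenever the input has long runs.

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, run_length) pairs."""
    runs = []
    for x in data:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([x, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back to the original sequence."""
    return [v for v, n in runs for _ in range(n)]

line = "AAAABBBCCD"
runs = rle_encode(line)
print(runs)                           # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
assert "".join(rle_decode(runs)) == line
```

On data without long runs, RLE can expand rather than compress, which is one reason it was inadequate on its own for digital video.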


The DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for them, and found that the DCT is the most efficient due to its reduced complexity, capable of compressing image data down to 0.25 bits per pixel for a videotelephone scene with image quality comparable to a typical intra-frame coder requiring 2 bits per pixel. The DCT was applied to video encoding by Wen-Hsiung Chen, who developed
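Why the DCT compresses so well can be shown directly from its definition: for smooth image blocks, almost all of the signal energy lands in a few low-frequency coefficients, which can then be kept while the rest are quantized away. The sketch below builds the orthonormal 2-D DCT-II from scratch (real codecs use fast, often integer, variants) and measures that energy compaction on a smooth 8x8 block; NumPy is assumed.

```python
import numpy as np

def dct2(block):
    """Naive orthonormal 2-D DCT-II of a square block: C @ X @ C.T,
    where C[k, m] = sqrt(2/n) * cos(pi * (2m+1) * k / (2n)), row 0 rescaled."""
    n = block.shape[0]
    k = np.arange(n)
    c = np.sqrt(2 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1 / n)
    return c @ block @ c.T

# A smooth 8x8 block (a vertical ramp): energy concentrates at low frequencies.
x = np.outer(np.linspace(0, 1, 8), np.ones(8)) * 100
coeffs = dct2(x)
energy = coeffs ** 2
low = energy[:2, :2].sum() / energy.sum()   # share held by the 2x2 low corner
print(float(low) > 0.99)                    # True – strong energy compaction
```

Because the transform is orthonormal it preserves total energy, so the ratio above really measures how much of the block a coder could reproduce from just a handful of coefficients.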

The H.264 specification says that encoding algorithms are not part of the specification. Free choice of algorithm also allows different space–time complexity trade-offs for the same video coding format, so a live feed can use a fast but space-inefficient algorithm, while a one-time DVD encoding for later mass production can trade long encoding time for space-efficient encoding. The concept of analog video compression dates back to 1929, when R.D. Kell in Britain proposed

The MP4 container format can contain video coding formats such as MPEG-2 Part 2 or H.264. Another example is the initial specification for the file type WebM, which specifies the container format (Matroska), but also exactly which video (VP8) and audio (Vorbis) compression format is inside the Matroska container, even though Matroska is capable of containing VP9 video, and Opus audio support

The bandwidth available in the 2000s. Practical video compression emerged with the development of motion-compensated DCT (MC DCT) coding, also called block motion compensation (BMC) or DCT motion compensation. This is a hybrid coding algorithm, which combines two key data compression techniques: discrete cosine transform (DCT) coding in the spatial dimension, and predictive motion compensation in

The bitstream format, VP4 can't be seen as an individual codec. On July 19, 2001, On2 announced an agreement with RealNetworks to license its VP4 video compression technology for set-top boxes and other devices. On2 made RealPlayer the exclusive media player for the VP4 codec, and the RealSystem iQ architecture became the only streaming media platform capable of delivering the VP4 codec. The first beta version of

The concept of transmitting only the portions of the scene that changed from frame to frame. The concept of digital video compression dates back to 1952, when Bell Labs researchers B.M. Oliver and C.W. Harrison proposed the use of differential pulse-code modulation (DPCM) in video coding. In 1959, the concept of inter-frame motion compensation was proposed by NHK researchers Y. Taki, M. Hatori and S. Tanaka, who proposed predictive inter-frame video coding in
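DPCM, mentioned above, is easy to sketch: instead of transmitting each sample, transmit its difference from a prediction (here simply the previous sample). For slowly varying signals like video scanlines, the residuals are small and cheap to code. This is a minimal illustrative version, not Oliver and Harrison's actual scheme.

```python
def dpcm_encode(samples):
    """DPCM: send the difference between each sample and the prediction
    (here, the previous sample; the first residual is the sample itself)."""
    residuals, pred = [], 0
    for s in samples:
        residuals.append(s - pred)
        pred = s
    return residuals

def dpcm_decode(residuals):
    """Invert DPCM by accumulating residuals onto the running prediction."""
    out, pred = [], 0
    for r in residuals:
        pred = pred + r
        out.append(pred)
    return out

signal = [100, 102, 104, 103, 101]    # slowly varying, like a video scanline
res = dpcm_encode(signal)
print(res)                            # [100, 2, 2, -1, -2] – small residuals
assert dpcm_decode(res) == signal
```

Quantizing the residuals (rather than the raw samples) is what turns this into the lossy DPCM used in early video coders.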

The data format of the output of an encoder, or the data format of the input to a decoder, when using data compression. Standardized interoperability specifications such as the video coding standards produced by MPEG and the ITU-T, and the audio coding standards produced by MPEG, often specify only the bitstream format and the decoding process. This allows encoder implementations to use any methods whatsoever that produce bitstreams which conform to

The decoder program/hardware smaller, simpler, or faster. A profile restricts which encoding techniques are allowed. For example, the H.264 format includes the profiles baseline, main and high (and others). While P-slices (which can be predicted based on preceding slices) are supported in all profiles, B-slices (which can be predicted based on both preceding and following slices) are supported in
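The profile idea above can be sketched as a capability check: a decoder built for one profile can play a stream only if every coding tool the stream uses belongs to that profile. The table below is a simplified, illustrative approximation in the spirit of H.264's profiles; the authoritative tool lists are in the specification itself.

```python
# Illustrative only: a simplified capability table loosely modeled on
# H.264 profiles (real profile definitions are far more detailed).
PROFILE_TOOLS = {
    "baseline": {"I-slices", "P-slices"},
    "main":     {"I-slices", "P-slices", "B-slices", "CABAC"},
    "high":     {"I-slices", "P-slices", "B-slices", "CABAC", "8x8-transform"},
}

def decoder_can_play(decoder_profile, stream_tools):
    """True if every coding tool used by the stream is allowed in the
    profile the decoder implements (set containment)."""
    return stream_tools <= PROFILE_TOOLS[decoder_profile]

print(decoder_can_play("baseline", {"I-slices", "P-slices"}))   # True
print(decoder_can_play("baseline", {"P-slices", "B-slices"}))   # False
```

This is why a lightweight baseline-only decoder can be much smaller than a full one: it simply never has to implement the tools outside its profile.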

The major video coding standards (including the H.26x and MPEG formats) that followed. MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which was developed with patents licensed from a number of companies, primarily Sony, Thomson and Mitsubishi Electric. MPEG-2 became

The original frame is simply cut out (or lost in transmission), the following frames cannot be reconstructed properly. Making cuts in intraframe-compressed video while video editing is almost as easy as editing uncompressed video: one finds the beginning and end of each frame, and simply copies bit-for-bit each frame that one wants to keep, discarding the frames one does not want. Another difference between intraframe and interframe compression


The recording, compression, and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers. Consumer video is generally compressed using lossy video codecs, since that results in significantly smaller files than lossless compression. Some video coding formats are designed explicitly for either lossy or lossless compression, and some video coding formats such as Dirac and H.264 support both. Uncompressed video formats, such as Clean HDMI,

The software and any derivatives, allowing anyone to use any VP3-derived codec for any purpose. In March 2002, On2 changed the license required to download the VP3 source code to the LGPL. In June 2002, On2 donated VP3 to the Xiph.Org Foundation under a BSD-like open source license to make VP3 the basis of a new, free (i.e. patent- and royalty-free) video codec, Theora. The free video codec Theora

The specified bitstream format. Normally, decoding of a bitstream can be initiated without having to start from the beginning of a file, or the beginning of the data transmission. Some bitstreams are designed for this to occur, for example by using indexes or key frames. Uses of bit stream decoders (BSD):

Video coding

A video coding format (or sometimes video compression format)
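The key-frame index mentioned above can be sketched as a lookup table: to begin decoding mid-stream, a player finds the latest key frame at or before the target and rolls forward from there. The frame numbers and byte offsets below are made up purely for illustration.

```python
import bisect

# Hypothetical index: (frame number, byte offset) of each key frame.
keyframe_index = [(0, 0), (30, 18_000), (60, 41_000), (90, 65_500)]

def seek(target_frame):
    """Return the latest key frame at or before `target_frame`; decoding
    must start there, since later frames depend on it."""
    frames = [f for f, _ in keyframe_index]
    i = bisect.bisect_right(frames, target_frame) - 1
    return keyframe_index[i]

print(seek(75))   # (60, 41000): decode from frame 60, discard 60..74
```

Without such an index (or self-marking key frames in the stream), random access would require decoding from the very beginning every time.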

The standard coding technique for video compression from the late 1980s onwards. The first digital video coding standard was H.120, developed by the CCITT (now ITU-T) in 1984. H.120 was not usable in practice, as its performance was too poor. H.120 used motion-compensated DPCM coding, a lossless compression algorithm that was inefficient for video coding. During the late 1980s, a number of companies began experimenting with discrete cosine transform (DCT) coding,

The standard video format for DVD and SD digital television. Its motion-compensated DCT algorithm was able to achieve a compression ratio of up to 100:1, enabling the development of digital media technologies such as video on demand (VOD) and high-definition television (HDTV). In 1999, it was followed by MPEG-4/H.263, which was a major leap forward for video compression technology. It uses patents licensed from

The video coding format VP9 itself a codec. As an example of conflation, Chromium's and Mozilla's pages listing their supported video formats both call video coding formats such as H.264 codecs. As another example, in Cisco's announcement of a free-as-in-beer video codec, the press release refers to the H.264 video coding format as a codec ("choice of a common video codec"), but calls Cisco's implementation of

The video encoding standards for Blu-ray Discs; all Blu-ray Disc players must be able to decode H.264. It is also widely used by streaming internet sources such as YouTube, Netflix, Vimeo, and the iTunes Store, by web software such as the Adobe Flash Player and Microsoft Silverlight, and by various HDTV broadcasts over terrestrial (ATSC standards, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S2) systems. A main problem for many video coding formats has been patents, making it expensive to use or potentially risking

Was forked off from the released codebase of VP3.2 and further developed into an independent codec. On2 declared Theora to be the successor in VP3's lineage. Theora developers declared a freeze on the Theora I bitstream format in June 2004, allowing other companies to start implementing encoders and decoders for the format without worrying about the format changing in incompatible ways. The Theora I Specification

Was initially limited to intra-frame coding in the spatial dimension. In 1975, John A. Roese and Guner S. Robinson extended Habibi's hybrid coding algorithm to the temporal dimension, using transform coding in the spatial dimension and predictive coding in the temporal dimension, developing inter-frame motion-compensated hybrid coding. For the spatial transform coding, they experimented with different transforms, including

Was introduced in May 2000, followed three months later by the VP3.2 release. Later that year, On2 announced VP3 plugins for QuickTime and RealPlayer. In May 2001, On2 released the beta version of its new VP4 proprietary codec. In June 2001, On2 also released a VP3 codec implementation for Microsoft Windows, where the encoder was priced at $39.95 for personal use and $2,995 for limited commercial use. In August 2001, On2 Technologies announced that they would be releasing an open source version of their VP3.2 video compression algorithm. In September 2001 they published


Was introduced in the 1970s, initially using uncompressed pulse-code modulation (PCM), requiring high bitrates of around 45–200 Mbit/s for standard-definition (SD) video, up to 2,000 times greater than the telecommunication bandwidth (up to 100 kbit/s) available until the 1990s. Similarly, uncompressed high-definition (HD) 1080p video requires bitrates exceeding 1 Gbit/s, significantly greater than
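The uncompressed bitrates quoted above can be sanity-checked with simple arithmetic; the frame sizes, bit depths, and frame rates below are representative assumptions (formats vary), not figures from the article.

```python
def raw_bitrate(width, height, bits_per_pixel, fps):
    """Uncompressed video bitrate in bits per second."""
    return width * height * bits_per_pixel * fps

# Assumed: SD as 720x576 at 16 bits/pixel (e.g. 8-bit 4:2:2), 25 fps;
# HD as 1920x1080 at 24 bits/pixel RGB, 30 fps.
sd = raw_bitrate(720, 576, 16, 25)      # ~166 Mbit/s: within 45–200 Mbit/s
hd = raw_bitrate(1920, 1080, 24, 30)    # ~1.49 Gbit/s: exceeds 1 Gbit/s
print(round(sd / 1e6), round(hd / 1e9, 2))   # 166 1.49
```

Both results land in the ranges the text quotes, which shows where those headline numbers come from.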

Was later added to the WebM specification. A format is the layout plan for data produced or consumed by a codec. Although video coding formats such as H.264 are sometimes referred to as codecs, there is a clear conceptual difference between a specification and its implementations. Video coding formats are described in specifications, and software, firmware, or hardware to encode/decode data in

Was published in 1974. The other key development was motion-compensated hybrid coding. In 1974, Ali Habibi at the University of Southern California introduced hybrid coding, which combines predictive coding with transform coding. He examined several transform coding techniques, including the DCT, Hadamard transform, Fourier transform, slant transform, and Karhunen–Loève transform. However, his algorithm

Was published in September 2004. Any later changes in the specification are minor updates. The first stable release (version 1.0) of the Theora reference implementation (libtheora) came in November 2008. VP4 was announced in January 2001. On2 Technologies released the beta version of VP4 on May 21, 2001. In June 2001, On2 Technologies posted the production release of VP4 on its website. VP4 brought an improved encoder for the VP3 bitstream format. So because of keeping
