Wikipedia

VP9

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

VP9 is an open and royalty-free video coding format developed by Google.

VP9 is the successor to VP8 and competes mainly with MPEG's High Efficiency Video Coding (HEVC/H.265). At first, VP9 was mainly used on Google's video platform YouTube. The emergence of the Alliance for Open Media, of which Google is a part, and its support for the ongoing development of the successor AV1 led to growing interest in the format. In contrast to HEVC, VP9 support is common among modern web browsers (see HTML video § Browser support). Android has supported VP9 since version 4.4 KitKat, while Safari 14 added support for VP9 in iOS/iPadOS/tvOS 14 and macOS Big Sur. Parts of

In a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. Adaptive models dynamically update
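
The adaptive approach can be sketched in a few lines of Python. This is an illustrative toy, not code from any real codec (the class name and the two-symbol alphabet are invented for the example): an order-0 model starts with uniform counts, and the ideal per-symbol code cost falls as the model adapts to the data, without the model ever being transmitted.

```python
import math
from collections import Counter

class AdaptiveModel:
    """Toy adaptive order-0 model: starts uniform (all counts = 1) and
    updates symbol counts as data is seen, so encoder and decoder stay
    in sync without ever transmitting the model itself."""
    def __init__(self, alphabet):
        self.counts = Counter({s: 1 for s in alphabet})
        self.total = len(alphabet)

    def cost_bits(self, symbol):
        # Ideal code length for this symbol under the current model.
        return -math.log2(self.counts[symbol] / self.total)

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1

model = AdaptiveModel("ab")
bits = []
for s in "aaaaaaaab":
    bits.append(model.cost_bits(s))
    model.update(s)

# 'a' gets cheaper as the model learns that it dominates the input:
# the first 'a' costs 1.0 bit, the eighth costs about 0.17 bits.
```

Both sides start from the same trivial model and apply the same update rule, which is why no side information is needed; the price is the poor compression of the first few symbols.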

a VP8 patent pool, without revealing the patents in question, and despite On2 having gone to great lengths to avoid such patents. In November 2011, the Internet Engineering Task Force published the informational RFC 6386, VP8 Data Format and Decoding Guide. In March 2013, MPEG LA announced that it had dropped its effort to form a VP8 patent pool after reaching an agreement with Google to license

A VP8 (and VP9) codec is found in the programming library libvpx, which is released as free software. It has modes for one-pass and two-pass encoding, though the one-pass mode is known to be broken, not offering effective control over the target bitrate. Currently, libvpx is the primary software library capable of encoding VP8 video streams, but at least one independent implementation exists in ffvp8enc. A Video for Windows wrapper of

a component within lossy data compression technologies (e.g. lossless mid/side joint stereo preprocessing by MP3 encoders and other lossy audio encoders). Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data would be unfavourable. Common examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, while smaller lossy audio files are typically used on portable players and in other cases where storage space

a developer of the x264 encoder, gave several points of criticism for VP8, claiming that its specification was incomplete and that the performance of the encoder's deblocking filter was inferior to x264 in some areas. By its specification, VP8 should be a bit better than H.264 Baseline Profile and Microsoft's VC-1. Encoding is somewhere between Xvid and VC-1. Decoding is slower than FFmpeg's H.264, but this aspect can hardly be improved due to

a draft proposal for including VP9 video in an MP4 container with MPEG Common Encryption. In January 2016, Ittiam demonstrated an OpenCL-based VP9 encoder. The encoder targets ARM Mali mobile GPUs and was demonstrated on a Samsung Galaxy S6. VP9 support was added to Microsoft's web browser Edge in 2016. In March 2017, Ittiam announced the completion of a project to enhance the encoding speed of libvpx. The speed improvement

a higher level with lower resolution continues with the sums. This is called a discrete wavelet transform. JPEG 2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. So the values are increased, increasing file size, but hopefully the distribution of values is more peaked. The adaptive encoding uses

a mixture of HTML5 and a freed VP8. Word of an impending open-source release announcement got out on April 12, 2010. On May 19, at its Google I/O conference, Google released the VP8 codec software under a BSD-like license and the VP8 bitstream format specification under an irrevocable free patent license. This made VP8 the second product from On2 Technologies to be opened, following their donation of

a much higher probability than large values. This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes. For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken. A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on
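
The two filtering steps just described can be shown in a short Python sketch (the pixel values are made up for illustration): left-neighbour delta filtering turns a smooth row into many small values, and the hierarchical pairwise sum/difference split is the building block of wavelet-style schemes. Both are losslessly invertible with integer arithmetic.

```python
# Toy illustration with made-up pixel values (not VP9/PNG filter code).
row = [100, 101, 103, 102, 90, 91, 93, 92]

# Left-neighbour delta: every pixel but the first becomes the
# difference to its left neighbour, so smooth rows yield small values.
deltas = [row[0]] + [b - a for a, b in zip(row, row[1:])]

# Hierarchical variant: store the sum and difference of each pair;
# the sums form the next, lower-resolution level.
sums  = [a + b for a, b in zip(row[::2], row[1::2])]
diffs = [b - a for a, b in zip(row[::2], row[1::2])]

# Both steps are losslessly invertible in integer arithmetic.
undelta = [deltas[0]]
for d in deltas[1:]:
    undelta.append(undelta[-1] + d)
unpaired = [v for s, d in zip(sums, diffs)
            for v in ((s - d) // 2, (s + d) // 2)]
assert undelta == row and unpaired == row
```

Note how `deltas` contains mostly values near zero, which a subsequent entropy coder can represent cheaply.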

a number of better-known compression benchmarks. Some benchmarks cover only the data compression ratio, so winners in these benchmarks may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from

a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa. In particular, files of random data cannot be consistently compressed by any conceivable lossless data compression algorithm; indeed, this result is used to define the concept of randomness in Kolmogorov complexity. It is provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims through

a reference frame or an average of content from two reference frames ("compound prediction mode"). The (ideally small) remaining difference (delta encoding) from the computed prediction to the actual image content is transformed using a DCT or ADST (for edge blocks) and quantized. Something like a b-frame can be coded while preserving the original frame order in the bitstream using a structure named superframes. Hidden alternate reference frames can be packed together with an ordinary inter frame and
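
A toy numeric sketch of compound prediction (the 2×2 sample values are invented, and integer averaging stands in for the real motion-compensated predictors): the block is predicted as the average of content from two reference frames, and only the small residual is left to transform and quantize.

```python
# Illustrative only -- not the real VP9 pipeline.
def avg_predict(ref_a, ref_b):
    # "Compound prediction mode": average two reference blocks.
    return [[(a + b) // 2 for a, b in zip(ra, rb)]
            for ra, rb in zip(ref_a, ref_b)]

actual = [[52, 54], [53, 55]]
ref_a  = [[50, 53], [52, 54]]
ref_b  = [[54, 55], [52, 56]]

pred = avg_predict(ref_a, ref_b)
residual = [[x - p for x, p in zip(xr, pr)]
            for xr, pr in zip(actual, pred)]
# The residual values are small, so they are cheap to code.
# residual → [[0, 0], [1, 0]]
```

The better the prediction, the closer the residual is to zero, which is exactly what makes the subsequent transform-and-quantize stage effective.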

a relatively small memory footprint. The format features a pure intra mode, i.e. using only independently coded frames without temporal prediction, to enable random access in applications like video editing. VP8 is a traditional block-based transform coding format. It has much in common with H.264, e.g. some prediction modes. At the time of the first presentation of VP8, according to On2 the in-loop filter and

a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example, deflate compressed files never need to grow by more than 5 bytes per 65,535 bytes of input. In fact, if we consider files of length N, if all files were equally probable, then for any lossless compression that reduces

a skip frame that triggers display of previous hidden altref content from its reference frame buffer right after the accompanying p-frame. VP9 enables lossless encoding by transmitting at the lowest quantization level (q index 0) an additional 4×4-block encoded Walsh–Hadamard transformed (WHT) residue signal. In order to be seekable, raw VP9 bitstreams have to be encapsulated in a container format, for example Matroska (.mkv), its derived WebM format (.webm) or
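
The reason a Walsh–Hadamard transform suits lossless coding is that it uses only additions and subtractions of integers, so the residue survives exactly and the inverse reconstructs it bit for bit. A minimal pure-Python sketch of a 4×4 WHT (illustrative only; the sample block values are made up, and real VP9 uses its own scaling conventions):

```python
# 4x4 Hadamard matrix: symmetric, entries +/-1, and H*H = 16*I / 4... 
# more precisely H @ H = 4*I, so applying the 2-D transform twice
# scales the block by 16.
H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def wht2d(x):
    # Forward 2-D transform: H * X * H (H is symmetric).
    return matmul(matmul(H, x), H)

block = [[ 3, -1,  0, 2],
         [ 0,  5, -2, 1],
         [ 1,  1,  4, 0],
         [-3,  2,  0, 1]]

coeffs = wht2d(block)                                # integer coefficients
restored = [[v // 16 for v in row] for row in wht2d(coeffs)]
assert restored == block                             # bit-exact round trip
```

Because every intermediate value is an integer and the final scaling by 16 divides evenly, no rounding error can creep in, unlike with a floating-point DCT.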

a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain. Some of the most common lossless compression algorithms are listed below. See also the list of lossless video codecs. Cryptosystems often compress data (the "plaintext") before encryption for added security. When properly implemented, compression greatly increases

a technology advantage." Google claims that VP8 offers the "highest quality real-time video delivery", and libvpx includes a mode where the maximum CPU resources possible will be used while still keeping the encoding speed almost exactly equivalent to the playback speed (realtime), keeping the quality as high as possible without lag. On the other hand, a review conducted by streamingmedia.com in May 2010 concluded that H.264 offers slightly better quality than VP8. In September 2010, Fiona Glaser,

a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor). Abstractly, a compression algorithm can be viewed as a function on sequences (normally of octets). Compression

is WebM-enabled from version 2.3 (Gingerbread). Since Android 4.0, VP8 could be read inside MKV, and WebM could be streamed. Adobe also announced that the Flash Player would support VP8 playback in a future release. On September 30, 2010, Google announced WebP, their new image format, on the Chromium blog. WebP is based on VP8's intra-frame coding and uses a container based on the Resource Interchange File Format (RIFF). While H.264/MPEG-4 AVC contains patented technology and requires licenses from patent holders and limited royalties for hardware, Google has irrevocably released

is especially useful with high-resolution video. Also, the prediction of motion vectors was improved. In addition to VP8's four modes (average/"DC", "true motion", horizontal, vertical), VP9 supports six oblique directions for linear extrapolation of pixels in intra-frame prediction. New coding tools also include: In order to enable some parallel processing of frames, video frames can be split along coding unit boundaries into up to four rows of evenly spaced tiles, 256 to 4096 pixels wide, with each tile column coded independently. This

is expected that users of the hitherto leading MPEG formats will often switch to the royalty-free alternative formats of the VPx/AVx series instead of upgrading to HEVC. A main user of VP9 is Google's popular video platform YouTube, which offers VP9 video at all resolutions along with Opus audio in the WebM file format, through DASH streaming. Another early adopter was Wikipedia (specifically Wikimedia Commons, which hosts multimedia files across Wikipedia's subpages and languages). Wikipedia endorses open and royalty-free multimedia formats. As of 2016,

is implemented in these web browsers: On Windows 10 October 2018 Update (1809), WebM (.webm) is recognized officially. On Anniversary Update (1607), limited support is available in Microsoft Edge (via MSE only) and Universal Windows Platform apps. On April 2018 Update (1803) with Web Media Extensions preinstalled, Microsoft Edge (EdgeHTML 17) supports VP9 videos embedded in <video> tags. On October 2018 Update (1809), VP9 Video Extensions

is limited or exact replication of the audio is unnecessary. Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (i.e. frequently encountered) data will produce shorter output than "improbable" data. The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by
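
A minimal Huffman coder fits in a few lines of Python. This is an illustrative sketch (the input string is arbitrary, and real coders also serialize the table or code lengths), but the defining property is visible in the result: the most frequent symbol receives the shortest code.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix-free code: frequent symbols get shorter codes."""
    freq = Counter(text)
    if len(freq) == 1:                      # degenerate one-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: (weight, tiebreaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)     # merge the two lightest trees,
        w2, _, c2 = heapq.heappop(heap)     # prefixing their codes with 0/1
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes("aaaaaaabbbccd")      # frequencies a:7 b:3 c:2 d:1
assert len(codes["a"]) < len(codes["b"]) <= len(codes["c"]) <= len(codes["d"])
```

Arithmetic coding would squeeze out the remaining fraction of a bit per symbol, at the cost of a more complex coder; Huffman's whole-bit codes are the simpler, faster option described above.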

is made by heuristics; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" uncompressible without any more sophisticated detection. A common way of handling this situation is quoting input, or uncompressible parts of the input, in the output, minimizing the compression overhead. For example, the zip data format specifies the 'compression method' of 'Stored' for input files that have been copied into

is mandatory for video resolutions in excess of 4096 pixels. A tile header contains the tile size in bytes so decoders can skip ahead and decode each tile row in a separate thread. The image is then divided into coding units called superblocks of 64×64 pixels, which are adaptively subpartitioned in a quadtree coding structure. They can be subdivided either horizontally or vertically or both; square (sub)units can be subdivided recursively down to 4×4 pixel blocks. Subunits are coded in raster scan order: left to right, top to bottom. Starting from each key frame, decoders keep 8 frames buffered to be used as reference frames or to be shown later. Transmitted frames signal which buffer to overwrite and can optionally be decoded into one of
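
The quadtree idea can be sketched in Python. This is not VP9's actual partitioning algorithm (a real encoder chooses splits by rate-distortion cost, and VP9 also allows rectangular splits); here a made-up variance threshold stands in for the split decision, and only the recursive square subdivision from 64×64 down to 4×4 is mimicked.

```python
# Illustrative sketch only -- not the VP9 partitioning algorithm.
def variance(block):
    n = len(block) * len(block[0])
    mean = sum(map(sum, block)) / n
    return sum((p - mean) ** 2 for row in block for p in row) / n

def partition(block, size, threshold=100.0):
    """Return quadtree leaves as nested lists of square block sizes."""
    if size == 4 or variance(block) <= threshold:
        return size                     # flat enough: keep one leaf
    half = size // 2
    quads = [[row[c:c + half] for row in block[r:r + half]]
             for r in (0, half) for c in (0, half)]
    return [partition(q, half, threshold) for q in quads]

# A flat superblock stays a single 64x64 leaf; a detailed one splits.
flat = [[128] * 64 for _ in range(64)]
assert partition(flat, 64) == 64
```

The payoff is the same as in the real format: smooth regions are described by a few large blocks, while detailed regions get many small blocks where prediction needs finer granularity.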

is not possible to produce a lossless algorithm that reduces the size of every possible input sequence. Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing if its output is smaller than its input. Sometimes, detection

is preinstalled. It enables encoding of VP8 and VP9 content on devices that do not have a hardware-based video encoder. VP9 is supported in all major open source media player software, including VLC, MPlayer/MPlayer2/mpv, Kodi, MythTV, and FFplay. Android has had VP9 software decoding since version 4.4 "KitKat". For a list of consumer electronics with hardware support, including TVs, smartphones, set top boxes and game consoles, see webmproject.org's list. Hardware accelerated VP9 decoding support nowadays

is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, of successive images within a sequence). This is called delta encoding (from the Greek letter Δ, which in mathematics denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing

is successful if the resulting sequence is shorter than the original sequence (and the instructions for the decompression map). For a compression algorithm to be lossless, the compression map must form an injection from "plain" to "compressed" bit sequences. The pigeonhole principle prohibits a bijection between the collection of sequences of length N and any subset of the collection of sequences of length N−1. Therefore, it
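
The counting behind the pigeonhole argument is easy to check numerically: for every length n there are 2**n bit sequences, but all strictly shorter sequences combined number only 2**n − 1, one too few for an injection that would map every length-n input to some shorter output.

```python
# Direct counting check of the pigeonhole argument.
for n in range(1, 20):
    shorter = sum(2**k for k in range(n))   # sequences of length 0 .. n-1
    assert shorter == 2**n - 1              # geometric series
    assert shorter < 2**n                   # always one short of 2**n
```

So any lossless scheme that shortens even one length-n input must, by counting, lengthen (or leave alone) some other input of that length.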

is the basic variant, requiring the least from a hardware implementation. VP9 offers the following 14 levels: VP9 is a traditional block-based transform coding format. The bitstream format is relatively simple compared to formats that offer similar bitrate efficiency, like HEVC. VP9 has many design improvements compared to VP8. Its biggest improvement is support for the use of coding units of 64×64 pixels. This

is ubiquitous as most GPUs and SoCs support it natively. Hardware encoding is present in Intel's Kaby Lake processors and above. VP8 is an open and royalty-free video compression format released by On2 Technologies in 2008. Initially released as a proprietary successor to On2's previous VP7 format, VP8 was released as an open and royalty-free format in May 2010 after Google acquired On2 Technologies. Google provided an irrevocable patent promise on its patents for implementing

the Chromecast Ultra, mobile phones, as well as web browsers. A series of cloud encoding services offer VP9 encoding, including Amazon, Bitmovin, Brightcove, castLabs, JW Player, Telestream, and Wowza. Encoding.com has offered VP9 encoding since Q4 2016, which amounted to a yearly average of 11% popularity for VP9 among its customers that year. JW Player supports VP9 in its widely used software-as-a-service HTML video player. VP9

the DirectShow filter installed. According to Google, VP8 is mainly used in connection with WebRTC and as a format for short looped animations, as a replacement for the Graphics Interchange Format (GIF). VP8 can be multiplexed into the Matroska-based container format WebM along with Vorbis and Opus audio. The image format WebP is based on VP8's intra-frame coding. VP8's direct successor, VP9, and

the EVE encoder, which according to their studies offered better two-pass rate control and was 8% more efficient than libvpx. An offline encoder comparison between libvpx, two HEVC encoders and x264 in May 2017 by Jan Ozer of Streaming Media Magazine, with encoding parameters supplied or reviewed by each encoder vendor (Google, MulticoreWare and MainConcept respectively), and using Netflix's VMAF objective metric, concluded that "VP9 and both HEVC codecs produce very similar performance" and "Particularly at lower bitrates, both HEVC codecs and VP9 deliver substantially better performance than H.264". An encoding speed versus efficiency comparison of

the FFmpeg Team announced the ffvp8 decoder. Through testing, they determined that ffvp8 was faster than Google's own libvpx decoder. The WebM Project hardware team released an RTL hardware decoder for VP8 that is releasable to semiconductor companies at zero cost. TATVIK Technologies announced a VP8 decoder that is optimized for the ARM Cortex-A8 processor. Marvell's ARMADA 1500-mini chipset has VP8 SD and HD hardware decoding support (used in Chromecast). Intel has full VP8 decoding support built into their Bay Trail chipsets. Intel Broadwell also adds VP8 hardware decoding support. Also on May 19, 2010,

the LZ77-based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003. Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of

the MPEG Licensing Administration dropped an announced assertion of disputed patent claims against VP8 and its successors after the United States Department of Justice started to investigate whether it was acting to unfairly stifle competition. Throughout, Google has worked with hardware vendors to get VP9 support into silicon. In January 2014, Ittiam, in collaboration with ARM and Google, demonstrated its VP9 decoder for ARM Cortex devices. Using GPGPU techniques,

the United States and other countries, and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing still image files in favor of Portable Network Graphics (PNG), which combines

the VP3 codec in 2002 to the Xiph.Org Foundation, from which they derived the Theora codec. In February 2011, MPEG LA invited patent holders to identify patents that may be essential to VP8 in order to form a joint VP8 patent pool. As a result, in March the United States Department of Justice (DoJ) started an investigation into MPEG LA for its role in possibly attempting to stifle competition. In July 2011, MPEG LA announced that 12 patent holders had responded to its call to form

the WebM Project was launched, featuring contributions from "Mozilla, Opera, Google and more than forty other publishers, software and hardware vendors" in a major effort to use VP8 as the video format for HTML5. In the WebM container format, the VP8 video is used with Vorbis or Opus audio. Internet Explorer 9 will support VP8 video playback if the proper codec is installed. Android

the deflate algorithm) and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1. There are two primary ways of constructing statistical models: in

the unicity distance by removing patterns that might facilitate cryptanalysis. However, many ordinary lossless compression algorithms produce headers, wrappers, tables, or other predictable output that might instead make cryptanalysis easier. Thus, cryptosystems must utilize compression algorithms whose output does not contain these predictable patterns. Genetics compression algorithms (not to be confused with genetic algorithms) are

the Golden Frames were among the novelties of this iteration. The first definition of such a filter is already found in the H.263 standard, though, and Golden Frames were already in use in VP5 and VP7. The discrete cosine transform (DCT) on 4×4 blocks and the Walsh–Hadamard transform (WHT) serve as basic frequency transforms. A maximum of three frames can be referenced for temporal prediction:

the VP8 codec based on the Google VP8 library (FourCC: VP80) is available. The WebM Project hardware team in Finland released an RTL hardware encoder for VP8 that is available at no cost for semiconductor manufacturers. The Nvidia Tegra mobile chipsets have full VP8 hardware encoding and decoding (since Tegra 4). The Nexus 5 could use hardware encoding. libvpx is capable of decoding VP8 video streams. On July 23, 2010, Fiona Glaser, Ronald Bultje, and David Conrad of

the VP8 format, and released a specification of the format under the Creative Commons Attribution 3.0 license. That same year, Google also released libvpx, the reference implementation of VP8, under the revised BSD license. Opera, Firefox, Chrome, Pale Moon, and Chromium support playing VP8 video in the HTML video tag. Internet Explorer officially supports VP8 if the user has

the VP8 patents it owns under a royalty-free public license. According to a comparison of VP8 (encoded with the initial release of libvpx) and H.264 conducted by StreamingMedia, it was concluded that "H.264 may have a slight quality advantage, but it's not commercially relevant" and that "Even watching side-by-side (which no viewer ever does), very few viewers could tell the difference". They also stated that "H.264 has an implementation advantage, not

the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument called the pigeonhole principle, as follows: Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only

the archive verbatim. Mark Nelson, in response to claims of "magic" compression algorithms appearing in comp.compression, has constructed a 415,241-byte binary file of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute it without error. A similar challenge, with $5,000 as reward,

the buffers without being shown. The encoder can send a minimal frame that just triggers one of the buffers to be displayed ("skip frame"). Each inter frame can reference up to three of the buffered frames for temporal prediction. Up to two of those reference frames can be used in each coding block to calculate a sample data prediction, using spatially displaced (motion compensation) content from

the class of context-mixing compression software. Matt Mahoney, in his February 2010 edition of the free booklet Data Compression Explained, additionally lists the following: The Compression Ratings website published a chart summary of the "frontier" in compression ratio and time. The Compression Analysis Tool is a Windows application that enables end users to benchmark the performance characteristics of streaming implementations of LZF4, Deflate, ZLIB, GZIP, BZIP2 and LZMA using their own data. It produces measurements and charts with which users can compare

the compression speed, decompression speed and compression ratio of the different compression methods, and to examine how the compression level, buffer size and flushing operations affect the results. Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by

the decoder was capable of 1080p at 30 fps on an Arndale Board. In early 2015 Nvidia announced VP9 support in its Tegra X1 SoC, and VeriSilicon announced VP9 Profile 2 support in its Hantro G2v2 decoder IP. In April 2015 Google released a significant update to its libvpx library, with version 1.4.0 adding support for 10-bit and 12-bit bit depth, 4:2:2 and 4:4:4 chroma subsampling, and VP9 multithreaded decoding/encoding. In December 2015, Netflix published

the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context. No lossless compression algorithm can efficiently compress all possible data (see § Limitations for more on this). For this reason, many different algorithms exist that are designed either with

the fact that DNA sequences have characteristic properties, such as inverted repeats. The most successful compressors are XM and GeCo. For eukaryotes, XM is slightly better in compression ratio, though for sequences larger than 100 MB its computational requirements are impractical. Self-extracting executables contain a compressed application and a decompressor. When executed, the decompressor transparently decompresses and runs

the form for which they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images. These techniques take advantage of the specific characteristics of images, such as the common phenomenon of contiguous 2-D areas of similar tones. Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having

the format are covered by patents held by Google. The company grants free usage of its own related patents based on reciprocity, i.e. as long as the user does not engage in patent litigation. VP9 is the last official iteration of the TrueMotion series of video formats that Google bought in 2010 for $134 million together with the company On2 Technologies that created it. The development of VP9 started in

the gradual shift from Flash to HTML5 technology, which was still somewhat immature when VP9 was introduced. Trends towards UHD resolutions, higher color depth and wider gamuts are driving a shift towards new, specialized video formats. With the clear development perspective and support from the industry demonstrated by the founding of the Alliance for Open Media, as well as the pricey and complex licensing situation of HEVC, it

the implementation can make a difference, concluding that "ffvp9 beats libvpx consistently by 25–50%". Another decoder comparison indicated 10–40 percent higher CPU load than H.264 (but does not say whether this was with ffvp9 or libvpx), and that on mobile, the Ittiam demo player was about 40 percent faster than the Chrome browser at playing VP9. There are several variants of the VP9 format (known as "coding profiles"), which successively allow more features; profile 0

the last Golden Frame (may be an intra frame), the alternate reference frame, and the directly preceding frame. The so-called alternate reference frames (altref) can serve as reference-only frames; displaying them can be deactivated. In this case, the encoder can fill them with arbitrary useful image data, even from future frames, and they thereby serve the same purpose as the b-frames of the MPEG formats. Similar macroblocks can be assigned to one of up to four (even spatially disjoint) segments and thereby share parameters like

the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and specific algorithms adapted to genetic data. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression much faster than leading general-purpose compression utilities. Genomic sequence compression algorithms, also known as DNA sequence compressors, explore

the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a subset of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data. The "trick" that allows lossless compression algorithms, used on

the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders. Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of

the older minimalistic Indeo video file (IVF) format, which is traditionally supported by libvpx. VP9 is identified as V_VP9 in WebM and VP09 in MP4, adhering to the respective naming conventions. Adobe Flash, which traditionally used VPx formats up to VP7, was never upgraded to VP8 or VP9, but instead to H.264. Therefore, VP9 often penetrated corresponding web applications only with

the original data, though usually with greatly improved compression rates (and therefore reduced media sizes). By operation of the pigeonhole principle, no lossless compression algorithm can shrink the size of all possible data: some data will get longer by at least one symbol or bit. Compression algorithms are usually effective for human- and machine-readable documents and cannot shrink

the original application. This is especially often used in demo coding, where competitions are held for demos with strict size limits, as small as 1 kilobyte. This type of compression is not strictly limited to binary executables, but can also be applied to scripts such as JavaScript. Lossless compression algorithms and their implementations are routinely tested in head-to-head benchmarks. There are

the other hand, it has also been proven that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it is possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant pi, which appear random but can be generated by

the patents that it alleges "may be essential" for VP8 implementation, and granted Google the right to sub-license these patents to any third-party user of VP8 or VP9. This deal has cleared the way for possible MPEG standardisation as its royalty-free internet video codec, after Google submitted VP8 to the MPEG committee in January 2013. In March 2013, Nokia asserted a patent claim against HTC and Google for

the probabilities from the previous sample in sound encoding, from the left and upper pixels in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy. Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in
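A toy sketch of such context-based prediction in Python (an illustrative average predictor in the spirit of PNG's "Average" filter, not the exact rule of any codec named here): each pixel is predicted from its left and upper neighbours and only the residual is kept; the decoder, running the same predictor on already-decoded pixels, recovers the image exactly.

```python
def predict(left, up):
    # Average of the left and upper neighbours; boundary pixels
    # use 0 in place of a missing neighbour.
    return (left + up) // 2

image = [
    [10, 12, 13, 13],
    [11, 12, 14, 15],
    [11, 13, 15, 16],
]

# Encoder: store only the prediction errors, in raster order.
residuals = []
for y, row in enumerate(image):
    for x, v in enumerate(row):
        left = row[x - 1] if x > 0 else 0
        up = image[y - 1][x] if y > 0 else 0
        residuals.append(v - predict(left, up))

# Decoder: rerun the identical predictor on already-decoded pixels.
h, w = len(image), len(image[0])
decoded = [[0] * w for _ in range(h)]
it = iter(residuals)
for y in range(h):
    for x in range(w):
        left = decoded[y][x - 1] if x > 0 else 0
        up = decoded[y - 1][x] if y > 0 else 0
        decoded[y][x] = next(it) + predict(left, up)

assert decoded == image  # lossless: the round trip is exact
```

For smooth image regions the interior residuals cluster near zero, which is precisely the skewed distribution an entropy coder exploits.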

the reference frame used, quantizer step size, or filter settings. VP8 offers two different adjustable deblocking filters that are integrated into the codec loops (in-loop filtering). Many coding tools use probabilities that are calculated continuously from recent context, starting at each intra frame. Macroblocks can comprise 4×4, 8×8, or 16×16 samples. Motion vectors have quarter-pixel precision. VP8

the reference implementation in libvpx, x264 and x265 was made by an FFmpeg developer in September 2015: by SSIM index, libvpx was mostly superior to x264 across the range of comparable encoding speeds, but the main benefit was at the slower end of x264@veryslow (reaching a sweet spot of 30–40% bitrate improvement within twice as slow as this), whereas x265 only became competitive with libvpx around 10 times as slow as x264@veryslow. It

the royalty-free AV1 codec from the Alliance for Open Media are based on VP8. VP8 only supports progressive scan video signals with 4:2:0 chroma subsampling and 8 bits per sample. In its first public version, On2's VP8 implementation supports multi-core processors with up to 64 cores simultaneously. At least in the implementation from August 2011, VP8 is comparatively poorly adapted to high resolutions (HD). With only three reference frame buffers needed, VP8 enables decoder implementations with

the second half of 2011 under the development names Next Gen Open Video (NGOV) and VP-Next. The design goals for VP9 included reducing the bit rate by 50% compared to VP8 while maintaining the same video quality, and aiming for better compression efficiency than the MPEG High Efficiency Video Coding (HEVC) standard. In June 2013 "profile 0" of VP9 was finalized, and two months later Google's Chrome browser

the similarities to H.264. Compression-wise, VP8 offers better performance than Theora and Dirac. According to Glaser, the VP8 interface lacks features and is buggy, and the specification is not fully defined and could be considered incomplete. Much of the VP8 code is copy-pasted C code, and since the source constitutes the actual specification, any bugs will also be defined as something that has to be implemented to be in compliance. In 2010, it

the size of random data that contain no redundancy. Different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain. Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the GNU tool gzip. It is also often used as

the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be greater than N. So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better. Thus,
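The counting behind this fact is elementary; the short sketch below (plain arithmetic, no compressor involved) checks that the strictly shorter bitstrings are always exactly one too few to give every length-n input its own compressed form.

```python
# There are 2**n bitstrings of length n, but only 2**n - 1 bitstrings
# of all shorter lengths combined, so no injective (i.e. losslessly
# decodable) mapping can shorten every length-n input.
n = 16
inputs = 2 ** n
shorter = sum(2 ** k for k in range(n))  # lengths 0 .. n-1
assert shorter == inputs - 1  # one string short, for every n
```

Since the identity holds for every n, at least one length-n input must map to an output of length n or more, which is the pigeonhole argument in miniature.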

the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space). As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by

the three accepted video formats are VP9, VP8 and Theora. Since December 2016, Netflix has used VP9 encoding for its catalog, alongside H.264 and HEVC. As of February 2020, AV1 had started to be adopted on mobile devices, much as VP9 had been earlier. Google TV uses (at least in part) VP9 profile 2 with Widevine DRM. Stadia used VP9 for video game streaming at up to 4K on supported hardware like

the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily modeled redundancy that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to

the use of VP8 in Android in a German court; however, on August 5, 2013, the WebM project announced that the German court had ruled that VP8 does not infringe Nokia's patent. Nokia has made an official intellectual property rights (IPR) declaration to the IETF with respect to the VP8 Data Format and Decoding Guide, listing 64 granted patents and 22 pending patent applications. The reference implementation of

the very highest quality (slowest encoding), whereas libvpx was superior at any other encoding speed, by SSIM. In a subjective quality comparison conducted in 2014 featuring the reference encoders for HEVC (HM 15.0), MPEG-4 AVC/H.264 (JM 18.6), and VP9 (libvpx 1.2.0 with preliminary VP9 support), VP9, like H.264, required about two times the bitrate to reach video quality comparable to HEVC, while with synthetic imagery VP9

the wave-like nature of the data — essentially using autoregressive models to predict the "next" value and encoding the (possibly small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the error) tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits. It
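A minimal sketch of this idea in Python, assuming a synthetic sine "waveform" and the simplest order-1 predictor (the next sample is guessed to equal the previous one); the residuals stay small even though the samples themselves span a wide range.

```python
import math

# Synthetic, slowly varying "audio" samples in roughly [-1000, 1000].
samples = [round(1000 * math.sin(i / 20)) for i in range(200)]

# Encoder: keep the first sample, then only the prediction errors.
residuals = [samples[0]] + [s - p for p, s in zip(samples, samples[1:])]

# Decoder: the running sum of the residuals restores the signal exactly.
recon = []
for r in residuals:
    recon.append(r + (recon[-1] if recon else 0))

assert recon == samples  # lossless round trip
```

Because the residuals concentrate in a narrow band around zero while the raw samples range over about ±1000, an entropy coder (Rice or arithmetic coding, say) can spend far fewer bits per residual than per raw sample.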

the years of companies achieving "perfect compression" where an arbitrary number N of random bits can always be compressed to N − 1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 1. On
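The limit is easy to observe with an off-the-shelf compressor; the sketch below feeds zlib deterministic high-entropy bytes (SHA-256 output standing in for random data, an illustrative choice) and shows that compression stops shrinking such input rather than shaving off bits forever.

```python
import hashlib
import zlib

# 4096 bytes of deterministic, entropy-saturated ("incompressible") data.
data = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(128))

once = zlib.compress(data, 9)
twice = zlib.compress(once, 9)

# The compressor cannot remove redundancy that is not there;
# container overhead makes each pass slightly *longer* instead.
assert len(once) >= len(data)
assert len(twice) >= len(once)
```

A hypothetical "always N − 1 bits" compressor would shrink `data` on every pass, which this behaviour (and the pigeonhole argument above it rests on) rules out.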

was announced that the WebM audio/video format would be based on a profile of the Matroska container format together with VP8 video and Vorbis audio. Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information. Lossless compression is possible because most real-world data exhibits statistical redundancy. By contrast, lossy compression permits reconstruction only of an approximation of
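The defining property, exact bit-for-bit reconstruction, can be checked directly with any general-purpose compressor; a small sketch using Python's zlib module:

```python
import zlib

original = b"abracadabra " * 100   # highly redundant input
packed = zlib.compress(original)

# Lossless: the round trip reproduces the input exactly ...
assert zlib.decompress(packed) == original
# ... and the statistical redundancy makes the packed form much smaller.
assert len(packed) < len(original) // 10
```

The same round-trip check holds for any input; only the achieved ratio varies with how much redundancy the data contains.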

was close to HEVC. By contrast, another subjective comparison from 2014 concluded that at higher quality settings HEVC and VP9 were tied at a 40 to 45% bitrate advantage over H.264. Netflix, after a large test in August 2016, concluded that libvpx was 20% less efficient than x265, but by October the same year also found that tweaking encoding parameters could "reduce or even reverse the gap between VP9 and HEVC". At NAB 2017, Netflix shared that they had switched to

was concluded that libvpx and x265 were both capable of the claimed 50% bitrate improvement over H.264, but only at 10–20 times the encoding time of x264. Judged by the objective quality metric VQM in early 2015, the VP9 reference encoder delivered video quality on par with the best HEVC implementations. A decoder comparison by the same developer showed 10% faster decoding for ffvp9 than ffh264 for same-quality video, or "identical" at the same bitrate. It also showed that

was first released by On2 Technologies on September 13, 2008, as On2 TrueMotion VP8, replacing its predecessor, VP7. After Google acquired On2 in February 2010, calls were made for Google to release the VP8 source code. Most notably, the Free Software Foundation issued an open letter on March 12, 2010, asking Google to gradually replace the use of Adobe Flash Player and H.264 on YouTube with

was released with support for VP9 video playback. In October of that year a native VP9 decoder was added to FFmpeg, and to Libav six weeks later. Mozilla added VP9 support to Firefox in March 2014. In 2014 Google added two high-bit-depth profiles: profile 2 and profile 3. In 2013 an updated version of the WebM format was published, featuring support for VP9 together with Opus audio. In March 2013,

was said to be 50–70%, and the code "publicly available as part of libvpx". VP9 is customized for video resolutions greater than 1080p (such as UHD) and also enables lossless compression. It supports resolutions up to 65536×65536, whereas HEVC supports resolutions up to 8192×4320 pixels. The VP9 format supports the following color spaces (and corresponding YCbCr-to-RGB transformation matrices): Rec. 601, Rec. 709, Rec. 2020, SMPTE-170, SMPTE-240, and sRGB. VP9 supports many transfer functions and supports HDR video with hybrid log–gamma (HLG) or perceptual quantizer (PQ). An early comparison that took varying encoding speed into account showed x265 to narrowly beat libvpx at
