Bink Video is a proprietary file format (extensions .bik and .bk2) for video developed by Epic Games Tools (formerly RAD Game Tools), a part of Epic Games.
The format includes its own proprietary video and audio compression algorithms (video and audio codecs) supporting resolutions from 320×240 up to high-definition video. It is bundled as part of the Epic Video Tools along with Epic Games Tools' previous video codec, Smacker video. It is a hybrid block-transform and wavelet codec using 16 different encoding techniques. The codec emphasizes lower decoding requirements than other video codecs, with specific optimizations for
A 4:2:0 color sampling pattern, and the DV standard uses a 4:1:1 sampling ratio. Professional video codecs designed to function at much higher bitrates and to record a greater amount of color information for post-production manipulation sample in 4:2:2 and 4:4:4 ratios. Examples of these codecs include Panasonic's DVCPRO50 and DVCPROHD codecs (4:2:2), Sony's HDCAM-SR (4:4:4), Panasonic's HDD5 (4:2:2), and Apple's ProRes 422 HQ (4:2:2). It
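The storage implications of these sampling ratios are easy to compute. A sketch (my own illustration, not from the source) of the raw size of one 8-bit frame, using the J:a:b convention:

```python
def bytes_per_frame(width, height, ratio):
    """Raw size in bytes of one 8-bit frame under a J:a:b chroma
    subsampling ratio. J:a:b describes a J-pixel-wide, two-row block:
    `a` chroma samples in the first row, `b` in the second."""
    j, a, b = ratio
    luma = width * height                        # one luma sample per pixel
    chroma = width * height * (a + b) / (2 * j)  # per chroma channel
    return int(luma + 2 * chroma)

# A 1080p frame: 4:4:4 keeps all chroma; 4:2:0 halves it in both directions
full = bytes_per_frame(1920, 1080, (4, 4, 4))     # 3x the luma plane
reduced = bytes_per_frame(1920, 1080, (4, 2, 0))  # 1.5x the luma plane
```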
A fundamentally analog data set in a digital format. Because of the design of analog video signals, which represent luminance (luma) and color information (chrominance, chroma) separately, a common first step in image compression in codec design is to represent and store the image in a YCbCr color space. The conversion to YCbCr provides two benefits: first, it improves compressibility by providing decorrelation of
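The conversion itself is a fixed linear transform. A minimal sketch assuming the BT.601 coefficients and full-range 8-bit values (an assumption; codecs differ in the exact matrix and range):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit R'G'B' to full-range Y'CbCr using BT.601 luma weights.
    Y carries the perceptually important luma; Cb/Cr carry the chroma."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr

# A neutral gray carries no chroma: Cb and Cr sit at the 128 midpoint
y, cb, cr = rgb_to_ycbcr(128, 128, 128)
```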
A model that was developed using a particular video codec is not guaranteed to be accurate for another video codec. Similarly, a model trained on tests performed on a large TV screen should not be used for evaluating the quality of a video watched on a mobile phone. When estimating the quality of a video codec, all the mentioned objective methods may require repeating post-encoding tests in order to determine
A software package for PCs, such as K-Lite Codec Pack, Perian and Combined Community Codec Pack. Video quality Video quality is a characteristic of a video passed through a video transmission or processing system that describes perceived video degradation (typically compared to the original video). Video processing systems may introduce some amount of distortion or artifacts in
A standard, and implementers are free to design their encoder however they want, as long as the video can be decoded in the specified manner. For this reason, the quality of the video produced by decoding the results of different encoders that use the same video codec standard can vary dramatically from one encoder implementation to another. A variety of video compression formats can be implemented on PCs and in consumer electronics equipment. It
A video (as determined by an image quality model) can then be recorded and pooled over time to assess the quality of an entire video sequence. While this method is easy to implement, it does not factor in certain kinds of degradations that develop over time, such as the moving artifacts caused by packet loss and its concealment. A video quality model that considers the temporal aspects of quality degradations, like VQM or
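The frame-by-frame pooling described here can be sketched as follows, using PSNR as the per-frame image-quality measure and flat lists of pixel values as stand-ins for frames (an illustration, not any particular tool's implementation):

```python
import math

def psnr(ref, deg, peak=255.0):
    """Peak signal-to-noise ratio of one degraded frame against its reference."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, deg)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def mean_pooled_quality(ref_frames, deg_frames):
    """Pool the per-frame score over the sequence by simple averaging.
    Note this ignores *when* degradations occur, so transient artifacts
    (e.g., from packet loss) are underweighted."""
    scores = [psnr(r, d) for r, d in zip(ref_frames, deg_frames)]
    return sum(scores) / len(scores)

ref = [[0, 128, 255, 64]] * 4   # four identical reference "frames"
deg = [[1, 127, 254, 64]] * 4   # slightly degraded copies
score = mean_pooled_quality(ref, deg)
```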
Is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a blend of coder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder. The compressed data format usually conforms to a standard video coding format. The compression is typically lossy, meaning that
Is also part of a commercially available video quality toolset (SSIMWAVE). VMAF is used by Netflix to tune their encoding and streaming algorithms, and to quality-control all streamed content. It is also being used by other technology companies like Bitmovin and has been integrated into software such as FFmpeg. An objective model should only be used in the context that it was developed for. For example,
Is also worth noting that video codecs can operate in RGB space as well. These codecs tend not to sample the red, green, and blue channels in different ratios, since there is less perceptual motivation for doing so—just the blue channel could be undersampled. Some amount of spatial and temporal downsampling may also be used to reduce the raw data rate before the basic encoding process. The most popular encoding transform
Is applied to the quantized values. When a DCT has been used, the coefficients are typically scanned using a zig-zag scan order, and the entropy coding typically combines a number of consecutive zero-valued quantized coefficients with the value of the next non-zero quantized coefficient into a single symbol, and also has special ways of indicating when all of the remaining quantized coefficient values are equal to zero. The entropy coding method typically uses variable-length coding tables. Some encoders compress
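The zig-zag scan and zero-run coalescing can be sketched as follows (illustrative only; a real entropy coder would then map these symbols through variable-length code tables):

```python
def zigzag_order(n=8):
    """Index pairs of an n×n block in JPEG-style zig-zag scan order:
    anti-diagonals of increasing i+j, alternating traversal direction."""
    key = lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else -p[0])
    return sorted(((i, j) for i in range(n) for j in range(n)), key=key)

def run_length(coeffs):
    """Turn a scanned coefficient list into (zero_run, value) symbols,
    ending with an end-of-block marker once only zeros remain."""
    symbols, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            symbols.append((run, c))
            run = 0
    symbols.append("EOB")  # all remaining coefficients are zero
    return symbols

# Scan a sparse quantized 8×8 block and run-length code it
block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[2][0] = 26, -3, 2
scanned = [block[i][j] for i, j in zigzag_order()]
symbols = run_length(scanned)
```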
Is in turn succeeded by Versatile Video Coding (VVC). There are also the open and free VP8, VP9 and AV1 video coding formats, used by YouTube, all of which were developed with involvement from Google. Video codecs are used in DVD players, Internet video, video on demand, digital cable, digital terrestrial television, videotelephony and a variety of other applications. In particular, they are widely used in applications that record or transmit video, which may not be feasible with
Is the 8×8 DCT. Codecs that make use of a wavelet transform are also entering the market, especially in camera workflows that involve dealing with RAW image formats in motion sequences. This process involves representing the video image as a set of macroblocks. For more information about this critical facet of video codec design, see B-frames. The output of the transform is first quantized, then entropy encoding
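As a sketch of the transform stage, the 8×8 DCT can be computed directly from its textbook definition (real codecs use fast, integer-approximated variants):

```python
import math

def dct2_8x8(block):
    """Type-II 2-D DCT of an 8×8 block, evaluated straight from the
    definition for clarity rather than speed."""
    N = 8
    def alpha(k):
        return math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A flat block concentrates all of its energy in the DC coefficient
coeffs = dct2_8x8([[100.0] * 8 for _ in range(8)])
```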
Is therefore possible for multiple codecs to be available in the same product, reducing the need to choose a single dominant video compression format to achieve interoperability. Standard video compression formats can be supported by multiple encoder and decoder implementations from multiple sources. For example, video encoded with a standard MPEG-4 Part 2 codec such as Xvid can be decoded using any other standard MPEG-4 Part 2 codec such as FFmpeg MPEG-4 or DivX Pro Codec, because they all use
Is to automatically estimate the average user's (viewer's) opinion on the quality of a video processed by a system. Procedures for subjective video quality measurements are described in ITU-R recommendation BT.500 and ITU-T recommendation P.910. In such tests, video sequences are shown to a group of viewers. The viewers' opinion is recorded and averaged into the mean opinion score to evaluate
Is widely used by streaming internet services such as YouTube, Netflix, Vimeo, and the iTunes Store, web software such as Adobe Flash Player and Microsoft Silverlight, and various HDTV broadcasts over terrestrial and satellite television. AVC has been succeeded by HEVC (H.265), developed in 2013. HEVC is heavily patented, with the majority of patents belonging to Samsung Electronics, GE, NTT and JVC Kenwood. The adoption of HEVC has been hampered by its complex licensing structure. HEVC
the MOVIE Index, may be able to produce more accurate predictions of human-perceived quality. The estimation of visual artifacts is a well-known technique for estimating overall video quality. The majority of these artifacts are compression artifacts caused by lossy compression. The attributes typically estimated by pixel-based metrics fall into spatial and temporal categories. This section lists examples of video quality metrics. Since objective video quality models are expected to predict results given by human observers, they are developed with
the MPEG standards. MPEG-1 was developed by the Moving Picture Experts Group (MPEG) in 1991, and it was designed to compress VHS-quality video. It was succeeded in 1994 by MPEG-2/H.262, which was developed by a number of companies, primarily Sony, Thomson and Mitsubishi Electric. MPEG-2 became the standard video format for DVD and SD digital television. In 1999, it was followed by MPEG-4/H.263, which
the Nintendo 3DS (Nintendo Switch, PlayStation 4, PlayStation Vita, Wii U and Xbox One) and the ninth-generation PlayStation 5 and Xbox Series X/S. Bink 2 is faster than Bink 1 and supports higher resolutions as well. Epic Games acquired the technology and business of RAD Game Tools, including Bink, on January 7, 2021, renaming it Epic Games Tools. Epic announced that it planned to integrate RAD's technology directly into Unreal Engine and that licenses would continue to be available to those who do not use
the Unreal Engine in their work. Bink uses a wavelet-based compression algorithm optimized for game video sequences. It supports resolutions up to 4K and can encode at bitrates from 500 kbps to 200 Mbps. The codec is designed for efficient decompression, leveraging multithreading and SIMD instructions on modern CPUs. Bink also offers optional alpha-channel support for compositing video with 3D graphics. Video codec A video codec
the age of analog video systems, it was possible to evaluate the quality aspects of a video processing system by calculating the system's frequency response using test signals (for example, a collection of color bars and circles). Digital video systems have almost fully replaced analog ones, and quality evaluation methods have changed. The performance of a digital video processing and transmission system can vary significantly and depends on many factors, including
the aid of subjective test results. During the development of an objective model, its parameters should be trained so as to achieve the best correlation between the objectively predicted values and the subjective scores, often available as mean opinion scores (MOS). The most widely used subjective test materials are in the public domain and include still pictures, motion pictures, streaming video, high definition, 3-D (stereoscopic), and special-purpose picture-quality-related datasets. These so-called databases are created by various research laboratories around
the amount of information available about the original signal, the received signal, or whether there is a signal present at all: Some models that are used for video quality assessment (such as PSNR or SSIM) are simply image quality models, whose output is calculated for every frame of a video sequence. An overview of recent no-reference image quality models has also been given in a journal paper by Shahid et al. The quality measure of every frame in
the characteristics of the input video signal (e.g., amount of motion or spatial details), the settings used for encoding and transmission, and the channel fidelity or network performance. Objective video quality models are mathematical models that approximate results from subjective quality assessment, in which human observers are asked to rate the quality of a video. In this context,
the color signals; and second, it separates the luma signal, which is perceptually much more important, from the chroma signal, which is less perceptually important and can be represented at lower resolution using chroma subsampling to achieve more efficient data compression. It is common to represent the ratios of information stored in these different channels as Y:Cb:Cr. Different codecs use different chroma subsampling ratios as appropriate to their compression needs. Video compression schemes for Web and DVD make use of
the complexity of the encoding and decoding algorithms, sensitivity to data losses and errors, ease of editing, random access, and end-to-end delay (latency). Historically, video was stored as an analog signal on magnetic tape. Around the time when the compact disc entered the market as a digital-format replacement for analog audio, it became feasible to also store and convey video in digital form. Because of
the compressed video lacks some information present in the original video. A consequence of this is that decompressed video has lower quality than the original, uncompressed video because there is insufficient information to accurately reconstruct the original video. There are complex relationships between the video quality, the amount of data used to represent the video (determined by the bit rate),
the different computer game consoles it supports. It has been primarily used for full-motion video sequences in video games, and has been used in games for Windows, Mac OS and all sixth-generation game consoles (Dreamcast, GameCube, PlayStation 2 and Xbox) and all major seventh-generation gaming platforms (Nintendo DS, PlayStation 3, PlayStation Portable, Wii and Xbox 360). The format
the encoding parameters that satisfy a required level of visual quality, making them time-consuming, complex and impractical for implementation in real commercial applications. There is ongoing research into developing novel objective evaluation methods that enable prediction of the perceived quality level of the encoded video before the actual encoding is performed. The main goal of many objective video quality metrics
the field to mean a descriptive statistic that provides an indicator of quality. The term “objective” refers to the fact that, in general, quality models are based on criteria that can be measured objectively, that is, free from human interpretation. They can be automatically evaluated by a computer program. Unlike a panel of human observers, an objective model should always deterministically output
the high data volumes and bandwidths of uncompressed video. For example, they are used in operating theaters to record surgical operations, in IP cameras in security systems, and in remotely operated underwater vehicles and unmanned aerial vehicles. Any video stream or file can be encoded using a wide variety of live video format options. Here are some of the H.264 encoder settings that need to be set when streaming to an HTML5 video player. Video codecs seek to represent
the large amount of storage and bandwidth needed to record and convey raw video, a method was needed to reduce the amount of data used to represent the raw video. Since then, engineers and mathematicians have developed a number of solutions for achieving this goal that involve compressing the digital video data. In 1974, discrete cosine transform (DCT) compression was introduced by Nasir Ahmed, T. Natarajan and K. R. Rao. During
the late 1980s, a number of companies began experimenting with DCT lossy compression for video coding, leading to the development of the H.261 standard. H.261 was the first practical video coding standard, and was developed by a number of companies, including Hitachi, PictureTel, NTT, BT, and Toshiba, among others. Since H.261, DCT compression has been adopted by all the major video coding standards that followed. The most popular video coding standards used for codecs have been
the performance of a codec is often evaluated in terms of PSNR or SSIM. For service providers, objective models can be used for monitoring a system. For example, an IPTV provider may choose to monitor their service quality by means of objective models, rather than asking users for their opinion or waiting for customer complaints about bad video quality. A few of these standards have found commercial application, including PEVQ and VQuad-HD. SSIM
the performance of a model, some frequently used metrics are the linear correlation coefficient, Spearman's rank correlation coefficient, and the root mean square error (RMSE). Other metrics are the kappa coefficient and the outliers ratio. ITU-T Rec. P.1401 gives an overview of statistical procedures to evaluate and compare objective models. Objective video quality models can be used in various application areas. In video codec development,
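These three statistics are straightforward to compute from paired predicted and subjective scores. A sketch (ties in the Spearman ranking are ignored for brevity):

```python
import math

def pearson(x, y):
    """Linear (Pearson) correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rank correlation: Pearson applied to the ranks.
    (Tied values are ranked arbitrarily here, a simplification.)"""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

def rmse(x, y):
    """Root mean square error between predictions and subjective scores."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
```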
the process is often called inverse quantization or dequantization, although quantization is an inherently non-invertible process. Video codec designs are usually standardized or eventually become standardized—i.e., specified precisely in a published document. However, only the decoding process need be standardized to enable interoperability. The encoding process is typically not specified at all in
the quality of a system can be determined offline (i.e., in a laboratory setting for developing new codecs or services) or in-service (to monitor and ensure a certain level of quality). Since the world's first video sequence was recorded and transmitted, many video processing systems have been designed. Such systems encode video streams and transmit them over various kinds of networks or channels. In
the same quality score for a given set of input parameters. Objective quality models are sometimes also referred to as instrumental (quality) models, in order to emphasize their application as measurement instruments. Some authors suggest that the term “objective” is misleading, as it “implies that instrumental measurements bear objectivity, which they only do in cases where they can be generalized.” Objective models can be classified by
the same video format. Each codec has its own strengths and drawbacks. Comparisons are frequently published. The trade-off between compression power, speed, and fidelity (including artifacts) is usually considered the most important figure of technical merit. Online video material is encoded by a variety of codecs, and this has led to the availability of codec packs — a pre-assembled set of commonly used codecs combined with an installer available as
the table below. In theory, a model can be trained on a set of data in such a way that it produces perfectly matching scores on that dataset. However, such a model will be over-trained and will therefore not perform well on new datasets. It is therefore advised to validate models against new data and use the resulting performance as a real indicator of the model's prediction accuracy. To measure
the term model may refer to a simple statistical model in which several independent variables (e.g., the packet loss rate on a network and the video coding parameters) are fit against results obtained in a subjective quality evaluation test using regression techniques. A model may also be a more complicated algorithm implemented in software or hardware. The terms model and metric are often used interchangeably in
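A toy version of such a statistical model: hypothetical MOS values fit against a single independent variable (a packet loss rate) by ordinary least squares. The numbers are invented for illustration:

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ≈ a*x + b, the simplest form of the
    regression-based quality model described above (illustrative only)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    return a, b

# Hypothetical subjective test: MOS drops as packet loss (%) rises
loss = [0.0, 1.0, 2.0, 4.0]
mos = [4.6, 4.0, 3.4, 2.2]
a, b = fit_line(loss, mos)
```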
the video in a multiple-step process called n-pass encoding (e.g., 2-pass), which performs a slower but potentially higher-quality compression. The decoding process consists of performing, to the extent possible, an inversion of each stage of the encoding process. The one stage that cannot be exactly inverted is the quantization stage. There, a best-effort approximation of inversion is performed. This part of
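A minimal illustration of why quantization cannot be exactly inverted: distinct coefficients collapse onto the same integer level, so dequantization can only return the level's representative value:

```python
def quantize(coeff, step):
    """Map a transform coefficient to an integer level (the lossy step)."""
    return round(coeff / step)

def dequantize(level, step):
    """Best-effort inversion: reconstruct the representative of the bin."""
    return level * step

step = 10
# 57 and 63 both quantize to level 6 and come back as 60:
# the information discarded by rounding is gone for good
restored = [dequantize(quantize(c, step), step) for c in (57.0, 60.0, 63.0)]
```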
the video signal that negatively impact the user's perception of the system. For many stakeholders in video production and distribution, ensuring video quality is an important task. Video quality evaluation is performed to describe the quality of a set of video sequences under study. Video quality can be evaluated objectively (by mathematical models) or subjectively (by asking users for their rating). Also,
the world. Some of them have become de facto standards, including several public-domain subjective picture quality databases created and maintained by the Laboratory for Image and Video Engineering (LIVE) as well as the Tampere Image Database 2008. A collection of databases can be found in the QUALINET Databases repository. The Consumer Digital Video Library (CDVL) hosts freely available video test sequences for model development. Some databases also provide pre-computed metric scores to allow others to benchmark new metrics against existing ones. Examples can be seen in
Was a major leap forward for video compression technology. It was developed by a number of companies, primarily Mitsubishi Electric, Hitachi and Panasonic. The most widely used video coding format, as of 2016, is H.264/MPEG-4 AVC. It was developed in 2003 by a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. H.264 is the main video encoding standard for Blu-ray Discs, and
Was reverse-engineered by the FFmpeg project, and Bink decoding is supported by the open-source libavcodec library. Bink was inducted into the Front Line Awards Hall of Fame by Game Developer magazine in 2009; the winners were published in the January 2010 issue of the magazine. Bink 2, a new version of the format, was released in 2013. The new format is available for Windows (standard, Windows 8 Store and Windows 8 Phone), Mac OS, Linux, major touchscreen smartphone platforms (Android and iOS), all three major seventh-generation consoles (PlayStation 3, Wii, Xbox 360), and all major eighth-generation platforms except