
Matrox G400


The G400 is a video card made by Matrox, released in September 1999. The graphics processor combines a 2D GUI accelerator, a video accelerator, and a Direct3D 6.0 3D accelerator. Codenamed "Toucan", it was a more powerful and refined version of its predecessor, the G200.


The Matrox G200 graphics processor had been a successful product, competing with the various 2D & 3D combination cards available in 1998. Matrox took the technology developed from the G200 project, refined it, and essentially doubled it up to form the G400 processor. The new chip featured several new and innovative additions, such as multiple monitor output support, an all-around 32-bit rendering pipeline with high performance, further improved 2D and video acceleration, and

A BR02 chip bridging the NV18's native AGP interface with the PCI-Express bus. This family is a derivative of the GeForce4 MX family, produced for the laptop market. The GeForce4 Go family, performance-wise, can be considered comparable to the MX line. One possible solution to the lack of driver support for the Go family is the third-party Omega Drivers. Using third-party drivers can, among other things, invalidate warranties. The Omega Drivers are not supported by laptop manufacturers, laptop ODMs, or Nvidia. Nvidia attempted legal action against

A height map for simulating the surface displacement, yielding the modified normal. This is the method invented by Blinn and is usually what is referred to as bump mapping unless otherwise specified. The steps of this method are summarized as follows. Before a lighting calculation is performed for each visible point (or pixel) on the object's surface: look up the height in the heightmap corresponding to the position on the surface; calculate the surface normal of the heightmap, typically using the finite difference method; and combine that normal with the geometric surface normal so that the combined normal points in a new direction. The result is a surface that appears to have real depth. The algorithm also ensures that
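A minimal sketch of the height-map method above, assuming the heightmap is a grayscale 2D array; the function name and the strength parameter are illustrative, not part of any particular API:

    import numpy as np

    def bump_normal(height, x, y, strength=1.0):
        # Central differences approximate the heightmap gradient.
        dhdx = (float(height[y, x + 1]) - float(height[y, x - 1])) * 0.5
        dhdy = (float(height[y + 1, x]) - float(height[y - 1, x])) * 0.5
        # Tilting a flat patch whose geometric normal is (0, 0, 1) by the
        # gradient yields the perturbed, "bumpy" normal.
        n = np.array([-strength * dhdx, -strength * dhdy, 1.0])
        return n / np.linalg.norm(n)

The perturbed normal is then handed to whatever lighting model the renderer uses, in place of the geometric normal.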

236-701: A "real" GeForce4—i.e., a GeForce4 Ti. The GeForce4 MX was particularly successful in the PC OEM market, and rapidly replaced the GeForce2 MX as the best-selling GPU. In motion-video applications, the GeForce4 MX offered new functionality. It (and not the GeForce4 Ti) was the first GeForce member to feature the Nvidia VPE (video processing engine). It was also the first GeForce to offer hardware-iDCT and VLC (variable length code) decoding, making VPE

A functionality in which all internal 3D calculations are done with 32-bit precision. The goal was to prevent dithering and other artifacts caused by inadequate precision when performing calculations. The result was the best quality 16-bit and 32-bit color modes available at the time. Matrox was known for its quality analog display output on prior cards, and the G400 is no exception. G400

A good video codec, gives much better results anyway. There are no WDM drivers available for this card. In the fall of 2000, Matrox introduced the G450 chip (codenamed Condor) as a successor to the G400 line. As the G250 was to the G200, the G450 was primarily a die shrink of the G400 core from the 250 nm semiconductor fabrication process to 180 nm. By shrinking the core, costs are reduced because more chips are made per wafer at

A major upgrade from Nvidia's previous HDVP. In the application of MPEG-2 playback, VPE could finally compete head-to-head with ATI's video engine. There were three initial models: the MX420, the MX440 and the MX460. The MX420 had only Single Data Rate (SDR) memory and was designed for very low-end PCs, replacing the GeForce2 MX100 and MX200. The GeForce4 MX440 was a mass-market OEM solution, replacing

A native OpenGL driver called "TurboGL" was released, but it was only designed to support several popular games of the time (e.g. Quake3). This driver was a precursor to a fully functional OpenGL ICD driver, a quick development to improve performance as fast as possible by offering an interim solution. Since TurboGL didn't support all OpenGL applications, it was essentially a "Mini ICD", much like 3dfx had used with their Voodoo boards. TurboGL included support for then-new SIMD technologies from AMD and Intel, including SSE and 3DNow!. In mid-2000

A new 3D feature known as Environment Mapped Bump Mapping. Internally the G400 is a 256-bit processor, using what Matrox calls a "DualBus" architecture. This is an evolution of G200's "DualBus", which had been 128-bit. A Matrox "DualBus" chip consists of twin unidirectional buses internally, each moving data into or out of the chip. This increases the efficiency and bandwidth of data flow within

Is 128-bit and is designed to use either SDRAM or SGRAM. Matrox released both 16 MiB and 32 MiB versions of the G400 boards, and used both types of RAM. The slowest models are equipped with 166 MHz SDRAM, while the fastest (G400 MAX) uses 200 MHz SGRAM. The G400 MAX had the highest memory bandwidth of any card before the release of the DDR-equipped version of the NVIDIA GeForce 256. Perhaps
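That bandwidth claim follows directly from the bus width and the memory clock (single data rate assumed):

\[ \frac{128\,\mathrm{bit}}{8\,\mathrm{bit/byte}} \times 200\,\mathrm{MHz} = 3.2\ \mathrm{GB/s} \]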

Is a texture mapping technique in computer graphics for simulating bumps and wrinkles on the surface of an object. This is achieved by perturbing the surface normals of the object and using the perturbed normal during lighting calculations. The result is an apparently bumpy surface rather than a smooth surface, although the surface of the underlying object is not changed. Bump mapping was introduced by James Blinn in 1978. Normal mapping



Is that it perturbs only the surface normals without changing the underlying surface itself. Silhouettes and shadows therefore remain unaffected, which is especially noticeable for larger simulated displacements. This limitation can be overcome by techniques such as displacement mapping, where bumps are applied to the surface geometry itself, or by using an isosurface. There are two primary methods to perform bump mapping. The first uses

Is the most common variation of bump mapping used. Bump mapping is a technique in computer graphics to make a rendered surface look more realistic by simulating small displacements of the surface. However, unlike displacement mapping, the surface geometry is not modified. Instead only the surface normal is modified as if the surface had been displaced. The modified surface normal is then used for lighting calculations (using, for example,

The GeForce FX 5200 and the midrange FX 5600, and performing similarly to the mid-range Radeon 9600 Pro in some situations. Though its lineage was of the past-generation GeForce2, the GeForce4 MX did incorporate bandwidth- and fill-rate-saving techniques, dual-monitor support, and a multi-sampling anti-aliasing unit from the Ti series; the improved 128-bit DDR memory controller was crucial to solving

The Phong reflection model), giving the appearance of detail instead of a smooth surface. Bump mapping is much faster and consumes fewer resources for the same level of detail compared to displacement mapping because the geometry remains unchanged. There are also extensions which modify other surface features in addition to increasing the sense of depth. Parallax mapping and horizon mapping are two such extensions. The primary limitation with bump mapping
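As an illustration of that lighting step, here is a minimal Phong evaluation in which the perturbed normal simply replaces the geometric one; the coefficient names and default values are illustrative:

    import numpy as np

    def phong(n, l, v, ka=0.1, kd=0.7, ks=0.4, shininess=32):
        # n, l, v: unit-length surface normal, light direction, view direction.
        ndotl = float(np.dot(n, l))
        diff = max(0.0, ndotl)
        # Reflect the light direction about the (perturbed) normal.
        r = 2.0 * ndotl * n - l
        spec = max(0.0, float(np.dot(r, v))) ** shininess if diff > 0.0 else 0.0
        return ka + kd * diff + ks * spec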

The G400 received a fully compliant OpenGL ICD, which offered capable performance in most OpenGL-supporting software. The G400 continued to receive official driver updates into 2006. Even with the initial driver difficulties, the Matrox G400 was very competitive. 2D and Direct3D performance were more than competitive with the NVIDIA RIVA TNT2, 3dfx Voodoo3, and ATI Rage 128 Pro. In fact, prior to

The GeForce2 MX/MX400 and GeForce2 Ti. The GeForce4 MX460 was initially meant to slot in between the MX440 and the Ti4400, but the late addition of the Ti4200 to the line at a very similar price point (combined with the existing GeForce3 Ti200 and ATI's Radeon 8500LE/9100, which were also similarly priced) prevented the MX460 from ever being truly competitive, and the model soon faded away. In terms of 3D performance,

The GeForce4 MX name as a misleading marketing ploy, since it was less advanced than the preceding GeForce3. In the features comparison chart between the Ti and MX lines, the only "feature" shown missing on the MX was the nfiniteFX II engine—the DirectX 8 programmable vertex and pixel shaders. However, the GeForce4 MX was not a GeForce4 Ti with the shader hardware removed, as

The HeadCasting Engine, a hardware implementation of a vertex shader for accelerated matrix palette skinning. It does this by increasing the 96 constant registers specified by DirectX 8.0 to a total of 256. Despite the feature, it is inaccessible through the DirectX driver. Matrox supports the HeadCasting feature only through the bundled Matrox Digimask software, which never became popular. On July 13, 2005, Matrox Graphics Inc. announced
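Matrix palette skinning blends each vertex by a weighted set of bone matrices held in the shader's constant registers, so a larger register file simply allows more bones per draw call. A minimal CPU-side sketch of the blend, with illustrative names:

    import numpy as np

    def skin_vertex(position, bone_indices, weights, palette):
        # palette: sequence of 4x4 bone transforms (the "matrix palette").
        p = np.append(position, 1.0)  # homogeneous position
        out = np.zeros(4)
        for i, w in zip(bone_indices, weights):
            out += w * (palette[i] @ p)
        return out[:3]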

The MX's performance in games that did not use shaders was considerably behind the GeForce4 Ti and GeForce3. Despite harsh criticism by gaming enthusiasts, the GeForce4 MX was a market success. Priced about 30% above the GeForce2 MX, it provided better performance and the ability to play a number of popular games that the GeForce2 could not run well; above all else, to the average non-specialist it sounded as if it were

The MX420 performed only slightly better than the GeForce2 MX400 and below the GeForce2 GTS. However, this was never really much of a problem, considering its target audience. The nearest thing to a direct competitor the MX420 had was ATI's Radeon 7000. In practice, its main competitors were chipset-integrated graphics solutions, such as Intel's 845G and Nvidia's own nForce 2, but its main advantage over those



The MX440 in production while the 5200 was discontinued. The GeForce4 Go was derived from the MX line, and it was announced along with the rest of the GeForce4 lineup in early 2002. There were the 420 Go, 440 Go, and 460 Go. However, ATI had beaten them to the market with the similarly performing Mobility Radeon 7500, and later the DirectX 8.0 compliant Mobility Radeon 9000. (Despite its name,

The Ti4200 damaging the Ti4400's sales, Nvidia set the Ti4200's memory speed at 222 MHz on the models with a 128 MiB frame buffer—a full 53 MHz slower than the Ti4400 (all of which had 128 MiB frame buffers). Models with a 64 MiB frame buffer were set to 250 MHz memory speed. This tactic didn't work, however, for two reasons. Firstly, the Ti4400 was perceived as being not good enough for those who wanted top performance (who preferred

The Ti4200 was cheaper and faster than the previous top-line GeForce 3 and Radeon 8500. Besides the late introduction of the Ti4200, the limited-release 128 MiB models of the GeForce 3 Ti200 proved unimpressive, letting the Radeon 8500LE and even the full 8500 dominate the upper-range performance segment for a while. The Matrox Parhelia, despite having several DirectX 9.0 capabilities and other innovative features,

The Ti4600), nor those who wanted good value for money (who typically chose the Ti4200), causing the Ti4400 to be regarded as a pointless middle ground between the two. Furthermore, some graphics card makers simply ignored Nvidia's guidelines for the Ti4200 and set the memory speed at 250 MHz on the 128 MiB models anyway. Then in late 2002, the NV25 core was replaced by the NV28 core, which differed only by the addition of AGP-8X support. The Ti4200 with AGP-8X support

The availability of the Millennium G550 PCIe, the world's first PCI Express x1 graphics card. The card uses a Texas Instruments XIO2000 bridge controller to achieve PCI Express support. Findings within a release of Matrox graphics drivers (MGA64.sys v4.77.027) mentioned a never-released Matrox Millennium G800. The MGA-G800, codenamed Condor 2, would have been clocked at 200 MHz core with 200 MHz DDR memory (6.4 GB/s bandwidth). The chip had 3 pixel pipelines with 3 texture units each. It
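The quoted 6.4 GB/s figure is consistent with a 128-bit DDR interface at 200 MHz, though the bus width here is an assumption rather than a published specification:

\[ \frac{128\,\mathrm{bit}}{8\,\mathrm{bit/byte}} \times 200\,\mathrm{MHz} \times 2\;(\mathrm{DDR}) = 6.4\ \mathrm{GB/s} \]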

The bandwidth limitations that plagued the GeForce 256 and GeForce2 lines. It also owed some of its design heritage to Nvidia's high-end CAD products, and in performance-critical non-game applications it was remarkably effective. The most notable example is AutoCAD, in which the GeForce4 MX returned results within a single-digit percentage of GeForce4 Ti cards several times the price. Many criticized

The board's complexity (and cost), because fewer traces have to be used, and potentially the pin count of the graphics processor can be significantly reduced if the chip is designed only for a 64-bit bus. However, DDR has a higher inherent latency than SDR given the same bandwidth, so performance dropped somewhat. The new G450 again had support for AGP 4X, like some later-produced G400 boards. The 3D capabilities of G450 were identical to those of the G400. Unfortunately, because of

The chip to each of its functional units. G400's 3D engine consists of 2 parallel pixel pipelines with 1 texture unit each, providing single-pass dual-texturing capability. The Millennium G400 MAX is capable of a 333 megapixels per second fill rate at its 166 MHz core clock speed. It is purely a Direct3D 6.0 accelerator and, as such, lacks support for the later hardware transform and lighting acceleration of Direct3D 7.0 cards. The chip's external memory interface
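The fill rate figure is simply pixels per clock times the core clock:

\[ 2\ \mathrm{pixels/clock} \times 166\,\mathrm{MHz} \approx 333\ \mathrm{Mpixels/s} \]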

The counterparts aboard the competition's cards. However, the Warp engine was programmable, which theoretically enhanced the flexibility of the chip. Unfortunately, Matrox never described the functionality of this component in depth, so little is known about it. As noted earlier, G400 suffered at launch from driver problems. While its Direct3D performance was admirable, its OpenGL installable client driver (ICD) component

The entry-level GeForce 2 MX, the midrange GeForce4 MX models (released the same time as the Ti4400 and Ti4600), and the older but still high-performance GeForce 3 (demoted to the upper mid-range or performance niche). However, ATI's Radeon 8500LE was somewhat cheaper than the Ti4400, and outperformed its price competitors, the GeForce 3 Ti200 and GeForce4 MX 460. The GeForce 3 Ti500 filled


The factory, and Matrox can take the time to fix earlier mistakes in the core, and trim or add new functionality. Matrox clocked the G450 core at 125 MHz, just like the plain G400. Overclocking tests showed that the core was unable to achieve higher speeds than the G400, even though it was manufactured on a newer process. Perhaps the biggest addition to G450 was that Matrox moved the previously external second RAMDAC, for
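To a first approximation, die area scales with the square of the feature size, so the 250 nm to 180 nm shrink roughly halves the die area and nearly doubles the candidate dies per wafer (assuming an otherwise unchanged design and ignoring edge and yield effects):

\[ \left(\tfrac{180}{250}\right)^2 \approx 0.52 \]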

The higher speed if the motherboard is capable as well. G400 was known for being particularly dependent on the host system's CPU for high 3D performance. This was attributed both to its architecture and to the poor drivers it relied on for much of its life (especially the OpenGL ICD). With regard to its hardware, G400's triangle setup engine, ironically called the "Warp Engine", was somewhat slower than

The identical core clock and due to lower memory bandwidth, G450 was slower than G400 in games. The Marvel G450 eTV not only had a TV tuner but also served as a launchpad for Matrox's new eDualHead dual-display enhancement. It added some new features to DualHead that worked with Internet Explorer to make pages show up on both screens at once. The MGA-G550 processor added a second pixel pipeline, hardware transform and lighting, and

The limited graphics hardware of the time, EMBM only saw limited use during the G400's time. Only a few games supported the feature, such as Dungeon Keeper 2 and Millennium Soldier: Expendable. EMBM requires either specialized hardware within the chip for its calculations or a more flexible and programmable graphics pipeline, such as later DirectX 8.0 accelerators like the GeForce 3 and Radeon 8500. G400's rendering pipeline uses what Matrox called "Vibrant Color Quality 2" (VCQ2),

The most notable feature of G400 is its ability to drive two separate monitors to display a single desktop. This feature is known as "DualHead" and was a decisive edge for Matrox over the card's competitors at the time. The DualHead capability not only offered desktop widening but also desktop cloning (two screens showing the same thing) and a special "DVDMAX" mode which outputs video overlays onto

The older GeForce 3 by a significant margin. The competing ATI Radeon 8500 was generally faster than the GeForce 3 line, but was overshadowed by the GeForce 4 Ti in every area other than price and more advanced pixel shader (1.4) support. Nvidia, however, missed a chance to dominate the upper-range/performance segment by delaying the release of the Ti4200 and by not rolling out 128 MiB models quickly enough; otherwise

The performance gap between the Ti200 and the Ti4400, but it could not be produced cheaply enough to compete with the Radeon 8500. In consequence, Nvidia rolled out a slightly cheaper model: the Ti4200. Although the 4200 was initially supposed to be part of the launch of the GeForce4 line, Nvidia had delayed its release to sell off the soon-to-be discontinued GeForce 3 chips. In an attempt to prevent

The process). The G400 chip supports, in hardware, a texture-based surface detailing method called Environment Mapped Bump Mapping (EMBM). EMBM was actually created by BitBoys Oy and licensed to Matrox. EMBM was not supported by several competitors, such as NVIDIA's GeForce 256 through GeForce 2, which only supported the simpler Dot3 bump mapping, but it was available on the ATI Radeon 7200. Due to this lack of industry-wide support, and its toll on
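Conceptually, EMBM stores signed (du, dv) offsets in a bump texture and uses them to perturb the lookup coordinates into an environment map. A minimal software sketch, with illustrative names and a hypothetical scale factor:

    import numpy as np

    def embm_sample(env_map, bump_map, u, v, scale=8):
        # bump_map holds signed (du, dv) offsets per texel.
        du, dv = bump_map[v, u]
        h, w = env_map.shape[:2]
        # Perturb the environment-map coordinates, wrapping at the edges.
        pu = int(u + scale * du) % w
        pv = int(v + scale * dv) % h
        return env_map[pv, pu]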

The release of the NVIDIA GeForce 256, which supported Direct3D 7.0 transform and lighting acceleration, the Millennium G400 MAX was a respectable Direct3D card, competitive with the Voodoo3 3500 and TNT2 Ultra. 3dfx had an edge in some games with its low-overhead Glide API, and NVIDIA was, for a long time, king of OpenGL. Matrox stopped support for the Marvel G400-TV early because there was no way to make it fully functional in Windows 2000. The problem

The same as the MX440, but had crucial advantages with better single-texturing performance and proper support of DirectX 8 shaders. However, the 9000 was unable to break the MX440's entrenched hold on the OEM market. Nvidia's eventual answer to the Radeon 9000 was the GeForce FX 5200, but despite the 5200's DirectX 9 features it did not have a significant performance increase compared to the MX440 even in DirectX 7.0 games. This kept


The second monitor connector (DualHead), into the G450 chip itself. RAMDAC speeds were still different, though, with the primary running at an excellent 360 MHz but the secondary running at only 230 MHz. This meant that the primary monitor could run much higher resolutions and refresh rates than the secondary. This was the same as on the G400. The G450 also had native support for TMDS signaling, and thus DVI, but this

The second monitor. Matrox's award-winning PowerDesk display drivers and control panel integrated DualHead in a very flexible and functional way that became world-renowned for its effectiveness. However, contrary to the video mode's name, G400 does not support full DVD decoding hardware acceleration. G400 does have partial support for the DVD video decoding process, but it does not perform the inverse discrete cosine transform (IDCT) or motion compensation in hardware (the two most demanding steps of

The short-lived 4200 Go is not part of this lineup; it was instead derived from the Ti line.) Like the Ti series, the MX was also updated in late 2002 to support AGP-8X with the NV18 core. The two new models were the MX440-8X, which was clocked slightly faster than the original MX440, and the MX440SE, which had a narrower memory bus and was intended as a replacement of sorts for the MX420. The MX460, which had been discontinued by this point,

The surface appearance changes as lights in the scene are moved around. The other method is to specify a normal map which contains the modified normal for each point on the surface directly. Since the normal is specified directly instead of derived from a height map, this method usually leads to more predictable results. This makes it easier for artists to work with, making it the most common method of bump mapping today. Real-time 3D graphics programmers often use variations of
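Normal maps typically remap each component of the unit normal from [-1, 1] into an 8-bit color channel; decoding a texel back is a one-liner (an illustrative sketch):

    import numpy as np

    def decode_normal(texel_rgb):
        # Map each 8-bit channel from [0, 255] back to [-1, 1], then renormalize.
        n = np.asarray(texel_rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
        return n / np.linalg.norm(n)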

The technique in order to simulate bump mapping at a lower computational cost. One typical way was to use a fixed geometry, which allows one to use the heightmap surface normal almost directly. Combined with a precomputed lookup table for the lighting calculations, the method could be implemented with a very simple and fast loop, allowing for a full-screen effect. This method was a common visual effect when bump mapping
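A sketch of that classic fixed-geometry trick, assuming an 8-bit heightmap and a precomputed radial light texture serving as the lighting lookup table; all names are illustrative:

    import numpy as np

    def bump_effect(height, light_lut, lx, ly):
        h, w = height.shape
        ch, cw = light_lut.shape[0] // 2, light_lut.shape[1] // 2
        out = np.zeros((h, w), dtype=light_lut.dtype)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                # The heightmap gradient stands in for the surface normal.
                dx = int(height[y, x + 1]) - int(height[y, x - 1])
                dy = int(height[y + 1, x]) - int(height[y - 1, x])
                # Offset into the light table by gradient plus light distance.
                u = min(max(cw + (x - lx) + dx, 0), light_lut.shape[1] - 1)
                v = min(max(ch + (y - ly) + dy, 0), light_lut.shape[0] - 1)
                out[y, x] = light_lut[v, u]
        return out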

Was also equipped with a hardware transform and lighting unit capable of processing 20–30 million triangles per second. Further speculation included a memory controller that could support DDR SDRAM and DDR FC-RAM, DirectX 8.0 compliance, and a faster version running at 250 MHz. These specifications are somewhat reminiscent of the Matrox Parhelia, in that Parhelia is a 4-pipeline DirectX 8 GPU with 4 texture units per pipeline.

Bump mapping

Bump mapping

Was an attempt to form a fourth family, also for the laptop market, the only member of it being the GeForce4 4200 Go (NV28M), which was derived from the Ti line. The GeForce4 Ti (NV25) was launched in February 2002 and was a revision of the GeForce 3 (NV20). It was very similar to its predecessor; the main differences were higher core and memory clock rates, a revised memory controller (known as Lightspeed Memory Architecture II, or LMA II), updated pixel shaders with new instructions for Direct3D 8.0a support, an additional vertex shader (the vertex and pixel shaders were now known as the nFinite FX Engine II), hardware anti-aliasing (Accuview AA), and DVD playback. Legacy Direct3D 7-class fixed-function T&L

Was at most competitive with the GeForce 3 and GeForce 4 Ti 4200, but it was priced the same as the Ti 4600 at US$399. The GeForce 4 Ti4200 enjoyed considerable longevity compared to its higher-clocked peers. At half the cost of the 4600, the 4200 remained the best balance between price and performance until the launch of the ATI Radeon 9500 Pro at the end of 2002. The Ti4200 still managed to hold its own against several next-generation DirectX 9 chips released in late 2003, outperforming

Was based on this chip and sold as the Ti4200-8X. A Ti4800SE replaced the Ti4400, and a Ti4800 replaced the Ti4600, when the 8X AGP NV28 core was introduced on these models. The only mobile derivative of the Ti series was the GeForce4 4200 Go (NV28M), launched in late 2002. The solution featured the same feature set and similar performance compared with the NV28-based Ti4200, although the mobile variant

Was clocked lower. It outperformed the Mobility Radeon 9000 by a large margin, as well as being Nvidia's first DirectX 8 laptop graphics solution. However, because the GPU was not designed for the mobile space, it had thermal output similar to the desktop part. The 4200 Go also lacked power-saving circuitry like the MX-based GeForce4 4x0 Go series or the Mobility Radeon 9000. This caused problems for notebook manufacturers, especially with regard to battery life. The GeForce4 Ti outperformed



Was first introduced.

GeForce4

The GeForce 4 series (codenames below) refers to the fourth generation of Nvidia's GeForce line of graphics processing units (GPUs). There are two different GeForce4 families: the high-performance Ti family and the budget MX family. The MX family spawned a mostly identical GeForce4 Go (NV17M) family for the laptop market. All three families were announced in early 2002; members within each family were differentiated by core and memory clock speeds. In late 2002, there

Was multiple-monitor support; Intel's solutions did not have this at all, and the nForce 2's multi-monitor support was much inferior to what the MX series offered. The MX440 performed reasonably well for its intended audience, outperforming its closest competitor, the ATI Radeon 7500, as well as the discontinued GeForce2 Ti and Ultra. When ATI launched its Radeon 9000 Pro in September 2002, it performed about

Was never replaced. Another variant followed in late 2003—the MX 4000, which was a GeForce4 MX440SE with a slightly higher memory clock. The GeForce4 MX line received a third and final update in 2004, with the PCX 4300, which was functionally equivalent to the MX 4000 but with support for PCI Express. In spite of its new codename (NV19), the PCX 4300 was in fact simply an NV18 core with

Was not a standard-issue connector. Boards shipped with dual analog VGA connectors. G450 was adapted to use a DDR SDRAM memory interface instead of the older single data rate (SDR) SGRAM and SDRAM used on G400. By doing this, Matrox was able to switch to a 64-bit memory bus and use the DDR memory to equal the previous memory bandwidth by again clocking the RAM at 166 MHz. A 64-bit bus reduces
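The arithmetic behind the equivalence: a 64-bit bus transferring on both clock edges moves the same bytes per second as a 128-bit single-data-rate bus at the same clock:

\[ \frac{64\,\mathrm{bit}}{8} \times 166\,\mathrm{MHz} \times 2 = \frac{128\,\mathrm{bit}}{8} \times 166\,\mathrm{MHz} \approx 2.7\ \mathrm{GB/s} \]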

Was now implemented as vertex shaders. Proper dual-monitor support (TwinView) was also brought over from the GeForce 2 MX. The GeForce 4 Ti was superior to the GeForce 4 MX in virtually every aspect save for production cost, although the MX had the Nvidia VPE (video processing engine), which the Ti lacked. The initial two models were the Ti4400 and the top-of-the-range Ti4600. At the time of their introduction, Nvidia's main products were

Was the benchmark for signal quality for several years, significantly outperforming some competitors (notably pre-GeForce4 NVIDIA cards). Where many cards were crippled by blurry output, especially as the resolution and refresh rate increased, the Matrox cards delivered very sharp and clear images. G400 is the first Matrox board compatible with AGP 4X. Most (REV. A) G400 boards actually only support 2X mode, but there are later revisions (REV. B) that are fully 4X compliant and run at

Was very poor. The situation was eerily similar to what had happened with the older G200, with its near-total lack of credible OpenGL support. Matrox made it very clear that they were committed to supporting OpenGL, however, and development rapidly progressed. G400 initially launched with an OpenGL-to-Direct3D wrapper driver, like G200, that translated an application's OpenGL calls into Direct3D (a slow and buggy solution). Eventually

Was with the Zoran chip used for hardware MJPEG video compression on the Marvel G400 card. Matrox tried to make stable drivers for several months, but with no luck. A Matrox user going by the name Adis hacked the original drivers to make the card work under Windows 2000. The driver was later updated for Windows XP, and then for Windows Server 2003. Video capturing was possible, but the drivers were still based on VfW. Hardware MJPEG capturing can be unstable, but software compression, using
