EA Vancouver

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.

EA Vancouver (formerly known as EA Burnaby, then EA Canada) is a Canadian video game developer located in Burnaby, British Columbia. The studio opened as Distinctive Software in January 1983 and is Electronic Arts's largest and oldest studio. EA Vancouver employs approximately 1,300 people and houses the world's largest video game test operation. It is best known for developing many EA Sports and EA Sports BIG titles, including EA Sports FC (formerly FIFA), NHL, SSX, NBA Street, NFL Street, EA Sports UFC, and FIFA Street, as well as a number of NBA Live and NCAA Basketball titles between 1994 and 2009.


The campus consists of a motion-capture studio, twenty-two rooms for composing, fourteen video editing suites, three production studios, a wing for audio compositions, and a quality assurance department. There are also facilities such as fitness rooms, two theatres, a cafeteria, coffee bars, a soccer field, and several arcades. EA Vancouver is a major studio of the American gaming software giant Electronic Arts (EA), which has many studios around

a box office failure of Mars Needs Moms. Television series produced entirely with motion capture animation include Laflaque in Canada, Sprookjesboom and Cafe de Wereld in the Netherlands, and Headcases in the UK. Virtual reality and augmented reality providers, such as uSens and Gestigon, allow users to interact with digital content in real time by capturing hand motions. This can be useful for training simulations, visual perception tests, or performing virtual walk-throughs in

a motion picture. Also referred to as motion tracking or camera solving, match moving is related to rotoscoping and photogrammetry. Match moving is sometimes confused with motion capture, which records the motion of objects, often human actors, rather than the camera. Typically, motion capture requires special cameras and sensors and a controlled environment (although recent developments such as

a neo-noir third-person shooter video game called My Eyes On You, using motion capture to animate its main character, Jordan Adalien, along with non-playable characters. Of the three nominees for the 2006 Academy Award for Best Animated Feature, two (Monster House and the winner Happy Feet) used motion capture; only Disney·Pixar's Cars

a 3D environment. Motion capture technology is frequently used in digital puppetry systems to drive computer-generated characters in real time. Gait analysis is one application of motion capture in clinical medicine. Techniques allow clinicians to evaluate human motion across several biomechanical factors, often while streaming this information live into analytical software. One innovative use

a 3D model: There are many applications of motion capture. The most common are video games and movies; there is also a research application of this technology in robotics development at Purdue University. Video games often use motion capture to animate athletes, martial artists, and other in-game characters. As early as 1988, an early form of motion capture

a camera in a real or virtual world. Therefore, a camera is a vector that includes as its elements the position of the camera, its orientation, focal length, and other possible parameters that define how the camera focuses light onto the film plane. Exactly how this vector is constructed is not important as long as there is a compatible projection function P. The projection function P takes as its input

a camera vector (denoted camera) and another vector, the position of a 3-D point in space (denoted xyz), and returns a 2-D point that has been projected onto a plane in front of the camera (denoted XY). We can express this as XY = P(camera, xyz). The projection function transforms the 3-D point and strips away the component of depth. Without knowing the depth of the point, an inverse projection function can only return
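
To make the projection function concrete, here is a minimal sketch of P for a simple pinhole camera, written in Python with NumPy; the dictionary layout and field names are illustrative assumptions, not taken from any particular match moving package:

```python
import numpy as np

def project(camera, xyz):
    """P: map a 3-D world point to a 2-D image point for a pinhole camera.

    `camera` stands in for the camera vector described above: position,
    orientation (a 3x3 rotation matrix), and focal length.
    """
    # Express the world-space point in the camera's coordinate frame.
    p_cam = camera["rotation"].T @ (xyz - camera["position"])
    x, y, z = p_cam
    # Perspective divide: this is where the depth component is stripped away.
    return np.array([camera["focal_length"] * x / z,
                     camera["focal_length"] * y / z])

# A camera at the origin looking down +z photographs a point:
cam = {"position": np.zeros(3), "rotation": np.eye(3), "focal_length": 35.0}
XY = project(cam, np.array([1.0, 2.0, 10.0]))  # -> [3.5, 7.0]
```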

a camera vector that, when every parameter is free, we still might not be able to narrow F down to a single possibility no matter how many features we track. The more we can restrict the various parameters, especially focal length, the easier it becomes to pinpoint the solution. In all, the 3D solving process is the process of narrowing down the possible solutions to the motion of the camera until we reach one that suits

a cyan light strobe instead of the typical IR light for minimum fall-off underwater, and high-speed cameras with an LED light or with the option of using image processing. An underwater camera is typically able to measure 15–20 meters depending on the water quality, the camera and the type of marker used. Unsurprisingly, the best range is achieved when the water is clear, and as always, the measurement volume

a few decades, which has given new insight into many fields. The vital part of the system, the underwater camera, has a waterproof housing. The housing has a finish that withstands corrosion and chlorine, which makes it well suited for use in basins and swimming pools. There are two types of cameras. Industrial high-speed cameras can also be used as infrared cameras. Infrared underwater cameras come with


a human can. A large number of points can be analyzed with statistics to determine the most reliable data. The disadvantage of automatic tracking is that, depending on the algorithm, the computer can be easily confused as it tracks objects through the scene. Automatic tracking methods are particularly ineffective in shots involving fast camera motion, such as that seen with hand-held camera work, and in shots with repetitive subject matter like small tiles or any sort of regular pattern where one area

a mathematical model to the silhouette. For movements that do not produce a visible change in the silhouette, hybrid systems are available that can do both (marker and silhouette), but with fewer markers. In robotics, some motion capture systems are based on simultaneous localization and mapping. Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections. Data acquisition

a modified version of EAGL 4 and combines it with the Heroic Driving Engine. Need for Speed: World uses a modified EAGL 3 engine with the physics of the earlier games and an external GUI programmed in Adobe Flash. During the development of Need for Speed: The Run, EA Black Box dropped its custom engine and adopted the Frostbite 2 engine. Motion capture (sometimes referred to as mo-cap or mocap, for short)

a performer wearing a full-body spandex/lycra suit designed specifically for motion capture. This type of system can capture large numbers of markers at frame rates usually around 120 to 160 fps, although by lowering the resolution and tracking a smaller region of interest they can track as high as 10,000 fps. Active optical systems triangulate positions by illuminating one LED at a time very quickly or multiple LEDs with software to identify them by their relative positions, somewhat akin to celestial navigation. Rather than reflecting light back that

a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s, and expanded into education, training, sports and, more recently, computer animation for television, cinema, and video games as the technology matured. The performer wears markers near each joint to identify the motion by the positions or angles between the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at least two times

a reference for placing synthetic objects, or by a reconstruction program to create a 3-D version of the actual scene. The camera and point cloud need to be oriented in some kind of space. Therefore, once calibration is complete, it is necessary to define a ground plane. Normally, this is a unit plane that determines the scale, orientation and origin of the projected space. Some programs attempt to do this automatically, though more often
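
As a concrete illustration of defining a ground plane by hand, the sketch below (Python/NumPy; the function name and point-picking workflow are hypothetical) builds an origin and orientation from three point-cloud points the user has identified on the floor:

```python
import numpy as np

def ground_plane_from_points(p0, p1, p2):
    """Build a 4x4 ground-plane transform from three non-collinear
    point-cloud points picked on the floor: p0 becomes the origin,
    p0->p1 the x-axis, and the plane normal the up (y) axis."""
    x = (p1 - p0) / np.linalg.norm(p1 - p0)
    n = np.cross(p1 - p0, p2 - p0)      # normal to the floor plane
    y = n / np.linalg.norm(n)           # "up" for the projected space
    z = np.cross(x, y)                  # completes a right-handed frame
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2], m[:3, 3] = x, y, z, p0
    return m
```

Scale is set implicitly by how far apart the picked points are, which is one reason the plane's exact placement is largely a matter of convenience.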

a scene from incidental footage. A reconstruction program can create three-dimensional objects that mimic the real objects from the photographed scene. Using data from the point cloud and the user's estimation, the program can create a virtual object and then extract a texture from the footage that can be projected onto the virtual object as a surface texture. Match moving has two forms. Some compositing programs, such as Shake, Adobe Substance, Adobe After Effects, and Discreet Combustion, include two-dimensional motion tracking capabilities. Two-dimensional match moving only tracks features in two-dimensional space, without any concern for camera movement or distortion. It can be used to add motion blur or image stabilization effects to footage. This technique

a scene where an actor walks in front of a background, the tracking artist will want to use only the background to track the camera through the scene, knowing that the motion of the actor will throw off the calculations. In this case, the artist will construct a tracking matte to follow the actor through the scene, blocking that information from the tracking process. Since there are often multiple possible solutions to

a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Since the system eliminates a high-speed camera and the corresponding high-speed image stream, it requires significantly lower data bandwidth. The tags also provide incident illumination data, which can be used to match scene lighting when inserting synthetic elements. The technique appears ideal for on-set motion capture or real-time broadcasting of virtual sets, but has yet to be proven. Motion capture technology has been available for researchers and scientists for

a series of restructurings and layoffs within EA. In 2011, EA Canada acquired Bight Games, a maker of freemium games. Games developed for publishing by EA Sports: Games developed for publishing by EA Sports BIG: EA Graphics Library (EAGL) is a game engine created and developed by EA Canada. It is the main engine used in some of EA's games, notably the Need for Speed series, and


a set of points {xyz_i,0, ..., xyz_i,n} and {xyz_j,0, ..., xyz_j,n}, where i and j still refer to frames and n is an index to one of many tracking points we are following. We can derive a set of camera vector pair sets {C_i,j,0, ..., C_i,j,n}. In this way multiple tracks allow us to narrow the possible camera parameters. The set of possible camera parameters that fit, F,

a set of possible 3D points that form a line emanating from the nodal point of the camera lens and passing through the projected 2-D point. We can express the inverse projection as xyz ∈ P'(camera, XY), or equivalently P'(camera, XY) = {xyz : P(camera, xyz) = XY}. Let's say we are in a situation where the features we are tracking are on the surface of a rigid object such as a building. Since we know that the real point xyz will remain in the same place in real space from one frame of
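
Continuing the earlier pinhole sketch (same assumed camera dictionary; Python with NumPy), the inverse projection can be represented as the ray of candidate points:

```python
import numpy as np

def inverse_project(camera, XY):
    """P': return the ray of 3-D points that project to the 2-D point XY.

    Without depth there is no unique answer, only a line through the
    camera's nodal point: every point origin + t * direction (t > 0)
    projects back onto XY."""
    # Ray direction in camera space, then rotated into world space.
    d_cam = np.array([XY[0], XY[1], camera["focal_length"]])
    direction = camera["rotation"] @ (d_cam / np.linalg.norm(d_cam))
    return camera["position"], direction
```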

a subsidiary of Warner Brothers Pictures created especially to enable virtual cinematography, including photorealistic digital look-alikes for filming The Matrix Reloaded and The Matrix Revolutions, used a technique called Universal Capture that utilized a seven-camera setup and tracked the optical flow of all pixels over all the 2-D planes of the cameras for motion, gesture and facial expression capture, leading to photorealistic results. Traditionally, markerless optical motion tracking

a take. No longer do they need to perform to green/blue screens with no feedback of the result. Eye-line references, actor positioning, and CGI interaction can now be done live on-set, giving everyone confidence that the shot is correct and going to work in the final composite. To achieve this, a number of components from hardware to software need to be combined. Software collects all of the six degrees of freedom of movement of

a unique identification of each marker for a given capture frame, at a cost to the resultant frame rate. The ability to identify each marker in this manner is useful in real-time applications. An alternative method is to identify markers algorithmically, which requires extra processing of the data. It is also possible to find the position by using colored LED markers. In these systems, each color

a white void and a camera. For any position in space at which we place the camera, there is a set of corresponding parameters (orientation, focal length, etc.) that will photograph that black point exactly the same way. Since C has an infinite number of members, one point is never enough to determine the actual camera position. As we start adding tracking points, we can narrow the possible camera positions. For example, if we have

is a small set. Call the set of possible camera vectors that solve the equation at i and j C_ij. So there is a set of camera vector pairs C_ij for which the intersection of the inverse projections of two points XY_i and XY_j is a non-empty, hopefully small, set centering on a theoretical stationary point xyz. In other words, imagine a black point floating in

is a technique that allows the insertion of 2D elements, other live action elements or CG computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. It also allows for the removal of live action elements from the live action shot. The term is used loosely to describe several different methods of extracting camera motion information from

is also dependent on the number of cameras. A range of underwater markers is available for different circumstances. Different pools require different mountings and fixtures, so all underwater motion capture systems are uniquely tailored to suit each specific pool installation. For cameras placed in the center of the pool, specially designed tripods using suction cups are provided. Emerging techniques and research in computer vision are leading to

is assigned to a specific point of the body. One of the earliest active marker systems in the 1980s was a hybrid passive-active mocap system with rotating mirrors and colored glass reflective markers, which used masked linear array detectors. Active marker systems can be further refined by strobing one marker on at a time, or by tracking multiple markers over time and modulating the amplitude or pulse width to provide marker ID. 12-megapixel spatial resolution modulated systems show more subtle movements than 4-megapixel optical systems by having both higher spatial and temporal resolution. Directors can see


is extremely difficult for an automatic tracker to correctly find features with high amounts of motion blur. The disadvantage of interactive tracking is that the user will inevitably introduce small errors as they follow objects through the scene, which can lead to what is called "drift". Professional-level motion tracking is usually achieved using a combination of interactive and automatic techniques. An artist can remove points that are clearly anomalous and use "tracking mattes" to block confusing information out of

is generated externally, the markers themselves are powered to emit their own light. Since the inverse square law provides one quarter of the power at two times the distance, this can increase the distances and volume for capture. This also enables a high signal-to-noise ratio, resulting in very low marker jitter and a resulting high measurement resolution (often down to 0.1 mm within the calibrated volume). The TV series Stargate SG1 produced episodes using an active optical system for

is generated near the camera's lens. The camera's threshold can be adjusted so only the bright reflective markers will be sampled, ignoring skin and fabric. The centroid of the marker is estimated as a position within the two-dimensional image that is captured. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the Gaussian. An object with markers attached at known positions
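
A minimal sketch of that sub-pixel estimate, assuming Python/NumPy and a small grayscale window already isolated around one thresholded blob (an intensity-weighted centroid, which approximates the centre of the marker's Gaussian-like intensity profile):

```python
import numpy as np

def subpixel_centroid(window):
    """Estimate a marker centre to sub-pixel accuracy from a 2-D array of
    grayscale values around a bright blob."""
    ys, xs = np.indices(window.shape)
    total = window.sum()
    # Weight each pixel coordinate by its brightness: bright marker pixels
    # dominate, so the centre can land between pixel positions.
    return (xs * window).sum() / total, (ys * window).sum() / total
```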

is no line-of-sight to the satellites, such as in indoor environments. The majority of vendors selling commercial optical motion capture systems provide accessible open source drivers that integrate with the popular Robot Operating System (ROS) framework, allowing researchers and developers to effectively test their robots during development. In the field of aerial robotics research, motion capture systems are also widely used for positioning. Regulations on airspace usage limit how feasibly outdoor experiments can be conducted with Unmanned Aerial Systems (UAS). Indoor tests can circumvent such restrictions. Many labs and institutions around

is not very distinct. This tracking method also suffers when a shot contains a large amount of motion blur, making the small details it needs harder to distinguish. The advantage of interactive tracking is that a human user can follow features through an entire scene and will not be confused by features that are not rigid. A human user can also determine where features are in a shot that suffers from motion blur; it

is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving. In motion capture sessions, the movements of one or more actors are sampled many times per second. Whereas early techniques used images from multiple cameras to calculate 3D positions, often the purpose of motion capture

is pose detection, which can empower patients during post-surgical recovery or rehabilitation after injuries. This approach enables continuous monitoring, real-time guidance, and individually tailored programs to enhance patient outcomes. Some physical therapy clinics utilize motion capture as an objective way to quantify patient progress. During the filming of James Cameron's Avatar, all of

is primarily used to track the movement of a camera through a shot so that an identical virtual camera move can be reproduced in a 3D animation program. When new animated elements are composited back into the original live-action shot, they will appear in perfectly matched perspective and therefore appear seamless. As it is mostly software-based, match moving has become increasingly affordable as

is responsible for converting the light from the target area into a digital image that the tracking computer can process. Depending on the design of the optical tracking system, the optical imaging system can vary from as simple as a standard digital camera to as specialized as an astronomical telescope on the top of a mountain. The specification of the optical imaging system determines the upper limit of

is sufficient to create realistic effects when the original footage does not include major changes in camera perspective. For example, a billboard deep in the background of a shot can often be replaced using two-dimensional tracking. Three-dimensional match moving tools make it possible to extrapolate three-dimensional information from two-dimensional photography. These tools allow users to derive camera movement and other relative motion from arbitrary footage. The tracking information can be transferred to computer graphics software and used to animate virtual cameras and simulated objects. Programs capable of 3-D match moving include: There are two methods by which motion information can be extracted from an image. Interactive tracking, sometimes referred to as "supervised tracking", relies on


is the intersection of all sets: F = C_i,j,0 ∩ ... ∩ C_i,j,n. The fewer elements in this set, the closer we can come to extracting the actual parameters of the camera. In reality, errors introduced to the tracking process require a more statistical approach to determining a good camera vector for each frame; optimization algorithms and bundle block adjustment are often utilized. Unfortunately, there are so many elements to
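
As a toy illustration of that statistical refinement (not full bundle block adjustment, which jointly refines many frames and the 3-D points themselves), the sketch below fits one frame's camera vector by least-squares minimization of reprojection error. It assumes Python with NumPy and SciPy, a small-angle rotation model, and the same pinhole projection as the earlier sketches:

```python
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(params, points_3d, observed_xy, f):
    """Residuals between observed 2-D tracks and the projection of assumed
    3-D points. params = [tx, ty, tz, rx, ry, rz]: camera position plus a
    small-angle rotation."""
    t, r = params[:3], params[3:]
    # Small-angle approximation: R ~ I + [r]_x (cross-product matrix).
    R = np.eye(3) + np.array([[0, -r[2], r[1]],
                              [r[2], 0, -r[0]],
                              [-r[1], r[0], 0]])
    res = []
    for p, xy in zip(points_3d, observed_xy):
        pc = R.T @ (p - t)                 # point in camera coordinates
        res.extend(f * pc[:2] / pc[2] - xy)
    return np.asarray(res)

# Sanity check: a camera at the origin should fit tracks it generated.
f = 35.0
pts = np.array([[0.0, 0.0, 10.0], [1.0, -1.0, 8.0], [-2.0, 1.0, 12.0]])
obs = np.array([f * p[:2] / p[2] for p in pts])
fit = least_squares(reprojection_residuals, np.zeros(6), args=(pts, obs, f))
```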

is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for validation of computer vision and robots. In films, television shows and video games, motion capture refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it

is to record only the movements of the actor, not their visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor. This process may be contrasted with the older technique of rotoscoping. Camera movements can also be motion captured so that a virtual camera in the scene will pan, tilt or dolly around the stage, driven by a camera operator while

is traditionally implemented using special markers attached to an actor; however, more recent systems are able to generate accurate data by tracking surface features identified dynamically for each particular subject. Tracking a large number of performers or expanding the capture area is accomplished by the addition of more cameras. These systems produce data with three degrees of freedom for each marker, and rotational information must be inferred from

is used to calibrate the cameras and obtain their positions, and the lens distortion of each camera is measured. If two calibrated cameras see a marker, a three-dimensional fix can be obtained. Typically a system will consist of around 2 to 48 cameras. Systems of over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and for multiple subjects. Vendors provide constraint software to reduce
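
A minimal sketch of how such a three-dimensional fix can be computed (Python/NumPy; midpoint triangulation between two camera rays, with names illustrative rather than taken from any vendor's software):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def triangulate(o1, d1, o2, d2):
    """Given rays (origin, unit direction) from two calibrated cameras that
    both see the same marker, return the midpoint of the segment joining
    the rays at their closest approach."""
    w = o1 - o2
    b = d1 @ d2
    denom = 1.0 - b * b                  # zero only for parallel rays
    s = (b * (w @ d2) - (w @ d1)) / denom
    t = ((w @ d2) - b * (w @ d1)) / denom
    return ((o1 + s * d1) + (o2 + t * d2)) / 2.0

# Two cameras a meter apart, both sighting a marker at (0.2, 0.1, 3.0):
marker = np.array([0.2, 0.1, 3.0])
o1, o2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
print(triangulate(o1, unit(marker - o1), o2, unit(marker - o2)))
```

In practice the two rays never intersect exactly because of noise, which is why the midpoint (or a least-squares variant) is used rather than an exact intersection.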

is used to keep track of various objects, including airplanes, launch vehicles, missiles and satellites. Many such optical motion tracking applications occur outdoors, requiring differing lens and camera configurations. High-resolution images of the target being tracked can thereby provide more information than just motion data. The image obtained from NASA's long-range tracking system of the space shuttle Challenger's fatal launch provided crucial evidence about

the Kinect camera and Apple's Face ID have begun to change this). Match moving is also distinct from motion control photography, which uses mechanical hardware to execute multiple identical camera moves. Match moving, by contrast, is typically a software-based technology, applied after the fact to normal footage recorded in uncontrolled environments with an ordinary camera. Match moving

the two-time Olympic figure skating champion Yuzuru Hanyu graduated from Waseda University. In his thesis, using data provided by 31 sensors placed on his body, he analysed his jumps. He evaluated the use of the technology both to improve the scoring system and to help skaters improve their jumping technique. In March 2021, a summary of the thesis was published in an academic journal. Motion tracking or motion capture started as

the Caribbean, the Na'vi from the film Avatar, and Clu from Tron: Legacy. The Great Goblin, the three Stone-trolls, many of the orcs and goblins in the 2012 film The Hobbit: An Unexpected Journey, and Smaug were created using motion capture. The film Batman Forever (1995) used some motion capture for certain visual effects. Warner Bros. had acquired motion capture technology from arcade video game company Acclaim Entertainment for use in

the Dragon, and Rare's Dinosaur Planet. Indoor positioning is another application for optical motion capture systems. Robotics researchers often use motion capture systems when developing and evaluating control, estimation, and perception algorithms and hardware. In outdoor spaces, it is possible to achieve accuracy to the centimeter by using the Global Navigation Satellite System (GNSS) together with Real-Time Kinematics (RTK). However, accuracy is reduced significantly when there


the VFX, allowing the actor to walk around props that would make motion capture difficult for other non-active optical systems. ILM used active markers in Van Helsing to allow capture of Dracula's flying brides on very large sets, similar to Weta's use of active markers in Rise of the Planet of the Apes. The power to each marker can be provided sequentially in phase with the capture system, providing

the Veil of Mists (2000) was the first feature-length film made primarily with motion capture, although many character animators also worked on the film, which had a very limited release. 2001's Final Fantasy: The Spirits Within was the first widely released movie to be made with motion capture technology. Despite its poor box-office intake, supporters of motion capture technology took notice. Total Recall had already used

the actor is performing. At the same time, the motion capture system can capture the camera and props as well as the actor's performance. This allows the computer-generated characters, images and sets to have the same perspective as the video images from the camera. A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in

the actor's performance in real time, and watch the results on the motion capture-driven CG character. The unique marker IDs reduce the turnaround by eliminating marker swapping and providing much cleaner data than other technologies. LEDs with onboard processing and radio synchronization allow motion capture outdoors in direct sunlight, while capturing at 120 to 960 frames per second due to a high-speed electronic shutter. Computer processing of modulated IDs allows less hand cleanup or filtered results for lower operational costs. This higher accuracy and resolution requires more processing than passive technologies, but

the additional processing is done at the camera to improve resolution via subpixel or centroid processing, providing both high resolution and high speed. These motion capture systems typically cost $20,000 for an eight-camera, 12-megapixel spatial resolution, 120-hertz system with one actor. One can reverse the traditional approach based on high-speed cameras. Systems such as Prakash use inexpensive multi-LED high-speed projectors. The specially built multi-LED IR projectors optically encode

the automatic tracking process. Tracking mattes are also employed to cover areas of the shot which contain moving elements such as an actor or a spinning ceiling fan. A tracking matte is similar in concept to a garbage matte used in traveling matte compositing. However, the purpose of a tracking matte is to prevent tracking algorithms from using unreliable, irrelevant, or non-rigid tracking points. For example, in

the body movement onto a 2D or 3D character's motion on-screen. During the 2016 Game Developers Conference in San Francisco, Epic Games demonstrated full-body motion capture live in Unreal Engine. The whole scene, from the upcoming game Hellblade about a woman warrior named Senua, was rendered in real time. The keynote was a collaboration between Unreal Engine, Ninja Theory, 3Lateral, Cubic Motion, IKinema and Xsens. In 2020,

the calibration process and a significant amount of error can accumulate, the final step of match moving often involves refining the solution by hand. This could mean altering the camera motion itself or giving hints to the calibration mechanism. This interactive calibration is referred to as "refining". Most match moving applications are based on similar algorithms for tracking and calibration. Often,

the camera, as well as metadata such as zoom, focus, iris and shutter elements, from many different types of hardware devices, ranging from motion capture systems such as the active LED marker-based system from PhaseSpace, passive systems such as Motion Analysis or Vicon, to rotary encoders fitted to camera cranes and dollies such as Technocranes and Fisher dollies, or inertial and gyroscopic sensors mounted directly to

the camera. There are also laser-based tracking systems that can be attached to anything, including Steadicams, to track cameras outside in the rain at distances of up to 30 meters. Motion control cameras can also be used as a source or destination for 3D camera data. Camera moves can be pre-visualised in advance and then converted into motion control data that drives a camera crane along precisely


the cause of the accident. Optical tracking systems are also used to identify known spacecraft and space debris, although they have a disadvantage compared to radar in that the objects must reflect or emit sufficient light. An optical tracking system typically consists of three subsystems: the optical imaging system, the mechanical tracking platform and the tracking computer. The optical imaging system

the cost of computer power has declined; it is now an established visual-effects tool and is even used in live television broadcasts as part of providing effects such as the yellow virtual down-line in American football. The process of match moving can be broken down into two steps. The first step is identifying and tracking features. A feature is a specific point in the image that a tracking algorithm can lock onto and follow through multiple frames (SynthEyes calls them blips). Often features are selected because they are bright/dark spots, edges or corners, depending on

the effective range of the tracking system. The mechanical tracking platform holds the optical imaging system and is responsible for manipulating it in such a way that it always points to the target being tracked. The dynamics of the mechanical tracking platform combined with the optical imaging system determine the tracking system's ability to keep a lock on a target that changes speed rapidly. In visual effects, match moving

the film's production. Acclaim's 1995 video game of the same name also used the same motion capture technology to animate the digitized sprite graphics. Star Wars: Episode I – The Phantom Menace (1999) was the first feature-length film to include a main character created using motion capture (that character being Jar Jar Binks, played by Ahmed Best), and the Indian-American film Sinbad: Beyond

the frequency rate of the desired motion. The resolution of the system is important in both the spatial resolution and temporal resolution, as motion blur causes almost the same problems as low resolution. Since the beginning of the 21st century, and because of the rapid growth of technology, new methods have been developed. Most modern systems can extract the silhouette of the performer from the background. Afterwards, all joint angles are calculated by fitting
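
One simple way to approximate that silhouette-extraction step for a fixed camera is background subtraction; the sketch below uses OpenCV's MOG2 subtractor in Python (the input filename is hypothetical, and production markerless systems use considerably more sophisticated multi-view methods):

```python
import cv2

# Learn a statistical background model and flag pixels that deviate from it.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

cap = cv2.VideoCapture("performance.mp4")  # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)   # 255 = foreground (the performer)
    mask = cv2.medianBlur(mask, 5)   # suppress speckle noise
    # `mask` is the silhouette to which a body model can then be fitted.
cap.release()
```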

the globe. EA, based in Redwood City, California, had acquired Distinctive Software in 1991 for $11 million and renamed it EA Canada. At the time of the acquisition, Distinctive Software was noted for developing a number of racing and sporting games published under the Accolade brand. Since becoming EA Canada, the studio has developed many EA Games, EA Sports, and EA Sports BIG games. EA Seattle, formerly Manley & Associates,

the image to the next, we can make the point a constant even though we do not know where it is. So: xyz_i = xyz_j, where the subscripts i and j refer to arbitrary frames in the shot we are analyzing. Since this is always true, we know that: P'(camera_i, XY_i) ∩ P'(camera_j, XY_j) ≠ {}. Because the value of XY_i has been determined for all frames that the feature is tracked through by the tracking program, we can solve the reverse projection function between any two frames as long as P'(camera_i, XY_i) ∩ P'(camera_j, XY_j)

the initial results obtained are similar. However, each program has different refining capabilities. On-set, real-time camera tracking is becoming more widely used in feature film production to allow elements that will be inserted in post-production to be visualised live on-set. This has the benefit of helping the director and actors improve performances by actually seeing set extensions or CGI characters whilst (or shortly after) they do

the motion of the camera by solving the inverse projection of the 2-D paths for the position of the camera. This process is referred to as calibration. When a point on the surface of a three-dimensional object is photographed, its position in the 2-D frame can be calculated by a 3-D projection function. We can consider a camera to be an abstraction that holds all the parameters necessary to model

the needs of the composite we are trying to create. Once the camera position has been determined for every frame, it is then possible to estimate the position of each feature in real space by inverse projection. The resulting set of points is often referred to as a point cloud because of its raw appearance, like a nebula. Since point clouds often reveal some of the shape of the 3-D scene, they can be used as

the particular tracking algorithm. Popular programs use template matching based on NCC score and RMS error. What is important is that each feature represents a specific point on the surface of a real object. As a feature is tracked, it becomes a series of two-dimensional coordinates that represent the position of the feature across a series of frames. This series is referred to as a "track". Once tracks have been created they can be used immediately for 2-D motion tracking, or then be used to calculate 3-D information. The second step involves solving for 3D motion. This process attempts to derive
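
To make the feature-tracking step concrete, here is a small sketch of NCC-based template matching in Python/NumPy (function names are illustrative; real trackers add sub-pixel refinement, image pyramids, and the RMS-error checks the text mentions):

```python
import numpy as np

def ncc(template, patch):
    """Normalized cross-correlation between a feature template and a
    same-sized candidate patch; 1.0 is a perfect match."""
    t = template - template.mean()
    p = patch - patch.mean()
    return (t * p).sum() / (np.linalg.norm(t) * np.linalg.norm(p))

def track_feature(template, frame, prev_xy, radius=12):
    """Search a window around the feature's previous position and return
    the best-scoring location: one step in building a 'track'."""
    h, w = template.shape
    px, py = prev_xy
    best_score, best_xy = -1.0, prev_xy
    for y in range(max(0, py - radius), py + radius + 1):
        for x in range(max(0, px - radius), px + radius + 1):
            patch = frame[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue                 # window fell outside the frame
            score = ncc(template, patch)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```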

the problem of marker swapping, since all passive markers appear identical. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment. Instead, hundreds of rubber balls are attached with reflective tape, which needs to be replaced periodically. The markers are usually attached directly to the skin (as in biomechanics), or they are velcroed to

the rapid development of the markerless approach to motion capture. Markerless systems, such as those developed at Stanford University, the University of Maryland, MIT, and the Max Planck Institute, do not require subjects to wear special equipment for tracking. Special computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify human forms, breaking them down into constituent parts for tracking. ESC entertainment,

the relative orientation of three or more markers; for instance, shoulder, elbow and wrist markers providing the angle of the elbow. Newer hybrid systems are combining inertial sensors with optical sensors to reduce occlusion, increase the number of users and improve the ability to track without having to manually clean up data. Passive optical systems use markers coated with a retroreflective material to reflect light that

the same path as the 3-D camera. Encoders on the crane can also be used in real time on-set to reverse this process to generate live 3D cameras. The data can be sent to any number of different 3D applications, allowing 3D artists to modify their CGI elements live on set as well. The main advantage is that set design issues that would be time-consuming and costly later down

the scenes involving motion capture were directed in real time using Autodesk MotionBuilder software to render a screen image which allowed the director and the actor to see what they would look like in the movie, making it easier to direct the movie as it would be seen by the viewer. This method allowed views and angles not possible from a pre-rendered animation. Cameron was so proud of his results that he invited Steven Spielberg and George Lucas on set to view

the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking. The first virtual actor animated by motion capture was produced in 1993 by Didier Pourcel and his team at Gribouille. It involved "cloning" the body and face of French comedian Richard Bohringer, and then animating it with still-nascent motion-capture tools. Motion capture offers several advantages over traditional computer animation of

the space. Instead of retroreflective or active light-emitting diode (LED) markers, the system uses photosensitive marker tags to decode the optical signals. By attaching tags with photo sensors to scene points, the tags can compute not only their own locations, but also their own orientation, incident illumination, and reflectance. These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. The system supports an unlimited number of tags in

the system in action. In Marvel's The Avengers, Mark Ruffalo used motion capture so he could play his character the Hulk, rather than have him be only CGI as in previous films, making Ruffalo the first actor to play both the human and the Hulk versions of Bruce Banner. FaceRig software uses facial recognition technology from ULSee Inc. to map a player's facial expressions, and the body tracking technology from Perception Neuron to map

the technique, in the scene of the X-ray scanner and the skeletons. The Lord of the Rings: The Two Towers was the first feature film to utilize a real-time motion capture system. This method streamed the actions of actor Andy Serkis into the computer-generated imagery skin of Gollum/Smeagol as it was being performed. Storymind Entertainment, an independent Ukrainian studio, created

the true position of targets, the "ground truth" baseline in research and development. Results derived from other sensors and algorithms can then be compared to the ground truth data to evaluate their performance. Movies use motion capture for CGI effects, in some cases replacing traditional cel animation, and for completely CGI creatures, such as Gollum, The Mummy, King Kong, and Davy Jones from Pirates of

the user defines this plane. Since shifting ground planes performs a simple transformation of all of the points, the actual position of the plane is really a matter of convenience. 3-D reconstruction is the interactive process of recreating a photographed object using tracking data. This technique is related to photogrammetry. In this particular case we are referring to using match moving software to reconstruct

the user to follow features through a scene. Automatic tracking relies on computer algorithms to identify and track features through a shot. The tracked points' movements are then used to calculate a "solution". This solution is composed of all the camera's information, such as the motion, focal length, and lens distortion. The advantage of automatic tracking is that the computer can create many points faster than

the voices). The 2007 adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices. James Cameron's highly popular Avatar used this technique to create the Na'vi that inhabit Pandora. The Walt Disney Company produced Robert Zemeckis's A Christmas Carol using this technique. In 2007, Disney acquired Zemeckis's ImageMovers Digital (which produced motion capture films), but then closed it in 2011, after

the world have built indoor motion capture volumes for this purpose. Purdue University houses the world's largest indoor motion capture system, inside the Purdue UAS Research and Test (PURT) facility. PURT is dedicated to UAS research and provides a tracking volume of 600,000 cubic feet, using 60 motion capture cameras. The optical motion capture system is able to track targets in its volume with millimeter accuracy, effectively providing

was also used in a few sports titles from EA Sports. Need for Speed: Hot Pursuit 2 and Need for Speed: Underground used the first version of the EAGL engine (EAGL 1), Need for Speed: Underground 2 uses EAGL 2, Need for Speed: Most Wanted and Need for Speed: Carbon use EAGL 3, and Need for Speed: ProStreet and Need for Speed Undercover use EAGL 4; Need for Speed Undercover uses

was animated without motion capture. In the ending credits of Pixar's film Ratatouille, a stamp appears labelling the film as "100% Genuine Animation – No Motion Capture!" Since 2001, motion capture has been used extensively to simulate or approximate the look of live-action theater, with nearly photorealistic digital character models. The Polar Express used motion capture to allow Tom Hanks to perform as several distinct digital characters (in which he also provided

was closed in 2002, and half the jobs were moved to EA Vancouver. EA acquired Black Box Games in 2002, and Black Box Games became part of EA Canada under the name EA Black Box. EA Black Box later became an independent EA studio in 2005. After its acquisition, EA Black Box became the home of several franchises, such as Need for Speed and Skate. The studio was later shut down in 2013, after

was used to animate the 2D player characters of Martech's video game Vixen (performed by model Corinne Russell) and Magical Company's 2D arcade fighting game Last Apostle Puppet Show (to animate digitized sprites). Motion capture was later notably used to animate the 3D character models in the Sega Model arcade games Virtua Fighter (1993) and Virtua Fighter 2 (1994). In mid-1995, developer/publisher Acclaim Entertainment had its own in-house motion capture studio built into its headquarters. Namco's 1995 arcade game Soul Edge used passive optical system markers for motion capture. Motion capture has also been used to animate characters in games such as Naughty Dog's Crash Bandicoot, Insomniac Games' Spyro
