
Spectral imaging

Article snapshot taken from Wikipedia, under the Creative Commons Attribution-ShareAlike license.

Spectral imaging is imaging that uses multiple bands across the electromagnetic spectrum. While an ordinary camera captures light across three wavelength bands in the visible spectrum, red, green, and blue (RGB), spectral imaging encompasses a wide variety of techniques that go beyond RGB. Spectral imaging may use the infrared, the visible spectrum, the ultraviolet, X-rays, or some combination of the above. It may include the acquisition of image data in visible and non-visible bands simultaneously, illumination from outside the visible range, or the use of optical filters to capture a specific spectral range. It is also possible to capture hundreds of wavelength bands for each pixel in an image.


Multispectral imaging captures a small number of spectral bands, typically three to fifteen, through the use of varying filters and illumination. Many off-the-shelf RGB camera sensors can detect wavelengths of light from 300 nm to 1200 nm. A scene may be illuminated with NIR light, and, simultaneously, an infrared-passing filter may be used on the camera to ensure that visible light is blocked and only NIR

A backdrop, verifying regions of undisturbed soil, as it is sensitive to the 10.4 micrometer wavelength. The blue detector is sensitive to wavelengths of 9.3 micrometers. If the intensity of the blue image changes when scanning, that region is likely disturbed. The scientists reported that fusing these two images increased detection capabilities. Intercepting an intercontinental ballistic missile (ICBM) in its boost phase requires imaging of

A blue color. This is because plants reflect more infrared radiation (IR) than man-made objects. Dense vegetation cover may give a more intense red color than sparse vegetation cover, which helps in determining whether trees are healthy and gives evidence of the growth rate of plants. It is also helpful when identifying the boundary between land and ocean or lakes, because

A certain limit, the same class will be split across several clusters, so that variation within the class is represented. After the clusters are formed, ground truth validation is done to identify the class each image pixel belongs to. Thus, in unsupervised classification, a priori information about the classes is not required. One of the popular methods in unsupervised classification is k-means clustering. Multispectral imaging measures light emission and
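As a rough illustration of k-means on pixel spectra, here is a minimal sketch; the two-band pixel values are made up for demonstration and are not from the article:

```python
import numpy as np

def kmeans(pixels, k, n_iter=20, seed=0):
    """Minimal k-means for unsupervised classification of pixel spectra.

    pixels: (n, bands) array of spectral values; returns a cluster label per pixel."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen pixels (without replacement).
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest centroid (Euclidean distance).
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels

# Two well-separated groups of two-band "pixels" should form two clusters.
pixels = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = kmeans(pixels, k=2)
```

Real workflows would use a library implementation (e.g. scikit-learn) and run on full spectral images, but the clustering logic is the same.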

A complete spectrum. In other words, the camera has a high spectral resolution. The phrase "spectral imaging" is sometimes used as shorthand for this technique, but the term "hyperspectral imaging" is preferable where ambiguity may arise. Hyperspectral images are often represented as an image cube, which is a type of data cube. Applications of spectral imaging include art conservation, astronomy, solar physics, planetology, and Earth remote sensing. It also applies to digital and print reproduction, and to exhibition lighting design for small and medium cultural institutions. Spectral imaging systems are

A location which has an elevation higher than that of the datum plane, the original position will move away from the central point of the image. For a location which has an elevation lower than that of the datum plane, the original position will move closer to the central point of the image. Distortion refers to any change in an object's or region's location on an aerial photo that modifies its original features and shapes. It usually appears near

A scale of 1:500 to 1:1000. This type of photograph is best suited for local site investigations; it looks like a zoomed-in map. Small-scale aerial photographs are those taken at a scale of 1:5000 to 1:20000. They are more suitable for provincial or large-area research. An aerial photograph marks different data and information about the covered area and the airplane's position and condition. These details are measured and noted by

A significant spectral resolution is required. Another, much more efficient technique is based on multibandpass filters, which allows a large number of final bands to be obtained from a limited number of images. The captured images form a mathematical basis with enough information to reconstruct the data for each pixel at high spectral resolution. This is the approach followed by the Hypercolorimetric Multispectral Imaging (HMI) of Profilocolore SRL. Multispectral imaging captures image data within specific wavelength ranges across
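The reconstruction idea can be sketched as a linear inverse problem: if spectra lie in a low-dimensional basis, a few filtered measurements suffice to recover the full spectrum. The basis, filter curves, and dimensions below are illustrative assumptions, not HMI's actual method:

```python
import numpy as np

# Hypothetical setup: reconstruct a 31-band spectrum from 7 filtered
# measurements, assuming pixel spectra lie in a 5-dimensional basis.
rng = np.random.default_rng(1)
n_bands, n_filters, n_basis = 31, 7, 5

B = rng.random((n_bands, n_basis))    # basis spectra (assumed known in advance)
F = rng.random((n_filters, n_bands))  # filter sensitivity curves

true_coeffs = rng.random(n_basis)
true_spectrum = B @ true_coeffs       # ground-truth pixel spectrum
measurements = F @ true_spectrum      # the few band values actually captured

# Least-squares fit of the basis coefficients to the measurements,
# then expansion back to the full-resolution spectrum.
coeffs, *_ = np.linalg.lstsq(F @ B, measurements, rcond=None)
reconstructed = B @ coeffs
```

Because the simulated measurements are noise-free and consistent with the basis, the reconstruction here is exact; real systems add noise and regularization.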

A single system. — Valerie C. Coffey
In the case of Landsat satellites, several different band designations have been used, with as many as 11 bands (Landsat 8) comprising a multispectral image. Spectral imaging with a higher radiometric resolution (involving hundreds or thousands of bands), finer spectral resolution (involving smaller bands), or wider spectral coverage may be called hyperspectral or ultraspectral. Multispectral imaging can be employed for investigation of paintings and other works of art. The painting

A very small size in the image. Low-altitude aerial photographs are taken when the plane is flying at an altitude of less than 10,000 feet. The objects in the photograph are larger and contain more details than those in high-altitude photographs. Due to this advantage, a routine collection of low-altitude photographs has been conducted every six months since 1985. The scale of aerial photography or satellite imagery

Is a special case of spectral imaging where often hundreds of contiguous spectral bands are available. For different purposes, different combinations of spectral bands can be used. They are usually represented with red, green, and blue channels. Mapping of bands to colors depends on the purpose of the image and the personal preferences of the analysts. Thermal infrared is often omitted from consideration due to poor spatial resolution, except for special purposes. Many other combinations are in use. NIR


Is capable of producing good-quality images under poor weather conditions, such as foggy and misty air. Color aerial photographs preserve and capture the colors of the original objects through the numerous layers inside the film. Color photographs can be used to distinguish different kinds of soils, rocks, and deposits that are located above the rock layers, and some contaminated water sources. The degradation of trees driven by

Is captured in the image. Industrial, military, and scientific work, however, uses sensors built for the purpose. Hyperspectral imaging is another subcategory of spectral imaging, which combines spectroscopy and digital photography. In hyperspectral imaging, a complete spectrum or some spectral information (such as the Doppler shift or Zeeman splitting of a spectral line) is collected at every pixel in an image plane. A hyperspectral camera uses special hardware to capture hundreds of wavelength bands for each pixel, which can be interpreted as

Is irradiated by ultraviolet, visible and infrared rays and the reflected radiation is recorded in a camera sensitive in this region of the spectrum. The image can also be registered using the transmitted instead of reflected radiation. In special cases the painting can be irradiated by UV, VIS or IR rays and the fluorescence of pigments or varnishes can be registered. Multispectral analysis has assisted in

Is more similar to humans. Features and structures can be easily recognized. However, landscapes, buildings and hillslopes that are blocked by the mountainous areas are not visible. Black and white aerial photographs are frequently used for drawing maps, such as topographic maps. Topographic maps are precise, in-depth descriptions of the terrain characteristics found in the areas or regions. Black and white aerial photography

Is often shown as red, causing vegetation-covered areas to appear red. The wavelengths are approximate; exact values depend on the particular instruments (e.g. the characteristics of a satellite's sensors for Earth observation, or of the illumination and sensors for document analysis). Unlike other aerial photographic and satellite image interpretation work, these multispectral images do not make it easy to identify directly
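The false-color mapping described above (NIR displayed as red) can be sketched as a simple channel reassignment; the 2×2 band images below are made up for illustration:

```python
import numpy as np

# Hypothetical co-registered band images, scaled 0..1. The top row
# represents vegetation, which reflects strongly in the NIR band.
bands = {
    "green": np.array([[0.2, 0.2], [0.1, 0.1]]),
    "red":   np.array([[0.1, 0.1], [0.1, 0.1]]),
    "nir":   np.array([[0.8, 0.8], [0.1, 0.1]]),
}

# Standard false-color composite: NIR -> red channel, red -> green, green -> blue.
false_color = np.dstack([bands["nir"], bands["red"], bands["green"]])

# A vegetation pixel comes out red: high first channel, low second and third.
veg_pixel = false_color[0, 0]
```

This is why vegetation appears red in a false-color infrared image while most man-made surfaces do not.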

Is often used in detecting or tracking military targets. In 2003, researchers at the United States Army Research Laboratory and the Federal Laboratory Collaborative Technology Alliance reported a dual-band multispectral imaging focal plane array (FPA). This FPA allowed researchers to look at two infrared (IR) planes at the same time. Because mid-wave infrared (MWIR) and long-wave infrared (LWIR) technologies measure radiation inherent to

Is that the scale of the images must be the same. The flying routes of the planes and the time of day are not restricted. The preferred orientation of an aerial photograph is closely related to the position of the Sun and the shadow element. In the Northern Hemisphere, the Sun generally lies to the south, so shadows usually fall on the north side of objects. This then affects

Is the formula for scale measurement. This measurement controls the amount and types of buildings observed in the photographs, the occurrence of some specific characteristics, and the certainty of the measurements. For example, a large-scale photograph usually gives a more accurate measurement of distance compared with a small-scale photograph. 1:6000 to 1:10000 is the best range of scale for landslip research and geological mapping for ground assessment. Large-scale aerial photographs are those taken at

Is the value calculated when the elevation difference between the photo film and the camera lens is divided by the difference between the camera lens and the terrain surface. It can also be measured by dividing the measured length of two locations in the photo by the length of those locations in reality. In figure 5, several related terms and symbols are shown. Focal length (f) refers to

Is to generate the 3D topography or relief when using a stereoscope for interpretation. The stereoscope is an instrument used to see the 3D overlapping aerial images. Meeting the requirements for the two chosen overlapping images is simple: the principal points (the central point of the image, in geometry) of the two photos must be in different locations on the terrain. Another restriction


The camera axis has a 60° angle difference from the vertical axis, shown in figure 4. In this case, the horizon is observable. This type of photograph captures a fairly sizable region. As in a low oblique photograph, the length between two points and the orientation of objects are inaccurate. High oblique aerial photographs are widely used in assisting field investigation because the line of sight shown

The electromagnetic spectrum. The wavelengths may be separated by filters or detected with the use of instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible light range (i.e. infrared and ultraviolet). It can allow extraction of additional information the human eye fails to capture with its visible receptors for red, green and blue. It

The "spectral fingerprint" of a stain to the characteristics of known chemical substances can make it possible to identify the stain. This technique has been used to examine medical and alchemical texts, seeking hints about the activities of early chemists and the possible chemical substances they may have used in their experiments. Like a cook spilling flour or vinegar on a cookbook, an early chemist might have left tangible evidence on

The camera. To capture a vertical aerial photograph, both of these axes must be in the same position. The vertical pictures are captured by the camera which is above the object being photographed without any tilting or deviation of the camera axis. Areas in a vertical aerial photograph often have a consistent size. Oblique aerial photographs are captured when the cameras are set at specific angles to

The data panel which includes different devices and instruments for the specific measurements. Figure 8 shows the general format of a vertical aerial photograph. Overlapping of aerial photos means that around 60% of the covered area of every aerial image overlays that of the one before it. Every object along the flying path can be observed twice at a minimum. The purpose of overlapping the aerial photography
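The 60% overlap rule implies that successive exposures advance the camera by 40% of a single frame's ground coverage. A small worked example, with an assumed coverage value rather than one from the article:

```python
# With 60% forward overlap, each exposure advances by 40% of the ground
# length covered by one photo. The coverage value here is illustrative.
ground_coverage_m = 2300        # ground length covered by one frame, in metres
overlap = 0.60

air_base_m = ground_coverage_m * (1 - overlap)  # distance between exposures
# Every ground point along the flight line is then seen on at least two
# successive photos, which is what makes stereoscopic (3D) viewing possible.
```

With these numbers, the exposures are about 920 m apart on the ground.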

The dividing border between planet and atmosphere from a viewing angle, is unobservable in a low oblique aerial photograph. The distance between two points cannot be measured accurately because a low oblique image does not have a uniform scale. The orientation of objects is also inaccurate. Low oblique photographs can be used as a reference before site investigation because they give updated details of local places. High oblique aerial photographs are generated when

The elements of image interpretation: location, size, shape, shadow, tone/color, texture, pattern, height/depth and site/situation/association. They are routinely used when interpreting aerial photos and analyzing photo-like images. An experienced image interpreter uses many of these elements intuitively. However, a beginner may not only have to consciously evaluate an unknown object according to these elements, but also analyze each element's significance in relation to

The elevation difference between the film and the lens. H is the elevation difference between the lens and the sea level, which is the average level of the water surface. h is the elevation difference between the terrain surface and the sea level. S is the scale of aerial photographs: S = f / (H − h).
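A worked example of the scale formula S = f / (H − h); the focal length and elevations below are illustrative values, not from the article:

```python
# Photo scale S = f / (H - h): focal length divided by the flying height
# above the terrain. All quantities must be in the same unit (metres here).
f_m = 0.152      # focal length: 152 mm, a common mapping-camera value
H_m = 3000.0     # flying height of the lens above sea level
h_m = 500.0      # terrain elevation above sea level

S = f_m / (H_m - h_m)            # 0.152 / 2500
scale_denominator = round(1 / S) # express as 1 : N
```

With these values the photograph's scale is roughly 1:16447, which falls in the small-scale range described above.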

The emissivity of ground surfaces, multispectral imaging can detect the presence of underground missiles. Surface and sub-surface soil possess different physical and chemical properties that appear in spectral analysis. Disturbed soil has increased emissivity in the wavelength range of 8.5 to 9.5 micrometers while demonstrating no change in wavelengths greater than 10 micrometers. The US Army Research Laboratory's dual MWIR/LWIR FPA used "red" and "blue" detectors to search for areas with enhanced emissivity. The red detector acts as
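The disturbed-soil rule above can be sketched as a simple two-band test: flag pixels whose 8.5–9.5 µm ("blue") intensity changed while the >10 µm ("red") intensity did not. The arrays and threshold are illustrative, not the Laboratory's actual processing:

```python
import numpy as np

# Per-pixel change in intensity between scans, for two small example images.
blue_change = np.array([[0.00, 0.08], [0.01, 0.09]])  # change near 9.3 um
red_change  = np.array([[0.00, 0.01], [0.00, 0.01]])  # change near 10.4 um

THRESHOLD = 0.05  # illustrative sensitivity threshold

# Disturbed soil: noticeable change in the blue band, none in the red band.
disturbed = (blue_change > THRESHOLD) & (red_change < THRESHOLD)
```

Fusing the two bands this way suppresses changes that affect both detectors (e.g. overall illumination), which matches the reported benefit of combining the two images.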

The feature type by visual inspection. Hence the remote sensing data has to be classified first, and then processed by various data enhancement techniques to help the user understand the features present in the image. Such classification is a complex task involving rigorous validation of the training samples, depending on the classification algorithm used. The techniques can be grouped mainly into two types. Supervised classification makes use of training samples. Training samples are areas on


The ground for which there is ground truth, that is, what is there is known. The spectral signatures of the training areas are used to search for similar signatures in the remaining pixels of the image, which are classified accordingly. This use of training samples for classification is called supervised classification. Expert knowledge is very important in this method since the selection of

The hard body as well as the rocket plumes. MWIR presents a strong signal from highly heated objects, including rocket plumes, while LWIR produces emissions from the missile's body material. The US Army Research Laboratory reported that their dual-band MWIR/LWIR technology, tracking Atlas 5 Evolved Expendable Launch Vehicles (similar in design to ICBMs), picked up both the missile body and the plume. Most radiometers for remote sensing (RS) acquire multispectral images. Dividing

The image's other objects and phenomena. Vertical aerial photographs represent more than 95% of all captured aerial images. The principles of capturing vertical photographs are shown in Figure 2. Two major axes which originate from the camera lens are included. One is the vertical axis which is always at 90° to the study area. Another one is the camera axis which changes with the angle of

The insects can also be identified using color aerial photos. They can assist in locating storage of materials in the natural environment, such as trees, wild animals and oil. Color infrared aerial photographs are captured using false-color film which changes the original color of different features into "false color". For example, grasslands and forests which are green in nature have a red color. But some artificial objects which are covered in green may have

The interests of the researchers. Modern weather satellites produce imagery in a variety of spectra. Multispectral imaging combines two to five spectral imaging bands of relatively large bandwidth into a single optical system. A multispectral system usually provides a combination of visible (0.4 to 0.7 µm), near infrared (NIR; 0.7 to 1 µm), short-wave infrared (SWIR; 1 to 1.7 µm), mid-wave infrared (MWIR; 3.5 to 5 µm) or long-wave infrared (LWIR; 8 to 12 µm) bands into

The interpretation of ancient papyri, such as those found at Herculaneum, by imaging the fragments in the infrared range (1000 nm). Often, the text on the documents appears to the naked eye as black ink on black paper. At 1000 nm, the difference in how paper and ink reflect infrared light makes the text clearly readable. It has also been used to image the Archimedes palimpsest by imaging

The land. It is a very helpful enhancement or addition to the traditional vertical image. It allows the vision to pass through a relatively high proportion of the plant cover and leaves of trees. Oblique aerial photographs can be classified into two types. Low oblique aerial photographs are generated when the camera axis has a 15–30° angle difference from the vertical axis, shown in figure 3. The horizon,

The longer wavelengths. Researchers claim that dual-band technologies combine these advantages to provide more information from an image, particularly in the realm of target tracking. For nighttime target detection, thermal imaging outperformed single-band multispectral imaging. Dual-band MWIR and LWIR technology resulted in better visualization during the nighttime than MWIR alone. The US Army reports that its dual-band LWIR/MWIR FPA demonstrated better visualization of tactical vehicles than MWIR alone after tracking them through both day and night. By analyzing

2080-401: The most likely class. In case of unsupervised classification no prior knowledge is required for classifying the features of the image. The natural clustering or grouping of the pixel values (i.e. the gray levels of the pixels) are observed. Then a threshold is defined for adopting the number of classes in the image. The finer the threshold value, the more classes there will be. However, beyond

2132-508: The object and require no external light source, they also are referred to as thermal imaging methods. The brightness of the image produced by a thermal imager depends on the objects emissivity and temperature.   Every material has an infrared signature that aids in the identification of the object. These signatures are less pronounced in hyperspectral systems (which image in many more bands than multispectral systems) and when exposed to wind and, more dramatically, to rain. Sometimes


2184-504: The ocean does not reflect the IR. High-altitude aerial photographs are taken when the plane is flying in the altitude range of 10,000 to 25,000 feet. The advantage of high-altitude aerial photography is that it can record the information of a larger area by taking one photograph only. However, high-altitude photographs cannot show as many details as low-altitude photographs since some objects, such as buildings, roads, and infrastructures, are of

2236-632: The pages of the ingredients used to make medicines. Aerial photographic and satellite image interpretation Aerial photographic and satellite image interpretation , or just image interpretation when in context, is the act of examining photographic images , particularly airborne and spaceborne , to identify objects and judging their significance. This is commonly used in military aerial reconnaissance , using photographs taken from reconnaissance aircraft and reconnaissance satellites . The principles of image interpretation have been developed empirically for more than 150 years. The most basic are

2288-425: The parchment leaves in bandwidths from 365–870 nm, and then using advanced digital image processing techniques to reveal the undertext with Archimedes' work. Multispectral imaging has been used in a Mellon Foundation project at Yale University to compare inks in medieval English manuscripts. Multispectral imaging has also been used to examine discolorations and stains on old books and manuscripts. Comparing

2340-407: The picture's border. There are two causes of distortion. The first one is the tilt and tip of a plane . When the aircraft is rising or descending it produces a tip. When the aircraft leans in a particular direction during the aerial survey, it produces a tilt . The tilt axis is perpendicular to the tip axis. The second reason is the processing of photographs . This is a process used to modify

2392-402: The relief of the topography. This orientation of the image also helps geologists link the 3D pictures to what they observe. Distortion and displacement are two common phenomena observed on an aerial photograph. Displacement is a result of varying topographic relief within a covered area. A datum plane , which refers to the average sea level, is essential in the displacement phenomenon. For

2444-447: The shadow element in aerial photographs. For example, if the plane is flying from North to South, the sunlight is also coming North and the shadows that show the shapes of objects can be clearly observed in aerial photographs. If the plane is flying from South to North, shadows will not be clearly observed. For photo interpretation, it is preferred that the image is taken so that the shadows can be clearly observed as shadows can highlight

2496-409: The spectrum into many bands, multispectral is the opposite of panchromatic , which records only the total intensity of radiation falling on each pixel . Usually, Earth observation satellites have three or more radiometers . Each acquires one digital image (in remote sensing, called a 'scene') in a small spectral band. The bands are grouped into wavelength regions based on the origin of the light and

2548-447: The surface of the target may reflect infrared energy. This reflection may misconstrue the true reading of the objects’ inherent radiation. Imaging systems that use MWIR technology function better with solar reflections on the target's surface and produce more definitive images of hot objects, such as engines, compared to LWIR technology. However, LWIR operates better in hazy environments like smoke or fog because less scattering occurs in

2600-407: The systems that through the acquisition of one or more images of a subject are able of giving back a spectrum for each pixel of the original images. There are a number of parameters to characterize the obtained data: The most used way to achieve spectral imaging is to take an image for each desired band, using a narrowband filters. This leads to a huge number of images and large bank of filters when

2652-469: The training samples and a biased selection can badly affect the accuracy of classification. Popular techniques include the maximum likelihood principle and convolutional neural network . The maximum likelihood principle calculates the probability of a pixel belonging to a class (i.e. feature) and allots the pixel to its most probable class. Newer convolutional neural network based methods account for both spatial proximity and entire spectra to determine


Was originally developed for military target identification and reconnaissance. Early space-based imaging platforms incorporated multispectral imaging technology to map details of the Earth related to coastal boundaries, vegetation, and landforms. Multispectral imaging has also found use in document and painting analysis. Multispectral imaging measures light in a small number (typically 3 to 15) of spectral bands. Hyperspectral imaging
