False-color image processing


In progress: Seeking additional comments and images to develop this section


Wiki Team Leads: David Lainé, Meghan Wilson
Wiki Editors: Amalia Siatou, Camilla Perondi, Jan Cutajar
Wiki Contributors: your name could be here!


Introduction

False color image processing is a non-invasive technique that combines and rearranges the color channels of one or more source images to produce a final composite image. The colors rendered in the resulting image do not match those that would be observed naturally by the human eye. These methods help visualize information that is not otherwise discernible, which can aid in the interpretation of the media and materials present and inform further scientific analysis, conservation examination, and scholarly research.

Within cultural heritage, the most common use of false color image processing is as a derivative of infrared (IR) or ultraviolet (UV) photography. These methods rearrange the Red, Green and Blue channels of a visible-color image and insert an infrared or ultraviolet capture. Processing methods have expanded with the growing popularity of spectral imaging and the adaptation of remote sensing analyses. These techniques, which include principal component analysis (PCA), derive from a data cube of registered images captured at various wavebands across the ultraviolet, visible, and infrared regions of the spectrum.

History and background

Before digital photography became widespread, conservators and photographers could only rely on infrared photographic emulsions, among which was the Kodak Ektachrome infrared film type 2236. Its three emulsion layers were sensitized to the infrared (in the range between 750 nm and 900 nm), red and green wavebands respectively. It is from this film that the traditional color arrangement for infrared false color was born, and the same result is often sought with digital image processing.

The earliest use of infrared false color photography in the cultural heritage field is reported in the late 1970s, when pigments, inks and glasses were first examined, but it was only in the 1990s that an attempt at a standard procedure was proposed[1].

Techniques

The aim of using false color processing techniques (and their derivatives) is to enhance the signal recorded in the source images, which commonly span the UV-NIR range of the electromagnetic spectrum.

These methods can be applied to almost every type of cultural heritage object. However, it should be noted that pigments and materials may be layered or mixed, and their recipes may vary. Therefore, false color processing is not meant to selectively identify media or materials but rather to suggest what may be present.

Depending on which technique is used and the object’s constituent materials, it is possible to investigate the object’s characteristics in relation to the research scope, such as:

  • Qualitative identification of pigments, inks and dyes
  • Discrimination of metameric pigments, inks and dyes
  • Location and mapping of previous integrations and treatments
  • Enhancement of discoloured or faded features of the support, the text or the polychromy
  • Separation and distinction of layers (of paint, text, etc.)
  • Identification of subtle differences within a single material
  • Enhancement of surface texture to assess construction techniques (e.g. woodcuts, compass holes, carvings, indented handwriting, etc.)
  • Enhancement of watermarks
  • Assessment of substrate density
  • Non-invasive revelation of information beneath pastedowns
  • Tracking of changes over time (material degradation, before/after exhibition, before/after treatment, artificial ageing of scientific samples, etc.)

From single capture

False Color Infrared (FCIR)

Infrared False Colour of a painting (750 nm 850 nm)
Color image of a discoloured painting (top), the infrared image captured with a 720 nm and 850 nm bandpass filter (middle), and their false color composition (bottom). Image courtesy of Camilla Perondi CC BY SA 4.0

A color-rendering method resulting from the combination of a visible-color and infrared image by rearranging the respective RGB channels into a derivative image. Traditionally, the Red and Green channels of the visible-color image are assigned to the Green and Blue channels of the new image respectively, while the Green channel of the infrared image is assigned to the Red channel of the new image.

R' = G_IR,  G' = R_vis,  B' = G_vis
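
As an illustration, the following minimal Python sketch (using NumPy and Pillow) builds an FCIR composite with this channel arrangement. The file names are hypothetical, and the visible and infrared captures are assumed to be registered, 8-bit and of identical dimensions.

    # Minimal FCIR sketch: hypothetical file names; images must be registered, 8-bit and equal in size.
    import numpy as np
    from PIL import Image

    vis = np.asarray(Image.open("visible.tif").convert("RGB"))   # visible-color capture
    ir = np.asarray(Image.open("infrared.tif").convert("RGB"))   # infrared capture

    # Traditional FCIR channel arrangement: R' = G_IR, G' = R_vis, B' = G_vis
    fcir = np.dstack((ir[:, :, 1],    # new Red   <- Green channel of the infrared image
                      vis[:, :, 0],   # new Green <- Red channel of the visible image
                      vis[:, :, 1]))  # new Blue  <- Green channel of the visible image

    Image.fromarray(fcir).save("fcir_composite.tif")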

The resulting color rendering may be affected by several factors, among which:

  • Transmittance spectrum of the infrared bandpass filter used for the infrared image
  • Dynamic range and contrast of the infrared image
  • Color management performed on the visible image
  • Light falloff or specular reflections across the scene
  • Ageing and discoloration of the object (e.g. yellowing of the finishing varnish).

The nature of the binder in polychrome surfaces does not considerably affect the resulting image[2].

If accompanied by a reference chart, the False Color Infrared method finds useful application in the identification of pigments, inks and dyes. The chart should include a set of known materials, i.e. pigments, binding media, inks or dyes, applied in a similar fashion to the object under investigation.

False Color Reflected Ultraviolet (FCUV)

Reflected UV False Colour of a painting (365 nm)
Color image of a discoloured painting (top), the ultraviolet reflectance image (middle), and their false color composition (bottom). Image courtesy of Camilla Perondi CC BY SA 4.0

A color-rendering method resulting from the combination of a visible-color and reflected ultraviolet image by rearranging the respective RGB channels into a derivative image. Traditionally, the Green and Blue channels of the visible-color image are assigned to the Red and Green channels of the new image respectively, while the Green channel of the reflected ultraviolet image is assigned to the Blue channel of the new image.

R' = G_vis,  G' = B_vis,  B' = G_UVR
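
The same approach as the FCIR sketch above applies; a minimal NumPy-only sketch of the FCUV arrangement (the array names are hypothetical and assume 8-bit, registered RGB arrays already loaded for the visible and UV-reflectance captures):

    import numpy as np

    # vis and uvr: registered 8-bit RGB arrays of the visible and UV-reflectance captures.
    # Traditional FCUV channel arrangement: R' = G_vis, G' = B_vis, B' = G_UVR
    def fcuv_composite(vis: np.ndarray, uvr: np.ndarray) -> np.ndarray:
        return np.dstack((vis[:, :, 1],   # new Red   <- Green channel of the visible image
                          vis[:, :, 2],   # new Green <- Blue channel of the visible image
                          uvr[:, :, 1]))  # new Blue  <- Green channel of the UV-reflectance image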

The resulting color rendering may be affected by several factors, among which:

  • Transmittance spectrum of the UV bandpass filter used for the UV reflectance image
  • Dynamic range and contrast of the UV reflectance image
  • Color management performed on the visible image
  • Light falloff or specular reflections across the scene
  • Ageing and discoloration of the object (e.g. yellowing of the finishing varnish).

Neither the nature of the paint binder nor the finishing varnish considerably affects the resulting image.

Chromatic Image (CHR)

A two-channel color-rendering method used to enhance differences in hue and simplify complex color schemes. It finds application in revealing concealed features in dark scenes and in areas of proximate hues, such as dull signatures and other features not visible in traditional imaging.

Chromatic image of a graffiti
Visible light orthophoto of a graffiti by Blu and Ericailcane (top) and its chromatic image (bottom). Image courtesy of Camilla Perondi CC BY SA 4.0

From data cube

Principal Component Analysis (PCA)

Principal component analysis can be applied to many types of data, but the following discussion will focus on PCA as applied to a multispectral image stack / image cube.

Color image of a 10th century parchment fragment used as a book cover. Contains ink, stains, repairs, and possibly adhesive residue along the edges.
PCA with false color applied to the different components. This enables the eye to better understand what is similar and dissimilar.

Principal component analysis is a technique that takes a large data set (e.g. an image cube with tens of thousands of pixels) and reduces it, making it easier to interpret with minimal loss of information. PCA uses a mathematical algorithm to detect differences (called “variance”) within a data set and better distinguish what is similar and dissimilar. For example, the page of a book contains different variables such as the paper, inks, pigments, stains, etc. Each of these elements has a unique spectral response throughout the wavebands of a multispectral image stack because they are made from different chemical compositions. PCA emphasizes each of these sources of variation (called “principal components”) in turn in a new stack of images. The first image accounts for the most variability, in this case likely highlighting the difference between the ink and the paper. False colors can be assigned to each component to distinguish them visually even more (producing a “pseudocolor” image).

Standardization / Preprocessing

Before being modeled with PCA, raw data needs to be standardized and scaled so that the variables are meaningful and comparable. This ensures that variables with larger ranges do not dominate over those with smaller ranges and provides models with better predictive performance. A common preprocessing technique for imaging data is mean centering. Mean centering calculates the average of each variable and subtracts it from that variable, so that the mean-centered data describes only how each sample differs from the average sample in the original data matrix. Areas of low signal are amplified, and larger signals no longer swamp the model during the PCA calculation. Additional decluttering methods such as generalized least squares weighting (GLSW) can be applied to shrink variance further if the data set contains unwanted signal that severely interferes with the observation of the desired signal.
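
A minimal sketch of mean centering an image cube with NumPy (the cube layout of bands × height × width and the synthetic data are assumptions for illustration):

    import numpy as np

    def mean_center(cube: np.ndarray) -> np.ndarray:
        """Mean-center a (bands, height, width) image cube band by band."""
        bands = cube.shape[0]
        X = cube.reshape(bands, -1).T      # rows = pixels, columns = spectral bands
        return X - X.mean(axis=0)          # subtract each band's average from that band

    # Example with a synthetic 8-band, 64 x 64 cube:
    cube = np.random.rand(8, 64, 64).astype(np.float32)
    X_centered = mean_center(cube)         # shape (4096, 8); each column now has mean ~0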

Principal Component Analysis

Principal component analysis transforms a data set of correlated variables, reducing its dimensionality while retaining its statistical information, by rearranging the data into uncorrelated variables called principal components. Algebraically, PCA is commonly computed through singular value decomposition. PCA can be understood through the function:

           Spectra = Ax + By + Cz + …

where Spectra is the spectral response of a single pixel, i.e. the values of that pixel through the bands of the registered image cube; A, B, C, … are the weights; and x, y, z, … are the variables.

Example plot of data point projections on a principal component vector. Image courtesy of Meghan Wilson CC BY SA 4.0

PCA first computes a covariance matrix. Covariance is a measurement of the joint variability of two variables. The matrix is a way of organizing the variables to look through all possible combinations of pairs to see if there is any relationship (correlation) between them.

Next, PCA finds the eigenvectors and eigenvalues of the covariance matrix to identify the principal components. Eigenvectors are the directions of the axes where there is the most variance (the most information). An eigenvalue is the amount of variance connected to an eigenvector. The values enable the principal components to be sorted from highest (most variance) to lowest (least variance).

Each blue dot in the scatter plot is a data point (the representation of a pixel from the image cube data set). The dotted lines are the projections of the data points onto a vector. When a vector is positioned so that the average of the squared distances of the projected points from the origin is highest (i.e. the variance of the projections is at its maximum), that vector becomes principal component (PC) 1. Subsequent principal components (PC2, PC3, etc.) are always orthogonal (perpendicular) to the previous principal components and lie in the direction with the next highest amount of variance.
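
The procedure above can be sketched compactly in NumPy: mean-center the pixel-by-band matrix, compute its covariance matrix, take the eigenvectors and eigenvalues, sort them by variance, project the pixels onto the leading components, and map the first three component images to R, G and B for a pseudocolor rendering. The array layout, scaling and synthetic data below are illustrative assumptions, not a prescribed workflow.

    import numpy as np

    def pca_image_cube(cube: np.ndarray, n_components: int = 3):
        """PCA of a (bands, height, width) image cube via its covariance matrix."""
        bands, h, w = cube.shape
        X = cube.reshape(bands, -1).T.astype(np.float64)   # pixels x bands
        X -= X.mean(axis=0)                                # mean centering

        cov = np.cov(X, rowvar=False)                      # bands x bands covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
        order = np.argsort(eigvals)[::-1]                  # sort by variance, highest first
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        scores = X @ eigvecs[:, :n_components]             # project pixels onto the leading PCs
        return scores.T.reshape(n_components, h, w), eigvals

    def pseudocolor(pc_images: np.ndarray) -> np.ndarray:
        """Stretch the first three component images to 0-255 and stack them as RGB."""
        channels = []
        for comp in pc_images[:3]:
            lo, hi = comp.min(), comp.max()
            channels.append(((comp - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8))
        return np.dstack(channels)

    # Example with a synthetic 8-band cube:
    cube = np.random.rand(8, 64, 64).astype(np.float32)
    pc_images, variances = pca_image_cube(cube)
    false_color = pseudocolor(pc_images)                   # (64, 64, 3) pseudocolor image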

Applications for PCA and Interpretation

PCA can be used for a myriad of applications including but not limited to: spectral characterization of inks, colorants, substrates, and treatments; recovery of obscured or deteriorated document content; enhancement of historic object creation techniques; and assessment of environmental impacts.

Example of use of PCA to recover palimpsest texts. Alexander Hamilton Papers: General Correspondence, -1804; 1780. 1780. Manuscript/Mixed Material. Source: Library of Congress.

In the images below, PCA was used to get a sense of which pigments and inks were similar and which were different. Sometimes, even though pigments appear to be the same color, they are in fact not, and the unaided eye cannot always tell the difference. PCA can map an entire page or object much more quickly and easily than the point-analysis techniques that would otherwise be used to reach the same type of conclusions. It is important to note that this type of interpretation can only be made within a single image. If the text renders blue in one pseudocolor image, it cannot be assumed to match the blue components of a second pseudocolor image, no matter how similar the original materials may be (e.g. even subsequent pages of the same book).

Areas of red pigment, which include the drop cap, the block text insert, and rubrication within the body text, all respond similarly (in cyan), suggesting they are the same pigment. The red used for the figure’s robes, however, responds green within the pseudocolor image, indicating it is a different red pigment from that used in the other areas. Interestingly, some of the shadows in the folds of the robe respond as cyan within the pseudocolor image, so there could be two reds there: one similar to the red used elsewhere on the page and one completely different. This is an example of how PCA can help identify areas of interest for further investigation with tools such as fiber-optic reflectance spectroscopy (FORS) or X-ray fluorescence spectroscopy (XRF).

By emphasizing different materials on a cultural heritage object, PCA can help the eye distinguish hard-to-read content. In the detail below, PCA enables the visual separation of two inks so that crossed-out script becomes legible. The same methodology applies to content that is stained or faded and is especially useful in processing images of watermarks covered by interfering script.

PCA can aid in preservation research of cultural heritage objects. The color image on the left depicts faint green lines on a 1320 portolan chart. The PCA image on the right renders these lines in purple, and they appear much broader than what the unaided eye can see. The PCA helps illustrate how the verdigris, a corrosive copper-containing pigment, has spread into the parchment. Similarly, PCA can help enhance other types of deterioration and staining, which can aid in further analyses or conservation treatment.

Classification

Classification workflows categorize pixels into different classes based on their similarity. The way the similarity is calculated depends on the classification method chosen. The algorithms can work on their own (called an “unsupervised classification”), or with training data where known pixels are designated with regions of interest (ROI) to provide a baseline for determining the remaining pixels in the image (called a “supervised classification”). The latter generally provides more accurate results and is what will be discussed below. Classification can be used to identify unknown materials if areas of known material are present.

Maximum Likelihood Classification

Maximum likelihood classification is a conventional classification method that assigns each pixel to the class to which it most probably belongs, based on class definitions derived from normally distributed (Gaussian) training data.

Maximum likelihood classification can be conducted on a stack derived from a single color image in which the red, green, and blue channels have been separated and saved independently as greyscale images. It can also be used on larger multispectral image stacks.

Regions of interest are defined into classes that encompass pixels known to be the same material. The more classes created and the more pixels assigned to each class, the better the result will be.

If using a multispectral image stack with known wavelengths for each band, a spectral profile can be derived for each class, which can then potentially identify each material when compared against a library of known samples.
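
As a sketch of how the supervised workflow above might look in practice, the following NumPy/SciPy code fits a Gaussian model to each class from its ROI pixels and assigns every pixel of the cube to the most probable class. The array shapes and data structures are assumptions for illustration.

    import numpy as np
    from scipy.stats import multivariate_normal

    def maximum_likelihood_classify(cube, training_rois):
        """Assign each pixel to the class whose Gaussian model gives it the highest likelihood.

        cube:          (bands, height, width) image stack
        training_rois: dict mapping class name -> (n_pixels, bands) array of ROI spectra
        """
        bands, h, w = cube.shape
        X = cube.reshape(bands, -1).T                    # pixels x bands
        class_names = list(training_rois)

        # One log-likelihood value per pixel and per class, using the mean and
        # covariance estimated from that class's ROI pixels.
        log_likelihoods = np.stack([
            multivariate_normal(
                mean=roi.mean(axis=0),
                cov=np.cov(roi, rowvar=False),
                allow_singular=True,                     # tolerate small or flat ROIs
            ).logpdf(X)
            for roi in training_rois.values()
        ])
        labels = np.argmax(log_likelihoods, axis=0)      # most probable class per pixel
        return labels.reshape(h, w), class_names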

Soft Independent Modelling of Class Analogies (SIMCA)

A statistical method for classification / pattern recognition of data (based on PCA models). SIMCA assigns known pixels to classes and can potentially identify unknowns by fitting them to the defined class models via probability thresholds.

Spectral Angle Mapping (SAM)

Spectral classification that matches unknown pixels to known reference spectra by determining the similarity (the spectral angle) between them.
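
A minimal sketch of the idea: treat each pixel’s spectrum and each reference spectrum as vectors, compute the angle between them, and assign the pixel to the reference with the smallest angle. The array shapes are assumptions.

    import numpy as np

    def spectral_angle_map(cube: np.ndarray, references: np.ndarray) -> np.ndarray:
        """Assign each pixel to the reference spectrum with the smallest spectral angle.

        cube:       (bands, height, width) image stack
        references: (n_refs, bands) known reference spectra
        """
        bands, h, w = cube.shape
        X = cube.reshape(bands, -1).T                          # pixels x bands
        cos_num = X @ references.T                             # numerator of the cosine
        norms = (np.linalg.norm(X, axis=1, keepdims=True)
                 * np.linalg.norm(references, axis=1))         # product of vector lengths
        angles = np.arccos(np.clip(cos_num / (norms + 1e-12), -1.0, 1.0))
        return np.argmin(angles, axis=1).reshape(h, w)         # index of best-matching reference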

Maximum Intensity Projection (MIP) / Particle Analysis

Identifies isolated particles in an image and provides statistics about those particles (how many total, size, etc.).
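
A minimal sketch, assuming a (bands, height, width) stack: take the maximum intensity projection across the bands, threshold it, and label the connected particles with SciPy to count and measure them. The threshold is an arbitrary illustrative parameter.

    import numpy as np
    from scipy import ndimage

    def particle_statistics(cube: np.ndarray, threshold: float):
        """Count and measure isolated bright particles in a maximum intensity projection."""
        mip = cube.max(axis=0)                        # maximum intensity projection across bands
        mask = mip > threshold                        # illustrative threshold separating particles
        labeled, n_particles = ndimage.label(mask)    # connected-component labeling
        sizes = ndimage.sum(mask, labeled, index=range(1, n_particles + 1))  # particle areas in pixels
        return n_particles, sizes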

Workflows and best practices

False-color image processing methodologies generally follow “best practice” guidelines such as Metamorfoze, FADGI, ISO 19264, CHARISMA[3] and the AIC Guide to Digital Photography[4].

When these techniques are used, it is critical to include both a reading key and written documentation of the methodology, including the corresponding metadata, for reference and reproducibility.

Processed images should always be analyzed in tandem with the original image to ensure they are not over-manipulated and/or that perceived results are not artifacts.

When processing from an RGB image (e.g. FCIR or FCUV), keep the green channel intact (or take an L* conversion of the visible image and apply it to the green channel of the false color image) and replace the red and/or blue channels with the additional (IR/UV) information.

When using more advanced processing techniques like PCA, work from designated regions of interest (ROI) to minimize variables and maximize useful pixels in the data being provided to the algorithm.

Digital preservation standards, such as the use of lossless, non-proprietary imaging formats (e.g. TIFF), should be considered throughout processing and especially implemented when saving final images. Processing should be executed on TIFF files, but PNG derivatives may be used if necessary due to computer or software data-size constraints. JPEG should never be used, as its lossy compression introduces artifacts that severely alter the information.

Examples / Case studies

Jill Dunkerton and Marta Melchiorre Di Crescenzo (National Gallery, London) present the results of their research on the painting Adoration of the Kings by Sandro Botticelli and Filippino Lippi.


The following table aims to collect case studies on the use of false color imaging found in scholarly literature:

Field of application | Type of object | FCI technique | Subject(s) | Reference
Paintings | Mural painting | Chromatic Derivative Imaging | Mural paints from the Tomb of the Monkey in the Etruscan necropolis of Poggio Renzo | Legnaioli et al. 2013[5]
Paintings | Easel painting | FCIR - Infrared False Color | The Virgin’s apparition to Saint Martin, with Saint Agnes and Saint Thecla by Eustache Le Sueur | Hayem-Ghez et al. 2015[6]
Paintings | Easel painting | FCIR - Infrared False Color; PCA - Principal Component Analysis | The Drunkenness of Noah by Andrea Sacchi | Pronti et al. 2019[7]
Ceramics | Majolica | FCIR - Infrared False Color | Majolica polychrome decorations | Meucci and Carratoni 2016[8]
Paintings | Easel painting | FCIR - Infrared False Color | Adoration of the Kings by Sandro Botticelli and Filippino Lippi | Dunkerton et al. 2020[9]


References

  1. Moon, Thomas, Michael R. Schilling, and Sally Thirkettle. 1992. ‘A Note on the Use of False-Color Infrared Photography in Conservation’. Studies in Conservation 37 (1): 42.
  2. Cosentino, Antonino. 2015. ‘Effects of Different Binders on Technical Photography and Infrared Reflectography of 54 Historical Pigments’. International Journal of Conservation Science 6 (3): 287–98.
  3. Dyer, Joanne, Giovanni Verri, and John Cupitt. 2013. ‘Multispectral Imaging in Reflectance and Photo-Induced Luminescence Modes: A User Manual’.
  4. Frey, Franziska S. 2011. The AIC Guide to Digital Photography and Conservation Documentation. Edited by Jeffrey Warda. American Institute for Conservation of Historic and Artistic Works.
  5. Legnaioli, Stefano, Giulia Lorenzetti, Gildo H. Cavalcanti, Emanuela Grifoni, Luciano Marras, Anna Tonazzini, Emanuele Salerno, Pasquino Pallecchi, Gianna Giachi, and Vincenzo Palleschi. 2013. ‘Recovery of Archaeological Wall Paintings Using Novel Multispectral Imaging Approaches’. Heritage Science 1 (1): 33.
  6. Hayem-Ghez, Anita, Elisabeth Ravaud, Clotilde Boust, Gilles Bastian, Michel Menu, and Nancy Brodie-Linder. 2015. ‘Characterizing Pigments with Hyperspectral Imaging Variable False-Color Composites’. Applied Physics A: Materials Science and Processing 121 (3): 939–47.
  7. Pronti, Lucilla, Martina Romani, Gianluca Verona-Rinati, Ombretta Tarquini, Francesco Colao, Marcello Colapietro, Augusto Pifferi, Mariangela Cestelli-Guidi, and Marco Marinelli. 2019. ‘Post-Processing of VIS, NIR, and SWIR Multispectral Images of Paintings. New Discovery on the The Drunkenness of Noah, Painted by Andrea Sacchi, Stored at Palazzo Chigi (Ariccia, Rome)’. Heritage 2 (3): 2275–86.
  8. Meucci, Costantino, and Loredana Carratoni. 2016. ‘Identification of the Majolica Polychromatic Decoration by IRFC Methodology’. Journal of Archaeological Science: Reports 8: 224–34.
  9. Dunkerton, Jill, Catherine Higgitt, Marta Melchiorre Di Crescenzo, and Rachel Billinge. 2020. ‘A Case of Collaboration: The Adoration of the Kings by Botticelli and Filippino Lippi’. National Gallery Technical Bulletin 41 (2).