Reflectance Transformation Imaging (RTI)


In progress: Seeking additional comments and images to develop this section


Reflectance Transformation Imaging (RTI) is a Computational Photography-assisted technique that uses multiple lighting conditions to capture a set of images from a fixed camera position, with the aim of virtually and interactively revealing the characteristics of an imaged surface.

Interested in contributing to this page? Visit the Contributors' Toolbox or reach out to one of our Team Leads:

Wiki Team Leads: Emily Frank, Hendrik Hameeuw, Bruno Vandermeulen
Wiki Editors: Christopher Ciccone, Caroline Roberts, Jessica Walthew
Wiki Contributors: Amalia Siatou, Alexander Dittus, Bruno Vandermeulen, Carla Schroer, Caroline Roberts, Christopher Ciccone, Emily Frank, Eve Mayberger, Hendrik Hameeuw, Jessica Walthew, Kurt Heumiller, Paige Schmidt

You can also leave comments or make quick suggestions about the content you see on this page using the RTI Wiki Suggestions form.

What is RTI


RTI (Reflectance Transformation Imaging) is a user-friendly, non-invasive imaging technique for the examination and documentation of cultural heritage object surfaces. In this technique, a source image set is processed into an interactive file. It allows the viewer to examine the visual appearance of an object in various lighting conditions with a range of computational enhancements, highlighting and revealing characteristics of the imaged object. Applications using this technique range from simple, accessible tools to highly calibrated scientific systems. RTI can be used for a variety of activities including documentation, access, condition monitoring, interventive conservation treatment, interactive museum displays, and research. The distinctive feature of this method is the ability to virtually relight the imaged surface from any raking angle in a viewer interface. A processed source image set is therefore often referred to as a relightable file or image.

Key to this imaging method is the acquisition of the source image set; a series of images are captured with the object and camera static relative to each other, while knowing or recording the spatial location of the light sources.

The following steps/conditions are essential to this method:

A. A camera is mounted on top of a physical or imaginary hemispherical umbrella/dome facing the object.
B. A series of individual images are captured, each lit (with a flash or continuous light source) from a different direction following the shape of a physical or imaginary umbrella/dome. The lighting can be provided by many identical light sources, all with fixed, homogeneously distributed positions, or by a single light source that is manually repositioned.
C. Throughout the recording process, both the camera and object stay 100% motionless.
D. The distance from the light source to the imaged object is consistent throughout all recordings.
E. Throughout the acquisition process, the spatial position of the applied light sources is registered (highlight method) or it is predetermined and incorporated into the processing software (some of the dome methods).
F. The obtained source image set is processed into a relightable image with appropriate software.
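Under the hood, every acquisition system implements the same loop: one exposure per known light position, with camera and object fixed throughout. The sketch below illustrates this with a purely hypothetical `DomeController` stub; no real dome vendor's API is assumed.

```python
class DomeController:
    """Hypothetical stand-in for a dome's light/camera API (steps A-F are
    vendor-specific; this stub only records what a real driver would do)."""

    def __init__(self, light_positions):
        self.light_positions = light_positions  # known (x, y, z) per LED

    def capture_with_light(self, index):
        # A real driver would fire LED `index`, trigger the camera, and
        # return the image; here we return a filename placeholder.
        return f"img_{index:03d}.tif"


def acquire_source_image_set(dome):
    """One capture per light position (steps B-C); each image is stored
    together with its known light direction (step E) for processing."""
    return [(dome.capture_with_light(i), pos)
            for i, pos in enumerate(dome.light_positions)]
```

The pairing of each image with its light position is exactly what the processing software (step F) consumes.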

Other, more general terms for this acquisition method are also in use, e.g. Single Camera Multi Light (SCML) and Multi-light Reflectance (MLR). See the Setup section for a more detailed overview on the various types of acquisition systems (dome, highlight, rotating arm, and microscopic methods), equipment, and setups.

Figure 1. Theoretical model of a Single Camera Multi Light (SCML) dome acquisition device with, in this case, 108 fixed light positions on its interior, made in SketchUp Make 2016, Hendrik Hameeuw 2021


Figure 2. Schematic of a virtual dome for highlight RTI (H-RTI) indicating four angles of light and twelve locations around the object for a manually positioned light source, made in Adobe Illustrator CC, Emily Frank 2021


SCML methods such as RTI use multiple angles of illumination to understand the reflectance and topographical information of the surface of interest. The most common implementation of RTI is via Polynomial Texture Mapping (PTM), invented by Tom Malzbender of HP Labs in 2000 (Malzbender et al. 2000, Malzbender et al. 2001), and HSH (Hemispherical Harmonics) RTI, introduced for heritage applications by 2008. Since then, these types of interactive imaging have been applied to a range of art and cultural assets in a wide variety of situations (see also the History section) (Mudge et al. 2008, Earl et al. 2010, Klausmeyer et al. 2012, Mudge et al. 2006).

Key to the computer assistance in SCML/MLR techniques is that the position of the light source in each image be known. When the camera/multi-light combination is aligned in a fixed setup (such as a dome, with a rotating arm, and/or under a microscope), the information about the light positions can be included in the processing software beforehand. In other cases (methods relying on the highlight method), each photograph in the source image set needs to include a reflective sphere, which allows the processing software to track the position of the light source in each shot; this crucial information is extracted and used during the processing phase. The lighting information is used to generate a mathematical model of the surface of interest. More technically, the RTI method derives a per-pixel reflectance model, represented by a set of coefficients that defines a fitted function taking into account the direction of the lighting (Manfredi et al. 2014, LOC RTI Format Description Properties). Other SCML/MLR methods follow similar calculation methods. See the Other Multi-light Reflectance Technologies section for the functionality and differences of each of these methods.
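As a rough illustration of this per-pixel fitting: the original PTM formulation models each pixel's luminance as a six-term biquadratic function of the light direction's (lu, lv) components and solves for the coefficients by least squares. A minimal sketch follows; function names are our own, and real processing software (e.g. RTIBuilder) adds normalization, robust fitting, and color handling.

```python
import numpy as np

def fit_ptm_pixel(lu, lv, intensities):
    """Least-squares fit of the six-term biquadratic PTM model for one pixel.

    lu, lv      : arrays of the (x, y) components of the N unit light vectors
    intensities : array of the N observed luminance values for this pixel
    Returns coefficients a0..a5 such that
    L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    """
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight_pixel(coeffs, lu, lv):
    """Evaluate the fitted polynomial for a new (virtual) light direction."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0*lu**2 + a1*lv**2 + a2*lu*lv + a3*lu + a4*lv + a5
```

Fitting this model independently for every pixel is what turns a source image set into a relightable file.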

The resulting relightable images are processed from a source image set by custom-built software (e.g. RTIBuilder), which calculates and establishes the interactive value (function) for each pixel. Missing illumination angles are interpolated from the available incident lighting angles registered in the source image set to fully model the surface’s interaction with light; for each pixel, the surface reflectance is approximated. The result is what can be described as a virtual relightable object, which can mimic any raking light condition. In addition, various rendering modes reveal, accentuate, and document details on the imaged surface. These outcomes can be consulted in viewer interfaces, which can be stand-alone or web-based (e.g. RTIViewer, WebRTIViewer, Pixel+ viewer). When loaded, all pixel information is mapped at the same resolution as the original photographic captures. The viewers allow the operator to move the virtual light sources over the imaged surface, providing unique diagnostic illuminations that would be difficult to obtain by inspecting individual or standardized still images, and often even surpassing visual inspection of the original objects. Furthermore, processed relightable files can be embedded in web pages or combined with additional imaging methods to present multi-faceted investigations of surface topography and other features of art and cultural assets (Artal-Isbrand et al. 2011, Caine et al. 2011, Hanneken 2016, Watteeuw et al. 2020). Advanced viewer interfaces allow annotations and layered disseminations. Some have the option to generate high-quality screen captures of any particular visualisation at the operator's discretion, to be used for dissemination. See the Viewers section for an extensive overview.
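One example of such a computational enhancement: the fitted PTM coefficients directly yield a per-pixel surface-normal estimate, which rendering modes such as specular enhancement build on. A sketch of the standard derivation (the normal points toward the light direction that maximizes the fitted luminance; the function name is ours):

```python
import numpy as np

def ptm_normal(a):
    """Estimate the surface normal for one pixel from its six PTM
    coefficients a = (a0..a5): set the partial derivatives of the
    biquadratic luminance function to zero and solve for (lu0, lv0),
    the light direction of maximum luminance."""
    a0, a1, a2, a3, a4, a5 = a
    denom = 4.0 * a0 * a1 - a2**2
    lu0 = (a2 * a4 - 2.0 * a1 * a3) / denom
    lv0 = (a2 * a3 - 2.0 * a0 * a4) / denom
    lu0 = float(np.clip(lu0, -1.0, 1.0))
    lv0 = float(np.clip(lv0, -1.0, 1.0))
    lw0 = np.sqrt(max(0.0, 1.0 - lu0**2 - lv0**2))  # z from unit length
    return np.array([lu0, lv0, lw0])
```

Per-pixel normals like these are what allow rendering modes to exaggerate relief far beyond what any single raking-light photograph shows.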

Since its introduction in the early/mid-2000s, and thanks to the launch of open-access processing and viewing software by Cultural Heritage Imaging (see History below), RTI and other SCML/MLR methods have been implemented by a large number of cultural institutions, conservators, conservation scientists, research units, and individuals around the world. Many digital repositories include relightable files, and countless applications and workflows have been developed using this imaging technology. In conservation practice, these methods have been used to examine and track surface condition (Hughes-Hallett et al. 2020), document and understand methods of manufacture (e.g. via study of tool marks) (Harris and Piquette 2015, Artal-Isbrand 2010, Serotta 2014), and record ephemeral and in situ phenomena (e.g. rock art, archaeological textile impressions) (Duffy 2018, Frank 2017).

History of RTI

Setups

Applying RTI or other SCML imaging methods starts with the creation of the source image set. The image set can be obtained in many ways, as long as the following criteria are met: every image in the set must depict the same surface, the surface in each image of the set must be lit from a different angle, and there must be a method to derive the angle of the light in each image. Some approaches require unique specialized lighting and/or camera devices (sub-sections A, C, D); others can be performed with more standard photographic equipment (sub-section B).

While methods A, B, and C can be adapted to a variety of applications and it may be possible to create image sets for a given project with multiple setups, each method is best suited for particular project parameters, discussed in the Practical Considerations and Recommended Applications sections. Factors to consider when choosing the best method for a project and subsequent equipment acquisition include (but are not limited to):

  • Reproducibility/uniformity of results
  • In situ technical infrastructure and working environment
  • Accessibility of imaging location and portability requirements of equipment
  • Size of the objects to be imaged
  • Number of objects to be imaged (and consistency of object sizes)
  • Cost of equipment vs. time

A. Dome Method

The Dome Method for acquiring the source image set uses a stationary dome structure with mounted light sources at pre-established intervals that can be activated individually according to an automated sequence (Figure 1 in ‘What is RTI’). In this method of image capture, both the camera and lighting structure are stationary, which is in contrast to the mobile light or rotating arm methods, in which single or multiple light systems rotate around the subject in relation to a fixed camera position. Multiple dome systems are available for purchase or can be constructed by users (a useful Affordable DTI Dome toolkit can be found on GitHub, for example). In order to establish lighting positions in each image, the dome method can employ the ‘highlight method’ (see subsection B) or a lighting array whose lighting positions in relation to the camera are known and programmed into the software.

Click here to view an interactive example of the MLR Dome Acquisition Method. Extract of work by Berk Kaya, Suryansh Kumar, Carlos Oliveira, Vittorio Ferrari, Luc Van Gool, Google Research-Zürich.

Equipment lists

  • Camera (essential)
  • Lens(es) (essential)
  • Dome with fixed lights (essential)
  • Reflective sphere targets (some models; others have programmed lighting and camera positions)
  • Camera or dome mount (system dependent and/or for non-downward-facing setups)
  • Computer for capture (some models; others can operate via camera only)
  • Computer for processing (essential)

Workflows

Each dome system has its own workflow. They vary from fully automated (a click-and-go acquisition and image processing approach) to semi-automated (positioning reflective targets, automated acquisition, calculating light positions, and processing images); some of them make use of the RTI image processing procedures. In general, the dome methods keep the necessary workflows simple.

Field/Low/High-Tech Options (see also examples under section case studies)

Dome systems are most commonly used in labs, studios, or other in-house locations, including collection spaces. Almost all domes have means to transport them: the smaller versions as a single unit, the larger systems with some (dis)assembly.

Some dome systems can run on batteries and thus, can be used in the field. For those cases, these domes are also made more robust to withstand rough conditions.

Standard domes are positioned on a table or the ground and are oriented downwards. Some models have the flexibility to be oriented in many directions (to image vertically or even overhead positioned surfaces).

Some dome devices are modular, and any chosen camera (DSLR or others) can be mounted on top of the dome structure. This gives flexibility in image resolution and field of view.

Some dome systems are equipped with various or varying types of lighting. In addition to visible white light, sets of ultraviolet, red, green, blue and/or infrared light sources can be installed. These systems expand functionality and combine the abilities of Multi Light Reflectance (MLR) with Multispectral (MS) imaging.  

The more acquisition and processing are fully integrated and precisely attuned to each other, the more high-tech features can be added to the toolbox of the viewer software. Examples are xyz-measurements on the surface and combining multispectral recordings.

Practical considerations

Dome systems keep the acquisition of the source image set simple. The capture procedure is controlled, consistent, and reproducible, which is important for reliable monitoring and scientific applications.

The controlled nature of the dome systems setup allows for relatively fast file generation (± 5 minutes) and serial work. This is important when many objects are being imaged, and is especially useful when the objects’ surfaces and sizes are uniform. The standardized approach of the dome system allows for uniform repetition, which opens strategies to monitor surfaces over time, including before/during/after treatments and as a tool for tracking surface condition.

The surface area that can be imaged with dome devices is limited, depending on the size of the dome and the applied camera and lens combination. Stitching several recordings together is possible, but is still in an experimental phase.

Fully integrated, all-inclusive dome systems can be very expensive (over $20,000). More modular devices (for example, to be combined with your own camera and lenses) can be relatively cheap (under $5,000).

Common applications

Both purely commercial and more academically/scientifically oriented dome systems are available. The latter, especially in combination with tailored software, allow for applications which go beyond the visual aspects typical of the RTI/MLR imaging methods (see the RTI in combination with other imaging technologies and Viewers sections).

Large collections in particular have been imaged with dome applications; examples include coins (see Palazzo Blu website and Avgoustinos et al. 2017), cuneiform texts, and cylinder seals (see CDLI website).

Tips & Tricks

Think carefully about the recording strategy in advance; when the relative position of the dome and the camera (including its lens and zoom settings) stays the same, and the light positions are calibrated with the highlight method, this calibration has to be calculated only once and the obtained Light Position File can be reused.

Make sure the dome is installed in a stable position on a likewise stable work surface. When a laptop is used, position it on a separate support to avoid vibration of the dome system during image capture.

Because most dome structures create a dark chamber above/in front of the imaged surface, and the dome method allows a very stable recording process (assuming it is positioned on a stable surface), a relatively small camera aperture can be used to obtain an optimized depth of field and keep more extreme variations in the relief across the surface in focus. The latter is crucial for calculating accurate reflectance characteristics per pixel.

Reusable batch process scripts in image editing programs can be applied to the source image set (recommended to be captured in RAW) to create the jpg derivatives; e.g. for file renaming, rotation, white balance.  
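A minimal sketch of such a batch step in Python with the Pillow library, assuming the RAW files have already been developed to TIFF (decoding RAW itself needs extra tooling such as rawpy). The function name and naming scheme are illustrative, not a standard:

```python
from pathlib import Path
from PIL import Image

def make_jpg_derivatives(src_dir, dst_dir, prefix="capture",
                         rotate_deg=0, quality=95):
    """Create consistently named JPEG derivatives from a folder of
    already-developed TIFFs. The same rotation is applied to every
    file so the image set stays registered."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    written = []
    for i, path in enumerate(sorted(Path(src_dir).glob("*.tif")), start=1):
        with Image.open(path) as im:
            if rotate_deg:
                im = im.rotate(rotate_deg, expand=True)
            out = dst / f"{prefix}_{i:03d}.jpg"
            im.convert("RGB").save(out, "JPEG", quality=quality)
            written.append(out)
    return written
```

White-balance correction is deliberately left out here; it is usually applied once (from the gray card) in the RAW developer before this derivative step.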

Figure 3. RTI in field conditions, under direct sunlight, 2014 during National Geographic survey at Qurta, Egypt (image by Isabelle Terrasse)

B. Single Mobile Light Method

The Single Mobile Light Method (SMLM) for acquiring a source image set relies on manual movement of a single light source around the object, e.g. a handheld flash unit or any mobile light. The camera remains stationary during image capture. Ideally, a source image set with an even distribution of lighting positions around the object is obtained. This is usually achieved by moving the light source along imaginary hemispherical axes and taking images at fixed intervals along these axes from 5° to 85°, with 0° being in-plane with the object (see Figure 2 in What is RTI).

The Single Mobile Light Method utilizes the Highlight Method (see Terminology section) for determining the lighting angles in each capture in the source image set. To achieve this, one or two reflective spheres are positioned in the image field of the camera and included in each capture of the source image set. The position of the highlight (reflection) from the light source on the sphere documents the angle of the light in relation to the imaged object. When the source image set is processed, this information is extracted by the processing software to estimate all the lighting angles for that particular capture sequence. For this reason it is important that the light source is angled toward the center of the object as it is repositioned along the imaginary spherical axes.
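The geometry behind the highlight method can be sketched in a few lines: the highlight's offset from the sphere's centre gives the surface normal at that point, and mirroring the viewing direction about that normal gives the light direction. A simplified sketch assuming an orthographic view along +z (real software must also handle the image y-axis pointing down, sphere detection, and calibration details):

```python
import math

def light_direction_from_highlight(hx, hy, cx, cy, radius):
    """Recover the incident light direction from the specular highlight
    on a reflective sphere.

    (hx, hy) : highlight position in image pixels
    (cx, cy) : sphere centre in image pixels
    radius   : sphere radius in pixels
    The sphere normal at the highlight is n; the light is the view
    vector v = (0, 0, 1) mirrored about n:  L = 2 (n . v) n - v.
    """
    sx = (hx - cx) / radius
    sy = (hy - cy) / radius
    sz = math.sqrt(max(0.0, 1.0 - sx * sx - sy * sy))
    # v = (0, 0, 1), so n . v = sz
    lx = 2.0 * sz * sx
    ly = 2.0 * sz * sy
    lz = 2.0 * sz * sz - 1.0
    return (lx, ly, lz)
```

A highlight at the sphere's centre therefore means the light sits directly above the object, while a highlight near the rim corresponds to a low raking angle.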

The chosen lighting positions are often estimated by the operator, and devices and procedures can be used to assist with consistent lighting placement. The most common solution for Single Mobile Light image capture utilizes a length of string held between the object’s central focal point and the light source to maintain an equal distance between the object and light source throughout the source image set. This string is also used to ensure the light source is angled toward the center of the object by coupling the string with an extension rod or stick attached to the light source parallel to the light path (see Figure 3).

Equipment lists

  • Camera (essential)
  • Lens (essential)
  • Reflective sphere(s) (essential)
  • Camera mount, tripod, or copy-stand (essential)
  • Portable light source: handheld flash unit or mobile light (essential)
  • String or other spacer (essential)
  • Trigger system (essential, can be via computer or remote, wireless or wired)
  • Computer for capture (optional)
  • Computer for processing (essential)

Workflows

RTI workflows are available through institutions such as Cultural Heritage Imaging, but many variants and/or modifications have been suggested over the years (Georgia O’Keeffe Museum, Smithsonian Conservation Institute, …).

Camera setup

Camera configurations

Properly setting up the camera prior to capture will help in producing a good-quality dataset that allows further processing. To start, make sure the camera is set to manual mode (M) and choose the appropriate shutter speed, aperture, and ISO value for a good exposure. Check the histogram on the LCD of the camera or on the computer for under- or over-exposure.

Prime lenses are usually of higher optical quality; they have less distortion compared to zoom lenses. Make sure to disable autofocus to prevent focus shift during capture. Using a zoom lens is possible, but the slightest shift in focal length during the capture sequence disrupts accurate image registration. Taping the lens ring after framing and focusing prevents shifts in zoom (in the case of zoom lenses) and focus. This is especially important when the camera is facing downwards and gravity is acting on the lens.

Camera positioning

The camera should always be positioned so that it is perpendicular to the object. The positioning technique depends on whether the object is horizontal (set on a table, for example) or vertical (on a wall, or set upright). For a vertical object, positioning is straightforward, with the camera on a tripod facing the object. For a horizontal object, you can mount a ball head upside down on the bottom of a tripod’s central column and rotate it so that the lens faces straight down. This creates the fewest shadows and makes positioning easier. Camera stands and copy stands work well for horizontal objects. Be aware that the camera stand/tripod should not cast a shadow on the object as the light source is moved around, illuminating the object from all angles.

Tripod and image registration

While the lighting and measurement methods are flexible, the registration demands of capturing RTI source images are stringent. Even the slightest tripod movement during capture can ruin the registration of the image set and the RTI. Therefore, it is essential to use a very sturdy tripod or copy stand; adding weight to the base of your tripod can help secure less robust setups. Furthermore, in order to minimize camera movement, it is highly advisable to use tethered shooting (through the computer) and to prevent mirror shake by raising the mirror on the DSLR or by using a mirrorless camera.

Manual mode camera

When capturing images for the source image set, the camera should always be put in manual mode. This way, the settings can be adjusted individually and all images in the source image set are shot with exactly the same camera settings (essential for correctly processed final results).

Apertures

The recommended aperture setting for capturing stable high-resolution images is between f/5.6 and f/11. Regarding depth of field (DoF), a deeper focus for more voluminous objects is produced with smaller apertures and longer exposure times.

Exposures

Base your exposure on a light angle between 30° and 60° to the object’s surface. Over-exposed and under-exposed images do not reliably record the color and detail of the object, and should not be used in processing. Use the histogram function on your camera or computer capture software to make sure your settings are correct. If needed, over- or under-exposed images can be excluded from the source image set.
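A quick automated version of this histogram check: count the fraction of pixels clipped at either end and flag the image for exclusion. The thresholds below are illustrative defaults, not a standard:

```python
import numpy as np

def exposure_flags(gray, low=5, high=250, max_clip=0.02):
    """Flag an 8-bit grayscale image as under- or over-exposed when more
    than `max_clip` (2% here, an arbitrary choice) of its pixels sit at
    the extremes of the histogram. Returns (under, over) booleans."""
    gray = np.asarray(gray)
    n = gray.size
    under = (gray <= low).sum() / n > max_clip
    over = (gray >= high).sum() / n > max_clip
    return bool(under), bool(over)
```

Run over a whole source image set, this gives a shortlist of captures to inspect and possibly drop before processing.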

ISO

The light sensitivity settings of your digital camera (i.e. 100, 200, 400, 800 ISO) can be manipulated. A low ISO setting is recommended. High-end digital cameras do allow setting the ISO value a few steps higher to allow capturing with less light, but even then, keep it as low as possible.

White balance

It is recommended to include a gray or color card in the image field of the photographs in the image source set. This will allow you to correct the white balance during post processing, rather than using the built-in white balancing tools of the camera.

Target setup

Reflective spheres

The Single Mobile Light and Handheld Flash Methods both rely on the Highlight Method for determining lighting angles during image processing, in which the software calculates the angle of the light in each image based on the position of the reflection of that light source on a reflective sphere. Two spheres should be included in each image, positioned so that the top third of the spheres is in focus when the object is in focus. These spheres can be positioned on horizontal surfaces by placing them on washers (or positioned with putty), or they can be elevated with dowels. Reflective spheres can be purchased from RTI-specific sources such as CHI, or metal ball bearings can be used and positioned on washers, dowels, or threaded rod. Spheres can be used in various sizes, relative to the size of the object (i.e. for capturing large objects, use a large sphere; for capturing small objects, use a small sphere). The sphere should measure at least 200 px across in the final captured images for processing with RTIBuilder.

It is important that the reflective spheres do not get scratched; they should be handled with gloves to avoid fingerprints. When positioning the spheres, be sure that the spheres and any support materials (e.g. dowels) do not cast shadows on the object at any of the lighting angles.

Lighting setup

The Single Mobile Light Setup requires a light source that can be repositioned in a complete dome pattern in relation to the object, without disrupting the camera position. In both lighting methods, Handheld Flash and Mobile Light, shadows cast by extraneous objects (e.g. reflective targets) must be avoided, and it is better to omit lighting positions that cause them.

Day and artificial light vs. night & darkroom

Ideally an acquisition is performed in an environment that is as dark as possible. This best ensures that the variations of the reflections from changing angles of incidence can play the maximum role in processing the datasets. Of course, night or darkroom conditions cannot always be obtained for practical reasons.

When external light sources cannot be avoided, you must use a sufficiently strong light source to generate the reflections. When imaging needs to be performed in daylight or even in direct sunlight, opt for the strongest flash units. Avoid direct sunlight and partial shadows from the sun on the imaged surface. When needed, you can shield the imaged surface with an umbrella, a large piece of cardboard, or a blanket. When the sun or other light source(s) cause one or more consistent spot reflections in the reflective sphere targets, these need to be manually removed (made black) before processing; in some photo-editing software packages this can be done in batch per source image set, since neither object nor camera has moved during capture.
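Because camera and object are static, the stray reflection sits at the same pixel coordinates in every image, so this clean-up can also be scripted. A sketch with NumPy (in practice the coordinates would be read off one image and reused for the whole set):

```python
import numpy as np

def mask_sun_glare(image, cx, cy, r):
    """Black out a circular region (e.g. a constant sunlight reflection
    on a sphere target) at fixed pixel coordinates. `image` is an
    H x W (or H x W x 3) uint8 array; returns a modified copy."""
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    out = image.copy()
    out[mask] = 0  # "made black" so the highlight detector ignores it
    return out
```

Applying the same mask to every image in the set is exactly the batch operation described above.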

Mobile light

Any easily repositioned light source can be used for lighting. This includes a light mounted to a tripod, a ceiling track light, or even a handheld flashlight. Regardless of the sophistication of the light and mount, the distance of the light in relation to the object must be consistent, and the angle of the light must intersect the object’s center. There are multiple ways to achieve the proper angle from light to object, but one advantage of the string method for determining consistent distance between the object and light is that it can also be used to establish the correct angle between the light source and object. The lighting intensity should remain consistent throughout the image set, and standard white balance procedures should be performed prior to image capture.

Handheld flash

Parameters for the Handheld Flash setup are similar to those of the Mobile Light Method. Handheld flash units can be tethered to the camera or set up for remote control (as can mobile lights equipped with flash). Since the handheld flash requires a person to hold the flash unit during image capture, it is important to establish the flash unit operator’s path around the camera and object in advance of image capture. If the object is large enough, the lighting operator may need to be on a ladder or be equipped with an extension rod for the flash unit. Knee pads are also advisable when imaging vertically oriented objects, to allow for comfortable kneeling.

Lighting positions

To keep an even distance between your light source and the object to be captured, you can use a string as suggested by Cultural Heritage Imaging in their Guide to Highlight Image Capture. In general, the recommended distance from the light to the object is three times the diameter of the object. When necessary, this can be reduced to a minimum of twice the diameter. Reducing the ratio below 2:1 results in a proportional reduction of the area on the object where RTI data can be collected. Ratios greater than 3:1 are fine and offer a slight improvement in normal accuracy. The captures are made along imaginary spherical axes surrounding the imaged surface. To hit these positions as accurately as possible, marks can be placed around the object in a circle, for example every 20° or 25°; see McEwan 2018, fig. 8 for an example set-up. However, when capturing delicate objects, the use of a string might pose a risk, since the string may get entangled in fragile surface structures or pull the object over. In this case, other solutions for measuring the distance might be helpful (i.e. a laser range finder, see Recording with a laser device for distance measuring, Recording with an ultrasonic device for distance measuring, and iPhone measure app below).
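The geometry above is easy to precompute: given the object diameter, a distance ratio, and a grid of azimuths and elevations, the target light positions follow from basic spherical coordinates. A sketch with illustrative default angles (the exact grid is the operator's choice, not a standard):

```python
import math

def capture_positions(object_diameter, elevations=(15, 35, 55, 75),
                      n_azimuths=12, ratio=3.0):
    """Plan light positions for highlight RTI: a hemispherical grid at a
    constant light-to-object distance of `ratio` x object diameter
    (3:1 recommended, 2:1 minimum). Returns a list of
    (azimuth_deg, elevation_deg, x, y, z) tuples with the object at the
    origin and z pointing up."""
    d = ratio * object_diameter
    positions = []
    for elev in elevations:
        for k in range(n_azimuths):
            az = k * 360.0 / n_azimuths
            e, a = math.radians(elev), math.radians(az)
            x = d * math.cos(e) * math.cos(a)
            y = d * math.cos(e) * math.sin(a)
            z = d * math.sin(e)
            positions.append((az, elev, x, y, z))
    return positions
```

With the defaults this yields 4 x 12 = 48 positions, in line with the common advice to capture roughly 45 or more images per set.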

Recording with a laser device for distance measuring

While a regular laser-based range finder might not be accurate enough for short-distance measuring, A. Dittus presented a simple method that provides an accurate way to measure the light-to-object distance for recording the raw images (Dittus 2014, publication in German): a lamp or remote flash is equipped with an arm on both sides. On each arm a laser is mounted that can simultaneously be turned on and off via a switch in the handle. Both lasers cross in front of the lamp at a distance equal to the desired radius of the hemisphere of light positions. For recording the images, one picks a relevant feature in the middle of the object (as when measuring the distance with a string). The lasers are switched on and the lamp is moved towards the object until the two dots meet at the same spot in the middle of the object. If the laser dots move away from each other when bringing the lamp closer to the object, the radius is already undercut and it is necessary to move the lamp further away instead. The lasers are switched off, the picture is taken, and the lasers can be switched on again to find the next light position. If another radius of the light dome is desired, the angle of the crossing lasers can be adjusted so that they meet at a different distance. Unlike the workflow with a string, this method requires only one person to handle the light, and with a remote camera trigger a single person can handle the whole recording process even when documenting big objects.
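The only calibration such a rig needs is the toe-in angle of the two lasers for a chosen radius, which follows from simple triangle geometry. A simplified planar sketch (Dittus's actual hardware and mounting may differ):

```python
import math

def laser_toe_in_angle(arm_separation, radius):
    """For the two-laser distance aid: each laser sits arm_separation/2
    from the lamp's optical axis and must be angled inward so both beams
    cross on the axis at the chosen hemisphere radius. Returns the
    toe-in angle in degrees."""
    return math.degrees(math.atan((arm_separation / 2.0) / radius))
```

For example, lasers mounted 20 cm apart aimed at a 1 m radius need a toe-in of roughly 5.7°; a larger radius needs a shallower angle, matching the adjustment described above.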

Recording with an ultrasonic device for distance measuring

A. Cosentino suggested another “stringless” distance-measuring solution (Cosentino 2014), especially for big objects like murals: an ultrasonic distance-measuring device coupled to an Arduino microcontroller. Both are mounted to the light source with the sensor facing in the direction of the light. If the light source is at the correct distance from the object, a sound and an LED indicate that the photograph can be taken. Additionally, it is possible to connect the Arduino to a computer to read the measured distances. Since the ultrasonic device does not indicate where the measurement is taken, it might be useful to install a simple laser (switched on only during the distance measuring) to ensure that the measurement is taken in the middle of the object (Dittus 2014: 48).

iPhone measure app

A simple and fairly accurate way to measure the distance between the object and illumination source is the “Measure” app on the iPhone. The app uses augmented reality (AR) technology to turn the device into a tape measure. It is effective between 2 and 10 feet.

Triggering the camera[edit | edit source]

RTI image set captures can be easily accomplished by two people working together: one in charge of light positioning and the other in charge of the remote camera trigger. A single person working alone may use a remote camera shutter release, by cable or (better) wirelessly, since cables can be tripping hazards and interfere with registration, or the time-lapse feature included in most camera software. Setting the time lapse to give ~3 s between exposures can yield an efficient solo capture session (recall that 45+ images are needed to process an RTI).

Practical considerations[edit | edit source]

The Single Mobile Light Method (compared to the Dome method) is very flexible in its execution, adapting to small/large surfaces, easy/difficult access, and regular/irregular surfaces, among other challenges.

The majority of the required equipment is part of a standard photographic documentation setup, which keeps the extra investment for this method low. The most expensive additional purchase would be a wireless system for communication between the camera and the flash. All standard required equipment can run on batteries and can be acquired in a robust design or with protective gear. With the right preparations, the Single Mobile Light Method can therefore be easily used in the field.

The Single Mobile Light method can be applied in any setting (for example: microscopic setups, large panels and ceilings, and even extreme applications like underwater RTI). When wisely deployed, it adapts to all sorts of environmental situations and conditions, making it suitable for lab, gallery, storeroom, and field environments.

The Single Mobile Light Method is a more hands-on approach compared to Dome systems. When results need to be produced relatively quickly (e.g. for large collections), or need to be consistent and reproducible, weigh its pros and cons against the Dome method.

C. Other Methods[edit | edit source]

Data Assessment[edit | edit source]

Pre-processing[edit | edit source]

Final result[edit | edit source]

Data Management[edit | edit source]

Dissemination / RTI Viewers[edit | edit source]

Stand-alone/desktop viewers[edit | edit source]

Web-based viewers[edit | edit source]

Embedding webpages and online databases[edit | edit source]

Technical Support / Obsolescence[edit | edit source]

Main obsolescence issues facing PTM and HSH RTI[edit | edit source]

Processing/Builder[edit | edit source]

The software provided via http://culturalheritageimaging.org/ is reaching its technological expiration date. It is written in Java, which was a good choice more than ten years ago. As of 2020, however, the Java code of the ‘builder’ causes issues on most up-to-date Mac and Windows platforms (e.g. 64-bit operating systems): it either no longer runs on these platforms, or it is identified as a virus and automatically deleted upon download.

The RTIBuilder from Cultural Heritage Imaging does two main things:

  • It creates a light position (lp) file, based on the information extracted from the reflective spheres in each image of the source image set.
  • It applies a processing algorithm (fitter) to the dataset, with the help of the lp file, to generate a finished relightable image.
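
For reference, an lp file is plain text: the first line gives the number of images, and each subsequent line pairs an image filename with the normalized x y z direction of the light for that image. A hypothetical four-image fragment (filenames and values invented for illustration) might look like:

```
4
capture_001.jpg 0.8912 0.0123 0.4534
capture_002.jpg 0.6088 0.5973 0.5224
capture_003.jpg -0.5000 0.5000 0.7071
capture_004.jpg 0.0000 -0.7071 0.7071
```

Each direction is a unit vector pointing from the subject toward the light, so the fitter knows the lighting angle associated with every source image.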


A number of alternative solutions for the second step are available, but the RTIBuilder remains the most commonly used path to date.

Viewer[edit | edit source]

At present (December 2021), the two most widely used RTI viewers (desktop and web viewer) run on the most recent Windows and Mac platforms, which means that fully processed PTM and HSH RTI files can still be viewed.

Solutions to address known software issues[edit | edit source]

Mac OS compatibility issues with RTI software[edit | edit source]

Possible solutions:[edit | edit source]
  • Maintain a computer with an old OS
  • Remote desktop into a computer with an old OS
  • Use Parallels Desktop (https://www.parallels.com/products/desktop/resources/) or VirtualBox to run a guest operating system on your computer and install the RTI software in the VM (the linked page describes running Windows on Mac, but you should also be able to run older versions of macOS)
  • Install and run an old OS and the RTIBuilder on an external drive

Running RTIBuilder on Windows 10 Platform[edit | edit source]

Running the RTIBuilder on the current Windows 10 platform can be problematic. If you cannot download RTIBuilder via http://culturalheritageimaging.org/, try here (same version, different packaging). Important: Java needs to be installed on your computer.

To run the RTIBuilder, double-click RTIbuilder.jar. If nothing happens, try:

  • Step 1: Click on the “Start” button on your desktop and type “Command Prompt” in the search field. Right-click on the result and select “Run as Administrator.” (You'll need admin rights on your computer.)
  • Step 2: In the “Command Prompt” window, type the following command line and hit “Enter”: ftype jarfile="C:\Program Files\Java\jre1.8.0_311\bin\javaw.exe" -jar "%1" %*
    • NOTE 1: the \Program Files\ can also be \Program Files (x86)\; it depends on where your Java is installed.
    • NOTE 2: the \jre1.8.0_311\ depends on the version of Java you have installed; go to the "C:\Program Files\Java\" or "C:\Program Files (x86)\Java\" folder on your computer to determine which version of Java you are running, and copy that exact version!
  • Step 3: Go back to your RTIBuilder folder structure and double-click RTIbuilder.jar

All of these solutions have one element in common: they are work-arounds that prolong the lifespan of processing software that is more than ten years old. At some point this will no longer be reasonable, and new software will need to be developed.

The good news is that new open processing software interfaces are being produced (or updated) by several stakeholders across the globe. (When fully functional, they will be discussed in this section.)

Selected Case Studies[edit | edit source]

RTI in combination with other imaging technologies[edit | edit source]

UV[edit | edit source]

IR[edit | edit source]

3D[edit | edit source]

Terminology / Glossary[edit | edit source]

Reflectance Transformation Imaging (RTI) and related technologies serve a wide range of users, often coming from disparate backgrounds. For this reason, a number of terms may refer to the same concept or have overlapping meanings. Furthermore, some terms have a very specific meaning in an academic context but may be applied colloquially in the wider user group. For example, to some, RTI refers only to images created with the specific HSH RTI fitter, while to others, RTI may be applied broadly to any Multi-Light Reflectance (MLR) technology. The glossary in this section attempts to clarify the terminology surrounding the broad spectrum of MLR technologies.

key: Italics indicate terms discussed elsewhere in this glossary.

A[edit | edit source]

Albedo[edit | edit source]

A single number that specifies the reflectivity/reflectance of a surface, specifically the ratio of outgoing to incoming light. Albedo varies from 0 for a perfectly dark surface to 1.0 for a perfectly reflective, white surface. Albedo can also be expressed as a percentage; for example, Earth’s average albedo is 30-35%, meaning that 30-35% of incoming solar radiation is reflected back into space and the rest is absorbed.

Algorithmic Rendering[edit | edit source]

Computational processing of source image data that results in an image that emphasizes or suppresses certain aspects or features. In the same way that Adobe Photoshop filters apply systematic transformations to color data, in algorithmic rendering an algorithm can apply transformations to both color and surface normal data in a MLR/RTI processed data set, to produce the digital equivalent of technical illustration or photographic manipulation.

B[edit | edit source]

Blend Map

A composite image produced by CHI’s RTIBuilder when using the Highlight Method of RTI. The image shows all of the highlights from every image in the MLR/RTI image set, positioned on a single sphere. The blend map allows one to see the spread of light achieved for a particular MLR/RTI image set; see also Highlight Map.

BRDF (bidirectional reflectance distribution function)[edit | edit source]

Describes how light is reflected from an opaque surface. It is a scalar function (that is, a function that returns a single number) of four variables (the inputs to the function). The inputs are two angles that specify the incoming light direction, and two angles that specify the outgoing light direction coming off the surface. Depending on the algorithm used to process the MLR/RTI Source Image Set, BRDF information can be derived. RTI and PTM depend only on the variation in the first two angles; the second two are fixed, determined by the viewing direction. (First defined in Nicodemus 77.)
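
As a minimal, hypothetical sketch (not taken from any RTI software), the two simplest BRDFs can be written directly as functions of the four angles:

```python
import math

def lambertian_brdf(albedo, theta_in, phi_in, theta_out, phi_out):
    """Ideal diffuse (Lambertian) BRDF: constant, independent of all four angles."""
    return albedo / math.pi

def phong_specular_brdf(ks, shininess, theta_in, phi_in, theta_out, phi_out):
    """Toy specular lobe: peaks when the outgoing direction mirrors the incoming one."""
    def to_vec(theta, phi):
        # Spherical angles to a unit vector; z is the surface normal.
        return (math.sin(theta) * math.cos(phi),
                math.sin(theta) * math.sin(phi),
                math.cos(theta))
    wi = to_vec(theta_in, phi_in)
    wo = to_vec(theta_out, phi_out)
    # Mirror reflection of the incoming direction about the normal (0, 0, 1).
    r = (-wi[0], -wi[1], wi[2])
    cos_a = max(0.0, sum(a * b for a, b in zip(r, wo)))
    return ks * cos_a ** shininess
```

In PTM/RTI only the two incoming angles vary across the source image set, which is why those methods sample a slice of the full BRDF.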

BSSRDF (bidirectional scattering-surface reflectance distribution function)[edit | edit source]

Characterizes how light interacts with a reflective surface. Unlike the BRDF, it characterizes subsurface scattering by introducing two more sets of independent variables (beyond the four used in BRDF). Subsurface scattering is common in translucent materials such as marble, where light is internally scattered by the material. The two extra pairs of variables correspond to the incoming light location and the outgoing light location on the surface. (First defined in Nicodemus 77.)

BTF (bidirectional texture function)[edit | edit source]

Characterizes the spatial variation of reflectance for a material that is not uniform by adding two independent variables to the BRDF, giving a scalar function of six variables. It can also be considered to have another independent variable: the wavelength of light used in the measurement. (First defined in Dana 97.)

C[edit | edit source]

Camera Raw File[edit | edit source]

A proprietary raw file format produced as an output option by digital cameras, also commonly called a Digital Negative. It contains all the pixel information captured by the camera's sensor, without compression or processing applied. Raw files should be used in order to retain as much as possible of the information registered by the camera's light sensor, and to keep total control over future processing. Raw files are processed through specific software in which settings such as white balance and tonal curve can be changed post-capture. Both open-source (RawTherapee, darktable) and commercial packages (Adobe Camera Raw, Capture One) are available. However, because they are proprietary, raw files should not be used for true archiving; instead, the raw image data can be converted to an archival format such as the Digital Negative (DNG) format or the Tag Image File Format (TIFF). When capturing an MLR/RTI Source Image Set, established best practice is to capture in a raw file format, because it retains the least-processed information for archival purposes even if derivative images are created for further processing.

Camera Settings[edit | edit source]

Basic digital camera settings are the lens aperture (f-stop), exposure time, and ISO settings, which together determine the exposure. The depth of field can be changed by manipulating the aperture. Other important settings that affect image quality include white balance and resolution/quality. When capturing images for MLR/RTI, always put the camera in manual mode and adjust the settings individually.

Capture Client[edit | edit source]

Software on a computer that assists Image Capture of an MLR/RTI Source Image Set. The digital camera is controlled by that computer software instead of from the camera itself. This way of working allows one to capture single images or sequences without touching the camera, and to visualize and assess the results directly on a computer screen. The approach can be used for captures with the Single Mobile Light, Highlight, and Dome methods. A Light Array typically has a capture client.

CHI: see Cultural Heritage Imaging.[edit | edit source]
Compression[edit | edit source]

A method of reducing the size of a digital image file in order to decrease the storage capacity needed for that specific data set or batch of data sets. Compression technologies are distinguished from each other by whether they remove or simplify detail and color in the image. Lossless compression technologies compress image data without removing detail (TIFF LZW, JPEG 2000), while "lossy" technologies compress images by removing some detail (JPEG). Most MLR/RTI processing algorithms produce compressed files and use compressed images (for example JPEGs) as the input Source Image Set.

Computational Photography[edit | edit source]

The computational extraction of information from a sequence of digital photographs. Extracted information is integrated into new digital representations to yield rich data not found in the original, individual photographs. The MLR/RTI methods are one branch of many forms of computational photography.

Continuous Light[edit | edit source]

Continuous lights–as opposed to flash or strobe lighting–can be aimed quickly and accurately at the center of a subject, permit longer exposures if more light is required, and do not need to be synchronized with the camera shutter. Continuous light is easy to work with indoors (outdoors, the sun can act as continuous lighting) and has appropriate power for MLR/RTI applications. It does not work well in field settings without access to a reliable power source such as mains electricity, a generator, or batteries.

Cultural Heritage Imaging (CHI)[edit | edit source]

CHI is a nonprofit based in San Francisco that helps disseminate imaging techniques in the cultural heritage field, and developed the original RTIBuilder and RTIViewer software and related documentation, all offered as open data via their webpage. CHI is also responsible for the Digital Lab Notebook, a software tool that enhances documentation and repeatability of complex computational photography processes, such as the metadata associated with a digital representation. The Digital Lab Notebook records the provenance and complete processing history of the data. It was developed for the RTI and Photogrammetry imaging methods, and also supports multi-spectral and documentary image sets.

D[edit | edit source]

Deep Zoom Image Format (definition needed)[edit | edit source]
Discrete Modal Decomposition (DMD)[edit | edit source]

Discrete Modal Decomposition (DMD) is an approach to reflectance modelling (like PTM or HSH) based on an eigenbasis derived from a structural dynamics problem. DMD allows more accurate modeling of angular reflectance than other approaches when light-matter interaction is complex (such as in the presence of shadows, specularities, and inter-reflections). (Presented in Pitard et al. 2017.)

Diffraction and Image Resolution[edit | edit source]

As the aperture decreases, sharpness decreases as a function of the diffraction of light through the lens. While an aperture of f/22 gives more depth of field, sharpness and the ability to resolve detail (that is, line pairs per millimeter) are lost. For this reason, stopping down to gain depth of field must be weighed against image clarity. The impact of diffraction on digital images depends on several factors, but largely on the pixel pitch (the density of pixels on the sensor). As pixel counts have increased, sensors have become denser, so diffraction sets in earlier (lower-megapixel sensors may have been fine at f/11, but some sensors today show diffraction at f/8). When the same number of pixels is distributed over a larger sensor (i.e. half frame vs. full frame), the pixel density decreases and diffraction may again only appear above f/11. For MLR/RTI, avoid diffraction by keeping the f-number sufficiently low.
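
This trade-off can be made concrete with the standard Airy-disk formula; in the sketch below, the two-pixel threshold and the example pixel pitches are illustrative assumptions, not hard limits:

```python
def airy_disk_diameter_um(f_number, wavelength_nm=550):
    """Diameter of the Airy disk's first minimum, in micrometers: d ≈ 2.44 * λ * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

def diffraction_limited_f(pixel_pitch_um, wavelength_nm=550):
    """f-number at which the Airy disk spans about two pixels
    (a common rule of thumb for when softening becomes visible)."""
    return 2.0 * pixel_pitch_um / (2.44 * wavelength_nm / 1000.0)

# Illustrative pixel pitches (assumed): ~6.0 um for a 24 MP full-frame
# sensor, ~3.9 um for a 24 MP APS-C sensor.
for pitch in (6.0, 3.9):
    print(f"{pitch} um pixels: diffraction softening near f/{diffraction_limited_f(pitch):.1f}")
```

The smaller the pixel pitch, the earlier (at a wider aperture) diffraction becomes visible, which matches the behavior described above.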

Diffuse Gain[edit | edit source]

An enhancement technique that helps reveal surface detail due to shape. It keeps the surface Normal for each pixel at the value estimated mathematically from the input images, but allows the user to arbitrarily and interactively control the second derivative (curvature) of the reflectance function. This transformation makes the surface more sensitive to variations in lighting direction. (Presented in Malzbender 2001.)

Diffuse Reflectance[edit | edit source]

Any reflectance of light off a surface that appreciably scatters light in many directions is considered diffuse. An ideal diffuse reflector is called Lambertian, and emits light in all directions uniformly, regardless of the incoming light direction. An example of an approximately diffuse surface is that of paper. This is opposed to a specular reflectance, which returns a highlight in a specific direction, producing a specular highlight. Many materials have both a diffuse and specular component.

Digital Lab Notebook: see Cultural Heritage Imaging.[edit | edit source]

Directory Structure for RTI

The CHI RTIBuilder software requires a specific folder structure and file-naming convention for the image files it uses to create an RTI file. In order to process the image sequence resulting from your Image Capture, create a project folder. Convert the original Camera Raw Files to Digital Negative (DNG) Format, and store the DNG files in a subfolder named original-capture/. Export JPEG versions of those images, and store them in a subfolder named jpeg-exports/.

Digital Negative (DNG) Format

DNG is an open, lossless, Camera Raw File Format for digital images, created by Adobe, based on the TIFF standard. Use of this format preserves initial image state, and provides XMP (Extensible Metadata Platform) records of all transformations, beginning with development from a Camera Raw File. One recommended workflow for producing a Source Image Set from original captured images is to use Adobe Digital Negative Converter to convert in batch Camera Raw Files to the standardized DNG format, and make copies in the JPEG format.

Dome[edit | edit source]

A dome-shaped Image Capture apparatus that has a set of lights at fixed positions. Control software flashes each light in a fixed sequence in order to produce the image sequence needed to produce a MLR/RTI file. A Capture Client is often used and knows the position of each light (i.e. a fixed and precalibrated Lighting Position File) and thus, the software uses a Light Map to associate each image with the lighting angle used for that particular image. See also Light Array.

E[edit | edit source]

ESAT[edit | edit source]

ESAT (Department of Electrical Engineering) is one of the largest research departments of KU Leuven. One of its divisions, PSI (Processing Speech and Images), performs demand-driven research in the field of audio and image processing. Relevant research topics lie in the fields of Computer Vision, Artificial Intelligence, and Computational Photography, and include 3D modeling and visualisation, photorealistic rendering, object recognition/classification, and image compression. In collaboration with various Departments in the Humanities, it developed the MLR products: the Portable Light Dome system, the pixel+ viewer, and the SCML file format.

EXIF (Exchangeable Image File)[edit | edit source]

Common Metadata format for technical information relevant to digital images, such as camera, lens, exposure information, date/time, and so on. Typically, the camera automatically collects the data and embeds it in the image file. Processing software and digital asset and preservation management systems can automatically extract information from these metadata fields in the header of the image file.

F[edit | edit source]

File Format[edit | edit source]

The way an image is saved to a digital camera’s memory or to a computer. The most common file formats for digital cameras are RAW (DNG or other proprietary file formats), TIFF, and JPEG. Each MLR/RTI processing software will work with one or several permissible file formats.

Fitting Algorithm[edit | edit source]

The mathematical process of finding a low-complexity function that best represents a set of measured values; the term is used in particular for the PTM and RTI processing algorithms. Typically the order of the function being fit (6 for PTMs) needs to be less than or equal to the number of measured values; otherwise the problem is under-constrained and cannot be solved without introducing additional assumptions. The RTIBuilder includes the Polynomial Texture Map (PTM) algorithm (ptmfitter) and the Hemispherical Harmonics (HSH) algorithm (hshfitter); a PTM or HSH RTI thus includes the results of one fitting algorithm. The SCML file format can include the results of the PTM, HSH RTI, and RBF RTI fitting algorithms, as well as other processing algorithms, such as those based on Photometric Stereo.
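
The per-pixel least-squares fit behind the PTM fitter can be sketched as follows (an illustration of the idea using numpy, not CHI's implementation; the six-term biquadratic model is the one presented in Malzbender 2001):

```python
import numpy as np

def fit_ptm_pixel(light_dirs, intensities):
    """Least-squares fit of the 6 PTM coefficients for a single pixel.

    light_dirs: (n, 2) projected light directions (lu, lv), n >= 6
    intensities: (n,) brightness of this pixel in each source image
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    # Biquadratic model: L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def relight_pixel(coeffs, lu, lv):
    """Evaluate the fitted reflectance function for a new light direction."""
    return coeffs @ np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
```

With fewer than six well-distributed light directions the system is under-constrained, which is why a minimum number of source images is required.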

Flash Lighting[edit | edit source]

Flash lights illuminate the subject for a short amount of time with intense light energy. They permit short exposure times and a narrower aperture (more depth of field), but the flash needs to be synchronized with the camera shutter. Flash lighting is opposed to Continuous Lighting. Flash lights with the same settings always deliver the same amount of energy, guaranteeing consistency throughout the capture sequence of a Source Image Set. When battery-operated, they can be used in remote outdoor conditions, with less risk of running out of light when a power source is unreliable or absent. Often called Strobe Lighting in a studio context.

Figure 4. Normal exposure

G[edit | edit source]

H[edit | edit source]

Highlight Map[edit | edit source]

A composite image produced by the RTIBuilder when using the Highlight Method. The image shows all of the highlights from every image in the MLR/RTI Source Image Set, composed onto a single sphere. This Blend Map allows you to see the spread of light achieved for one particular source image set.

Highlight Method[edit | edit source]
Figure 5. Overexposure

An image-capture technique for MLR/RTI in which a Source Image Set is captured with at least two Reflective Spheres in each view. In this method the light may be moved with less-than-perfect precision; the reflection of the light source on the spheres enables the processing software to calculate the lighting angle for each image and store that information in a Light Position File. This information is used when processing the source image set and/or when Algorithmic Renderings are applied to the processed results. Sometimes also called Highlight-Reflectance Transformation Imaging (H-RTI). The highlight method can be the light-position-determining approach for both the Single Mobile Light and Dome methods. For an in-depth discussion, see the RTI Guide to Highlight Image Capture or Setups on this page.

Histogram[edit | edit source]
Figure 6. Underexposure

A visual representation of the exposure values of a digital image, typically a graph that shows the image's shadows, midtones, and highlights as vertical peaks and valleys along a horizontal axis (Figures 4-6). Shadows are represented on the left side, highlights on the right side, and midtones in the central portion of the graph. The histogram allows quick checking of the quality of the exposure: if most of the tones are at the extreme left, the image is probably underexposed; if most are at the extreme right, it is probably overexposed. Avoid under- and overexposure as much as possible.
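
This check can also be automated. A minimal sketch (the clipping thresholds below are arbitrary assumptions, not a standard) flags an image when too much histogram mass sits at either extreme of the 8-bit range:

```python
def exposure_check(pixels, low=10, high=245, max_clip_fraction=0.05):
    """Flag likely under-/overexposure from a list of 8-bit pixel values."""
    n = len(pixels)
    dark = sum(1 for p in pixels if p <= low) / n      # mass at the far left
    bright = sum(1 for p in pixels if p >= high) / n   # mass at the far right
    if dark > max_clip_fraction:
        return "probably underexposed"
    if bright > max_clip_fraction:
        return "probably overexposed"
    return "exposure looks acceptable"

print(exposure_check([5, 8, 3, 7, 120, 130]))  # mostly dark values
```

In practice the same test would run on the camera's histogram or on sampled pixel values rather than on a small hand-typed list.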

Hemispherical Harmonics (HSH)[edit | edit source]

A model of distribution across a hemisphere of directions (as opposed to spherical harmonics, which model a distribution across an entire sphere of possible directions). This is a natural representation in the study of reflectance off an opaque surface, which only occurs in a hemisphere. This model is used for the processing Fitting Algorithm of HSH RTI.

HSH: see Hemispherical Harmonics.

I[edit | edit source]

Image Capture[edit | edit source]

The process of creating the initial set of digital images from which to create an MLR/RTI file by taking a sequence of images of the subject with a specific set of lighting angles. The Highlight Method or a Light Array can be used to produce the image sequence.

IPTC (definition needed)[edit | edit source]

J[edit | edit source]

Java[edit | edit source]

The computer language in which the CHI RTIBuilder program is written.

JPEG Format[edit | edit source]

A “lossy” Compression format capable of reducing digital image files to about 5% of their original size. JPEG stands for Joint Photographic Experts Group. Decompression of JPEG files can cause visual artifacts (“blockiness,” “jaggies,” or “banding”) in digital images; the greater the compression, the more visible the artifacts. For RTI (the Source Image Set for the RTIBuilder only works with JPEGs), and for all other MLR Source Image Sets, always use the least compression possible. Artifacts are most visible in smooth surfaces with little detail or in zones with high contrast. The better the quality of the input images, the higher the potential to obtain high-quality processed results.

K[edit | edit source]

L[edit | edit source]

Light Array[edit | edit source]

An apparatus (typically dome-shaped, but an arm or panel are equally possible) that has a set of lights at fixed positions. Control software flashes each light in a fixed sequence in order to capture the image sequence needed to process MLR/RTI files. Because each light is at a known position, the software can use a pre-calibrated Light Map and/or Lighting Position File to associate each image with the lighting angle used for it. See also Dome.

Light Direction Extrapolation

An enhancement technique that allows lighting to be at a lower angle than is physically realizable. Once the reflectance functions are modeled from the input images, lighting-direction component values outside the physically realizable range of -1 to +1 can be used, producing an extrapolation of the captured reflectance function. With light direction extrapolation, additional virtual relighting angles can be achieved in the viewers used to consult MLR/RTI processed results. (Presented in Malzbender 2001.)

Light Map[edit | edit source]

A computer file that maps the angle of a light in a Light Array or Dome to the image taken with that light. Based on a light map a Lighting Position File can be computed.

Lighting Position File[edit | edit source]

Lighting positions are needed for all MLR/RTI computations that create relightable images. A lighting position (lp) file can be created via Highlight Method detection, or generated mathematically when the lights are in fixed, predetermined positions (Light Array or Dome); in the latter case it can be reused without needing to detect lighting angles for every Source Image Set.

Log File[edit | edit source]

Processing software often creates log files. For example, when the RTIBuilder creates an RTI File, it records its processing steps and writes them to a log file named <project>.xml in the top-level project folder. That file thus contains information on the processing performed, mostly readable only by the specific processing software that created it.

LP File: see Lighting Position File.[edit | edit source]

M[edit | edit source]

Metadata[edit | edit source]

Information tags/fields that are attached to some form of digital data, which describes or gives information about that data. Metadata can be either technical, contextual, machine or human made. Common metadata formats for image data include EXIF (Exchangeable Image File), XMP (Extensible Metadata Platform), or IPTC (International Press and Telecommunications Council).

Micro-Reflectance Transformation Imaging[edit | edit source]

Refers generally to RTI captured under magnification. See also Monkey Brain.

MLR: see Multi-Light Reflectance.[edit | edit source]
Multi-Light Reflectance (MLR)[edit | edit source]

Computational photographic imaging method based on capturing a Source Image Set in which the subject is lit from a different lighting position in each image. The reflectance detected in each of the captured images in such a set is used to estimate the overall Albedo and/or surface orientation for each pixel. A processed result enables interactive re-lighting of the subject from any direction. The method is also named Single Camera Multi-Light and Reflectance Transformation Imaging.

Monkey Brain

The Monkey Brain is a dome-shaped light array for capturing MLR/RTI Source Image Sets through the microscope. It was developed by Paul and Andrew Messier as part of the MoMA Thomas Walther collaboration; measuring drawings, a user’s manual, and a parts list can be found here. See also Micro Reflectance Transformation Imaging.

Figure 7. Surface normal diagram courtesy of Cultural Heritage Imaging (CHI)

N[edit | edit source]

Normal[edit | edit source]

The mathematical term for the directional vector that is perpendicular to the surface of an object at a specific point (Figure 7). MLR/RTI processing software calculates the Surface Normal at each point of an object, using information derived from the lighting angles at each pixel in each of a series of images. Normal information, in the form of surface shape, is included along with color information for each pixel in the resulting Relightable Image. This enables viewer software to show the surface shape of the subject in great detail and more realistically.

O[edit | edit source]

P[edit | edit source]

Phong Lighting Model[edit | edit source]

One of the first lighting models used in computer graphics; it builds up the brightness of any rendered object as a sum of weighted diffuse and specular reflectances coming from the surface being rendered. Principles of the Phong Lighting Model are used in a number of Visual Styles in the available MLR/RTI viewers. (Introduced in Phong 1975.)

Photometric Stereo[edit | edit source]

Photometric Stereo is a technique, alternative to PTM/RTI, that estimates Surface Normals and Albedo by observing the surface under different lighting conditions. Some of the available MLR methods and processing algorithms compute a relightable image based on these principles. Recent work has combined photometric stereo and photogrammetry. (Introduced in Woodham 1980.)
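
For a Lambertian surface, the per-pixel computation reduces to a small least-squares problem. A minimal numpy sketch of the classic formulation (an illustration of the principle, not any particular product's implementation):

```python
import numpy as np

def photometric_stereo_pixel(light_dirs, intensities):
    """Recover albedo and a unit surface normal for one pixel.

    light_dirs: (n, 3) unit light-direction vectors, n >= 3, not all coplanar
    intensities: (n,) observed brightness under each light
    Assumes a Lambertian surface and ignores shadows: I_k = albedo * (l_k . n)
    """
    # Solve L g = I in the least-squares sense; g = albedo * normal.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g)
    return albedo, g / albedo
```

With more than three lights the system is overdetermined, which makes the estimate robust to noise; real implementations also discard shadowed or specular observations before solving.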

Pixel+ Viewer[edit | edit source]

Open source web-based online viewer for various MLR file formats (i.e. PTM, HSH RTI, RBF RTI, CUN, ZUN and SCML). The viewer also allows comparisons between these various file formats within the same viewing environment. When the processed Relightable File includes multiple registered images or maps (e.g. based on several multi-spectral captures), these can be viewed and consulted superimposed and in interaction with each other (e.g. false colors). Similarly, when present in the relightable file, up to six processed recordings can be presented next to each other (e.g. six viewpoints of an object). The viewer can also handle relightable files computed according to the Deep Zoom Image Format. The pixel+ viewer has been developed by ESAT at Leuven University in Belgium and can be found here.

Polynomial Texture Map (PTM)[edit | edit source]

The first implementation of MLR/RTI imaging was Polynomial Texture Mapping, invented by Tom Malzbender at HP Labs in 2001; see details here and here. A PTM Fitting Algorithm is available for producing a Relightable File.

Portable Light Dome (PLD)[edit | edit source]

The dome-shaped Image Capture devices with sets of lights at fixed positions developed at the University of Leuven by ESAT, the first in 2005. The PLD system also includes control software for flashing each light in a fixed sequence in order to obtain the source image set for processing an MLR file. This integrated system has devices equipped with white-light LEDs and with 5 different narrow-band LEDs (infrared, red, green, blue & ultraviolet); another division is made between PLD domes with 260 LEDs (minidomes) and with 228 LEDs (microdomes). The PLD system makes use of the Photometric Stereo Fitting Algorithm. The system also includes an open-source viewer interface for its MLR file formats, which offers, besides the standard relighting option and various Visual Styles, BRDF functionalities.

PTM: see Polynomial Texture Map.[edit | edit source]

Q[edit | edit source]

R[edit | edit source]

Radial Basis Function (RBF) Reflectance Transformation Imaging (definition needed)[edit | edit source]
Raking Illumination[edit | edit source]

The use of a light positioned at a low angle relative to the plane of an object, creating shadows that emphasize elevations or depressions deviating from that plane. This lighting technique helps record the topography and texture of an object. MLR/RTI source images are captured with a range of raking-light angles and positions.

RAW: see Camera Raw File.[edit | edit source]
Reflectance Field[edit | edit source]

Equivalent to the BSSRDF (bidirectional scattering-surface reflectance distribution function), an 8-dimensional quantity that maps incoming lighting direction and position to reflected lighting direction and location, taking into account subsurface scattering (Introduced in Debevec 2000.)

Reflectance Function[edit | edit source]

The amount of light reflected from a given surface point, as a function of the two angles of lighting direction from a directional light source. It is the measurement represented in an RTI or PTM for each surface point. Viewpoint is assumed to be fixed for acquiring the reflectance function (Introduced in Debevec 2000.)

Reflectance Transformation Imaging (RTI)[edit | edit source]

A computational photographic method that captures a subject’s surface shape and color by photographing an object from a fixed point of view with varying lighting angles. After processing these captures, it enables the interactive re-lighting of the subject from any direction in the viewer program. RTI also permits the mathematical enhancement of the subject’s surface shape and color attributes. The enhancement functions of RTI reveal surface information that is not disclosed under direct empirical examination of the physical object. The method is also labeled as (see) Single Camera Multi-Light or (see) Multi-Light Reflectance imaging. For further in-depth discussions see http://culturalheritageimaging.org/Technologies/ and https://www.loc.gov/preservation/digital/formats/fdd/fdd000486.shtml.

Reflective Spheres: see Spheres.[edit | edit source]
Relight (need definition)[edit | edit source]
Relightable Images[edit | edit source]

MLR/RTI Source Image Sets processed with particular Fitting Algorithms result in relightable images. Within the cultural heritage field, this term has been used in particular by the Italian research group at ISTI-CNR through their “Relight” initiative; it has since become common terminology within MLR/RTI.

Rendering Modes[edit | edit source]

Mathematical transformations (also called signal processing filters) that allow a viewer to show enhanced versions of an MLR/RTI that disclose and emphasize certain features, often difficult or impossible to see under direct empirical examination. Viewer Modes and Visual Styles are synonyms.

RGBN[edit | edit source]

An acronym for the representation of the three color quantities (Red, Green, Blue) along with the Surface Normal (typically a unit length vector representing the surface orientation) for each pixel.
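
For illustration, surface normals are commonly stored alongside color by remapping each component from [-1, 1] into an 8-bit channel. This sketch shows one common normal-map convention, not a mandated RGBN file layout:

```python
import numpy as np

def normals_to_rgb(normals):
    """Pack unit surface normals (h, w, 3), components in [-1, 1],
    into 8-bit values via channel = round((component + 1) / 2 * 255)."""
    return np.clip(np.round((normals + 1.0) / 2.0 * 255), 0, 255).astype(np.uint8)

def rgb_to_normals(rgb):
    """Inverse mapping, renormalized back to unit length."""
    n = rgb.astype(float) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```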

RTI: see Reflectance Transformation Imaging.[edit | edit source]
RTIBuilder[edit | edit source]

An interface to a set of tools that process a source image set to produce an RTI file. Built by CHI, it is written in the Java programming language. The program incorporates a user-selected Fitting Algorithm to transform the Source Image Set into either PTM or HSH RTI format. It is currently only available for 32-bit Windows OS; see Technical Support for a longer discussion.

RTIViewer[edit | edit source]

A software interface that allows loading and examining processed MLR/RTI Source Image Sets (see also Fitting Algorithms & Relightable Images) created with RTIBuilder or similar applications. The original RTIViewer is free and can be downloaded via CHI and GitHub. RTIViewer offers interactive rendering of images, allowing the alteration of the apparent direction of lighting. In addition, it offers a number of enhancement modes (see also Rendering Modes, Viewer Modes, Visual Styles), which apply mathematical transformations to processed image data to enhance or emphasize particular features of the target object. Other open access viewers for RTI Files are WebRTIViewer (site & GitHub) and Pixel+ Viewer (site & GitHub).

RTI File[edit | edit source]

A computer File Format with the extension .rti or .ptm, produced using Fitting Algorithms of the RTI technology. An RTI file is a Relightable Image. RTI files can be created using the CHI RTIBuilder interface or similar applications, and viewed using an RTIViewer interface.

S[edit | edit source]

SCML: see Single Camera Multi-Light.[edit | edit source]
SCML File Format[edit | edit source]

Container File Format that combines several processed MLR/RTI Source Image Sets, optimized for web-based consultation. A single SCML File stores the data uncompressed. The general architecture consists of one or more entries (i.e. files or directories) and a central directory placed at the end of the file. Each entry has a local header containing per-entry information such as file name, (un)compressed size, a CRC Check, etc. At the end of the file, the central directory records information about each of the entries and where to find them (relative byte offsets). For high-resolution results the images or maps can also be stored in the Deep Zoom Image format. Existing source image sets or processed MLR/RTI files can be converted into an SCML file.
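
The architecture described above (local headers, a trailing central directory with offsets and CRCs) is the same general layout as the ZIP container. The SCML-specific details are not shown here; this sketch uses Python's standard zipfile module on a generic uncompressed archive purely to illustrate that structure:

```python
import io
import zipfile

# Build a small uncompressed (ZIP_STORED) container in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("capture_001.jpg", b"...image bytes...")
    zf.writestr("normals.png", b"...map bytes...")

# Read back via the central directory at the end of the file.
with zipfile.ZipFile(buf) as zf:
    for info in zf.infolist():
        # Each entry records its name, size, CRC and byte offset.
        print(info.filename, info.file_size, hex(info.CRC), info.header_offset)
```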

Single Camera Multi-Light (SCML)[edit | edit source]

Computational photographic imaging method based on capturing, from a single camera position, a Source Image Set in which each image shows the subject lit from a different position. The method is also named Multi-Light Reflectance and Reflectance Transformation Imaging.

Single Mobile Light Method[edit | edit source]

Method to obtain a source image set for Reflectance Transformation Imaging, Multi-Light Reflectance or any Single Camera Multi-Light imaging technique. Only one light source is used which is manually repositioned for each image capture. To document the light positions this capture technique is combined with the (see) Highlight Method.

Source Image Set[edit | edit source]

Set of images acquired during an MLR/RTI capture session, to be used as a set for further processing (see Fitting Algorithms). In each image in this set the subject is the same, but the light settings differ (i.e. each image has a different illumination angle towards the center of the imaged subject).

Specular Enhancement[edit | edit source]

An image enhancement technique that yields improved perception of surface shape by photographically acquiring the reflectance functions of a surface, extracting per-pixel surface normals from these reflectance functions, and then rendering the resultant surface with added specular highlights computed from the surface normals using a Phong Lighting Model (Presented in Malzbender 2001.)
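
The rendering step can be sketched as follows; the parameter values (kd, ks, shininess) and the Blinn half-vector variant of the Phong model are illustrative choices, not the exact model used in any particular viewer:

```python
import numpy as np

def specular_enhance(albedo, normals, light, view=(0.0, 0.0, 1.0),
                     kd=1.0, ks=0.4, shininess=40.0):
    """Render a surface with added specular highlights computed from
    per-pixel normals, in the spirit of the Specular Enhancement mode.

    albedo:  (h, w) diffuse reflectance
    normals: (h, w, 3) unit surface normals
    light, view: unit direction vectors toward the light / the viewer
    """
    l = np.asarray(light, dtype=float)
    v = np.asarray(view, dtype=float)
    h = l + v
    h = h / np.linalg.norm(h)                 # Blinn half-vector
    ndl = np.clip(normals @ l, 0.0, None)     # diffuse term N.L
    ndh = np.clip(normals @ h, 0.0, None)     # specular term N.H
    return kd * albedo * ndl + ks * ndh ** shininess
```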

Specular Reflectance[edit | edit source]

The mirror-like reflection of light, shiny highlight, or spot reflection. The direction of reflectance is equivalent to the direction of incidence relative to the surface normal at the point of reflectance. See also Diffuse Reflectance.

Spheres[edit | edit source]

Two or more shiny (black) spheres included in each capture of an MLR/RTI Source Image Set to allow Algorithmic Rendering software packages to detect the reflection highlights from the light source in each image, and use that data to calculate the exact angle of the light directed to the center of the imaged subject. Two spheres are required for correct Algorithmic Rendering. This is the basic technological approach for the Highlight method of image capture that allows the production of Relightable Images. Reflective spheres are also needed for some Dome and Light Array automated Image Capture devices.
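
The geometric core of the Highlight method can be sketched as follows: the highlight position on the sphere gives the surface normal at that point, and mirror-reflecting the viewing direction about that normal gives the light direction. This assumes an orthographic view and ignores the image y-axis flip that real tools must handle; names are illustrative:

```python
import numpy as np

def light_from_highlight(center, radius, highlight):
    """Recover the light direction from a specular highlight on a
    reflective sphere (the core of the Highlight method).

    center:    (cx, cy) sphere centre in image pixels
    radius:    sphere radius in pixels
    highlight: (hx, hy) brightest-spot position on the sphere
    Assumes an orthographic camera looking down the -z axis.
    """
    nx = (highlight[0] - center[0]) / radius
    ny = (highlight[1] - center[1]) / radius
    nz = np.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    n = np.array([nx, ny, nz])           # sphere normal at the highlight
    view = np.array([0.0, 0.0, 1.0])     # direction toward the camera
    # Mirror reflection: light = 2 (n . v) n - v
    return 2.0 * n.dot(view) * n - view
```

A highlight at the sphere's centre means the light sits directly behind the camera; a highlight near the rim corresponds to near-raking light.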

String[edit | edit source]

When using the Single Mobile Light Method (see also Highlight Method) for Image Capture, the distance of the light source from the subject should remain nearly constant; this is best achieved by using a string, or an equivalent measuring device, of a specific length to place the light for each image in the Source Image set.

Strobe Lighting: see Flash Lighting.[edit | edit source]
Surface Normal: see Normal.[edit | edit source]
Sync Cable[edit | edit source]

A cable that connects the light source to the camera so that the flash occurs when the shutter is open. The cable can connect the light directly to the camera, or can connect the light to a wireless trigger device; see Wireless Light Control.

T[edit | edit source]

Tethered Capture[edit | edit source]

Tethering–which connects the camera to a computer–allows remote control of camera settings and the ability to fire the camera from the computer. Images are transferred immediately to the computer and can be viewed instantly. Most camera vendors have their own tethering software, but commercial (e.g. Lightroom, Capture One) and open source tools (e.g. digiCamControl) are also available.

TIFF (Tagged Image File Format)[edit | edit source]

A format (identified by the .tif file extension) for flexible bitmap image files. Supported by virtually all image editing and page-layout applications, and produced by virtually all sensor-based image capture devices. The format supports CMYK, RGB, LAB, grayscale files with alpha channels, and bitmap files without alpha channels. TIFF also supports LZW Compression, a lossless Compression format. A TIFF file is suitable for digital preservation.

Two-and-a-half-dimensional Image[edit | edit source]

MLR/RTI and some of the other related computational imaging techniques can be referred to as 2 ½ dimensional, because they create pseudo-3D information extracted from 2D image sets. While the digital images used for MLR/RTI builds do not contain depth information, coordinating data from the image set under known or implied light angles allows for the calculation of surface normals, which describe the 3-dimensional surface being examined. Thus MLR/RTI, while not a true 3-dimensional imaging technique, is more descriptive of surface topography than 2D images alone. Interactive viewing in the RTIViewer or web viewers also enables this richer understanding of the surfaces.
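
The "2 ½ D" idea can be made concrete: per-pixel normals imply surface slopes, which can be integrated into a relative height map. This is a deliberately naive cumulative-sum sketch; production tools use global integration methods (e.g. Fourier-domain approaches) that handle noise far better:

```python
import numpy as np

def naive_height_from_normals(normals):
    """Integrate per-pixel surface normals into a relative height map,
    illustrating the sense in which MLR/RTI data is '2.5D'.

    normals: (h, w, 3) unit normals with nz > 0
    """
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    p = -nx / nz                    # surface slope dz/dx
    q = -ny / nz                    # surface slope dz/dy
    # Integrate along rows and columns and average the two paths.
    z_rows = np.cumsum(p, axis=1) + np.cumsum(q[:, :1], axis=0)
    z_cols = np.cumsum(q, axis=0) + np.cumsum(p[:1, :], axis=1)
    return (z_rows + z_cols) / 2.0
```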

U[edit | edit source]

V[edit | edit source]

Viewer Modes[edit | edit source]

Mathematical transformations (i.e. signal processing filters) that allow MLR/RTI viewers to show enhanced versions of a MLR/RTI that disclose and emphasize certain features, often difficult or impossible to see under direct empirical examination. Rendering Modes and Visual Styles are synonyms.  

Visual Computing Lab (need definition)[edit | edit source]
Visual Styles[edit | edit source]

A synonym for Viewer Modes or Rendering Modes.

W[edit | edit source]

Wireless Light Control[edit | edit source]

A wireless transmitter/receiver can be used to receive signals from the camera so that the flash is triggered when the shutter opens. This minimizes the number of cables in use during a MLR/RTI Image capture session. Examples are the Speedlite Transmitter ST-E2, a camera-mounted infrared controller for Canon's flashes, and the PocketWizard, which is attached to a light by a Sync Cable.

X[edit | edit source]

XML (Extensible Markup Language)[edit | edit source]

A text-based markup language that can be used for tagging information, making it easier for a computer to scan and to automate various processes. In the RTI workflow, CHI RTIBuilder uses XML format for the log file it produces.

XMP (Extensible Metadata Platform)[edit | edit source]

A specific XML (Extensible Markup Language) schema used to store Metadata in image files. A unique advantage of XMP is that it allows creation of custom metadata, as well as supporting certain standards such as IPTC and EXIF (Exchangeable Image File Format). See XML (Extensible Markup Language).

Y[edit | edit source]

Z[edit | edit source]

Zeroed Out Settings[edit | edit source]

A recommended custom preset for Adobe Camera Raw, which allows the user to make their own adjustments to white balance and exposure (if needed) in captured source images, before using them to create an RTI file. A zeroed-out preset ensures that data is not being processed, interpreted, or stylized to fit consumer tastes. To create it, edit any image in Adobe Camera Raw, set all possible options to 0, save the current settings as a named preset, then make that preset the default.

Bibliography / Suggested Reading[edit | edit source]


Artal-Isbrand, P., P. Klausmeyer, and W. Murray 2011. “An Evaluation of Decorative Techniques on a Red-Figure Attic Vase from the Worcester Art Museum using Reflectance Transformation Imaging (RTI) and Confocal Microscopy with a Special Focus on the ‘Relief Line’.” in: MRS Online Proceeding Library Archive 1319.

Avgoustinos A., A. Nikolaidou, and R. Georgiou. 2017. "OpeNumisma: A Software Platform Managing Numismatic Collections with A Particular Focus On Reflectance Transformation Imaging". Code4Lib Journal, 37.

Caine, M. and M. Magen. 2011. “Pixels and Parchment: The Application of RTI and Infrared Imaging to the Dead Sea Scrolls.” In Electronic Visualisation and the Arts (EVA 2011), edited by Stuart Dunn, Jonathan P. Bowen and Kia Ng, 140–146. London: BCS.

Dittus, A. 2014. Reflectance Transformation Imaging transparenter Materialien.

Duffy, S. 2018. Multi-light Imaging for Cultural Heritage. Swindon: Historic England.

Earl, G.P., K. Martinez, and T. Malzbender. 2010. “Archaeological Applications of Polynomial Texture Mapping: Analysis, Conservation and Representation.” Journal of Archaeological Science 37: 2040–2050.

Frank, E. 2017. “Lights, Camera, Archaeology: Documenting Archaeological Textile Impressions with Reflectance Transformation Imaging (RTI).” Textile Specialty Group Postprints 25. The American Institute for Conservation, 11-42.

Hanneken, T. 2016. “New Technology for Imaging Unreadable Manuscripts and Other Artifacts: Integrated Spectral Reflectance Transformation Imaging (Spectral RTI).” In Ancient Worlds in a Digital Culture, edited by Claire Clivaz and David Hamidovic, 180–195. Digital Biblical Studies 1. Leiden: Brill.

Harris, S. and K. Piquette. 2015. “Reflectance Transformation Imaging (RTI) for visualising leather grain surface morphology as an aid to species identification: a pilot study.” Archaeological Leather Group Newsletter 42: 13-18.

Hughes-Hallett, M., C. Young, and P. Messier. 2020. “A Review of RTI and an Investigation into the Applicability of Micro-RTI as a Tool for the Documentation and Conservation of Modern and Contemporary Paintings.” Journal of the American Institute for Conservation.

Klausmeyer, P., R. Albertson, M. Cushman, and P. Artal-Isbrand. Applications of Reflectance Transformation Imaging (RTI) in a Fine Arts Museum: Examination, Documentation, and Beyond. Available online: http://ncptt.nps.gov/wp-content/uploads/klausmeyer.pdf (accessed on 13 June 2014).

Malzbender, T., D. Gelb, H. Wolters, and B. Zuckerman. Enhancement of Shape Perception by Surface Reflectance Transformation. Available online: http://www.hpl.hp.com/techreports/2000/HPL-2000-38R1.pdf (accessed on 9 July 2014)

Malzbender, T., D. Gelb, and H. Wolters. Polynomial Texture Maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 12–17 August 2001; pp. 519–528.

Mudge, M., T. Malzbender, A. Chalmers, R. Scopigno, J. Davis, O. Wang, P. Gunawardane, M. Ashley, M. Doerr, A. Proenca, and J. Barbosa. 2008. “Image-based empirical information acquisition, scientific reliability, and long-term digital preservation for the natural sciences and cultural heritage.” In Eurographics 2008 Tutorials, edited by M. Roussou and J. Leigh. Eurographics Association. http://culturalheritageimaging.org/What_We_Do/Publications/eurographics2008/eurographics_2008_tutorial_notes.pdf (accessed on 10 March 2021)

Manfredi, M., G. Bearman, G. Williamson, D. Kronkright, E. Doehne, M. Jacobs, and E. Marengo. “A New Quantitative Method for the Non-Invasive Documentation of Morphological Damage in Paintings Using RTI Surface Normals.” Sensors 14, no. 7 (2014): 12271–84. doi:10.3390/S140712271.

McEwan, J.A. "Reflectance Transformation Imaging and the Future of Medieval Sigillography." History Compass 16, no. 9 (2018):e12477.

Mudge, M., T. Malzbender, C. Schroer, and M. Lum. New Reflection Transformation Imaging Methods for Rock Art and Multiple-Viewpoint Display. Available online: http://culturalheritageimaging.org/What_We_Do/Publications/vast2006/index.html (accessed on 13 June 2014).

Serotta, A. 2014. “An Investigation of Tool Marks on Ancient Egyptian Hard Stone Sculpture: Preliminary Report.” Metropolitan Museum of Art Studies in Science and Technology 2. New York: The Metropolitan Museum of Art, 197-201.

Watteeuw, L., M. Van Bos, T. Gersten, B. Vandermeulen, and H. Hameeuw. 2020. An applied complementary use of Macro X-ray Fluorescence scanning and Multi-light reflectance imaging to study Medieval Illuminated Manuscripts. The Rijmbijbel of Jacob van Maerlant, in: Microchemical Journal 155 (June 2020), 104582. (DOI: 10.1016/j.microc.2019.104582)

RTI Format Description Properties: https://www.loc.gov/preservation/digital/formats/fdd/fdd000486.shtml (accessed on 29 March 2021)

http://culturalheritageimaging.org/What_We_Offer/Downloads/Process/ (accessed on 29 March 2021).

http://culturalheritageimaging.org/What_We_Offer/Downloads/View/index.html (accessed on 29 March 2021).

http://vcg.isti.cnr.it/rti/webviewer.php (accessed on 29 March 2021).

https://www.heritage-visualisation.org/viewer/ (accessed on 29 March 2021).

http://vcg.isti.cnr.it/PalazzoBlu/ (accessed on 4 November 2021)

https://cdli.ucla.edu/?q=rti-images (accessed on 4 November 2021)