3D Imaging


In progress: Seeking additional comments and images to develop this section

Please be patient, this part of the site is under development. We are starting to build out the Imaging Wiki.


Interested in contributing to this page? Visit the Contributors' Toolbox or reach out to one of our Team Leads:

Wiki Team Leads: Brinker Ferguson, Zarah Walsh-Korb, Yi Yang
Wiki Editors: Amy McCrory, Roxanne Radpour, Charles Walbridge
Wiki Contributors: Moshe Caine, Kurt Heumiller, Dale Konkright, your name could be here


What is 3D Imaging?[edit | edit source]

(broad definition of 3D imaging here, leading to the next section - Types of 3D Imaging)

Types of 3D imaging[edit | edit source]


(For conservation purposes, we're dividing 3D Imaging into several categories. There's lots of overlap among the categories, and we'll try to start clarifying that in this section.)
3D imaging for conservation encompasses several techniques: laser scanning, optical profiling, photogrammetry, and structured light scanning.


Laser Scanning[edit | edit source]


Optical Profiling[edit | edit source]


Optical profiling (profilometry) is an optical technique that enables precise, quantitative, non-contact extraction of topographical data from a surface. This surface topography data can be used to study surface roughness and structure [1]. Various optical methods can be used to achieve optical profiling.

History of Optical Profiling[edit | edit source]

The digital microscope was first invented in 1986 in Tokyo, Japan. A stepper motor was later added to the digital microscope to enable scanning of the focal plane [6] and thus 3D measurement.

Beginning in the early 2000s, various groups around the world have demonstrated that data collected with an optical coherence tomography (OCT) scanner can enable new non-invasive conservation methods and viewing experiences. The data collected by OCT include a painting's 3D surface topography at micrometer resolution, the layer structure beneath the painting's surface, and volumetric data that can be used for layer analysis [5][8][9]. One application of OCT scanning is the study of the thickness and structure of a painting's varnish layers, which can support real-time monitoring of the ablation, melting and evaporation, or exfoliation of the varnish layer [10]. Its cross-sectional imaging ability enables further applications: revealing hidden alterations such as retouching and overpainting [8], characterization of varnish [11], punchwork and underdrawings in panel paintings [12], and brush strokes, surface craquelure, paint losses, and restorations [13]. Conservators have also used OCT to collect surface and subsurface information on objects such as jade [14], wood [15], Egyptian faience [16], plastic sculptures, Limoges enamels [17], and tomb murals [9].

Starting in the 2000s, THz-TD systems have been used for global mapping of the stratigraphy of an old-master painting [18], inspection of subsurface structures buried in historical plasters [19], enhancing structural features of a lacquered screen (such as repaired areas) and shedding light on the applied techniques [20], and monitoring the conservation process of a mural [21][22].

Optical Profiling Hardware[edit | edit source]

a.  3D Digital Microscopy[edit | edit source]

A typical optical microscope focuses on a single plane at any given time. A 3D digital microscope uses the focus-variation technique to scan an object through multiple focal planes; 3D images are obtained by compiling the in-focus data from each plane, so the entire magnified object can be viewed in focus. Because 3D data and color data are captured simultaneously, the surface profile of the object can be recovered and measured [2].
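The focus-variation principle can be sketched in a few lines: for each pixel, pick the focal plane at which a local sharpness measure peaks. This is a minimal illustration under stated assumptions (the Laplacian-based sharpness measure and all names are our own choices, not a particular vendor's algorithm):

```python
import numpy as np

def depth_from_focus(stack, z_positions):
    """Estimate a depth map from a focal stack by focus variation.

    stack: (n, h, w) array of greyscale images, one per focal plane.
    z_positions: (n,) focal-plane heights for each image.
    Returns (depth_map, all_in_focus): each pixel takes the z of the
    plane where a simple contrast measure (squared Laplacian) peaks.
    """
    stack = np.asarray(stack, dtype=float)
    n, h, w = stack.shape
    # Focus measure: squared Laplacian via finite differences (interior only).
    lap = np.zeros_like(stack)
    lap[:, 1:-1, 1:-1] = (
        4 * stack[:, 1:-1, 1:-1]
        - stack[:, :-2, 1:-1] - stack[:, 2:, 1:-1]
        - stack[:, 1:-1, :-2] - stack[:, 1:-1, 2:]
    )
    focus = lap ** 2
    best = np.argmax(focus, axis=0)            # (h, w) index of sharpest plane
    depth_map = np.asarray(z_positions)[best]
    # All-in-focus composite: take each pixel from its sharpest plane.
    rows, cols = np.indices((h, w))
    all_in_focus = stack[best, rows, cols]
    return depth_map, all_in_focus
```

Real instruments refine this with sub-plane interpolation and more robust focus measures, but the per-pixel "sharpest plane wins" idea is the same.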

b. Optical coherence tomography (OCT)[edit | edit source]

An OCT system is based on the concept of a Michelson-type interferometer. A wideband infrared light source (840 nm to 1310 nm) passes through a beam splitter, which splits the beam into a sample arm and a reference arm. In a time-domain OCT system, depth information is obtained by mechanically scanning the reference arm along its axis. In a spectral-domain OCT system, the detector captures the spectrum of the interfered signal and a Fast Fourier Transform (FFT) algorithm resolves the depth information. A typical OCT system can achieve a spatial resolution of ~1 µm [3].
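The spectral-domain reconstruction step can be illustrated with a short sketch: a reflector at depth z contributes a cos(2kz) modulation to the detected spectrum, so an FFT over wavenumber recovers the depth. This is a simplified model under stated assumptions (evenly spaced wavenumber samples and simple DC removal; real systems also resample the spectrometer data and compensate for dispersion):

```python
import numpy as np

def sd_oct_depth(wavenumbers, spectrum):
    """Recover a depth profile from a spectral-domain OCT interferogram.

    wavenumbers: (n,) evenly spaced wavenumbers k (rad per unit length).
    spectrum: (n,) detected interference intensity I(k).
    Returns (depths, amplitude): a depth axis and the FFT magnitude,
    whose peaks mark reflector depths.
    """
    k = np.asarray(wavenumbers, float)
    n = len(k)
    dk = k[1] - k[0]
    sig = spectrum - np.mean(spectrum)      # crude DC-term removal
    amp = np.abs(np.fft.rfft(sig))
    # A reflector at depth z contributes cos(2*k*z); FFT bin m then maps
    # to depth z = pi * m / (n * dk) (the factor 2 covers the round trip).
    depths = np.pi * np.arange(len(amp)) / (n * dk)
    return depths, amp
```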

c. Terahertz time-domain imaging (THz-TD)[edit | edit source]

An ultrashort pulsed laser is first sent to a beam splitter, which splits it into a pump beam and a probe beam. The pump beam is used to generate broadband THz radiation; this radiation is guided through the sample and thus carries the structural information of the sample. The probe beam undergoes a path-length adjustment via an optical delay line, enabling gated detection of the THz signal from the sample. Both the amplitude and phase of the frequency components are measured to reconstruct the structural information of the sample [4].
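As an illustration of how time-domain data yields layer structure, the sketch below estimates a layer thickness from the delay between the echoes reflected by the layer's front and back surfaces, d = c·Δt/(2n). It is a simplified model under stated assumptions (echoes found as the two largest peaks, a known refractive index, and the `min_sep` guard are our simplifications, not a standard pipeline):

```python
import numpy as np

def layer_thickness_from_echoes(t, signal, n_refr, min_sep=50):
    """Estimate a layer thickness from a THz time-domain waveform.

    t: (n,) time axis in seconds; signal: (n,) detected THz field.
    The front and back surfaces each reflect an echo; the delay between
    the two strongest |signal| peaks gives the optical round trip, so
    thickness d = c * dt / (2 * n_refr). min_sep: minimum sample
    separation between peaks, to keep one broad echo from counting twice.
    """
    c = 299_792_458.0                      # speed of light, m/s
    mag = np.abs(np.asarray(signal, float))
    first = int(np.argmax(mag))
    # Mask out the neighbourhood of the first echo, then find the second.
    masked = mag.copy()
    lo = max(0, first - min_sep)
    masked[lo:first + min_sep + 1] = 0.0
    second = int(np.argmax(masked))
    dt = abs(t[second] - t[first])
    return c * dt / (2.0 * n_refr)
```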

Optical Profiling Software[edit | edit source]

Most data from 3D digital microscopes, OCT, and THz-TD systems are delivered in 3D point-cloud format, so software that converts point-cloud data into a mesh or 3D model is recommended.
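For structured (grid-ordered) point clouds, such as a profiler sampling a surface on a regular grid, the conversion to a mesh can be as simple as splitting each grid cell into two triangles. The sketch below illustrates this (function name and layout are our own; scattered, unordered clouds need a true surface-reconstruction algorithm such as Poisson reconstruction instead):

```python
import numpy as np

def grid_points_to_mesh(points_grid):
    """Triangulate a structured (grid-ordered) point cloud into a mesh.

    points_grid: (h, w, 3) array of XYZ points.
    Returns (vertices, faces): an (h*w, 3) vertex array and an (m, 3)
    integer face array, with each grid cell split into two triangles.
    """
    h, w, _ = points_grid.shape
    vertices = points_grid.reshape(-1, 3)
    idx = np.arange(h * w).reshape(h, w)
    # Corner indices of every grid cell.
    a = idx[:-1, :-1].ravel()   # top-left
    b = idx[:-1, 1:].ravel()    # top-right
    c = idx[1:, :-1].ravel()    # bottom-left
    d = idx[1:, 1:].ravel()     # bottom-right
    faces = np.concatenate([np.stack([a, b, c], 1), np.stack([b, d, c], 1)])
    return vertices, faces
```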

Examples from the field[edit | edit source]

3D Microscopy

See the 'Girl with a Pearl Earring' painting in 10-gigapixel detail

Optical coherence tomography (OCT)

Multi-scale optical coherence tomography imaging and visualization of Vermeer’s Girl with a Pearl Earring

Terahertz time-domain imaging (THz-TD)

Global mapping of stratigraphy of an old-master painting using sparsity-based terahertz reflectometry

Photogrammetry[edit | edit source]

Photogrammetry is the "science of measuring in photos", and is most commonly used in remote sensing, aerial photography, archaeology, architecture, and other fields where measurements must be determined from photographs.
It is based on the principle that while a single photograph can only yield two-dimensional coordinates (height and width), two overlapping images of the same scene, taken slightly apart from each other, allow the third dimension (depth) to be calculated. This is much the same way the human visual system generates depth perception from the images projected onto our two eyes. We can see objects in three dimensions and judge volume, distance, and relative size because of our stereoscopic vision: the brain receives two slightly different images resulting from the different positions of the left and right eyes, each with its own central perspective.
This principle of stereoscopic viewing underlies photogrammetry. If two photos are taken of the same object from slightly different positions, the three-dimensional coordinates of any point represented in both photos can be calculated. The two camera positions view the object along so-called "lines of sight", which are mathematically intersected to produce the three-dimensional coordinates of the points of interest. This same principle of triangulation is the way our two eyes work together to gauge distance.
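The intersection of two lines of sight can be sketched numerically. Since real rays rarely intersect exactly, a common closed-form choice is the midpoint of the shortest segment between them; production photogrammetry software instead minimises reprojection error, so treat this as an illustration of the triangulation principle only:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Triangulate a 3D point from two camera lines of sight.

    c1, c2: camera centres; d1, d2: direction vectors of the rays
    through the matched image points. Returns the midpoint of the
    shortest segment between the two rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for ray parameters t1, t2 minimising |(c1+t1 d1)-(c2+t2 d2)|.
    b = c2 - c1
    dd = d1 @ d2
    denom = 1.0 - dd ** 2
    t1 = (b @ d1 - (b @ d2) * dd) / denom
    t2 = ((b @ d1) * dd - b @ d2) / denom
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return (p1 + p2) / 2.0
```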

Photogrammetry & Structure from Motion (SfM)[edit | edit source]


Nowadays, the two terms are somewhat interchangeable and are often used to convey the same thing: the construction of a 3D scene or object from multiple photographic images.
Nevertheless, the terms are not identical and stem from slightly different approaches.
Photogrammetry literally means "measurement by means of light". The essential aspect of this technique is indeed measurement: triangulation, by which the three coordinates of a given point are calculated from stereo pairs. Multiple images thus provide many hundreds or thousands of points which make up the "point cloud", from which a digital "mesh" is formed in the original shape of the object. In parallel, the texture data from the photographs is processed by the software to form the UV or texture map.
In contrast, SfM is more forgiving. As the name implies, SfM emphasizes the process of moving around the object or scene, whether by a photographer on foot or by UAV. Unlike classical photogrammetry, SfM does not require prior knowledge of the camera positions. The SfM software automatically identifies matching features (distinctive lines, points, textures, or other clearly defined features) in multiple images. By tracking these features across images taken from different positions, the software calculates the positions and orientations of the cameras and the XYZ coordinates of the features.
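The feature-matching step can be illustrated with a toy sketch that locates a small patch from one image inside a second image by normalised cross-correlation. Real SfM pipelines use invariant descriptors such as SIFT or ORB matched across many images, but the underlying idea of scoring candidate correspondences is the same:

```python
import numpy as np

def match_patch(template, image):
    """Locate a feature patch in an image by normalised cross-correlation.

    A toy stand-in for SfM feature matching: slides the template over
    every window of the image and returns the (row, col) of the
    best-matching window's top-left corner.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            wn = np.linalg.norm(w)
            if wn == 0 or tn == 0:
                continue
            score = float((t * w).sum()) / (tn * wn)  # NCC in [-1, 1]
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```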

History of Photogrammetry[edit | edit source]


Photogrammetry is not new. While we tend to align its birth with that of photography in the first half of the 19th century, we may in fact trace its mathematics back to none other than Leonardo da Vinci, who in 1480 wrote the following:

“Perspective is nothing else than the seeing of an object behind a sheet of glass, smooth and quite transparent, on the surface of which all the things may be marked that are behind this glass. All things transmit their images to the eye by pyramidal lines, and these pyramids are cut by the said glass. The nearer to the eye these are intersected, the smaller the image of their cause will appear” [Doyle, 1964]

So, in fact, it could be claimed that photogrammetric theory is based on the principles of perspective and projective geometry. Albrecht Duerer's "Perspective Machine" (1525), an instrument that could be used to create a true perspective drawing, was indeed based upon those laws of perspective. Nevertheless, it was the birth of photography that heralded the first practical uses of photogrammetry. The title "Father of Photogrammetry" must go to Aimé Laussedat (April 19, 1819 - March 18, 1907), who in 1849 became the first person to use terrestrial photographs for topographic map compilation. In 1858, he even (unsuccessfully) attempted aerial photography with kites. In 1862, his use of photography for mapping was officially accepted by the Science Academy in Madrid.

Analytical Photogrammetry[edit | edit source]

The rapid development of the computer after the Second World War saw the beginnings of analytical photogrammetry and the algebra-based formulas that advanced digital aerial triangulation. The advent of the digital photograph and advanced image-processing software then saw this evolution bloom into a fast and practical field for interactive photographic imaging. Today, many photogrammetry-based software solutions exist on the market. Some are based on multiple images and use triangulation principles much like 3D scanners, while others use the stereo-image principle. It is important to remember, however, that stereo imaging is not 3D photographic modeling. The camera, like the human eye, cannot calculate what it cannot see. It may guess, extrapolate, average, calculate, and estimate based on the information available, but true depth calculation can be performed only if the area in question was captured by the imaging system. In two overlapping images there will be large areas of information absent due to occlusion. Nevertheless, even if full and analytically accurate information is not present, interactive stereo images can greatly enhance our perception of the object. This is most apparent in reliefs, three-dimensional surfaces, and rough textures.



Photogrammetry Hardware[edit | edit source]


One of the greatest advantages of photogrammetry and SfM lies in their very modest hardware requirements.

Camera[edit | edit source]


For basic location-based work, all that is needed is a consumer camera, preferably a DSLR. Obviously, the better the camera, the better the potential results. Today, however, even good smartphone cameras can yield remarkably good results. Sometimes a tripod may be useful; in other cases, freedom of movement may be an advantage.

UAV[edit | edit source]


For aerial photogrammetry, a UAV (drone) is often essential. Many such models exist on the market today and their photographic quality is ever improving.

Lighting[edit | edit source]


For studio-based photogrammetry or SfM, good controllable lighting is essential. Today LED light banks are becoming very popular due to their powerful output, color control and lack of heat.

Revolving base[edit | edit source]


In many cases, a revolving base plate can prove a great help, as it allows for convenient and accurate rotation of the object relative to the camera. Such plates can range from a simple 'lazy Susan' which can be purchased at any home store, to highly sophisticated computer-controlled systems.

Computer[edit | edit source]


Here the equation is simple: the faster the processor, the more RAM, the better the graphics card, and the more storage, the better.

Miscellaneous[edit | edit source]


Other important items may include:

  • A color checker for color balance.
  • Markers to be placed around the object or area.
  • Reflectors and/or diffusers for controlling the light.
  • Polarizing filters, both for the camera and for the light source (cross-polarization).
  • Ample storage.


Photogrammetry Software[edit | edit source]

The imaging stage in photogrammetry, while essential, is by no means the only important stage. Transforming the multiple images into a digital model demands the use of extremely sophisticated software. Here again there exists a range of photogrammetry software solutions, ranging from the free/open source, to the expensive.

Among the free options it is worth mentioning the following:

  • AliceVision – Meshroom
  • 3DFlow – Zephyr Free (limited to 50 images per model)
  • MicMac (command-line software developed by the French National Geographic Institute and the French National School of Geographic Sciences)
  • COLMAP (SfM and Multi-View Stereo (MVS) pipeline with graphical and command-line interfaces)
  • VisualSFM (open source)
  • OpenMVG (which stands for "Open Multiple View Geometry")


Nearly all paid photogrammetry software offers a free trial or a reduced price for educational use. All are excellent, and each has its advantages and disadvantages. While this is by no means a complete list, the most popular among these include:

Pre and Post-Processing Software[edit | edit source]

Prior to loading into the photogrammetry software, the images may well need further processing. This may include conversion from RAW formats, correction of highlights and shadows, noise removal, color de-fringing, etc. All such processing will need to be performed on the entire set of images, which may range from tens to thousands. It is therefore essential to perform this task as a batch process.

A popular software for this purpose (though by no means the only one) is Adobe Lightroom.

It is important to consider that most photogrammetry models require some extent of post processing to ready them for final use. These include:

  • Decimation: reducing the number of polygons to a size convenient for print or display.
  • Smoothing: most photogrammetry software has some difficulty reproducing flat, smooth surfaces; post-processing can help smooth these areas.
  • Watertight preparation: essential for 3D printing.
  • UV texture map retouching: this may be as simple as removing unwanted markers, or as complex as adding or correcting defects in the photographic texture maps.
  • Preview and inspection: once the final model is ready, it is essential to preview and inspect it carefully. Defects may be discovered and need fixing; measurements may need to be made and other analyses performed.
  • Presentation: finally, the model will need to be presented, whether in physical printed form or (more often) as a digital display, offline or online, adapted for mobile or desktop viewing, etc.

Photogrammetry Data[edit | edit source]


The essential ingredient in the data-capture process is good, clear, sharp images; or to put it another way, GIGO (garbage in, garbage out). The major detrimental factors to successful capture include:

Lack of focus - As the photogrammetric process is based on identifying many small, clearly defined points of interest across images, blurred images prevent the processing software from identifying such points. Blur can arise from inaccurate focus, insufficient depth of field, camera movement, or even physical obstruction such as dirt or fingerprints on the lens.
Object movement - Even if the photography is clean and sharp, movement of the object results in the same blurring problems.
Solution - For these reasons it is essential to apply professional standards of photography: a quality camera and lens, use of a tripod when possible, and adequate lighting to allow short shutter speeds and sufficient depth of field.

Shiny, reflective or transparent subjects - These are notorious for photogrammetric imaging, because they don't actually have any defined points of interest for the software to latch onto. Worse still, the points the camera thinks it sees are actually reflections, like those in a mirror, and these shift with the camera from image to image, confusing the software.
Solution - In many cases, a polarizing filter on the camera can cut the reflected light and reduce the confusion. In more extreme cases, "cross-polarization", polarizing both the light source and the camera lens, can dramatically improve the results. This method obviously applies only where we can control the light source, usually in a studio environment.

Thin lines, such as hair, wires, poles, etc. - Photogrammetry relies on placing as many points of interest as possible on the subject. Very thin elements by their nature make this extremely difficult: the lack of points of interest (poi) results in insufficient data to build up the point cloud in those areas.
Solution - Anything that increases the poi in thin areas will improve results: moving in as close as possible, using the sharpest lenses and large image sensors with a high pixel count, and reducing camera noise by lowering the ISO.

Occlusion - photogrammetry, especially SfM, relies on comparing points of interest across several images. As the camera (or rotating subject) moves, parts of the subject closer to the camera can occlude those behind. What the camera doesn’t see it cannot reproduce. Therefore, the greater the occlusion in the image set, the greater will be the error and the less accurate the final model.
Solution - Take as many pictures as possible, from as many as possible angles. Remember: there’s no such thing as too many images. Some may be superfluous, but more is always better than less.

Flat, smooth areas - walls, ceilings, flat textureless surfaces, are a constant challenge, for the simple fact that they lack points of interest and focus for the point cloud construction process.
Solution - in many cases and where possible, adding markers, stickers, and when possible even “messing up the surface”, can help immensely. In most cases, these information points will give the software points to latch on to. If the sole purpose is to reproduce just the surface shape, then these will not present a problem. If however the photographic texture map is also necessary, careful post processing in image processing software can remove the offending items and restore the clean surface.

Repeating & symmetric patterns - SfM is based on identifying the same points of interest in sequential images. Fences, patterned walls, floors, and other surfaces with a repeating pattern can confuse the software into misidentifying different elements as being the same.
Solution - If and where possible, breaking the symmetry can help, either by adding random objects to the wall or floor or by using markers as above.

Flashing or moving lights - These can confuse the software. Television screens, LCDs, car headlights, and other light sources that change position, shape, color, or intensity between photographs may corrupt the triangulation and point-identification process.
Solution - Avoid them where and as much as possible.

Lens distortion - Very wide-angle and very long focal-length lenses both distort the image, either by exaggerating distance and perspective or by compressing it. While modern photogrammetry software deals well with slight distortions, exaggerated ones can lead to stitching problems and poi misidentification.
Solution - Recommended focal lengths lie within 28-70 mm for 35 mm full-frame DSLR cameras. In addition, it is extremely important to shoot with cameras that record lens metadata, as the software reads this information and its algorithms act on it.
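The kind of distortion model such software fits can be sketched with a common radial (Brown-Conrady style) polynomial. Coefficient names and sign conventions vary between packages, so treat this as an illustration of the idea rather than any particular product's model:

```python
import numpy as np

def apply_radial_distortion(xy, k1, k2=0.0):
    """Apply a simplified Brown-Conrady radial distortion model.

    xy: (n, 2) normalised image coordinates, origin at the principal
    point. k1, k2: radial coefficients. In this sign convention,
    positive k1 pushes points outward (pincushion-like), negative k1
    pulls them inward (barrel-like). Photogrammetry software estimates
    such coefficients per lens and inverts them numerically.
    """
    xy = np.asarray(xy, float)
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)   # squared radius per point
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)
```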

ISO and compression noise - As stated repeatedly, the SfM and photogrammetry process relies on clean, "true" information. JPEG is a lossy compression format: highly compressed JPEG images discard pixel information and replace it with compression blocks through quantization. These blocks change from image to image, again creating false information.
Likewise, raising the camera's ISO raises the sensor's sensitivity to low light. While this can greatly assist in capturing otherwise undetectable areas, it creates false pixel information in the form of noise.
Solution - Where possible, it is always advisable to shoot in RAW format, which is free of compression artifacts and other camera-induced distortion. Likewise, keep the ISO as low as possible, ideally no higher than 200, compensating if necessary with longer shutter speeds and tripod shooting.

Exposure - Under- or over-exposure results in areas devoid of information; lack of information results in a lack of poi and poor point clouds, or models with holes. Tricky shooting situations include high-contrast lighting, direct harsh sunlight or projectors, and subject matter with inherently high internal contrast.
Solution - Prevention is the best form of medicine:

  • Shoot in the softest (most shadowless) light possible.
  • Overcast, cloudy days are ideal.
  • Use a light tent for studio shooting.
  • Shoot in RAW format, then boost shadows and reduce highlights in post-processing (e.g., Adobe Lightroom).

Baked light - In nature, depth and texture identification are based on light and shadow. This is one of the most basic aspects of photography, and its correct use is essential to creating a good 2D image. In 3D imaging, however, the opposite is the case: it is essential to avoid misrepresenting the subject by introducing light and shadow that are not part of the subject matter itself. If the 3D model is to be printed, light and shade will be created as, when, and where the object is placed. In screen-based 3D presentation, the subject will be lit with software-based lights. Light and shadow recorded in the photographic texture map is called "baked light", and it conflicts with this external lighting.

Solution - As with the previous section (exposure), the solution lies in correct, shadowless lighting. However, this is not always possible, especially outdoors where there is less control. In these cases, RAW shooting and post-processing to boost shadows and reduce highlights are essential.

Furthermore, some software solutions now exist for a process called "de-lighting": the software manually or semi-automatically identifies the highlight and shadow areas in the model and attempts to reduce them.

Photogrammetry in combination with other imaging technologies[edit | edit source]

UV[edit | edit source]

IR[edit | edit source]

RTI[edit | edit source]


Structured Light Scanning[edit | edit source]


Structured Light Scanning (SLS) is a method of close-range topographical surface modeling of 3D objects using a 2D camera and a projected light pattern. The 3D object distorts the projected pattern, and the camera records the distorted image; from this distortion the relative depth of each point is calculated, rendering a 3D model of the object.

Image courtesy of https://chsopensource.org/

History of Structured Light Scanning[edit | edit source]

Structured light imaging techniques were developed in the 1970s, when researchers visualized the contour lines of 3D objects by illuminating them with special masks [1]. Improvements in camera resolution and computational power enabled the proliferation of SLS. Over the years, researchers have used SLS to image the 3D surface of the human body [2][3] and for other industrial applications [4].

Structured Light Scanning Hardware[edit | edit source]

An SLS system typically includes a projector and a camera. The projector projects "structured light", specially designed 2D spatially varying intensity patterns, onto the object. Depending on the system, the pattern may be binary (black and white), greyscale, or color, arranged as stripes or grid lines. A camera captures a 2D image of the scene containing the object and the projected pattern, and the 3D surface shape of the object is calculated from the distortion of the projected structured light.
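The triangulation behind this can be illustrated with the simplest possible geometry: a stripe projected at an angle shifts sideways in the camera image in proportion to surface height. The idealised formula below is a sketch under stated assumptions (camera looking straight down, projector tilted at a known angle); real systems calibrate the pattern-to-depth mapping rather than assume this geometry:

```python
import numpy as np

def height_from_stripe_shift(shift_px, pixel_size, theta):
    """Convert the observed sideways shift of a projected stripe into
    surface height by triangulation.

    With the camera on the surface normal and the projector tilted at
    angle theta (radians) from the camera axis, a point raised by h
    displaces the stripe sideways by h * tan(theta), so
    h = shift / tan(theta).
    shift_px: observed shift in pixels; pixel_size: metres per pixel
    on the object plane.
    """
    shift = np.asarray(shift_px, float) * pixel_size
    return shift / np.tan(theta)
```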

There are three aspects to evaluating the performance of an SLS system.

1. Accuracy: the maximum deviation of the measured value from the actual dimension of the 3D object.

2. Resolution: the smallest portion of the object surface that a 3D imaging system can resolve.

3. Speed: the system speed can be affected by many factors, such as the frame rate, whether the system is single-shot or sequential-shot, and the system's computational speed.

Other factors such as field of view, depth of field, and standoff distance should also be considered [4].

At present, most handheld SLS systems have a 3D resolution of around 0.2 mm.

Structured Light Scanning Software[edit | edit source]

Most commercial SLS systems provide software to process the 3D data. The main difference is that some systems can process both geometry and texture data, so the output is a textured 3D model, while others can process only geometry data, so the output model contains no texture.

Output 3D formats: OBJ, PLY, WRL, STL, AOP, ASC, PTX, E57, XYZRGB

Computer systems: since processing 3D data requires substantial computational power, most SLS systems require an Intel Core i7 or i9, 32+ GB RAM, and an NVIDIA GPU with 2+ GB VRAM and CUDA 6+ [5].

Examples from the field[edit | edit source]

The Bacchus Conservation Project: a multidisciplinary team 3D scanned the North Carolina Museum of Art’s Statue of Bacchus and the various other fragments that once were attached to it. ( https://ncartmuseum.org/bacchus_under_structured_light/)

Structured-light 3D scanning of exhibited historical clothing: historical costumes have been 3D scanned through SLS [6].

Periodical Conservation State Monitoring of Oil Paintings: SLS can be used to continuously monitor the state of oil paintings by 3D scanning the object periodically and comparing the resulting 3D models [7].

Additional links:[edit | edit source]

https://www.youtube.com/watch?v=3S3xLUXAgHw

https://www.nature.com/articles/s41566-021-00780-4

Reference:[edit | edit source]

1. P. Benoit, E. Mathieu, “Real time contour line visualization of an object,” Optics Communications, 12, 175-180 (1974)

2. N. G. Durdle, J. Thayyoor and V. J. Raso, "An improved structured light technique for surface reconstruction of the human trunk," IEEE Canadian Conference on Electrical and Computer Engineering, 1998, pp. 874-877 vol.2, doi: 10.1109/CCECE.1998.685637.

3. S. M. Dunn, R. L. Keizer and J. Yu, "Measuring the area and volume of the human body with structured light," in IEEE Transactions on Systems, Man, and Cybernetics, vol. 19, no. 6, pp. 1350-1364, Nov.-Dec. 1989, doi: 10.1109/21.44059.

4. Jason Geng, "Structured-light 3D surface imaging: a tutorial," Adv. Opt. Photon. 3, 128-160 (2011)

5. https://www.artec3d.com/portable-3d-scanners/artec-eva#specifications

6. Montusiewicz, J., Miłosz, M., Kęsik, J. et al. Structured-light 3D scanning of exhibited historical clothing—a first-ever methodical trial and its results. Herit Sci 9, 74 (2021). https://doi.org/10.1186/s40494-021-00544-x

7. P. D. Badillo, V. A. Parfenov and D. S. Kuleshov, "3D Scanning for Periodical Conservation State Monitoring of Oil Paintings," 2022 Conference of Russian Young Researchers in Electrical and Electronic Engineering (ElConRus), 2022, pp. 1098-1102, doi: 10.1109/ElConRus54750.2022.9755461.



Dissemination[edit | edit source]

Stand-alone/desktop viewers[edit | edit source]

Web-based viewers[edit | edit source]

Embedding webpages and online databases[edit | edit source]


Technical Support / Obsolescence[edit | edit source]

Recurring challenges[edit | edit source]

Solutions[edit | edit source]


Case Studies[edit | edit source]

General[edit | edit source]

Specific for Conservation Sciences[edit | edit source]

Bibliography / Suggested Reading[edit | edit source]