Photogrammetry Training

Photogrammetry training class, 2014

CHI offers a 4-day photogrammetry class for groups of up to 15 people. Get practical experience in acquiring photogrammetric image sets and producing scientific 3D documentation with demonstrable accuracy. Learn imaging equipment, capture setups, and software to build 3D content. More…

Calibrated Scale Bars for Photogrammetry

Scale bars placed around mask object

CHI offers a set of 10 calibrated scale bars for photogrammetry. These scale bars are both highly accurate and very efficient for adding scale to 3D imaging projects. More…

See the Photogrammetry Discussion in the CHI Forums

Visit the CHI forums and read about photogrammetry image capture, processing, and software comparisons. More…

3D Photogrammetry and a Diego Rivera Mural

At the mural shoot

Learn how the CHI team applied photogrammetry to capture a huge Diego Rivera mural at City College of San Francisco. More…

CHI's Imaging Project at El Morro National Monument

Petroglyph with sheep in the rock face at El Morro

In June 2015 the CHI team went on location to El Morro in New Mexico and used both RTI and photogrammetry to capture at-risk historical inscriptions and petroglyphs and answer critical questions about them. More…

Related Grant: “Applying Scientific Rigor to Photogrammetric 3D Documentation for Cultural Heritage and Natural Science Materials”

In collaboration with several partner consultants and technical advisors, CHI was awarded a grant from the National Center for Preservation Technology and Training (NCPTT) that will produce advanced metadata and knowledge management tools to record a “Digital Lab Notebook” (DLN), describing the means and context of 3D photogrammetric data capture. More…

Related Publications

BLM Tech Note 248 on Photogrammetry
Sennedjem lintel at the Phoebe A. Hearst Museum

Photogrammetry

Contents:  What is it?  ·  How does it work?  ·  Example: Tlingit Helmet  ·  How to Capture Photos  ·  Example: Cuneiform Cone Sequence

What is it?

Fundamentally, photogrammetry is about measurement: measuring the imaging subject. To perform high-quality photogrammetric measurement, the photographer capturing the photogrammetric image set must follow a rule-based procedure. This procedure guides how the camera is configured, positioned, and oriented toward the imaging subject so that the photos provide the most useful information to the processing software and minimize the uncertainty in the resulting measurements. These measurements will be only as good as the design of the measurement structure, or lack thereof, that underlies the collection of the photographic data.

Recent technological advances in digital cameras, computer processors, and computational techniques, such as sub-pixel image matching, make photogrammetry a portable and powerful technique. It yields extremely dense and precise 3D surface data with an appropriately limited number of photos, captured with standard digital photography equipment, in a relatively short period of time. In the last five years, the variety and power of photogrammetry and related processes have increased dramatically.

Video: “Photogrammetry for Rock Art”

Watch this brief video to see an example of a petroglyph rock art panel as a 3D model created using photogrammetry.

Photogrammetry for Rock Art from Cultural Heritage Imaging on Vimeo.

How does it work?

CHI uses an image capture technique for photogrammetry based on the work of Neffra Matthews and Tommy Noble at the US Bureau of Land Management (BLM). The BLM Tech Note (PDF) and 2010 VAST tutorial provide additional information about the origins of our methods. Neffra and Tommy have been improving their photogrammetry methods at the BLM for over 20 years. Their image capture method acquires photo data sets that are software independent and yield the most information-rich results possible from the various photogrammetry software systems on the market. CHI has been working in collaboration with Tommy and Neffra for over a decade. The four-day photogrammetry training CHI offers was developed through, and continues to feature, this collaboration.

The method of image capture taught by CHI is software independent. A well-captured photogrammetry data set will produce the same 3D model when processed by a knowledgeable user employing sufficiently robust software. Currently CHI uses Agisoft PhotoScan Pro software.

The most advanced photogrammetry software uses the Structure from Motion (SfM) method. The SfM approach simultaneously determines how light passes through the camera's optical system (the camera's calibration) and the camera's position and orientation (pose), relative to the imaging subject, for each photo. During processing, each camera's calibration and pose are made increasingly precise through an iterative process. This is done by iteratively refining a sparse cloud of points in the virtual scene representing the real-world environment containing the imaging subject. The points in the sparse cloud are created from matches of similar pixel neighborhoods identified in multiple photos. If matching pixel neighborhoods are found in two, or preferably more, photos, the areas occupied by those pixel neighborhoods in the respective photos are projected into the virtual 3D scene. These projections intersect in a common volume in the 3D scene, which is represented as a point in the sparse cloud. The positional uncertainty of these points is reduced in a process discussed in more detail below. As the precision of the point positions increases, so does the precision of the camera calibration and pose. When the camera calibration and pose reach a level of precision acceptable to the user, the SfM process is finished. In the following stage, PhotoScan and other software packages offering SfM use one variety or another of multi-viewpoint stereo algorithm to build a dense point cloud, which can be transformed into a textured 3D model.
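To make the sparse-cloud idea concrete, here is a minimal sketch, in Python with NumPy, of how one 3D point can be recovered from its matched pixel positions in two or more photos whose projection matrices are known. It uses plain linear (DLT) triangulation with invented camera numbers for illustration; it is not the algorithm any particular software package uses.

```python
import numpy as np

def triangulate_point(projection_matrices, pixel_coords):
    """Linear (DLT) triangulation of one 3D point from its pixel position
    in two or more photos with known 3x4 projection matrices."""
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_coords):
        rows.append(u * P[2] - P[0])   # each view adds two linear constraints
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.vstack(rows))
    X = vt[-1]                         # homogeneous solution
    return X[:3] / X[3]

# Two hypothetical cameras one unit apart, both looking at a point 5 units away.
K = np.array([[1000., 0., 500.], [0., 1000., 500.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
X_true = np.array([0., 0., 5., 1.])
pixels = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate_point([P1, P2], pixels))   # ~ [0. 0. 5.]
```

With only two views the intersecting volume is still relatively large; the rule-based capture described below uses many more views of each surface point precisely to shrink it.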

Using SfM algorithms, photographic capture sets can be acquired with uncalibrated camera/lens combinations. To generate the information needed to characterize how light passes from the imaging subject through the given optical system, SfM algorithms need a set of matched point correspondences. These matched points are found in the overlapping photographs of a planned network of images, captured from different positions and orientations relative to the imaging subject. How the camera is moved relative to the subject has a great impact on the positional uncertainty present in the measurements of the associated 3D representation.

SfM differs from earlier photogrammetry software tools in that it relies solely on the photographs of a camera moving around the scene containing the imaging subject. No separate camera calibration is needed or desired. This separates SfM from older photogrammetry algorithms, which require either a precalibrated camera or an additional set of photos from which to calculate a camera calibration before point neighborhood matching commences.

To explain this in greater detail, the SfM software must take the information contained in the set of photogrammetry photos and optimally solve for three outcomes: the camera calibration, the camera pose for each photo, and the 3D positions of the matched points that form the sparse cloud.

In SfM, error reductions in the camera calibration, pose, and 3D point matches are all solved simultaneously. A precision improvement in any one of these three components, calibration, pose, or sparse points in the cloud, will improve the precision of the other two. A complex algorithm called a Bundle Adjustment generates this three-part improvement. How the Bundle Adjustment works is beyond the scope of this photogrammetry introduction; however, it is useful to know that Bundle Adjustment algorithms are widely used in experimental science.
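As a rough illustration of what a Bundle Adjustment minimizes, the sketch below, in Python with NumPy and SciPy, jointly refines a shared focal length, a set of simplified camera positions, and a set of 3D points by minimizing pixel reprojection error on a small synthetic data set. All numbers are invented, rotations and lens distortion are ignored, and real bundle adjustments also exploit the sparsity of the problem; this is intuition, not production code.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points, cam_t, f):
    """Simplified pinhole projection with identity rotation: x = f * (X + t) / Z."""
    p = points + cam_t                      # points expressed in each camera frame
    return f * p[:, :2] / p[:, 2:3]

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed):
    """Pixel reprojection errors for the focal length, camera positions,
    and 3D point estimates packed into `params`."""
    f = params[0]
    cam_t = params[1:1 + 3 * n_cams].reshape(n_cams, 3)
    pts = params[1 + 3 * n_cams:].reshape(n_pts, 3)
    return (project(pts[pt_idx], cam_t[cam_idx], f) - observed).ravel()

# Tiny synthetic problem: 3 camera positions, 20 points, all values invented.
rng = np.random.default_rng(0)
true_f, true_t = 1000.0, np.array([[0., 0., 10.], [1., 0., 10.], [0., 1., 10.]])
true_pts = rng.uniform(-1.0, 1.0, (20, 3))
cam_idx, pt_idx = np.repeat(np.arange(3), 20), np.tile(np.arange(20), 3)
observed = project(true_pts[pt_idx], true_t[cam_idx], true_f)
observed += rng.normal(0.0, 0.5, observed.shape)        # half-pixel image noise

# Start from deliberately perturbed guesses and let the solver refine everything.
x0 = np.concatenate([[900.0], (true_t + 0.2).ravel(), (true_pts + 0.1).ravel()])
sol = least_squares(residuals, x0, args=(3, 20, cam_idx, pt_idx, observed))
print("RMSE residual:", np.sqrt(np.mean(sol.fun ** 2)), "pixels")
```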

In SfM, the camera calibration and pose are continually improved throughout what is called the optimization operation, as the matched points' positional uncertainty is systematically reduced within the sparse point cloud. This is usually done by iteration, at each stage removing the points in the sparse cloud that have the poorest precision. Each time the points with the poorest precision are removed, a Bundle Adjustment is run, and the calibration, pose, and point precisions improve. Points with initially poor precision, if not selected for deletion early on, can have their positional precision continuously improved over iterations of the Bundle Adjustment. This is one reason why not all of the poor-precision points are deleted at once.
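A hedged sketch of that loop is shown below. The names `iterative_prune` and `rerun_bundle_adjustment` are purely illustrative stand-ins (not real library calls), and the error values are invented; the point is only the prune-then-readjust structure.

```python
import numpy as np

def iterative_prune(point_errors, rerun_bundle_adjustment,
                    drop_fraction=0.1, n_iterations=3):
    """Illustrative outline of the optimization loop: on each pass, drop the
    fraction of sparse-cloud points with the largest errors, then re-run the
    bundle adjustment, which returns refined errors for the surviving points."""
    keep = np.arange(len(point_errors))
    errors = np.asarray(point_errors, dtype=float)
    for _ in range(n_iterations):
        cutoff = np.quantile(errors, 1.0 - drop_fraction)
        mask = errors <= cutoff
        keep, errors = keep[mask], errors[mask]
        errors = rerun_bundle_adjustment(keep)
    return keep

# Demo with made-up numbers and a stand-in "bundle adjustment" that simply
# reports slightly smaller errors for whichever points survive each pass.
rng = np.random.default_rng(1)
initial_errors = rng.gamma(2.0, 0.3, size=500)          # fake per-point errors, px
kept = iterative_prune(initial_errors, lambda idx: initial_errors[idx] * 0.8)
print(len(kept), "of", len(initial_errors), "points kept")
```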

When the refinement of calibration, pose, and sparse points has yielded a very high precision, low uncertainty camera calibration and pose, often expressed in small fractions of a pixel, the role of the SfM algorithm is finished. The sparse cloud has no further use. The remaining uncertainty of the SfM solution is quantified by the Bundle Adjustment in the form of a Root Mean Square Error (RMSE) residual. RMSE is equivalent to the statistical concept of a standard deviation. This level of uncertainty serves as the foundation for all subsequent measurement operations.
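In code terms, the RMSE residual is just the root mean square of the remaining reprojection errors, and for zero-mean residuals it coincides with their standard deviation (the numbers below are invented for illustration):

```python
import numpy as np

r = np.array([0.21, -0.35, 0.10, 0.48, -0.05, -0.22])  # made-up residuals, px
print(np.sqrt(np.mean(r ** 2)))   # RMSE of the residuals
print(np.std(r))                  # matches the RMSE only when the mean is ~0
```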

The photogrammetry software must then use Multi-Viewpoint Stereo (MVS) algorithms, informed by the knowledge of camera calibration and pose, to build a dense point cloud in virtual space, of a size determined by the user. The size of the dense cloud can reach into the hundreds of millions or billions of points. With a high precision camera calibration and camera pose, the camera sensor that captured each photograph can be positioned and oriented in virtual space to project the photo's pixel information through the virtual model of the lens (the calibration) in a direct line out towards the point on the virtual subject's surface that the pixel represents. It is important to understand that each of these projections is, in fact, a small, gradually widening “tube” from a pixel on the camera sensor to a spot on the subject. This tube encloses a small volume, the “footprint” the projected pixel covers on the surface of the subject. When the projections from multiple photos intersect on the subject's surface, they create a commonly shared volume. When the photos are captured from rule-based positions and orientations (poses), their projections work together to make a smaller and smaller commonly shared volume. The surface point in the dense point cloud made by these intersections falls within this commonly shared volume. The smaller the common volume, the less uncertain the point's location becomes; that is, the point's position in space is known with increasingly higher precision. The rule-based photogrammetric capture method designed by Matthews and Noble is explicitly designed to produce a set of viewpoints of the subject whose projection intersections have the smallest common volume in 3D space. As will be shown below, when nine projections from nine properly positioned and oriented photographs intersect, the common volume will be very small and a highly precise, low positional uncertainty point will result. When each point results from the intersection of nine well-located pixel projections, the dense cloud of points will represent a precise, measurable virtual 3D version of the original imaging subject's surface shape.
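As a rough, hedged illustration of why camera geometry matters, the classic two-camera depth-error approximation below shows how positional uncertainty grows with distance to the subject and shrinks with a wider baseline between camera positions. It is a simplification of the multi-view case, and all of the numbers are invented for illustration.

```python
def depth_uncertainty(distance_m, baseline_m, focal_px, match_sigma_px=0.5):
    """Classic stereo approximation: sigma_Z ~ Z^2 / (B * f) * sigma_px.
    A simplification of multi-view uncertainty, for intuition only."""
    return (distance_m ** 2) / (baseline_m * focal_px) * match_sigma_px

# Example: subject 2 m away, lens giving ~5000 px focal length,
# half-pixel matching precision, three different camera spacings.
for baseline in (0.1, 0.3, 0.6):   # metres between adjacent camera positions
    print(baseline, "m baseline ->",
          round(depth_uncertainty(2.0, baseline, 5000) * 1000, 2), "mm")
```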

The photogrammetry software then employs surfacing algorithms, using the dense cloud’s 3D point positions and the look angles from the photos to the matched points, to build the geometrical mesh. A texture map is calculated from the color information in the pixels of the original photos and the knowledge of how those pixels map onto the 3D geometry. The result is a textured 3D model that can be measured with a known precision.

Example: Tlingit Helmet – Views of a 3D Photogrammetric Model

Tlingit helmet, carved wood

This is a Tlingit helmet of carved wood, made by artist Richard Beasley in 1998. Above are three views of a 3D model of it, produced from a photogrammetry image sequence. Top to bottom, left side: detail of an input image as the object rests on a turntable; the model in wireframe viewing mode; the model in solid viewing mode; the model in texture viewing mode. Right side: a large background image combining three views of the model, illustrating wireframe, solid, and texture.

Adding Measurability

On its own, photogrammetry generates 3D representations without scale. Scale for the virtual representation is added during the SfM stage of processing and makes it possible to take real-world measurements from the virtual 3D model. At CHI, we accomplish this by adding at least three (and preferably four) calibrated scale bars of known dimension into the scene containing the imaging subject. The scale bars can be on, around, or next to the region of interest, and each scale bar must appear in multiple (at least three, preferably nine) overlapping images. Scale bars are flat, lightweight linear bars, made in several sizes, with printed targets separated by a known, calibrated distance. The software recognizes the targets, and the user then enters the calibrated distances between them. Using calibrated scale bars can produce levels of measurement precision well below one tenth of a millimeter.
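The underlying arithmetic is simple. The hedged sketch below (Python with NumPy; the target coordinates and the 100 mm bar length are invented) computes the factor that converts unscaled model units into millimeters from one scale bar's two detected targets; real software combines several bars and propagates the calibration uncertainty rather than relying on a single bar.

```python
import numpy as np

def scale_factor(model_target_a, model_target_b, known_distance_mm):
    """Ratio that converts unscaled model units into millimeters, from one
    scale bar whose two printed targets were located in the 3D model."""
    a, b = np.asarray(model_target_a), np.asarray(model_target_b)
    return known_distance_mm / np.linalg.norm(b - a)

# Illustrative numbers only: the two targets of a 100 mm bar as found in the model.
s = scale_factor([0.120, 0.400, 1.050], [0.310, 0.440, 1.020], 100.0)
print(f"1 model unit = {s:.1f} mm")
```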

Measurement structure design is the process of defining a sensor network and the subsequent methods used to process the information it collects. In photogrammetry, the sensor network is the camera's 3D location and orientation for each photo in the capture set, in relation to the imaging subject. To get the best results, this network must collect enough data that the impact of any incorrect data is minimized. The data set must also make it possible to reduce the 3D measurement uncertainty of the resulting virtual 3D model to a minimum. The design of the measurement structure is influenced by the imaging subject's 3D features, any restrictions on the placement of cameras, and the number of images necessary to satisfy the given “accuracy” and quality requirements. The prerequisite for any successful measurement in any scientific data domain is the design of such a measurement structure. Reduction of measurement uncertainty is accomplished through the systematic reduction and elimination of error in photogrammetric image capture and the subsequent virtual 3D reconstruction.

The resolution of a surface model is governed by the area on the real-world subject represented by each pixel in the images from which the model was generated. This resolution is known as the ground sample distance (GSD). The GSD is determined by the resolution of the camera sensor, the focal length of the lens, and the distance from the camera to the subject.
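A hedged sketch of that relationship (the camera numbers below are illustrative, not a recommendation): the footprint of one pixel on the subject is the pixel pitch scaled by the ratio of subject distance to focal length.

```python
def ground_sample_distance(pixel_pitch_mm, focal_length_mm, distance_mm):
    """GSD: the size on the subject covered by a single pixel."""
    return pixel_pitch_mm * distance_mm / focal_length_mm

# Example: ~4.4 micron pixels, 50 mm lens, subject 800 mm from the camera.
print(ground_sample_distance(0.0044, 50.0, 800.0))   # ~0.07 mm per pixel
```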

How to Capture Photos

A crucial element of a successful photogrammetric process is obtaining a “good” photographic sequence. Good photographic sequences are based on a few simple rules. The CHI photogrammetry training class explores the reasons behind these rules and shows how to make informed choices in the face of challenging subjects.

Here are some suggestions for the camera/lens configuration:

How to determine where to take the photographs:

Archiving the Results

Photogrammetry is archive friendly. Strictly speaking, all of the 3D information required to build a scaled, virtual, textured 3D representation is contained in the 2D photos present in a well-designed photogrammetric capture set. Today, the methods of long-term preservation of photographs are well understood. To preserve the textured 3D information of any imaging subject, all that is necessary is to archive the sets of photos and their associated metadata. When a 3D representation is desired, the archived photo sets can be used to generate or re-generate the virtual model. With a well-captured image set, a newly generated 3D representation will be the same as previous representations made from the image set. At the current rate of software and computing power development, it is likely that 3D models built from archived photogrammetry image sets will at some point be available “on demand.”

Example: Cuneiform Cone Sequence

The image sequence below shows a 3D model of a section (17 mm × 24 mm) of a cuneiform cone from the Archaeological Research Collection of the University of Southern California. The sequence is a series of increasingly close views. Each of the images shows the 3D mesh (the underlying geometry) in the upper right, with texture applied in the lower left.

First image of a cuneiform cone from the Archaeological Research Collection of the University of Southern California
  Second and closer image of the cuneiform cone
  Third and closest image of the cuneiform cone