
O'Keeffe Museum: Photogrammetry Videos

A 2012 summer research project at the Georgia O'Keeffe Museum in Santa Fe, New Mexico, has been documenting its work using photogrammetry and RTI on objects from the collection as well as two historic houses. The CHI team helped kick off this project by providing consulting and training during its first week. Since then the team has been posting blogs, photos, and videos about their progress, and several photogrammetry videos are now online (among other related topics).

Related Publications

BLM Tech Note 248 on Photogrammetry


Contents:  What is it?  How does it work?  Example 

What is it?

Photogrammetry refers to the practice of deriving 3D measurements from photographs. Recent technological advances in digital cameras, computer processors, and computational techniques such as sub-pixel image matching have made photogrammetry a portable and powerful technique: with a limited number of photos captured on standard digital photography equipment, it can yield extremely dense and accurate 3D surface data in a relatively short period of time. In the last five years, the variety and power of photogrammetry and related processes such as Structure from Motion (SfM) have increased dramatically. SfM recovers the three-dimensional structure of a subject by analyzing the projected 2D motion field created by sequential changes in the position of the camera sensor relative to the subject. Both photogrammetry and SfM require digital image sets that record this relative change in position between the camera viewpoint and the subject. The motion is identified by matching the pixels that reference a location on the subject in one photograph with the pixels referencing the same location in other photographs. Photographic sequences captured according to principles that maximize the information available from this change in viewpoint yield the best results. These rule-based data sets are software-platform-independent and can be reused by others both now and in the future.
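The pixel matching described above can be illustrated with a toy normalized cross-correlation search on synthetic data. This is only a sketch of the underlying idea; real SfM pipelines use robust feature detectors and matchers rather than exhaustive patch search, and the images and patch location here are invented for the example.

```python
import numpy as np

# Simulate two "photographs" of the same subject from slightly different
# viewpoints: the second image is the first shifted by (3, 5) pixels.
rng = np.random.default_rng(0)
image_a = rng.random((40, 40))
image_b = np.roll(image_a, shift=(3, 5), axis=(0, 1))

# A small patch from the first photo, referencing one location on the subject.
patch = image_a[10:18, 10:18]

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Exhaustively search the second photo for the best-matching window.
best, best_rc = -2.0, None
h, w = patch.shape
for r in range(image_b.shape[0] - h):
    for c in range(image_b.shape[1] - w):
        score = ncc(patch, image_b[r:r + h, c:c + w])
        if score > best:
            best, best_rc = score, (r, c)

print(best_rc)  # (13, 15): the patch reappears offset by the simulated motion
```

The recovered offset between matched pixel locations across many such patches is the 2D motion field from which SfM infers camera motion and 3D structure.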

Sennedjem Lintel from the Phoebe A. Hearst Museum

The Sennedjem Lintel from the Phoebe A. Hearst Museum of Anthropology at the University of California, Berkeley.
Four views of the 3D model produced from a photogrammetry image sequence of the lintel. Panels: (upper left) 3D representation with color information; (upper right) rotated detail of 3D surface information; (lower left) close-up view of 3D geometry structure; (lower right) another view of the lintel's 3D surface information.

How does it work?

CHI uses an image capture technique based on the work of Neffra Matthews and Tommy Noble at the US Government’s Bureau of Land Management (BLM). The BLM Tech Note (PDF) and 2010 VAST tutorial, cited in the right column of this page, provide additional information.

A crucial element of a successful photogrammetric process is obtaining a “good” photographic sequence that is based on a few simple principles.

Camera Calibration Sequence

The first step in the capture process is the camera calibration sequence. The calibration sequence determines and maps the optical distortions in the lens with respect to the sensor location. This can be accomplished most effectively when there are a large number of points in common between the overlapping images of the calibration sequence. The camera calibration photographs must be captured at the same settings as the overlapping photos. At least four additional photos are required: two taken with the camera physically rotated 90 degrees from the previous line of overlapping photos, and two more with the camera rotated 270 degrees. The additional four camera calibration photos may be taken at any location along the line of overlapping photographs; however, the best results occur in areas where the greatest number of auto-correlated points can be generated.

The final accuracy of the resulting dense surface model is governed by the image resolution, or ground sample distance (GSD): the real-world distance covered by a single pixel. The GSD is a result of the resolution of the camera sensor (higher is better), the focal length of the lens, and the distance from the subject (closer is better). The resolution of the images is governed by the number of pixels per given area and the size of the sensor.
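The relationship between these factors can be sketched with the standard GSD formula: the sensor's pixel pitch scaled by the distance-to-focal-length ratio. The camera numbers below are hypothetical, chosen only to illustrate the calculation.

```python
# Ground sample distance (GSD): the real-world size covered by one pixel.

def ground_sample_distance(sensor_width_mm, image_width_px,
                           focal_length_mm, distance_mm):
    """GSD in mm per pixel: pixel pitch scaled by distance / focal length."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * distance_mm / focal_length_mm

# Hypothetical setup: 36 mm wide sensor, 6000 px across,
# 50 mm lens, subject 1 m (1000 mm) away.
gsd = ground_sample_distance(36.0, 6000, 50.0, 1000.0)
print(f"GSD: {gsd:.3f} mm/pixel")  # GSD: 0.120 mm/pixel
```

Halving the distance, or doubling the focal length or pixel count, halves the GSD and doubles the achievable detail, which is why "closer is better" and "higher is better" above.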

The camera should be set to manual exposure (preferably f/5.6–f/11), with the ISO, shutter speed, white balance, and other settings adjusted to achieve properly exposed images. To obtain the highest-order results, it is necessary to ensure that focal distance and zoom do not change within a given sequence of photos. This can be achieved by taking a single photo at the desired distance using the auto-focus function, then switching the camera to manual focus and taping the focus ring in place. To maintain a consistent 66% overlap, the camera must be moved, in an adjacent direction, a distance equivalent to 34% of a single photo's field of view. To ensure the entire subject is covered by at least two overlapping photos, the photographer must position the left extent of the subject in the center of the first frame, then proceed systematically from left to right along the length of the subject, taking as many photos as necessary for complete coverage. For higher-quality results, rotate the camera 90 degrees and capture the same area with the same 66% overlap; then rotate the camera 180 degrees and capture that area again. Because of the flexibility of this technique, it is possible to obtain high-accuracy 3D data from subjects at almost any orientation (horizontal, vertical, above, or below) relative to the camera position. However, it is important to keep the plane of the sensor and lens parallel to the subject and to maintain a consistent height (or distance) from the subject.
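The overlap rule above translates directly into camera station spacing: each move advances the frame footprint by 34% of its width. The sketch below plans a left-to-right line of stations for a hypothetical setup (all dimensions invented for illustration).

```python
# Plan camera stations for a 66%-overlap photogrammetry line.

def footprint_width_mm(sensor_width_mm, focal_length_mm, distance_mm):
    """Width of the scene covered by one frame, via similar triangles."""
    return sensor_width_mm * distance_mm / focal_length_mm

def station_positions(subject_width_mm, frame_width_mm, overlap=0.66):
    """Left-to-right camera positions, starting with the subject's left
    edge centered in the first frame and stepping by (1 - overlap)
    of the frame width until the subject's right edge is passed."""
    step = (1.0 - overlap) * frame_width_mm
    positions = [0.0]
    while positions[-1] < subject_width_mm:
        positions.append(positions[-1] + step)
    return positions

# Hypothetical: 36 mm sensor, 50 mm lens, 1 m from a 2 m wide subject.
frame = footprint_width_mm(36.0, 50.0, 1000.0)  # 720 mm per frame
stations = station_positions(2000.0, frame)     # steps of ~244.8 mm
print(len(stations), "stations")
```

The same spacing, applied after rotating the camera 90 and 180 degrees, produces the additional passes described above.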

Adding Measurability

To provide measurability, a scale must be placed in the scene and captured in the image sequence. The scale may be placed anywhere in the field of view, including the corners and edges of the composition; the optical distortion that would stretch a scale near the edges or corners of an ordinary photograph is corrected by the photogrammetry software. The scale provides the ability to introduce real-world measurement values to the subject. This is accomplished by simply adding an object of known dimension (a meter stick or other object) that is visible in at least two stereo models (three overlapping photos). It is preferable to have two or more such objects, to ensure visibility and to permit accuracy assessment. These objects may then be assigned their proper length during processing.
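Assigning the scale its proper length amounts to a uniform rescaling of the reconstructed model, since photogrammetry recovers shape up to an unknown scale factor. A minimal sketch, using an invented toy point cloud rather than real processing-software output:

```python
import numpy as np

def scale_model(points, ref_a, ref_b, known_length):
    """Uniformly scale a point cloud so the distance between two reference
    points (the ends of the scale bar) equals the known real-world length."""
    measured = np.linalg.norm(points[ref_b] - points[ref_a])
    return points * (known_length / measured)

# Toy model: the scale-bar endpoints are 2.0 model units apart,
# but the physical bar is 1000 mm long.
pts = np.array([[0.0, 0.0, 0.0],
                [2.0, 0.0, 0.0],
                [1.0, 1.0, 0.5]])
scaled = scale_model(pts, 0, 1, 1000.0)
print(np.linalg.norm(scaled[1] - scaled[0]))  # 1000.0
```

With two or more scale bars, the redundant bar can be left out of the fit and measured afterward as an independent accuracy check, as suggested above.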

Photogrammetry in Relation to Reflectance Transformation Imaging (RTI)

The photogrammetric image capture steps described above can be done in concert with the RTI image capture process. The camera calibration photo sequence may be used with photogrammetry software to calculate a highly accurate camera calibration. If desired, this data can be used to remove lens distortion and/or ortho-rectify the RTI image set prior to processing the RTI.
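The lens distortion that such a calibration estimates (and that can then be removed from the RTI set) is commonly modeled with radial polynomial terms, as in the Brown-Conrady model. The sketch below applies that model to normalized image coordinates; the coefficient values are made up for illustration and are not from any particular calibration.

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply radial lens distortion to normalized image coordinates (x, y):
    x' = x * (1 + k1*r^2 + k2*r^4), and likewise for y, where r^2 = x^2 + y^2.
    Undistortion inverts this mapping using the calibrated k1, k2."""
    x, y = xy[..., 0], xy[..., 1]
    r2 = x ** 2 + y ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return np.stack([x * factor, y * factor], axis=-1)

# A point at the optical center is unaffected; points near the frame edge
# move the most (here, barrel distortion pulls the edge point inward).
center = np.array([0.0, 0.0])
edge = np.array([0.5, 0.5])
print(distort(center, -0.2, 0.05))  # [0. 0.]
print(distort(edge, -0.2, 0.05))    # [0.45625 0.45625]
```

This is why the scale bar in the previous section can safely sit near a corner of the frame: once k1 and k2 are known, the software maps distorted pixel positions back to their undistorted locations before measuring.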

Example: Cuneiform Cone Sequence

The image sequence below shows a 3D model of a section (17 mm × 24 mm) of a cuneiform cone from the Archaeological Research Collection of the University of Southern California. The sequence is a series of increasingly close views. Each of the images shows the 3D mesh (the underlying geometry) in the upper right, with texture applied in the lower left.

First image of a cuneiform cone from the Archaeological Research Collection of the University of Southern California
Second and closer image of the cuneiform cone
Third and closest image of the cuneiform cone