Related Links

Flickr Slide Show: The Sennedjem Lintel at the Phoebe Hearst Museum

View these slides from 2010 showing how the CHI team conducted a capture session at the Phoebe A. Hearst Museum of Anthropology (PAHMA), University of California, Berkeley, in collaboration with the Departments of Near Eastern Studies and Anthropology. The goal was to create an online research resource. Watch the slide show…


Preparing the Sennedjem lintel for a capture session

Do It Yourself: How to Capture RTI and AR Photo Sequences

Algorithmic Rendering (AR) uses the same image sets as those captured for RTI. Using CHI's documentation, how-to video, and collaboratively developed software, you can learn how to use both RTI and AR to capture and study real-world subjects so that their surface relief features can be examined at a level of detail not visible to the naked eye.


RTI training shot

Rock Art Example With Algorithmic Rendering

See an example of AR applied to a petroglyph from the Legend Rock Site in Wyoming. The expanded visual details in this photographic treatment were derived computationally using AR.


Algorithmic Rendering

Contents: What is it? | How does it work? | Examples

What is it?

Algorithmic rendering (AR) is a computational method that applies selected mathematical “filters” to a sequence of digital photographs of a cultural or natural history subject (for example, to analyze the thread count of a painting's canvas and so help determine its origin). The result is a new digital image that discloses the subject's features in a new and useful way.

Image sequence with visual effects yielded by algorithmic rendering

The Sennedjem Lintel from the Phoebe A. Hearst Museum of Anthropology at the University of California, Berkeley.
Panels: (a) AR composite image with color and surface normals at each pixel. (b) Exaggerated shading reveals fine surface details. (c) Darkening grooves emphasizes large features. (d) Toon shading depicts shape features. (e) Lambertian shading computed on the grayscale image conveys surface shape. (f) Suggestive contours are another method of conveying shape. (Illustrations provided by Dr. Corey Toler-Franklin.)
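
As a concrete instance of the “filter” idea, the canvas thread-count example mentioned above can be sketched in a few lines of Python. This is a minimal sketch under stated assumptions (a grayscale close-up of canvas at a known resolution, and a hypothetical thread_count_per_cm helper), not CHI's actual analysis pipeline: the dominant peak in the Fourier spectrum of the weave pattern gives the thread spacing.

    import numpy as np

    def thread_count_per_cm(canvas, pixels_per_cm):
        """Estimate weave frequency along one axis from the FFT peak."""
        profile = canvas.mean(axis=0)               # average rows into a 1D weave signal
        spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
        freqs = np.fft.rfftfreq(profile.size, d=1.0 / pixels_per_cm)
        return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

    # Synthetic canvas for demonstration: 12 threads/cm sampled at 100 px/cm.
    x = np.arange(1000) / 100.0                     # position in cm
    canvas = np.tile(0.5 + 0.5 * np.sin(2 * np.pi * 12 * x), (100, 1))
    print(thread_count_per_cm(canvas, pixels_per_cm=100))  # -> 12.0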

At CHI, AR is a tool that is complementary to Reflectance Transformation Imaging (RTI). Once a sequence of digital photos has been captured using the RTI methodology, AR can be applied to extract and extend the visual information in the sequence and present it as a new digital representation. This discloses information that is present in no individual photo of the sequence, in a form more useful for the purpose at hand. This approach has led to deeper explorations of three-dimensional (3D) surface shape, detail, material, and color in domains such as the medical and natural sciences, technical illustration, archaeology, art history, museum conservation, and forensic analysis.

How does it work?

Specifically, the AR process takes the RTI data set and generates RGBN images with per-pixel color and surface shape information, in the form of surface normals. (For a description of surface normals, see RTI: How does it work?) RGBN stands for Red, Green, Blue, and Normal information; RTI files are also RGBN images. AR applies signal-processing “filters” to this RGBN information to reveal new information while offering a wide range of control over stylistic effects. RGBN images are powerful tools for documenting complex, real-world subjects because they are easy to capture at high resolution and readily amenable to processing tools, including those originally developed for full 3D geometric models. Most state-of-the-art, non-photorealistic rendering algorithms are simply functions of the surface normal, lighting, and viewing directions. RGBN images are more efficient to process than full 3D geometry, requiring less storage and computation time, and because these functions are computed in image space, simpler 2D methods yield powerful 3D results.
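
To make this concrete, here is a minimal Python sketch of such per-pixel filters, assuming the normals have already been extracted into an (H, W, 3) array of unit vectors. The function names, the flat placeholder surface, and the light directions are illustrative assumptions, not CHI's file format or software.

    import numpy as np

    def lambertian(normals, light_dir):
        """Diffuse shading: intensity = max(0, N . L) at each pixel."""
        L = np.asarray(light_dir, dtype=float)
        L = L / np.linalg.norm(L)
        return np.clip(np.einsum('hwc,c->hw', normals, L), 0.0, 1.0)

    def toon(normals, light_dir, bands=4):
        """Toon shading: quantize diffuse shading into flat tonal bands."""
        shade = lambertian(normals, light_dir)
        return np.round(shade * (bands - 1)) / (bands - 1)

    def exaggerated(normals, light_dir, gain=4.0, radius=5):
        """Simplified exaggerated shading: amplify each normal's deviation
        from a locally smoothed normal, which emphasizes fine detail."""
        from scipy.ndimage import uniform_filter
        smooth = np.stack([uniform_filter(normals[..., c], size=radius)
                           for c in range(3)], axis=-1)
        detail = normals + gain * (normals - smooth)
        detail = detail / (np.linalg.norm(detail, axis=-1, keepdims=True) + 1e-8)
        return lambertian(detail, light_dir)

    # Flat placeholder surface; in practice the normals come from RTI processing.
    normals = np.zeros((256, 256, 3))
    normals[..., 2] = 1.0
    rendering = toon(normals, light_dir=(0.3, 0.3, 0.9), bands=4)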

An early technique of applying non-photorealistic rendering to RGBN data was developed at Princeton University and discussed in the 2007 paper by Corey Toler-Franklin, Adam Finkelstein, and Szymon Rusinkiewicz.

Each RTI and AR records the access path to the original empirical data: the raw photographs and processing log files. (See Digital Lab Notebook for more information.)

Examples

The CARE Tool

Cultural Heritage Imaging and computer scientists from Princeton University are collaborating to create a next-generation tool they call the Collaborative Algorithmic Rendering Engine (CARE). Funded by the National Science Foundation, the project will integrate algorithmic rendering processes with a user interface designed for non-experts in digital imaging and preservation. (For more information on this effort, see the CARE tool project page.)

To use the CARE tool, each collaborator will need a copy of the original photographic capture sequence. The CARE tool will present the collaborators with a gallery of different representations of the sequence's imaging subject, showing the effects of different signal processing filters. This capability is similar to that of image-processing software, such as Adobe Photoshop, that can display the effects of numerous color information filters in a visual gallery of different graphic results. The CARE tool will also be able to display a visual gallery of varying graphic possibilities by performing mathematical transformations, not only on the color information (as with Photoshop), but also on the rich surface-structure information contained in the surface normals the tool derives from the originally captured photo sequence. Such a gallery of surface-feature depiction and visual emphasis can disclose both anticipated information and accidental discoveries uncovered by processing pipeline options never imagined by the user.
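
Continuing the filter sketches above (toon(), exaggerated(), and the normals array), such a gallery might be generated by sweeping each filter over a few settings and collecting one candidate rendering per combination. This is a hypothetical sketch of the concept, not the CARE tool's actual interface.

    # One candidate rendering per (filter, setting) pair; a UI would
    # display these side by side for collaborators to compare.
    filters = {
        "toon":        lambda n, L, p: toon(n, L, bands=p),
        "exaggerated": lambda n, L, p: exaggerated(n, L, gain=p),
    }
    settings = {"toon": [2, 4, 8], "exaggerated": [2.0, 4.0, 8.0]}
    light = (0.3, 0.3, 0.9)

    gallery = {(name, p): f(normals, light, p)
               for name, f in filters.items()
               for p in settings[name]}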

Users will be able to create technical illustrations by exploring and selecting alternative filters, adjusting filter settings, and combining filter effects. The CARE tool will automatically extract the necessary information from the RTI data and generate the requested illustration. Each of the collaborators will see updated representations reflecting their selections in near real time. The collaborators can find the filters and settings that emphasize the representation of relevant information and de-emphasize the representation of less relevant information. In this way they can follow a process of discovery to find how best to depict the features most relevant to their particular purposes.

Both the suite of RTI tools and the CARE tool are designed to record the same kind of information that a scientist records in a lab notebook or an archaeologist records in field notes. The RTI and CARE tools are based on digital photography and are capable of automatic post-processing and automatic recording of the image generation process history in a machine-readable log. Because the processing of the raw photos into ARs is performed automatically, the history of what is done at each step can also be saved automatically. The CARE tool will also record the entire history of user filter selection and settings configuration: each sample AR that is generated, along with the collaborators' subsequent choices of filters, settings, and parameter reconfigurations. This construction history will be available throughout and after the AR creation, and it will be sharable, in real time, with collaborators anywhere in the world.
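
A hypothetical log entry suggests the kind of machine-readable process history this implies. The field names and paths here are illustrative assumptions, not the actual Digital Lab Notebook schema.

    import datetime
    import json

    # Illustrative only: field names and paths are assumptions, not CHI's schema.
    log_entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source_sequence": "captures/example_capture/",   # raw photographs
        "operation": "exaggerated_shading",
        "parameters": {"gain": 4.0, "radius": 5, "light_dir": [0.3, 0.3, 0.9]},
        "operator": "example_user",
        "output": "renderings/example_exaggerated.png",
    }
    print(json.dumps(log_entry, indent=2))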

Anyone can view the final AR in an associated viewer and replay this history of discovery and decision. This interplay of creation and discovery will also become part of the AR record, enabling others to relive moments of discovery and learn from successful practices. In this way, the quality and authenticity of the data can be tracked and reconfirmed. This “digital lab notebook” enables qualitative evaluation, long-term preservation, and the ability to reuse data. People using these ARs today, as well as potential future users of these illustrations, can decide for themselves whether or not an AR illustration is trustworthy and useful.

All of this saved lab notebook data from the worldwide use of these tools will form an ever-expanding library of effective imaging examples for future reuse.

In short, this project proposes a revolutionary new paradigm for capturing, rendering, sharing, archiving, and preserving the provenance of technical illustrations in natural history and the humanities.

The Advantage of AR Over Traditional Technical Illustrations and Photographs

There are advantages and limitations to both drawings and photographs as representations of our natural history and material culture. One of the virtues of hand-drawn media is that the information recorded emphasizes the subject's attributes most relevant to the purpose at hand. This is particularly evident in technical illustration. When archaeologists draw a flint stone tool, they selectively and subjectively emphasize salient features such as the curvilinear edges of the fractures made during its manufacture and subsequent retouching.

Yet technical drawing is very time-consuming and has a steep learning curve. While technical drawing skills will remain valuable in the humanities, wide adoption of tools such as CARE will accomplish the same goal of emphasizing relevant features with significant savings of time and effort, along with a great increase in flexibility, transparency, and accessibility.

Photographers, like technical illustrators, can compose their images to emphasize particular features. For example, they can achieve high tonal contrast by positioning the photographic light source such that the subject is struck with obliquely angled incident illumination. This produces an image that accentuates the subject's structure, detail, and three-dimensional qualities. The use of raking light photography in art conservation is an example.
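
The effect of raking light can be illustrated numerically with the lambertian() sketch from earlier, using an assumed synthetic surface of shallow grooves: under grazing illumination, small changes in the surface normal produce large changes in tone.

    import numpy as np

    x = np.linspace(0, 8 * np.pi, 256)
    height = 0.05 * np.sin(x)                     # shallow parallel grooves
    slope = np.gradient(height, x)
    n = np.stack([np.broadcast_to(-slope, (256, 256)),   # height-field normal
                  np.zeros((256, 256)),                  # (-dh/dx, 0, 1),
                  np.ones((256, 256))], axis=-1)         # then normalized
    n = n / np.linalg.norm(n, axis=-1, keepdims=True)

    overhead = lambertian(n, (0.0, 0.0, 1.0))     # light from directly above
    raking = lambertian(n, (0.97, 0.0, 0.25))     # light at a grazing angle
    print(np.ptp(overhead), np.ptp(raking))       # tonal range is far larger under raking light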

When a photographer uses light for emphasis, some areas of the subject fall in shadow and some are highlighted. The documentary photographer, shooting singly or in series, faces the editorial choice of which information to capture and which to omit. On the one hand, data is obscured in both the shadowed and the highlighted areas. On the other hand, images with the fewest shadows and reflections also disclose the fewest surface relief features.

This problem is well understood by epigraphic photographers who decipher inscriptions. They recognize that lighting direction significantly determines the content of the empirical data set recovered. To partially mitigate this problem, they capture multiple images of the subject from the same point of view, using different illumination directions to increase the amount of available data.

Such editorial discretion in photography is useful, but it is also a limitation. The information important for one purpose may be different from the information important for another purpose, particularly if the purpose is unknown to the original documenter or emerges at a later point in time. The CARE tool retains the complete information set contained in the original image sequence for reuse for novel purposes.

The CARE tool will capture the best features of editorial photography and technical illustration while maximizing high-quality, accurate digital recording of the subject and of the image generation process history, so that the data can be evaluated for its scientific reliability and stands a better chance of long-term preservation.