Animation
 

Introduction
Ray tracing's major drawback is the long computation time spent on ray-object intersection tests.  Animation can improve the efficiency of a ray tracer by exploiting the spatiotemporal coherence of neighboring frames, since each frame closely resembles the frames that immediately precede and succeed it.  This means that only the pixels involved in "the move" require the calculations normally performed for every pixel in the image.  In the article "Generating Exact Ray-Traced Animation Frames by Reprojection," Stephen Adelson and Larry Hodges claim that animation by reprojection yields "up to 92 percent savings in rendering time."

The animation algorithm works from a fully rendered base frame, from which it creates inferred frames in which the objects occupy their new positions.  For each subpixel sampled in the base frame, the algorithm saves into an object data file the 3D intersection point, normal vector, diffuse color, and ID tags of the intersected object and of any shadowing objects.  If needed, this data is later used to verify shadows and to calculate specular highlights, reflections, and refractions.  The inferred frame contains the new image generated by reprojecting the base frame.  The algorithm works best when the two frames are adjacent in time: the more the two frames differ, the smaller the savings in computation.
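As a minimal sketch, the per-sample record saved to the object data file might look like the following; the field names are illustrative assumptions, not the paper's own.

from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SampleRecord:
    intersection: Vec3           # 3D point where the primary ray hit the object
    normal: Vec3                 # surface normal at the intersection
    diffuse_color: Vec3          # diffuse color used for direct reprojection
    object_id: int               # ID tag of the intersected object
    shadow_ids: Tuple[int, ...]  # ID tags of objects that shadowed this sample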

Animation is done in three steps: reprojection, verification, and enhancement.
 

Reprojection
Reprojection accounts for object movement and transformation as well as camera movement and rotation.  The system consults an object data file containing the incremental movement of every object, then projects the objects' new positions into the inferred frame.  It then rewrites the object data file with the new information, for use in creating the next frame.  If more than one sample from the base frame projects to the same pixel in the inferred frame, the sample physically closest to the viewpoint is kept; even so, the exact value is verified later.
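A minimal reprojection sketch is shown below, assuming a project(point) helper that maps a world-space point to (pixel_x, pixel_y, distance-to-eye) under the inferred frame's camera; the helper and parameter names are assumptions.  When several samples land on one pixel, only the closest survives, and its value is still subject to verification.

def reproject(samples, transforms, project, width, height):
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    frame = [[None] * width for _ in range(height)]
    for s in samples:
        # Move the stored hit point by its object's incremental transform.
        p = transforms[s.object_id](s.intersection)
        x, y, z = project(p)                    # pixel coordinates plus distance to the eye
        xi, yi = int(x), int(y)
        if 0 <= xi < width and 0 <= yi < height and z < depth[yi][xi]:
            depth[yi][xi] = z                   # keep the sample physically closest to the viewpoint
            frame[yi][xi] = s
    return frame, depth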
 

Verification
After the objects move, some positions may become uncovered while others become hidden.  To resolve this, the algorithm casts a ray from the viewing position to each reprojected intersection point.  Since each object in the frame has a bounding box around it, the algorithm performs the ray-object intersection test only when the ray hits a bounding box.  If there is no intersection, the object is ignored and the pixel keeps its original properties.  If the ray does intersect an object, however, the algorithm performs a series of calculations to determine the pixel's new reflective properties.
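A hedged sketch of the verification test for one reprojected sample follows; ray_hits_box and intersect_distance are placeholders standing in for the tracer's own bounding-box and ray-object intersection routines.

def sub(a, b):  return tuple(x - y for x, y in zip(a, b))
def length(v):  return sum(x * x for x in v) ** 0.5

def verify(sample, eye, scene, ray_hits_box, intersect_distance):
    direction = sub(sample.intersection, eye)
    target_dist = length(direction)
    direction = tuple(x / target_dist for x in direction)
    for obj in scene:
        if obj.id == sample.object_id:
            continue                            # an object cannot occlude its own sample
        if not ray_hits_box(eye, direction, obj.bounding_box):
            continue                            # cheap reject: the ray misses the bounding box
        t = intersect_distance(eye, direction, obj)
        if t is not None and t < target_dist:
            return obj                          # sample is obscured; the pixel must be retraced
    return None                                 # no occluder: the reprojected value stands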
 

Enhancement
In this phase, the reprojected samples are enhanced with reflection, refraction, and specular highlights.  The number of higher-level rays cast and the execution time of the enhancement phase are approximately equal to the number and cost of those rays in a full ray trace of the image.  Thus, this phase contributes nothing to the algorithm's savings: the more higher-level rays in the image, the smaller the overall savings of the animation algorithm.
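A rough sketch of this pass is given below, showing reflection only; refraction and specular highlights follow the same pattern.  It assumes trace_ray is the renderer's ordinary recursive tracer and that each object exposes a reflectivity coefficient; both are illustrative names, not the paper's, and the higher-level rays spawned here cost exactly what they would in a full render.

def enhance(frame, eye, scene, trace_ray):
    for row in frame:
        for s in row:
            if s is None or scene[s.object_id].reflectivity == 0.0:
                continue
            # Incoming view direction (eye -> surface), normalized.
            d = tuple(p - e for p, e in zip(s.intersection, eye))
            inv = 1.0 / sum(x * x for x in d) ** 0.5
            d = tuple(x * inv for x in d)
            # Mirror-reflect the view direction about the stored normal.
            dot = sum(a * b for a, b in zip(d, s.normal))
            r = tuple(a - 2.0 * dot * b for a, b in zip(d, s.normal))
            bounce = trace_ray(s.intersection, r)       # higher-level ray, full ray-tracing cost
            k = scene[s.object_id].reflectivity
            s.diffuse_color = tuple((1 - k) * c + k * b
                                    for c, b in zip(s.diffuse_color, bounce))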
 

Conclusion
The animation algorithm is more efficient because, during the verification phase, rays are cast only as needed.  In the best case, nothing changes from the base frame.  In the worst case, the reprojected position is obscured in the inferred frame and a ray must be cast through that position to determine its properties.  Because reprojection eliminates one object intersection test for most primary rays, the algorithm outperforms rendering each frame from scratch.
