Lights, Camera, Action...
Volume Number: 14
Issue Number: 5
Column Tag: Multimedia
by Tim Monroe, Apple Computer, Inc.
Embedding QuickDraw 3D objects into QuickTime VR panoramas
QuickTime VR is a wonderful medium for immersing the user in photorealistic or
rendered virtual environments, but it doesn't take long before the static and
silent nature of the experience becomes apparent. By "static" I mean that, for the
most part, there's no motion, no life, in the virtual environment. Understandably, one
of the most common requests that the QuickTime VR team has gotten from developers is
for a way to embed sounds and moving objects into QuickTime VR scenes. Although the
virtual environments provided by QuickTime VR are quite compelling all by
themselves, they spring to life when even small bits of sound or fleeting bits of
motion are added to them.
Adding motion and sound to QuickTime VR object movies isn't very difficult and doesn't
require any programming. You can infuse some motion into a VR scene by creating what
are called "animated" object movies, where a given pan and tilt angle is associated not
with a single frame, but with a set of frames, which are played in sequence (and
possibly also looped) when the user is at that particular pan and tilt angle. (This is
called "frame animation".) Similarly, you can configure any object movie to
automatically play in sequence all the views of the current row in the object movie.
(This is called "view animation".) In addition, the author of a VR scene can include
sound tracks in the movie file. If a sound track is located at the same time as an object
node, the VR movie controller automatically plays the sound track when that node is
the active node.
But for panoramas, which are by far the most common type of VR movies, there is
nothing analogous to frame or view animation, and the movie controller simply ignores
any sound track whose duration overlaps that of the panoramic node. To embed sounds
and motion into panoramas, you'll need to do some programming. In a previous
MacTech article (Monroe and Wolfson, July 1997), we showed how to play sounds that
appear to come from specific locations in a panorama or that are ambient in the
panorama (emanating from no particular location). In this article, I'll show how to
embed rendered QuickDraw 3D objects in a panorama.
There are several obvious uses for this technique. First, you might want to populate a
panorama with various objects that move over time. Imagine a panorama with a
rendered jet flying by in the sky, or a rendered carousel that spins on its axis in the
middle of a panorama. These effects are easy to achieve by taking an existing 3D model,
embedding it into a panorama, and then dynamically altering its position or rotation
over time. Another use for embedding QuickDraw 3D objects is to serve as a "screen"
on which to play QuickTime movies. QuickDraw 3D allows you to map a texture onto a 3D
object; this texture can even change over time. So, we can use the individual frames of
a QuickTime movie as a texture for a 3D object. The result is a QuickTime movie
superimposed onto the 3D object. With a small amount of trial and error to get the
placement of the 3D "screen" just right, you can play QuickTime movies on top of a TV
screen in a panorama, for instance. It's also possible to drop out a solid background of a
QuickTime movie when mapping it as a texture and thus get a kind of "blue screening"
(perhaps to have people walking around inside the panorama).
The basic approach that we'll use to integrate rendered QuickDraw 3D objects into
QuickTime VR panoramas is really no more complicated than the one we used
previously to integrate directional sounds into panoramas: first, we define an
arbitrary correspondence between the QuickDraw 3D coordinate space and the
QuickTime VR panorama space. Then we translate changes in the panorama's pan and
tilt angles into changes in the 3D camera. The actual implementation of this approach,
however, is vastly more complicated with 3D than with sound, primarily because we
need to do all the standard 3D setup and rendering, in addition to then embedding that
rendered image into the VR panorama. Here we'll describe the general process and give
the code for some of the key steps. See the source code for the VR3DObjects sample
application for the complete details.
Before reading this article, you should already have read the article mentioned above.
That article provides a good overview of the capabilities of the QuickTime VR
programming interfaces and shows how to use them to perform some simple actions.
You should also be familiar with QuickDraw 3D. See the first chapter of the book 3D
Graphics Programming With QuickDraw 3D for a quick overview of how to use
QuickDraw 3D. Also, develop magazine has printed numerous good articles describing
various parts of QuickDraw 3D; consult the Bibliography at the end of this article for
a list of some of those articles.
Lights (etc.)
First, let's briefly discuss the basic QuickDraw 3D setup. As I just mentioned, I'm
assuming you're already familiar with QuickDraw 3D or with some similar 3D
graphics system. It's beyond the scope of this article to explain everything you need to
know to use QuickDraw 3D; the following brief recap is intended only to jog your
memory.
All 3D rendering is done using a private data structure called a view. A view is really
nothing more than a collection of other objects, including a camera, a set of lights, a
renderer, a draw context, and the 3D model. The model specifies the location and
geometric shape of the object (or objects) to be rendered, as well as information about
how the renderer should apply the illumination from the lights (called illumination
shading) and about what if any texture is to be applied to the surface of the object
(called texture shading).
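To make "illumination shading" a bit more concrete: a Lambert-style illumination shader (one of the kinds QuickDraw 3D provides) scales the light reaching a surface by the cosine of the angle between the surface normal and the direction toward the light. Here's a self-contained sketch of that rule; it's an illustration of the idea, not QuickDraw 3D code.

```c
/* Lambertian diffuse term: the light reaching a surface point falls off
   with the cosine of the angle between the unit surface normal (nx, ny, nz)
   and the unit direction toward the light (lx, ly, lz); surfaces facing
   away from the light receive none. */
float LambertDiffuse (float nx, float ny, float nz,
                      float lx, float ly, float lz)
{
	float myCosAngle = (nx * lx) + (ny * ly) + (nz * lz);
	return (myCosAngle > 0.0f) ? myCosAngle : 0.0f;
}
```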
The renderer determines how the geometric description of the model is converted into
a graphical image (for instance, is the model drawn in a wireframe outline of its
surfaces or as a collection of colored, shaded surfaces?). The renderer also determines
which parts of a model are drawn and which are obscured by other surfaces.
The lights and the camera associated with a view are pretty much just what you'd
expect. The lights provide illumination to the objects in the model. A view can have one
or more lights of varying positions and colors. The camera determines how the
rendered model is projected onto a flat screen (called the "view plane"). QuickDraw
3D supports several kinds of cameras, which are distinguished by their methods of
projection.
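As a reminder of what such a projection involves, the core of a simple perspective projection (the kind of projection performed by QuickDraw 3D's view angle aspect camera) can be sketched in a few lines. This is a toy model for illustration, not QuickDraw 3D code.

```c
/* A 2D point on the view plane, for this sketch. */
typedef struct { float x, y; } Point2D;

/* Perspective projection of a camera-space point (x, y, z) onto a view
   plane at distance d in front of the camera. The camera looks down the
   negative z axis, so x and y are scaled by d / -z; points farther from
   the camera project closer to the center of the view plane. Valid only
   for points in front of the camera (z < 0). */
Point2D ProjectToViewPlane (float x, float y, float z, float d)
{
	Point2D myPoint;
	float myScale = d / -z;
	myPoint.x = x * myScale;
	myPoint.y = y * myScale;
	return myPoint;
}
```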
Perhaps the least intuitive part of a view is the draw context, which maintains
information about a particular drawing destination. You can use QuickDraw 3D to draw
directly into Macintosh windows, or Microsoft Windows windows, or even into a pixel
map (a "pixmap"), a region of memory that is not directly associated with a window.
The draw context maintains general information common to all drawing destinations
(such as the color to use when erasing the drawing destination) and specific
information about a particular type of drawing destination (for instance, the pixel
type and size of an offscreen graphics world).
For present purposes, we want to have QuickDraw 3D draw into an offscreen graphics
world, giving us an image that we can later superimpose on the panorama. We're going
to superimpose the image by copying it from that offscreen graphics world into the
panorama's prescreen buffer (the buffer that contains the unwarped panoramic image
that is about to be copied to the screen). QuickTime VR then automatically copies the
prescreen buffer to the screen. Figure 1 shows the flow of pixels.
Figure 1. From geometric description to a screen image
Accordingly, our draw context will be a pixmap draw context. First we need to create
an offscreen graphics world to hold the pixmap. Clearly, the size of the pixmap should
be the same as the size of the QuickTime VR movie.
	// size the offscreen graphics world to match the movie box
	GetMovieBox((**theWindowObject).fMovie, &myRect);
	
	// kOffscreenPixelType is k32ARGBPixelFormat on Mac OS,
	// k32BGRAPixelFormat on Windows
	QTNewGWorld(&(**myAppData).fPixGWorld, kOffscreenPixelType,
				&myRect, NULL, NULL, 0L);
The QTNewGWorld function is a version of NewGWorld that allows you to specify the
pixel type of the offscreen graphics world. The pixel type depends on whether we're
running on Mac OS or Windows: for Mac OS we use the value k32ARGBPixelFormat and
for Windows we use the value k32BGRAPixelFormat. Now that we've created an
offscreen graphics world of the correct size and pixel type, we can create a pixmap
draw context, as shown in Listing 1.
Listing 1
CreateDrawContext
TQ3DrawContextObject CreateDrawContext (GWorldPtr theGWorld)
{
	TQ3DrawContextObject		myDrawContext = NULL;
	TQ3PixmapDrawContextData	myPMData;
	TQ3DrawContextData			myDCData;
	PixMapHandle				myPixMap;
	Rect						myRect;
	TQ3ColorARGB				myClearColor;
	float						myFactor = 0xffff;
	
	if (theGWorld == NULL)
		return(myDrawContext);
	
	// set the background color;
	// note that RGBColor components are defined in the range 0-65535,
	// while TQ3ColorARGB components are defined in the range 0.0-1.0;
	// hence the division....
	myClearColor.a = 0.0;
	myClearColor.r = kClearColor.red / myFactor;
	myClearColor.g = kClearColor.green / myFactor;
	myClearColor.b = kClearColor.blue / myFactor;
	
	// fill in the draw context data
	myDCData.clearImageMethod = kQ3ClearMethodWithColor;
	myDCData.clearImageColor = myClearColor;
	myDCData.paneState = kQ3False;
	myDCData.maskState = kQ3False;
	myDCData.doubleBufferState = kQ3False;
	myPMData.drawContextData = myDCData;
	
	// the pixmap must remain locked in memory for as long as it exists
	myPixMap = GetGWorldPixMap(theGWorld);
	LockPixels(myPixMap);
	
	myRect = theGWorld->portRect;
	myPMData.pixmap.width = myRect.right - myRect.left;
	myPMData.pixmap.height = myRect.bottom - myRect.top;
	// mask off the flag bits in the high bits of rowBytes
	myPMData.pixmap.rowBytes = (**myPixMap).rowBytes & 0x3fff;
	myPMData.pixmap.pixelType = kQ3PixelTypeRGB32;
	myPMData.pixmap.pixelSize = 32;
	myPMData.pixmap.bitOrder = kQ3EndianBig;
	myPMData.pixmap.byteOrder = kQ3EndianBig;
	myPMData.pixmap.image = GetPixBaseAddr(myPixMap);
	
	// create a draw context and return it
	myDrawContext = Q3PixmapDrawContext_New(&myPMData);
	return(myDrawContext);
}
We're going to superimpose the image in the pixmap draw context onto the prescreen
buffer by calling CopyBits. Obviously, we want to copy only the parts of the draw
context that contain rendered pixels, not the parts that are merely background
(otherwise, we would overwrite the entire prescreen buffer). CopyBits allows us to
specify a copying mode that replaces a destination pixel only if the corresponding
source pixel isn't equal to the background color of the destination graphics port. So, to
successfully copy only the rendered 3D objects from the pixmap draw context to the
prescreen buffer, we need to (1) make sure the background of the draw context is a
known solid color that doesn't occur in any rendered pixels, and (2) make sure that
the background color of the prescreen buffer is set to that same known color. The
CreateDrawContext function defined in Listing 1 uses the constant kClearColor to set
the clearImageColor field of the draw context data structure:
const RGBColor kClearColor = {0x1111, 0x2222, 0x3333};
In our prescreen buffer imaging completion procedure, we call CopyBits as shown in
Listing 2, having first set the background color of the destination port to the same
color.
Listing 2
PrescreenRoutine selection
	// get the current graphics world
	// (on entry, the current graphics world is set to the prescreen buffer)
	GetGWorld(&myGWorld, &myGDevice);
	
	// set the background color of the destination port to the clear color
	RGBBackColor(&kClearColor);
	
	// copy the rendered image to the current graphics world
	CopyBits((BitMapPtr)&(*myAppData)->fPixGWorld->portPixMap,
			 (BitMapPtr)&myGWorld->portPixMap,
			 &(*myAppData)->fPixGWorld->portRect,
			 &myGWorld->portRect,
			 srcCopy | transparent,
			 0L);
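The net effect of the srcCopy | transparent transfer mode can be modeled in plain C: a destination pixel is replaced only when the corresponding source pixel differs from the destination port's background color. This is an illustrative model of the copying rule applied to flat pixel arrays, not the actual Toolbox call.

```c
/* Model of a transparent-mode blit: copy src[i] over dst[i] only when
   src[i] differs from the background color. Pixels rendered in the
   background (clear) color are skipped, leaving the destination -- here,
   the prescreen buffer's existing panorama pixels -- untouched. */
void TransparentCopy (const unsigned long *src, unsigned long *dst,
                      int count, unsigned long backColor)
{
	int i;
	for (i = 0; i < count; i++) {
		if (src[i] != backColor)
			dst[i] = src[i];
	}
}
```

This is why the clear color must be a known solid color that never occurs in any rendered pixel; if a rendered pixel happened to match it, that pixel would be dropped from the copy as well.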
Camera