Poor Man's Bryce Part II
Volume Number: 14
Issue Number: 11
Column Tag: Power Graphics
Poor Man's Bryce, Part II:
More Terrain Generation with Quickdraw™ 3D
by Kas Thomas
Texture-mapped terrains are easy to create in Quickdraw 3D
Quickdraw 3D is a cross-platform graphics library designed with performance and
ease of programming foremost in mind. It's a tremendously powerful tool for giving
users an interactive 3D data-visualization experience. With not much effort, it's
possible for the non-graphics specialist to put together a program that can let a user
manipulate fully shaded 3D objects in real time - something only supercomputers
could do just a few short years ago, and then only with great effort.
In a previous article (Part 1 appeared in October's MacTech), we saw how easy it is to
set up a simple Quickdraw 3D program that converts a user's 2D input (in the form of
a TIFF, PICT, JPEG, GIF, or Photoshop 3 file) to a 3D "terrain" using the popular
technique of displacement mapping. The basic trick here is to set up a flat (planar)
grid with vertices equal - or at least proportional - in number to the number of
pixels in the input image, then loop over the grid vertices (and image pixels),
assigning an "elevation" value to each vertex based on the corresponding pixel
intensity in the source image. In this way, a series of polka dots becomes a mountain
range, a letter 'O' becomes a crater, etc. Or, a U.S. Geological Survey DEM (digital
elevation map) file can be converted to a 3D topographic terrain.
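The pixel-to-elevation loop itself is only a few lines of C. Here is a minimal, self-contained sketch of the idea (the function and parameter names are mine, for illustration; they are not from the PMB source):

```c
#include <stddef.h>

/* Displacement mapping in miniature: map 8-bit pixel intensities
   (0-255) to vertex elevations. maxHeight is the elevation assigned
   to a fully white pixel; black pixels stay at zero. */
void PixelsToElevations(const unsigned char *pixels,
                        float *heights,
                        size_t numRows, size_t numCols,
                        float maxHeight)
{
    size_t r, c;
    for (r = 0; r < numRows; r++) {
        for (c = 0; c < numCols; c++) {
            size_t i = r * numCols + c;
            /* Normalize intensity to 0..1, then scale. */
            heights[i] = (pixels[i] / 255.0f) * maxHeight;
        }
    }
}
```

In PMB proper, the elevation value ends up in the y coordinate of the corresponding TriGrid vertex.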
For version 1.0 of our PMB (Poor Man's Bryce) application, we chose Quickdraw 3D's
TriGrid mesh primitive as the starting point for our geometry because of its
inherently Cartesian (rows-and-columns-oriented) layout. The TriGrid, however, is
only one of four freeform mesh primitives available in Quickdraw 3D (the others
being the Mesh, TriMesh, and Polyhedron). The pros and cons of the various mesh
types were discussed in some detail last time. Suffice it to say, we chose the TriGrid
partly for performance (it renders quickly compared to, say, a plain Mesh), partly
for its efficient use of RAM (vertices are shared in a near-optimal fashion), and
partly for ease of coding. I also mentioned last time that the TriGrid would offer
certain advantages when it came to applying texture maps. The significance of this will
(at last) be clear shortly.
For version 2.0 of PMB, we're going to add features that improve the appearance of
our pseudo-terrains. One thing we'd like to be able to do is make the terrains a bit less
polygonal-looking (i.e., less faceted; smoother). Another thing that would be very
useful is the ability to overlay PICT images (texture maps) on our grid. Both of these
things are remarkably easy to do in Quickdraw 3D. Follow along and I'll show you what
I mean.
Smoothing the Rough Edges
With the rise in popularity of computer-generated 3D images, we've all become a bit
accustomed to seeing objects that look faceted or polygonal (Figure 1). In some
instances, such as when recreating real-world objects that really are faceted (like
geodesic domes), the polygonal look is perfectly natural. But in most instances, it
detracts from the appearance of the object, because in the real world most objects are
not polygonal-looking. A real tomato tends to be curved, not flat-sided. (The flat-sided
ones, you leave in the bin.) The question is, how do you get an object with lots of flat
surfaces to look curvy and smooth?
Figure 1. The typical "faceted" appearance that results when Gouraud shading is not
utilized.
To understand the answer, we need to understand why faceted objects look faceted in the
first place. The reason facets look like facets is that the human visual system is
extremely sensitive to discontinuities in light intensity. At the edges of adjoining
facets, the intensity of reflected light takes a sharp jump, and it's this sudden change
of color that gets our attention. Back in 1971, a fellow by the name of H. Gouraud
proposed a simple workaround for making edges vanish in computer-generated images.
Gouraud's trick relies on the fact that most illumination models calculate reflected
light intensities based on the angle between the light vector and the so-called "surface
normal" vector associated with a reflective surface. (The surface normal is just a
vector sticking straight out of a planar surface at right angles to it, like a stop-sign
pole sticking up out of a sidewalk.) Gouraud reasoned that if one were to calculate
something called a vertex normal at each of a polygon's vertices, then calculate
intermediate normals between vertices by simple interpolation, it should be possible
to use the intermediate (interpolated) "normals" to calculate reflection; and since
these would vary smoothly from corner to corner and edge to edge, the light values
would vary smoothly as well, eliminating the sharp discontinuity in lighting across
edges. (See Figure 2.)
Figure 2. The same object as Fig. 1, but with Gouraud shading.
Technically speaking, linear interpolation doesn't really do away with the
C1-discontinuity problem (as mathematicians like to call it), but for most purposes
the Gouraud trick works well enough - it does fool the eye into thinking edges have
vanished or, at the very least, become much smoother.
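In spirit, the between-vertex interpolation works like this. The sketch below is plain, illustrative C (not QD3D code, which does this for us inside the renderer): given the normals at two vertices, it produces an intermediate normal at parameter t along the edge, which a renderer would renormalize before plugging into the lighting calculation.

```c
typedef struct { float x, y, z; } Vec3;

/* Linearly interpolate between two vertex normals.
   t runs from 0 (at n0) to 1 (at n1). */
Vec3 LerpNormal(Vec3 n0, Vec3 n1, float t) {
    Vec3 r;
    r.x = n0.x + t * (n1.x - n0.x);
    r.y = n0.y + t * (n1.y - n0.y);
    r.z = n0.z + t * (n1.z - n0.z);
    return r;
}
```

Because the result varies smoothly with t, so does the computed light intensity, and the sharp jump at the edge disappears.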
And the neat thing for us is that Quickdraw 3D's renderer will do the hard work of
between-vertex interpolation for us (and calculate our lighting), if we'll just provide
the vertex normals. Note that if we don't provide vertex normals (as a type of vertex
attribute), QD3D's renderer calculates a polygon (surface) normal for each surface,
on the fly, and uses that, which of course is why our surfaces look faceted.
Vector Math in a Nutshell
In case you haven't had any exposure to vector math since high school, it might be good
to stop for a moment and brush up on basic concepts. (Skip this section if you're
already up-to-speed.) In Quickdraw 3D, we deal mostly with (what else?) 3D vectors.
A 3D vector, in turn, can be any quantity that has three numerical components. For
example, points in 3-space have x,y, and z coordinates. Does this mean points are
vectors? Not per se, no. A vector has both direction (or orientation) and magnitude.
Points don't qualify, strictly speaking, but differences between points do constitute
vector quantities. If you say "Start at Fisherman's Wharf and proceed south until you
reach the Cow Palace," you've given me a vector defined by two points. In like manner,
a vector can be defined by a point's position in 3-space relative to the origin (which
after all is just another point in 3-space). We call this the point's position vector.
More generally, we call the vector defined by any two points a difference vector.
The neat thing about vectors, of course, is that they can be added, subtracted, and/or
multiplied to give new vectors. (Vector division is not defined.) Adding vectors is a
simple matter of adding their respective components together. Subtraction works by
subtracting individual components. Multiplication is a bit more complicated. The
so-called dot product of two vectors is obtained by multiplying the respective
components together and summing them, which gives a scalar (one-dimensional)
quantity. The cross product or vector product is obtained by multiplying the two
vectors' components against each other in a particular way to yield new components
(and a new vector).
Now here's the amazing part. The new vector that you get when you cross two 3-space
vectors together will always be at right angles to each of the original vectors. What's
more, the direction of the new vector will be dependent on the order in which you
crossed the two original vectors. A × B gives a vector that points 180° opposite of
B × A.
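Both claims are easy to verify with a few lines of C. The little Vec3 type and helper names below are illustrative, not part of the QD3D API:

```c
typedef struct { float x, y, z; } Vec3;

/* Dot product: multiply corresponding components and sum (a scalar). */
float Dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Cross product: the result is perpendicular to both a and b,
   and Cross(b, a) points in exactly the opposite direction. */
Vec3 Cross(Vec3 a, Vec3 b) {
    Vec3 r;
    r.x = a.y * b.z - a.z * b.y;
    r.y = a.z * b.x - a.x * b.z;
    r.z = a.x * b.y - a.y * b.x;
    return r;
}
```

Crossing the x-axis vector (1,0,0) with the y-axis vector (0,1,0) gives (0,0,1); crossing them in the other order gives (0,0,-1). Either way, the dot product of the result with each input is zero, confirming perpendicularity.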
In order to calculate lighting, a renderer needs to know the direction from which the
light is coming and the angle at which the light hits a given surface. The necessary
information is contained in the dot product of the surface normal and the normalized
lighting vector. Before going further, let's clarify this funny word "normal." We
mentioned that vectors have magnitude as well as direction. The magnitude is
something you can calculate very simply using the Pythagorean distance formula. That
is, you square each of the vector's components (x,y, and z), add the squares together,
and take the square root of the sum. Child's play.
Vector division may be undefined, but you can certainly divide each of a vector's
components numerically, one at a time. If you do that using the vector's magnitude as
the divisor, you end up with individual components less than unity, but the new vector
will have a magnitude of (guess what?) exactly one. This is vector normalization.
(Prove that a 3D vector with x = y = z will always, upon normalization, have
components equal to 0.57735.)
Normalization is extremely handy, because the dot product of two normalized vectors
is exactly the cosine of the angle between them. (Observe also that the cross product of
two normalized vectors is equal in magnitude to the sine of the angle between the
vectors.)
Vertex Normals
Now we're in a position to try to calculate vertex normals for our terrain grid. What
we're really trying to do is loop over all vertices in our grid, obtain surface normals
for all neighboring surfaces of which a given vertex is a member, and average the
normals together. (See Figure 3.) This sounds worse than it is. It turns out Quickdraw
3D has some handy utility routines to make our job a lot easier.
Listing 1 shows the first of two routines we call on in PMB to accomplish
vertex-normal calculation. Actually, the routine shown in Listing 1 just retrieves our
geometry's data and sets up a nested loop to traverse all grid vertices. The actual
vector manipulations come in Listing 2.
Figure 3. A vertex normal can be calculated by averaging together the surface
normals of all polygons of which the vertex is a member.
We begin by getting access to our TriGrid data, which is embedded in a display group
object. The code in Listing 1 shows some very typical calls that are used for this
purpose; you'll see this kind of code over and over again in Quickdraw 3D programs.
The API call Q3Group_GetFirstPositionOfType() fetches the address (if any) of the
first embedded object of the type indicated. In this case, we're passing the constant
kQ3GeometryTypeTriGrid to indicate that we're looking for a TriGrid inside the group
object. If there is no such geometry inside the group, we'll get a position value of NIL
(but the function may return kQ3Success nonetheless), so it's important to check not
only the return value but the validity of the TQ3GroupPosition value.
The subsequent call to Q3Group_GetPositionObject(), which relies on the position
given us by Q3Group_GetFirstPositionOfType(), should return a valid reference to
our geometry. Again, we check not only for a successful return value but a non-NIL
geometry object. If we succeed in getting our geometry at this point, we will have
incremented the reference count for our TriGrid, which in turn means we'll have to
decrement the ref count later (before our function returns) if we want to keep all
references balanced and stave off memory leaks. (The subject of reference counts was
addressed in my article on QD3D fundamentals in the July 1998 issue of MacTech.)
We still can't work with the geometry's data, however, until we make a call to
Q3TriGrid_GetData(). Until we make this call, the locations of our vertices and other
particulars relating to the TriGrid are known only to QD3D. The "get data" call will
make all the information public again, but really what's happening is that QD3D is
making a copy of internally stored info for us. It's important to understand that we are
not just being given a TQ3TriGridData structure containing a few bytes of information
and a few pointers pointing to arrays in the system heap. Quickdraw 3D is actually
giving us the TQ3TriGridData structure and a fresh copy of all the array data pointed to
by the struct. To make sure this memory later gets freed up, we have to remember to
call Q3TriGrid_EmptyData() prior to exiting our function for any reason.
Once we have access to the TriGrid's data (and such items as the number of rows and
columns in the grid), we can go ahead and loop over all the vertices one by one. For
each one, we call our own routine, GetVertexNormal(), which calculates the vertex
normal (see Listing 2). We then add the vertex normal to the attribute set of the
vertex in question - and store the modified attribute set back into the vertex.
The code for fiddling with vertex attributes should look familiar to you, but in case it
doesn't, here's what's going on. (Again, this is a common programming paradigm in
QD3D, so be sure to understand it. You'll use this type of routine over and over again.)
Attributes are, in general, surface properties of various kinds. They can be colors,
transparency values, surface normals, or any of a zoo of other types. A vertex, in
QD3D, is actually a data structure encompassing both a 3D point (a TQ3Point3D) and a
pointer to an attribute set (of type TQ3AttributeSet). The essential thing to remember
is that an attribute set is a container object that holds attributes. The container may be
empty, or it may contain one kind of attribute, or it may be full of attributes, but the
important thing is, you don't add attributes directly to vertices (or to other objects) -
rather, you add attributes to attribute sets, and then add attribute sets to the objects
they modify. If you look carefully at Listing 1, you'll see that this is indeed what takes
place. We try, first, to access any attribute set that might already be attached to our
vertex. If we come up empty-handed, we call Q3AttributeSet_New() to create a new,
blank attribute set. Once we have the attribute set, we add our vertex normal (a vector
quantity, remember) to it with Q3AttributeSet_Add(). But notice that we still haven't
changed anything's appearance until we call Q3TriGrid_SetVertexAttributeSet(). It's
this call that actually attaches the new attribute set to the vertex.
In case you're wondering, a call to Q3AttributeSet_Add() overwrites any "like" data
that may be contained in an attribute set. If there had been any preexisting
surface-normal data, we would not (technically speaking) be obliged to clear it out
with an AttributeSet_Clear call, although the API does provide for such things.
Note that since attribute sets are objects, obtaining a reference to one (for purposes of
editing it) means you later need to decrement its reference count with a call to
Q3Object_Dispose(). By doing this inside our loop, we avoid memory leaks and keep
the QD3D gods happy.
Listing 1: CalculateVertexNorms()
CalculateVertexNorms
void CalculateVertexNorms( TQ3GroupObject grid ) {

	TQ3TriGridData		data;
	TQ3GeometryObject	tri;
	TQ3GroupPosition	pos;
	TQ3AttributeSet		attribs;
	TQ3Status			s;
	TQ3Vector3D			vertexNormal;
	unsigned long		i, j;

	s = Q3Group_GetFirstPositionOfType( grid,
					kQ3GeometryTypeTriGrid, &pos );
	if (s != kQ3Success || pos == nil)
		DebugStr("\pNo trigrid in group!");

	s = Q3Group_GetPositionObject( grid, pos, &tri );	// get the trigrid
	if (s != kQ3Success || tri == nil)
		DebugStr("\pCan't get trigrid!");

	s = Q3TriGrid_GetData( tri, &data );
	if (s != kQ3Success)
		DebugStr("\pCan't get trigrid data!");

	for (i = 0; i < data.numRows; i++) {			// for all rows...
		// ...and all columns
		for (j = 0; j < data.numColumns; j++, attribs = nil) {

			vertexNormal = GetVertexNormal( tri, i, j,
							data.numRows, data.numColumns );

			// get attribute set for this vertex...