OpenGL For Mac Users - Part 2
Volume Number: 15
Issue Number: 1
Column Tag: Power Graphics
by Ed Angel, University of New Mexico
Advanced capabilities: architectural features and
exploiting hardware
In Part 1, we developed the basics of OpenGL and argued that the OpenGL API provides
an efficient and easy-to-use interface for developing three-dimensional graphics
applications. However, if an API is to be used for developing serious applications, it
must be able to exploit modern graphics hardware. In this article, we shall examine a
few of the advanced capabilities of OpenGL.
We shall be concerned with three areas. First, we shall examine how to introduce
realistic shading by defining material properties for our objects and adding light
sources to the scene. Then we shall consider the mixing of geometric and digital
techniques afforded by texture mapping. We shall demonstrate these capabilities
through the color cube example that we developed in our first article. We will then
survey some of the advanced features of OpenGL, concentrating on three areas: writing
client-server programs for networked applications, the use of OpenGL buffers, and the
ability to tune performance to available hardware.
Figure 1 is a more detailed view of the pipeline model that we introduced in Part 1.
Geometric objects such as polygons are defined by vertices that travel down the
geometric pipeline while discrete entities such as bits and picture elements (pixels)
travel down a parallel pipeline. The two pipelines converge during rasterization (or
scan conversion). Consider what happens to a polygon during rasterization. First, the
rasterizer must compute the interior points of the polygon from the vertices. The
visibility of each point must be determined using the z or depth buffer that we
discussed in Part 1. If a point is visible, then a color must be determined for it. In the
simple model that we used in Part 1, a color either was assigned to an entire polygon
or was interpolated across the polygon using the colors at the vertices. Here we shall
consider two other possibilities that can be used either alone or together. First, we can
assign colors based on light sources and material properties that we assign to the
polygon. Second, we can use the pixels from the discrete pipeline to determine or alter
the color, a process called texture mapping. Once a color has been determined, we can
place the point in the frame buffer, place it in one of the other buffers, or use the
other buffers and tables to modify this color.
Figure 1. Pipeline Model.
In Part 1, we developed a sequence of programs that displayed a cube in various ways.
These programs demonstrated the structure of most OpenGL programs. We divided our
programs into three parts. The main function sets up the OpenGL interface with the
operating system and defines the callback functions for interaction. It will not change
in our examples here. The myinit function defines user parameters. The display
callback typically contains the graphical objects. Our examples here will modify these
two functions.
Lights and Materials
In simple graphics applications, we assign colors to lines and polygons that are used to
color or shade the entire object. In the real world, objects do not appear in constant
colors. Rather, colors change gradually over surfaces due to the interplay between the
light illuminating the surface and the absorption and scattering properties of the
surface. In addition, if the material is shiny such as a metallic surface, the location of
the viewer will affect what shade she sees. Although physically-based models of these
phenomena can be complex, there are simple approximate models that work well in
most graphical applications.
Figure 2. The Phong Shading Model.
OpenGL uses the Phong shading model, which is based on the four vectors shown in
Figure 2. Light is assumed to arrive from either a point source or a source located
infinitely far from the surface. At a point on the surface the vector L is the direction
from the point to the source. The orientation of the surface is determined by the
normal vector N. Finally, the model uses the direction of a perfect reflection, R, and
the angle between R and the vector to the viewer, V. The Phong model contains diffuse,
specular, and ambient terms. Diffuse light is scattered equally in all directions.
Specular light reflects in a range of angles close to the angle of a perfect reflection,
while ambient light models the contribution from a variety of sources and reflections
too complex to calculate individually. The Phong model can be computed at any point
where we have the required vectors and the local absorption coefficients. For polygons,
OpenGL applies the model at the vertices and computes vertex colors. To color (or
shade) a vertex, we need the normal at the vertex, a set of material properties, and the
light sources that illuminate that vertex. With this information, OpenGL can compute a
color at that vertex. For a flat polygon, we can simply assign a normal to the
first vertex and let OpenGL use the computed vertex color for the entire face, a
technique called flat shading. If we want the polygon to appear curved, we can assign
different normals to each vertex and then OpenGL will interpolate the computed vertex
colors across the polygon. This latter method is called smooth or interpolative shading.
For objects composed of flat polygons, flat shading is more appropriate.
Let's again use the cube with the vertex numbering in Figure 3. We use the function
quad to describe the faces in terms of the vertices
Listing 1: quad.c
GLfloat vertices[8][3] = {{-1.0,-1.0,-1.0}, {1.0,-1.0,-1.0},
   {-1.0,1.0,-1.0}, {1.0,1.0,-1.0}, {-1.0,-1.0,1.0},
   {1.0,-1.0,1.0}, {-1.0,1.0,1.0}, {1.0,1.0,1.0}};
void quad(int a, int b, int c, int d)
{
   glBegin(GL_POLYGON);
   glVertex3fv(vertices[a]);
   glVertex3fv(vertices[b]);
   glVertex3fv(vertices[c]);
   glVertex3fv(vertices[d]);
   glEnd();
}
Figure 3. Cube Vertex Labeling.
To flat shade our cube, we make use of the six normal vectors, each of which points
outward from one of the faces. Here is the modified cube function
Listing 2: Revised cube.c with normals for shading
GLfloat face_normals[6][3] = {{-1.0,0.0,0.0}, {0.0,-1.0,0.0},
   {0.0,0.0,-1.0}, {1.0,0.0,0.0}, {0.0,1.0,0.0}, {0.0,0.0,1.0}};
void cube()
{
glNormal3fv(face_normals[2]);
quad(0, 2, 3, 1);
glNormal3fv(face_normals[4]);
quad(2, 6, 7, 3);
glNormal3fv(face_normals[0]);
quad(0, 4, 6, 2);
glNormal3fv(face_normals[3]);
quad(1, 3, 7, 5);
glNormal3fv(face_normals[5]);
quad(4, 5, 7, 6);
glNormal3fv(face_normals[1]);
quad(0, 1, 5, 4);
}
Now that we have specified the orientation of each face, we must describe the light
source(s) and the material properties of our polygons. We must also enable lighting
and the individual light sources. Suppose that we require just one light source. We can
both describe it and enable it within myinit. OpenGL allows each light source to have
separate red, green and blue components and each light source consists of independent
ambient, diffuse and specular sources. Each of these sources is configured in a similar
manner. For our example, we will assume our cube consists of purely diffuse
surfaces, so we need only worry about the diffuse components of the light source. Here
is a myinit for a white light and a red surface:
Listing 3: Revised myinit.c with lights and materials
void myinit()
{
GLfloat mat_diffuse[]={1.0, 0.0, 0.0, 1.0};
GLfloat light_diffuse[]={1.0, 1.0, 1.0, 1.0};
GLfloat light0_pos[4] = { 0.5, 1.5, 2.25, 0.0 };
glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
glLightfv(GL_LIGHT0, GL_POSITION, light0_pos);
/* define material properties for front face of all polygons */
glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
glEnable(GL_LIGHTING); /* enable lighting */
glEnable(GL_LIGHT0); /* enable light 0 */
glEnable(GL_DEPTH_TEST); /* Enable hidden-surface-removal */
glClearColor(1.0, 1.0, 1.0, 1.0);
}
Both the light source and the material have RGBA components. The light source has a
position in four-dimensional homogeneous coordinates. If the last component is one,
then the source is a point source located at the position given by the first three
components. If the fourth component is zero, the source is a distant parallel source and
the first three components give its direction. This location is subject to the same
transformations as are vertices for geometric objects. Figure 4 shows the resulting
image.
Figure 4. Red Cube with Diffuse Reflections.
Texture Mapping
While the capabilities of graphics systems are measured in the millions of shaded
polygons per second that can be rendered, the detail needed in animations can require
much higher rates. As an alternative, we can "paint" the detail on a smaller number of
polygons, much like a detailed label is wrapped around a featureless cylindrical soup
can. Thus, the complex surface details that we see are contained in two-dimensional
images, rather than in a three-dimensional collection of polygons. This technique is
called texture mapping and has proven to be a powerful way of creating realistic
images in applications ranging from games to movies to scientific visualization. It is so
important that the required texture memory and mapping hardware are a significant
part of graphics hardware boards.
OpenGL supports texture mapping through a separate pixel pipeline that processes the
required maps. Texture images (arrays of texture elements or texels) can be
generated either from a program or read in from a file. Although OpenGL supports one
through four-dimensional texture mapping, to understand the basics of texture
mapping we shall consider only two-dimensional maps to three-dimensional polygons
as in Figure 5.
Figure 5. Texture Mapping a Pattern to a Surface.
We can regard the texture image as continuous with two-dimensional coordinates s and
t. Normally, these coordinates range over (0,1) with the origin at the bottom-left
corner of the image. If we wish to map a texture image to a three-dimensional polygon,
then the rasterizer must match a point on the polygon with both a point in the frame
buffer and a point on the texture map. The first map is defined by the various
transformations that we discussed in Part 1. We determine the second map by
assigning texture coordinates to vertices and allowing OpenGL to interpolate
intermediate values during rasterization. We assign texture coordinates via the
function glTexCoord, which sets the current texture coordinate as part of the graphics
state.
Consider the example of a quadrilateral. If we want to map the entire texture to this
polygon, we can assign the four corners of the texture to the vertices
Listing 4: Assigning texture coordinates
glBegin(GL_POLYGON);
glTexCoord2f(0.0, 0.0);
glVertex3fv(a);
glTexCoord2f(1.0, 0.0);
glVertex3fv(b);
glTexCoord2f(1.0, 1.0);
glVertex3fv(c);
glTexCoord2f(0.0, 1.0);
glVertex3fv(d);
glEnd();
Figure 6 shows a checkerboard texture mapped to our cube. If we assign the texture
coordinates over a smaller range, we will map only part of the texture to the polygon
and if we change the order of the texture coordinates we can rotate the texture map
relative to the polygon. For polygons with more vertices, the application program
must decide on the appropriate mapping between vertices and texture coordinates,
which may not be easy for complex three-dimensional objects. Although OpenGL will
interpolate the given texture map, the results can appear odd if the texture coordinates
are not assigned carefully. The task of mapping a single texture to an object composed
of multiple polygons in a seamless manner can be very difficult, not unlike the
real-world difficulty of wallpapering curved surfaces with patterned rolls of paper.
Like other OpenGL features, texture mapping first must be enabled
(glEnable(GL_TEXTURE_2D) for two-dimensional textures). Although texture mapping is a
conceptually simple idea, we
must also specify a set of parameters that control the mapping process. The major
practical problems with texture mapping arise because the texture map is really a
discrete array of pixels that often come from images. How these images are stored can
be hardware and application dependent. Usually, we must specify explicitly how the
texture image is stored (bytes/pixel, byte ordering, memory alignment, color
components). Next we must specify how the mapping in Figure 5 is to be carried out.
The basic problem is that we want to color a point on the screen, but this point, when
mapped back to texture coordinates, normally does not map to an s and t corresponding
to the center of a texel. One simple technique is to have OpenGL use the closest texel.
However, this strategy can lead to a lot of jaggedness (aliasing) in the resulting image.
A slower alternative is to have OpenGL average a group of the closest texels to obtain a
smoother result. These options are specified through the function glTexParameter.
Another issue is what to do if the value of s or t is outside the interval (0,1). Again
using glTexParameter, we can either clamp the values at 0 and 1 or use the range
(0,1) periodically. The most difficult issue is one of scaling. A texel, when projected
onto the screen, can be either much larger than a pixel or much smaller. If the texel is
much smaller, then many texels may contribute to a pixel but will be averaged to a
single value. This calculation can a very time consuming and results in the color of
only a single pixel. OpenGL supports a technique called mipmapping that allows a
program to start with a single texture array and form a set of smaller texture arrays
that are stored. When texture mapping takes place, the appropriate array (the one
that best matches texel size to pixel size) is used. The following code sets up a
minimal set of options for a texture map and defines a checkerboard texture.
Listing 5: Minimal texture map setup in myinit.c
void myinit()
{
GLubyte image[64][64][3];
int i, j, c;
for(i=0;i<64;i++) for(j=0;j<64;j++)