
Drawing with Direct3D, Part 2: Lighting and Textures

In Part 1 of this article series you saw how to get started with Direct3D, Microsoft’s high-performance three-dimensional graphics library. That article explained how to initialize Direct3D, build simple scenes, and display them. It also explained how you can use transformations to modify the data or change the viewing position over time.

The techniques you’ve seen so far let you specify colors for the objects in a scene, but those colors remain static. The system displays exactly the colors you specify; they aren’t produced by any sort of lighting model. In complicated scenes, displaying only specified colors can produce a poor result, because adjacent shapes that have the same colors blend together, making it difficult for viewers to discern what the scene represents. Imagine a cube whose sides all have the same color, displayed on a two-dimensional surface. The sides would blend so that you couldn’t tell where one side ended and the next began.

To help solve such problems, you can improve the realism of Direct3D scenes by using a lighting model and materials, letting Direct3D change the colors of objects depending on their relative orientation to light sources and the viewing position. Lighting models and materials prevent adjacent surfaces from blending together and make objects such as a cube appear more realistic.

You’ll also explore using textures to give objects more interesting appearances than those provided by simple colors. Using these techniques, you can not only make a cube with sides that look different depending on the light source, but you can also make the cube appear to be made out of brick, wood, or other materials.

Understanding Light
Before you can add lighting effects to your three-dimensional scenes, you need to understand a little about the physics of light. Direct3D treats light as if it is broken into three distinct components: ambient, diffuse, and specular.

Ambient light represents the overall brightness of the scene: those areas where light does not fall directly. For example, ambient light lets you see under your desk even though there’s no direct light source. Direct3D treats ambient light as a background level of illumination that applies equally to everything in the scene, no matter where it is, or how it is oriented. This is a bit of a simplification, but it works quite well in practice and lets you see parts of the scene that are not lit directly (see the sidebar “Real World Lighting” for more information).

When light from a light source shines on an object, much of that light is scattered in other directions. This scattered or diffuse light usually provides the greatest effect on an object’s perceived color.

Diffuse light is scattered more or less uniformly in all directions. The amount of light that the object scatters depends on how much light energy hits the object, and that depends on the angle at which the light hits the object’s surface. For example, imagine a white sheet of paper. If a light shines directly on the paper at a 90-degree angle, the paper appears white. If the light hits the paper at a 45-degree angle, the paper appears less white or possibly light gray. If the light barely hits the paper at a slim 10-degree angle, the paper may appear dark gray. You might want to take a few moments to look at some paper in your office and notice how its color seems to change when you tilt it at different angles to the light.
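
As a rough illustration (this sketch is mine, not part of the article’s sample code), the diffuse contribution scales approximately with the cosine of the angle between the incoming light and the surface’s normal, which is why the tilted paper looks dimmer:

   ' Light hitting the paper at 45 degrees to its surface makes a
   ' 45-degree angle with the paper's normal, so the paper scatters
   ' only about 71 percent of the light it would scatter head-on.
   Dim angleFromSurface As Double = 45 * Math.PI / 180
   Dim angleFromNormal As Double = Math.PI / 2 - angleFromSurface
   Dim diffuseScale As Double = Math.Cos(angleFromNormal)   ' roughly 0.71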

Figure 1. Mirror Math: The mirror angle is the same size as the incident angle but on the other side of the surface’s normal.

Specular light is the final component of the Direct3D lighting model. To understand specular light, you need to know a few simple terms. Figure 1 shows the terms graphically and makes them a lot easier to understand.

A surface’s normal is a vector that points directly away from the surface at a particular point (such as the blue arrow in Figure 1). The normal for a floor typically points directly upward, the normal for a ceiling points downward, and the normal for a wall points horizontally away from the wall. More complicated surfaces such as spheres have different normals at every point on their surfaces, all pointing away from the center of the sphere.

When light strikes a surface, the angle that the light makes with the surface’s normal is called the incident angle.

A mirror angle is the angle at which a perfectly mirrored surface would reflect light that hits it at a particular point. The mirror angle is the same as the incident angle but reflected onto the other side of the normal.
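
If you like to see the math, here is a minimal sketch (my own illustration, not code from the sample programs) that computes the mirror direction from a unit surface normal and the direction toward the light, using the standard reflection formula R = 2(N . L)N - L:

   ' n is the unit surface normal; toLight points from the surface
   ' toward the light source. Both vectors are illustrative values.
   Dim n As New Vector3(0.0F, 1.0F, 0.0F)
   Dim toLight As New Vector3(0.5F, 1.0F, 0.5F)
   toLight.Normalize()

   ' Reflect toLight about the normal: R = 2(N . L)N - L.
   Dim nDotL As Single = Vector3.Dot(n, toLight)
   Dim mirror As New Vector3( _
      2 * nDotL * n.X - toLight.X, _
      2 * nDotL * n.Y - toLight.Y, _
      2 * nDotL * n.Z - toLight.Z)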

At one extreme, a perfect mirror reflects all the light that hits it along its mirror angle. At the other extreme, a perfectly non-shiny matte surface reflects no extra light along its mirror angle; it reflects only diffuse light.

Between these two extremes are objects that are somewhat but not perfectly shiny. Surfaces such as polished billiard balls, plastic toys, and shiny apples reflect some extra light more or less along the mirror angle. Because these surfaces are not perfect reflectors, they do not reflect this light exactly along the mirror angle; most of the reflected light lies along that angle and then drops off quickly at angles farther away from it. If you look along the mirror angle, you’ll see a bright spot. If you move slightly to the side, you’ll see less light, and if you move farther from the mirror angle, you’ll see no reflected component at all. This extra reflected light is the specular component.

Rounded surfaces such as billiard balls and apples always have some spots oriented so that your eye lies close to their mirror angle. Those spots produce shiny highlights when you look at the object. To see the effect yourself, look at a shiny apple in a room without any bright lights, so you’re seeing it primarily by ambient light. If you then hold the apple under a very bright light, you should see the highlights (and if you pull out your protractor, you may be able to measure the incident and mirror angles).

These three components make up the bulk of the lighting model used by Direct3D. Ambient light represents the scene’s general level of illumination, and is the same everywhere. Diffuse light represents light striking an object, and depends on the light’s incident angle. Specular light represents partial reflection along the mirror angle and depends on both the angle of incidence and the angle from the point to the viewer’s eye.

However, there are two more pieces to the complete model: determining how light gets into the scene and how materials react to the light. Direct3D handles these issues with lights and materials.

Understanding Lights
The DirectX Light class represents a light source. Lights have properties that determine the type of the light, its color, and the amount of light it produces for use by ambient, diffuse, and specular illumination.

For example, if a scene contains no lights that provide specular light, then none of the objects in the scene can produce specular highlights. If a light provides green diffuse light, then objects in the scene can reflect green light through diffuse reflection. If another light provides red ambient light, then that light contributes to each object’s apparent color.

Direct3D provides three kinds of lights: directional, point, and spot. All three types have ambient, diffuse, and specular components, although they have some other properties that differ.

Directional Lights
A directional light provides light rays that all travel in the same direction. You can think of it as a light that is infinitely far away. For most practical purposes, the sun’s rays are all traveling in the same direction when they reach Earth, so the sun is essentially a directional light.

Directional lights have the simplest mathematics because light at every point in the scene is traveling in the same direction, which simplifies calculating the light’s incident angles. Therefore, directional lights put the smallest load on the Direct3D engine, so it can render directionally lit scenes relatively quickly.

The sample d3dLightedSphere program, which you’ll find in the downloadable code in both Visual Basic and C# versions, creates a directional light. It sets the light’s type to Directional, sets its diffuse and ambient colors, and specifies the light’s direction. In this example, the light’s rays travel in the direction <0, -4, -1>. Finally the code enables the light so Direct3D will use it when rendering the scene:

   m_Device.Lights(0).Type = LightType.Directional
   m_Device.Lights(0).DiffuseColor = New ColorValue(0, 0, 256)
   m_Device.Lights(0).AmbientColor = New ColorValue(10, 10, 20)
   m_Device.Lights(0).Direction = New Vector3(0, -4, -1)
   m_Device.Lights(0).Enabled = True

Point Lights
Point lights represent light rays that originate from a single point in space, much as a bare light bulb produces light. Because the light rays from point lights travel in different directions, this type of light makes calculating incident angles harder, and slows down rendering. For example, suppose your scene contains a huge triangle. With a directional light, every light ray hits the triangle at the same angle, so (ignoring specular reflection) the entire triangle is a single color. In contrast, with a point light Direct3D must calculate a different angle of incidence for every pixel that the light can reach.

The following code shows how d3dLightedSphere creates a point light:

   m_Device.Lights(2).Type = LightType.Point
   m_Device.Lights(2).Diffuse = Color.White
   m_Device.Lights(2).AmbientColor = New ColorValue(0, 1, 0)
   m_Device.Lights(2).Position = New Vector3(2, 2, -2)
   m_Device.Lights(2).Attenuation0 = 0
   m_Device.Lights(2).Attenuation1 = 0.75
   m_Device.Lights(2).Attenuation2 = 0
   m_Device.Lights(2).Range = 100
   m_Device.Lights(2).Specular = Color.White
   m_Device.Lights(2).Enabled = True

The code sets the light’s type to Point, gives it diffuse and ambient color values, and specifies the light’s position in space.

Next, the code sets three Attenuation properties that indicate how the light’s intensity diminishes over distance. These parameters are confusing and complicated; I won’t describe them further here, but you can get more information from MSDN.
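
For the curious, the fixed-function pipeline scales a point light’s intensity by roughly 1 / (Attenuation0 + Attenuation1 * d + Attenuation2 * d * d), where d is the distance from the light to the vertex. This little sketch (mine, not part of the sample program) shows what the settings above mean in practice:

   ' With Attenuation0 = 0, Attenuation1 = 0.75, and Attenuation2 = 0,
   ' the light's intensity falls off as roughly 1 / (0.75 * d).
   Dim d As Single = 4.0F   ' distance from the light to a vertex
   Dim atten As Single = 1.0F / (0.0F + 0.75F * d + 0.0F * d * d)   ' = 1/3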

The Range parameter indicates how far the light penetrates. The value 100 used in this example is big enough so the light can reach every object. Setting it to smaller values would leave some objects in the dark, which could save some processing time for very complicated scenes. The last two lines set the light’s specular color to white, and enable the light (turn it on).

Spot Lights
A spot light sits at a point in space and projects a cone of light in a particular direction. The light is brightest within an inner cone, then fades through an outer cone so it dims toward the edges; outside the outer cone, the light provides no illumination at all. The more complex mathematics makes spot lights much more expensive than directional and point lights, and your computer may not be fast enough to render complex scenes containing spot lights quickly enough for animation.

The following code shows how program d3dLightedSphere creates a spot light:

   m_Device.Lights(3).Type = LightType.Spot
   m_Device.Lights(3).Diffuse = Color.Green
   m_Device.Lights(3).AmbientColor = New ColorValue(0, 1, 0)
   m_Device.Lights(3).Position = New Vector3(0, 0, -2)
   m_Device.Lights(3).XDirection = -m_Device.Lights(3).XPosition
   m_Device.Lights(3).YDirection = -m_Device.Lights(3).YPosition
   m_Device.Lights(3).ZDirection = -m_Device.Lights(3).ZPosition
   m_Device.Lights(3).Attenuation0 = 1
   m_Device.Lights(3).Range = 100
   m_Device.Lights(3).Falloff = 1
   m_Device.Lights(3).InnerConeAngle = Math.PI / 8
   m_Device.Lights(3).OuterConeAngle = Math.PI / 4
   m_Device.Lights(3).Enabled = True

The code sets the light’s type to Spot, gives it diffuse and ambient colors, and sets the light’s position. This example sets the light’s direction to point back toward the origin where the sphere is located. The code then sets the light’s Attenuation and Range parameters.

Next, the code sets the Falloff, InnerConeAngle, and OuterConeAngle properties that determine the shape of the spotlight, and finishes by enabling the light.

Independent Ambient Light
Direct3D can also add ambient light to the scene that is independent of any light sources. This approach fits nicely with the idea that ambient light should apply to the entire scene even when no light shines directly on an object. Even so, it makes sense to give your lights ambient components, too, because then turning a light off also reduces the amount of ambient light in the scene.

The following code shows how the d3dLightedSphere sample program could add independent ambient light (the code is commented out in the sample program):

   m_Device.RenderState.Ambient = Color.FromArgb(255, 32, 32, 32)

Specular Notes
Specular highlights appear only when the viewing position is very close to the mirror angle. If a scene contains relatively large triangles, then it is likely that few (if any) triangles will be situated to produce a specular highlight. That means you are likely to see either no highlights at all, or a few large bright triangles surrounded by darker areas, producing an unnatural appearance.

To produce more realistic still images, you can use more triangles. For example, a sphere built from 5,000 triangles will produce smoother specular highlights than one made of only 400 triangles.

Unfortunately, specular calculations are very time consuming, and adding more triangles to a scene slows calculations even further, so it can be hard to produce realistic highlights in animated scenes. Sometimes it may be best to turn specular reflection off completely to avoid odd-looking highlights and save processing time.

In fact, specular lighting calculation is so expensive that Direct3D turns it off by default. If you want to see specular reflections, you need to turn it on explicitly. The following code shows how the sample program d3dLightedSphere turns on lighting in general and specular lighting in particular:

   m_Device.RenderState.Lighting = True
   m_Device.RenderState.SpecularEnable = True

Materials
The previous sections explained the Direct3D lighting model and the lights that add illumination to a scene. The final piece needed to turn lights and objects into colored pixels on the screen is the idea of a material. A material describes an object’s physical characteristics, as far as the lighting model is concerned.

The material indicates which colors an object’s surface reflects and how. For example, if you shine a pure red light on a pure green ball, the ball appears black because it reflects no red light. If you shine a white light on a green ball, the ball appears green. If you shine a red light on a white ball, the ball appears red.

The material also determines the object’s specular properties: how reflective it is. It determines whether the object produces specular highlights and how big those highlights are.

The following code shows how the d3dLightedSphere program example defines the material it uses to render its sphere. It creates a new Material object and sets its ambient, diffuse, and specular components to white so the object picks up the colors of the lights that shine on it:

   m_SphereMaterial = New Material()
   m_SphereMaterial.Ambient = Color.White
   m_SphereMaterial.Diffuse = Color.White
   m_SphereMaterial.Specular = Color.White
   m_SphereMaterial.SpecularSharpness = 100

The code then sets the material’s SpecularSharpness property to 100. Smaller values produce larger, fuzzier highlights, while bigger SpecularSharpness values produce smaller, sharper highlights. (To produce less intense highlights, you would set the material’s specular color value to something darker, such as DarkGray.)
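
For example, a hypothetical variation on the material above (this code isn’t in the sample program) might produce a broader, dimmer highlight like this:

   ' A duller material: the dark gray specular color dims the highlight,
   ' and the lower sharpness value spreads it over a larger area.
   Dim dullMaterial As New Material()
   dullMaterial.Ambient = Color.White
   dullMaterial.Diffuse = Color.White
   dullMaterial.Specular = Color.DarkGray
   dullMaterial.SpecularSharpness = 10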

To tell Direct3D what material to use, simply set the device’s Material property before you draw any primitives. For example, the following code makes the device use the m_SphereMaterial object:

   m_Device.Material = m_SphereMaterial

The very last material-related issue to consider (I promise it’s the last one) is how Direct3D calculates surface normals. You might think that it simply uses each triangle’s vertices to calculate the triangle’s normal, but it doesn’t. You need to tell it the normals for each vertex explicitly. This lets you adjust the normals to produce special effects, such as triangles with colors that blend smoothly together, but it does mean a little more work.

The previous article in this series stored vertex information in an array of PositionColored type. As its name indicates, the PositionColored type includes information about each vertex’s position and color. To give Direct3D additional information about normals, a program can use the PositionNormalColored type instead. This type includes a vertex’s location, normal vector, and color.

The sample program d3dLightedCube (available in the downloadable code in both VB and C# versions), uses the MakeRectanglePNC subroutine to create two triangles that represent a rectangle. It takes parameters giving the rectangle’s corners and the colors it should have at each corner.

Here’s how the routine calculates the rectangle’s normal and builds the first triangle:

   Dim vec0 As New Vector3(x1 - x0, y1 - y0, z1 - z0)
   Dim vec1 As New Vector3(x2 - x1, y2 - y1, z2 - z1)
   Dim n As Vector3 = Vector3.Cross(vec0, vec1)
   n.Normalize()

   vertices(i) = New _
      CustomVertex.PositionNormalColored( _
      x0, y0, z0, n.X, n.Y, n.Z, c0)
   i += 1

   vertices(i) = New _
      CustomVertex.PositionNormalColored( _
      x1, y1, z1, n.X, n.Y, n.Z, c1)
   i += 1

   vertices(i) = New _
      CustomVertex.PositionNormalColored( _
      x2, y2, z2, n.X, n.Y, n.Z, c2)
   i += 1
Figure 2. Cube Colors: The d3dLightedCube program creates a cube with red, green, and blue sides, and then rotates the cube so you can see the effect of the light sources on the colors.

The code starts by making vectors from point 0 to point 1, and from point 1 to point 2. The cross product of those two vectors (produced by the Vector3.Cross method) gives a vector that is perpendicular to both original vectors. Because the two original vectors lie within the rectangle, the cross product is perpendicular to the rectangle and therefore points along the normal. The code calls this new vector’s Normalize method to lengthen or shorten it so it points in the same direction but has length 1. The result is the rectangle’s normal.

Having found the normal, the code can now build its first triangle. The code simply creates points of the PositionNormalColored type, setting each point’s position, normal vector components, and color.

Now that you’ve seen all the pieces (the lighting model, lights, materials, and normals), you can start making pretty pictures. Figure 2 shows the output of the sample program d3dLightedCube, which creates a scene that uses two directional lights. The cube has red, green, and blue sides. In Figure 2, the blue side is hidden from the lights so it is illuminated only by ambient light. As the cube rotates and the blue side turns toward a light, its blue color grows brighter and brighter because the light hits the face more and more squarely.

The d3dLightedCube program is a nice example to experiment with when learning about lighting models because it is very simple; it has only a few surfaces and only two directional lights. Download the code and try turning off ambient light, changing the material’s diffuse component, and so forth.

Program d3dLightedSphere demonstrates different kinds of light sources and specular reflections. It uses a sphere with a white material that reflects the colors of the lights shining on it. (Note that this example doesn’t build the sphere itself; instead, it uses the Mesh class’s Sphere method to make the sphere. The sphere contains 10,000 triangles so its specular highlight is smooth.)

Figure 3. Sphere Spots: The d3dLightedSphere program displays a sphere lit by directional, point, and spot lights.

Figure 3 shows the program with all of its lights enabled. The scene includes blue and red directional lights above and to the right, respectively; a white point light above and to the right; and a green spot light shining directly on the sphere’s front surface.

Figure 3 shows several interesting effects. If you look closely, you can see that the green spot light is brightest in the center and drops off in intensity toward the edges; in other words, you can see the inner and outer cones of the spot light clearly. The white point light includes specular light and produces a bright white highlight above and to the right of the green area. The blue and red lights lie slightly on the far side of the sphere, so each illuminates somewhat less than half the sphere. You can see a paler purplish color where the blue and red lights overlap.

Figure 4 shows the program with only the blue and red lights turned on. Not only is the white specular highlight gone, but other white light that previously lightened the blue and red areas is also missing. The result seems less three-dimensional without the highlight to provide an extra depth cue.

Figure 4. Sphere Spots: In this configuration, the d3dLightedSphere program displays a sphere lit by directional lights only.
Figure 5. Seeing Surfaces: The d3dLightedSurface program displays a three-dimensional green surface illuminated by two directional lights.

Figure 5 shows the d3dLightedSurface sample program. This program (which is very similar to the d3dColoredSurface program described in the first part of this series) builds a three-dimensional surface and displays it using a green material and two gray directional lights. Different incident angles cause the triangles that make up the surface to appear in different shades of green, giving an effective illusion of depth.

Figure 6. Teapot and Torus: The d3dLightedTeapotTorus program draws a teapot, torus, and sphere illuminated by two directional lights.

Figure 6 shows program d3dLightedTeapotTorus displaying a teapot, a torus, and a sphere. It uses two white directional lights and different materials for the three objects to give them different colors and depth.

Like the d3dLightedSphere example, this program doesn’t build its objects itself, instead relying on the Mesh class’s Sphere, Torus, and Teapot methods, all of which build objects of default sizes positioned at the origin.
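
In case you’re curious how that looks, here is a rough sketch of creating the three meshes with the Mesh class’s shared factory methods. The size and tessellation values shown here are illustrative guesses, not the sample program’s actual settings:

   ' Create the meshes once, during initialization.
   m_TeapotMesh = Mesh.Teapot(m_Device)
   m_TorusMesh = Mesh.Torus(m_Device, 0.5F, 1.5F, 24, 24)   ' inner radius, outer radius, sides, rings
   m_SphereMesh = Mesh.Sphere(m_Device, 1.0F, 36, 36)       ' radius, slices, stacks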

The following code shows how the program draws the objects in its Render subroutine. For each object, the code applies a world transformation that appropriately scales, rotates, and positions it, selects the object’s material, and then calls the mesh’s DrawSubset method to draw it:

   ' Transform and draw the teapot.
   m_Device.Transform.World = Matrix.Multiply( _
       Matrix.RotationY(Math.PI / 4), _
       Matrix.Translation(0, 0, -3))
   m_Device.Material = m_TeapotMaterial
   m_TeapotMesh.DrawSubset(0)

   ' Transform and draw the torus.
   m_Device.Transform.World = Matrix.Identity
   m_Device.Material = m_TorusMaterial
   m_TorusMesh.DrawSubset(0)

   ' Transform and draw the sphere.
   m_Device.Transform.World = Matrix.Translation(0, 1, 2)
   m_Device.Material = m_SphereMaterial
   m_SphereMesh.DrawSubset(0)

Adding Textures
So far you’ve seen two ways to color three-dimensional objects: specifying explicit colors, and using materials together with lights. A third alternative is to specify the colors of an object’s pixels by mapping an image onto the object’s triangles. In Direct3D this is called texture mapping.

Direct3D represents an object’s color properties with the Material class. Similarly, it represents an object’s texture map with the Texture class. You can think of a Texture object as basically being a fancy wrapper for a bitmapped image. The TextureLoader class provides shared methods for loading texture data from various sources, such as files and streams.

The following code declares a Texture object and then loads it from the file named l_wood04.jpg. The m_Device parameter tells the FromFile method which Direct3D device will later use the texture:

   Private m_WoodTexture As Texture = _
       TextureLoader.FromFile(m_Device, "l_wood04.jpg")

The sample program d3dTexturedCube uses the following code to load a texture from a program resource named l_wood04. This is the same image file used in the previous example, but it’s saved in a resource rather than loaded from an external file at run time:

   Private m_WoodTexture As Texture
   ...
   Dim bmp_stream As IO.MemoryStream
   bmp_stream = New IO.MemoryStream()
   My.Resources.l_wood04.Save( _
      bmp_stream, ImageFormat.Jpeg)
   bmp_stream.Seek(0, IO.SeekOrigin.Begin)
   m_WoodTexture = TextureLoader.FromStream( _
      m_Device, bmp_stream)
   bmp_stream.Close()

The code creates a memory stream and makes the image resource save itself into the stream. It then uses the TextureLoader’s FromStream method to load the texture.

Next, you need to tell Direct3D which part of the texture image should go on which part of each of the triangles that you render. You can do this with a new vertex type: PositionTextured. As its name implies, the PositionTextured type allows you to specify a vertex’s position and its location in a texture. The type’s texture properties, which are called Tu and Tv, give the vertex’s position in the texture image. In Direct3D’s texture coordinate system, the point (0, 0) is the image’s upper left corner, the U axis increases to the right, and the V axis increases downward.

Program d3dTexturedCube uses the MakeRectanglePT subroutine to simplify creating textured rectangles. Its parameters are the rectangle’s corners and the Tu and Tv values each corner should have.

The following code shows how MakeRectanglePT creates the rectangle’s first triangle. It simply creates each vertex, passing its position and texture coordinates into the PositionTextured class’s constructor:

   vertices(i) = _
      New CustomVertex.PositionTextured( _
      x0, y0, z0, u0, v0)
   i += 1
   vertices(i) = _
      New CustomVertex.PositionTextured( _
      x1, y1, z1, u1, v1)
   i += 1
   vertices(i) = _
      New CustomVertex.PositionTextured( _
      x2, y2, z2, u2, v2)
   i += 1

The following code shows the call to MakeRectanglePT that d3dTexturedCube makes to build the cube’s top. The result is a horizontal rectangle where -1 <= x <= 1, -1 <= z <= 1, and y = 1:

   MakeRectanglePT(vertices, i, _
      1, 1, -1, 0, 1, _
      -1, 1, -1, 0, 0, _
      -1, 1, 1, 0.75, 0, _
      1, 1, 1, 0.75, 1)
Figure 7. Bricks and Boards: The d3dTexturedCube program draws a cube with five wooden sides and one brick side.

After you load the texture and set up the data, you simply tell the Direct3D device to use it. The following code shows how the d3dTexturedCube program tells Direct3D to use textures when rendering. The code first selects a wood texture and draws every triangle except the last two. It then selects a brick texture and draws the final two triangles. Figure 7 shows the result:

   m_Device.SetTexture(0, m_WoodTexture)
   m_Device.DrawPrimitives( _
       PrimitiveType.TriangleList, _
       0, NUM_TRIANGLES - 2)

   ' Draw two triangles in brick.
   m_Device.SetTexture(0, m_BrickTexture)
   m_Device.DrawPrimitives( _
       PrimitiveType.TriangleList, _
       (NUM_TRIANGLES - 2) * 3, 2)

If you study Figure 7 for a few seconds, you’ll notice that it isn’t using a lighting model. Each of the cube’s sides has the same brightness. If the sides were colored rather than covered by images, you wouldn’t be able to see the edges between the sides.

Illuminating Textures
You could improve the textured rendering and make it more realistic by creating a different image for each side and making some darker than others to simulate a lighting model. If the light sources in your scene never move, that would do the trick, but it would be a lot of work.

Fortunately Direct3D provides a better solution by allowing you to combine textured surfaces with the Direct3D lighting model. To use textures with the lighting model, simply add lights, use a material, and use a texture. Direct3D combines the light values generated by its lighting model with those determined by the texture to give the result.
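
The sketch below suggests how the pieces fit together (it’s an illustration under my own assumptions, not the d3dLightedTexturedCube listing). Vertices that should be both lit and textured need a normal as well as texture coordinates, which the CustomVertex.PositionNormalTextured type provides, and the device needs lighting enabled, a material, and a texture before drawing:

   ' One lit, textured vertex: position, normal, then Tu and Tv.
   Dim v As New CustomVertex.PositionNormalTextured( _
      1.0F, 1.0F, -1.0F, 0.0F, 1.0F, 0.0F, 0.0F, 1.0F)

   ' At render time, combine the lighting model with the texture.
   m_Device.RenderState.Lighting = True
   m_Device.Material = m_CubeMaterial        ' hypothetical white material
   m_Device.SetTexture(0, m_WoodTexture)
   m_Device.DrawPrimitives(PrimitiveType.TriangleList, 0, NUM_TRIANGLES)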

The only catch is that the images defined by textures are usually darker than a pure white surface, so they tend to darken the result. When you use the lighting model alone, you may want to use dimmer lights to produce a subtle, less harsh image; if you combine those soft lights with a texture, the result may be very dark. To avoid that, you may need to use bright white lights, and you may need to experiment to find an acceptable result.

Figure 8 shows the d3dLightedTexturedCube program displaying the same cube shown in Figure 7 but this time with lighting added.

Figure 8. Bricks and Boards: The d3dLightedTexturedCube program draws a cube using textures and a lighting model.

You might think that texturing would be a performance killer, because the mathematics required to map an image onto an arbitrary triangle is moderately complicated and must be repeated a huge number of times to generate even a small image such as the one shown in Figure 8. Fortunately, most modern graphics hardware includes support for just this sort of mapping on the chipset, which makes the calculations extremely fast.

The sample program d3dGasket (see Figure 9) draws a three-dimensional Sierpinski gasket. You can build one yourself by taking a wooden cube and carving square tunnels through its center parallel to the X, Y, and Z axes. Then carve smaller tunnels through each of the small cubes surrounding the tunnels. Continue carving smaller and smaller tunnels until you reach a satisfactory level of detail.

To create Figure 9, d3dGasket uses three levels of recursion to carve three sets of tunnels. To render this scene, the program must cull, shade, and texture 96,000 triangles in real time. Despite the large number of triangles, the program has no trouble drawing the scene quickly enough to provide smooth rotation. At the next level of recursion, however, the program runs into serious trouble: it must draw some 1.92 million triangles, so it takes nearly a second to generate each frame.

Figure 9. Great Gasket: The d3dGasket program draws a Sierpinski gasket with 96,000 triangles.
Figure 10. Cube Farm: The d3dManyBoxes program rotates 120,000 triangles in real time.

The d3dManyBoxes program, shown in Figure 10, draws 10,000 rotating shaded and textured boxes (120,000 triangles) in real time. You can change the constants used by the program to try drawing different numbers of boxes. On my computer, the animation becomes noticeably rougher at around 1 million triangles.

When a program draws such huge scenes, you need to be aware of a new restriction. The Direct3D device’s DrawPrimitives call can draw only a certain number of primitives at one time. You can learn the maximum by checking the device’s DeviceCaps.MaxPrimitiveCount property. To draw more than that number, you must loop through the primitives, making repeated calls to DrawPrimitives until you finish drawing them all.

Both d3dGasket and d3dManyBoxes must draw their primitives this way. The following code shows that each time through the loop, the program draws at most m_MaxPrimitives triangles:

   Dim first_triangle As Integer = 0
   Dim num_triangles As Integer = m_MaxPrimitives
   Do While first_triangle < m_NumTriangles
       If first_triangle + num_triangles > m_NumTriangles Then
           num_triangles = m_NumTriangles - first_triangle
       End If

       m_Device.DrawPrimitives(PrimitiveType.TriangleList, _
           first_triangle * 3, num_triangles)
       first_triangle += num_triangles
   Loop

That’s the theory, anyhow; in practice, I found that the d3dGasket program had trouble drawing the full number of primitives allowed by the device’s DeviceCaps.MaxPrimitiveCount property, so I reduced the number to 10,000, which works better, at least for this program.

Go Forth and Draw
Direct3D’s lighting models and textures let you produce some amazingly realistic scenes at performance levels capable of animating them in real time. Textures and lighting models add extra depth cues that make a scene appear more believable and engaging. If your light sources are stationary and you choose your textures carefully, you may even be able to provide some shadows to further enhance the illusion of reality.

Direct3D supports a few other methods for making scenes appear more realistic. These techniques include:

  • Bump Mapping: lets you perturb the normals on a surface to provide a little extra randomness.
  • Stencil Buffers: let you provide mirrors and shadows.
  • Environment Mapping: you can think of environment mapping as defining a three-dimensional “texture” wrapped around the outside of an object. Using this technique, you can make objects appear reflective, at least to a casual observer.

These techniques are complicated, and as the quest for ever more realistic scenes continues, the techniques become even more complicated and much slower. For example, Direct3D cannot reasonably produce true reflections on curved surfaces, or handle transparent or translucent objects. Ray tracing algorithms can handle those effects, and more, but they are very slow.

Even ray tracing is incapable of rendering effects such as light reflected and transmitted into the area surrounding an object. For that level of realism, you need to move to even more complicated algorithms such as radiosity and photon mapping. (For some remarkable results produced by a ray tracing and photon mapping program written in Visual Basic, take a look at Camil Moujaber’s web page.)

Meanwhile, download the sample programs and try them out. Modify them?or build your own programs and create more complicated scenes. If you come up with one that’s really interesting, email me at [email protected], and I may post it for others to see on the VB Helper web site.
