WPF Wonders: 3D Drawing

When many developers think of WPF (if they think of it at all), they think of user interfaces and controls, such as panels that fly on and off the screen, icons that swirl around to show relationships among pieces of data, and gratuitously spinning buttons that make funny noises when clicked.

But that’s not all WPF brings to the table. In addition to giving flat interfaces a new lease on life, WPF also lets you move easily into the third dimension. With a minimum of extra effort, it lets you draw three-dimensional objects made of different materials, shaded with colors or textures (such as wood grain or bricks), and illuminated by different kinds of lights to produce a high-performance, realistic, three-dimensional experience.

Author’s Note: “Realistic” is relative. You can make scenes that are plenty realistic enough for good business presentations and even nice games, but you’re not going to fool anyone into thinking they can reach through the monitor.

This article explains how to get started with three-dimensional drawing in WPF. It shows you how to build scene data, define lights, and use cameras to view a scene. It also shows how to use WPF’s data binding and animation capabilities to make a scene move.

Direct3D under the Hood

WPF uses DirectX as its rendering engine, so it enjoys many of DirectX’s benefits, including its high-performance 3D graphics subsystem, Direct3D. You can set up scenes, define lights, cameras, and so forth in XAML. WPF uses your XAML code to build the data structures that Direct3D needs, and then Direct3D does all of the heavy lifting, performing all the complex mathematical calculations to produce the final rendered result.

Direct3D was built to take advantage of your graphics hardware, so, depending on what you have installed, it can really fly. Typically, Direct3D can display thousands of objects in real time.

Recent versions of the Windows operating system come with DirectX built in, so you don’t even need to install it. For example, Vista ships with DirectX 10, and rumor has it that Windows 7 will have DirectX 11 on board.

So if Direct3D does all of the work, why do you need WPF? Couldn’t you use Direct3D directly?

The answer is: sure, you can use Direct3D directly. I’ve even written some DevX articles that explain how to get started. (See Part 1, Part 2, and Part 3.) In fact, WPF imposes some restrictions and overhead, so you can get better flexibility and performance by using Direct3D directly from code.

But the catch is that coding to Direct3D directly is more work. When a Direct3D program starts, it needs to do a fair amount of rather confusing setup to figure out what kind of graphics hardware the system has, which graphical operations that hardware supports, what kind of rendering model it should use to optimize performance, what data types to use, and so forth. WPF does all that setup for you. Of course, WPF has its own confusing hierarchy of objects that you need to build, so you’re not completely off the hook; but on the whole it’s simpler.

More importantly, as new versions of DirectX appear (Direct3D 11 should pop out in the last quarter of 2009), WPF can handle any new details transparently. Some previous versions of Direct3D required changes to initialization code, which meant you had to modify existing programs so they would work with the new libraries. Now, WPF should handle any new initialization requirements, so existing 3D WPF programs should still run.

Author’s Note: For more information on DirectX or to download DirectX SDKs, go to the DirectX home page msdn.microsoft.com/directx.

To get started with WPF 3D, you need the scene data, a camera, lights, and materials.

Cameras

Until someone makes an inexpensive holographic monitor, three-dimensional graphics must be displayed on a two-dimensional screen. At some point the program must rotate, scale, and project the three-dimensional data into a two-dimensional version that you can actually draw.

A camera tells WPF (and thereby Direct3D) how to perform that conversion. Behind the scenes, a camera defines a complicated mathematical transformation from three dimensions to two.

Figure 1. Orientation Situation: In WPF, the X axis points right, the Y axis points up, and the Z axis points out of the screen.

Fortunately WPF’s camera classes provide a relatively intuitive way to specify that transformation. Simply imagine that you’re Steven Spielberg holding a camera pointed at the scene. You can move the camera to different positions and point it at different parts of the scene to either film the guy with the big sword, or Indiana Jones pulling out his gun and shooting him. You can also tilt the camera to get different angles on the scene and use different lenses to zoom in or out.

In a WPF camera object, these values are specified by the properties Position, LookDirection, and UpDirection.

Position gives the camera’s location. Note that in WPF, Direct3D, and many other 3D drawing systems the X axis points right, the Y axis points up, and the Z axis points out of the screen toward you, as shown in Figure 1.

Author’s Note: All the figures for this article were generated by WPF programs that are available for download in the download area in both C# and Visual Basic versions.

LookDirection controls the direction in which the camera is pointed. Note that this is a direction relative to the camera’s position, not a point in space. If the LookDirection coordinates are the negative of the Position coordinates, then the camera is pointed back at the origin. For example, if the camera has Position (10, 20, 30) and the LookDirection is <-10, -20, -30>, then the camera is looking back at the origin. (Syntax note: I’ll use parentheses for points such as Position and angle brackets for directions such as LookDirection.)

The UpDirection orients the final result on the screen. It determines which direction goes at the top of the screen. For example, the value <0, 1, 0> puts the Y axis on top.

Figure 2. Useful Directions: LookDirection determines where the camera is pointed and UpDirection determines how it is tilted.

As another example, suppose Position is (0, 0, 10), LookDirection is <0, 0, -10>, and UpDirection is <1, 1, 0>. That means the camera is on the Z axis looking towards the origin, and tilted to the right (as if you’re filming a bad guy’s hideout in the old Batman TV series).

The camera’s FieldOfView property works much like a camera lens’s zoom setting. WPF expands whatever the camera sees to fill the display area. That means the result is zoomed in when FieldOfView is small (a small view is expanded to fill the area) and zoomed out when FieldOfView is large (a big view is reduced to fill the area).

The following code shows how a WPF program can define a simple perspective camera that looks at the origin from along the positive Z axis.
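Here’s a minimal version (the exact distance and FieldOfView values are illustrative):

<Viewport3D.Camera>
    <PerspectiveCamera
        Position="0,0,8"
        LookDirection="0,0,-8"
        UpDirection="0,1,0"
        FieldOfView="45" />
</Viewport3D.Camera>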

Lights

Even if you’ve defined 3D objects and a camera, you still won’t see anything, because the lights are turned off. You need to add lights to a scene to make anything visible. WPF provides four kinds of lights: ambient, directional, point, and spot.

  • Ambient light hits every surface in the scene equally. This is the kind of light that lets you see under your desk (assuming it’s daylight and the curtains are open) even though you probably don’t have a light under there.
  • Directional light hits every surface in a scene as if it is coming from a specific direction. For example, a directional light might point down along the Y axis or it might point left along the X axis.
  • A point light sends light radiating out from a particular spot, as if you had a tiny light bulb there.
  • A spot light sends light out in a cone pointed in a particular direction. Surfaces in the center of the cone receive more light than those near the edges, and surfaces that are completely outside the cone get no light at all.
Figure 4. Bright Lights: Different lights result in different effects.

The direction at which light strikes a surface is important because it helps determine the surface’s color. When light falls squarely on a surface, that surface receives a lot of light. If the light hits the surface at an oblique angle, the surface receives less light. Note that a point light source will cast light that makes different angles with different surfaces depending on their orientations to the source.

The first cube in Figure 5 uses ambient light only, so all the cube’s sides have the same color, and they blend together. The following code shows how this scene creates its light.
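In XAML, that’s a single AmbientLight element inside the scene’s Model3DGroup. Here’s a sketch (the exact color is illustrative):

<AmbientLight Color="Gray" />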

The second cube in Figure 5 uses ambient light and one directional light. The directional light strikes the three visible faces at slightly different angles, so they appear as different colors. Unfortunately, the light doesn’t strike the cube’s hidden faces, so those faces all have the same color. You can download the code and try it yourself. If you use the program’s scroll bars to peek behind and below the cube, you’ll see that those faces blend together. The following code shows how this scene creates its lights.
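A sketch of that pair of lights (the color and Direction values are illustrative):

<AmbientLight Color="Gray" />
<DirectionalLight Color="Gray" Direction="-1,-2,-3" />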

The final cube uses the same lights as the second cube but adds an additional directional light that shines on two of the cube’s hidden faces. The new light doesn’t shine on the cube’s bottom, so that face is the darkest. If you rotate the scene, you’ll see that every face has a slightly different color than its neighbors. The following code shows how this scene creates its lights.
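A sketch of that arrangement (again with illustrative values; note that the second DirectionalLight has no downward component, so the bottom face stays dark):

<AmbientLight Color="Gray" />
<DirectionalLight Color="Gray" Direction="-1,-2,-3" />
<DirectionalLight Color="Gray" Direction="1,0,2" />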

Note that lighting is cumulative: several gray lights produce as much light as fewer light gray or white lights.

Materials

I said earlier that a surface’s color depends on the angle at which light strikes it. The color also depends on the material out of which it is made. WPF provides three kinds of materials: diffuse, specular, and emissive.

  • Diffuse materials absorb light and then radiate some of the light’s energy in various wavelengths, depending on the material’s color. Basically this is what ordinary objects do. If you shine a white light on a green tennis ball, the ball radiates green light.
  • Specular materials are shiny. They reflect light brightly near the mirror angle their surface makes with the light source. (If the object were perfectly reflective, then the mirror angle is where you would see the reflection of the light source.) The result is an extra bright spot. For example, if you shine a white light on a polished red apple, you’ll see a bright white highlight near the mirror angle.
  • Emissive materials glow. Note that an emissive material in WPF doesn’t act as a light source for other objects. It makes its own object glow but does not shine on other objects. Giving an object an emissive material makes it brighter. Normally, you can achieve a similar effect by using brighter lights, but using a material lets you make one object brighter without affecting others as brighter lights would.
Normally, diffuse materials contribute the most to an object’s color, and often you can get away with using only diffuse materials. To add highlights or adjust brightness, however, you can combine more than one type of material in the same object.

Figure 6. Marvelous Materials: The diffuse, specular, and emissive materials are combined to create the bright green ball with a highlight on the right.

The first three spheres in Figure 6 are drawn with diffuse, specular, and emissive materials, while the rightmost sphere combines all three materials to produce a bright green ball with a highlight.

There are two main ways to use a material in XAML code. First, you can define a material right within the 3D object that uses it. The following XAML code defines a GeometryModel3D object that creates a rectangle. The object’s Material attribute element defines the yellow diffuse material that the square uses. (Don’t worry about the rest of the code right now. I’ll get to it later.)
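(A sketch; the mesh values here are the same square used later in this article.)

<GeometryModel3D>
    <GeometryModel3D.Material>
        <DiffuseMaterial Brush="Yellow" />
    </GeometryModel3D.Material>
    <GeometryModel3D.Geometry>
        <MeshGeometry3D
            Positions="-1,0,-1 -1,0,1 1,0,1 1,0,-1"
            TriangleIndices="0,1,2 0,2,3" />
    </GeometryModel3D.Geometry>
</GeometryModel3D>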


The second approach is to define the material in a Resources section and later refer to it as a static resource. For example, the following code fragment defines a diffuse material named matRed in the Window’s Resources section. Later a GeometryModel3D object defines an object (a rectangle) that uses this material as a StaticResource. (Don’t worry about the missing code for now.)
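(A sketch; the ellipses mark unrelated markup.)

<Window.Resources>
    <DiffuseMaterial x:Key="matRed" Brush="Red" />
    ...
</Window.Resources>
...
<GeometryModel3D Material="{StaticResource matRed}">
    ...
</GeometryModel3D>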


This approach is useful if you want to make many objects that use the same material. For example, if you want to make a cube consisting of six sides that all use the same material, you can define the material as a resource once and refer to it six times. Later, if you decide to change the cube from red to blue, you need to change only the material’s definition.

The spheres in Figure 6 are actually created by code sitting behind the XAML file. Each sphere contains 3,420 triangles so, while you could build them all by hand in the XAML file, it’s a lot easier to generate them from code using a couple of for loops.

The following code shows the Window Loaded event handler used by program Materials.

// Build the model.
private void Window_Loaded(object sender, RoutedEventArgs e)
{
    MakeSingleMeshSphere(Sphere00,
        new DiffuseMaterial(Brushes.Green), 1, 20, 30);
    MakeSingleMeshSphere(Sphere01,
        new SpecularMaterial(Brushes.Green, 50), 1, 20, 30);
    MakeSingleMeshSphere(Sphere02,
        new EmissiveMaterial(Brushes.DarkGreen), 1, 20, 30);

    MaterialGroup combined_material = new MaterialGroup();
    combined_material.Children.Add(
        new DiffuseMaterial(Brushes.Green));
    combined_material.Children.Add(
        new SpecularMaterial(Brushes.Green, 50));
    combined_material.Children.Add(
        new EmissiveMaterial(Brushes.DarkGreen));
    MakeSingleMeshSphere(Sphere03, combined_material, 1, 20, 30);
}

Subroutine MakeSingleMeshSphere builds a MeshGeometry3D object holding a sphere. It works much as you’d expect, using for loops to create the points and triangles that make up the sphere.

MakeSingleMeshSphere's second parameter specifies the material that the sphere should use. You can see how the preceding code passes in new DiffuseMaterial, SpecularMaterial, and EmissiveMaterial objects when it builds the first three spheres.

The final sphere uses a material that combines the three other kinds of materials. The code creates a new MaterialGroup object and adds the other kinds of materials to its Children collection. The code then passes the MaterialGroup object to MakeSingleMeshSphere.
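MakeSingleMeshSphere itself might look something like the following sketch (the real program’s details, such as how it treats the poles, may differ; here num_phi and num_theta count the latitude and longitude divisions):

// Requires System.Windows.Media.Media3D.
private void MakeSingleMeshSphere(GeometryModel3D model,
    Material material, double radius, int num_phi, int num_theta)
{
    MeshGeometry3D mesh = new MeshGeometry3D();

    // Create a (num_phi + 1) x (num_theta + 1) grid of points
    // over the sphere's surface.
    for (int p = 0; p <= num_phi; p++)
    {
        double phi = Math.PI * p / num_phi;     // Top (0) to bottom (PI).
        for (int t = 0; t <= num_theta; t++)
        {
            double theta = 2 * Math.PI * t / num_theta;
            mesh.Positions.Add(new Point3D(
                radius * Math.Sin(phi) * Math.Cos(theta),
                radius * Math.Cos(phi),
                radius * Math.Sin(phi) * Math.Sin(theta)));
        }
    }

    // Join neighboring grid points into pairs of triangles,
    // wound so their normals point outward.
    int stride = num_theta + 1;
    for (int p = 0; p < num_phi; p++)
    {
        for (int t = 0; t < num_theta; t++)
        {
            int i1 = p * stride + t;    // This row of points.
            int i2 = i1 + stride;       // The row below it.
            mesh.TriangleIndices.Add(i1);
            mesh.TriangleIndices.Add(i2 + 1);
            mesh.TriangleIndices.Add(i2);

            mesh.TriangleIndices.Add(i1);
            mesh.TriangleIndices.Add(i1 + 1);
            mesh.TriangleIndices.Add(i2 + 1);
        }
    }

    model.Geometry = mesh;
    model.Material = material;
}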

Marvelous Meshes

Now that you’ve mastered (or at least glimpsed) cameras, lights, and materials, you’re ready to actually build something.

One of WPF’s 3D workhorses is GeometryModel3D. The members that class exposes represent the geometric data that makes up three-dimensional objects. A GeometryModel3D object’s Geometry property stores the actual geometric data. A MeshGeometry3D object can serve nicely for the Geometry property.

The MeshGeometry3D object has two key properties that determine exactly where the object sits.

The Positions property is a list of three-dimensional points giving the vertices of the triangles that make up the object. For example, the values -1,0,-1 -1,0,1 1,0,1 1,0,-1 specify four points in the Y = 0 plane with X = ±1 and Z = ±1.

The TriangleIndices property is a list of indexes into the Positions list telling which triples of points make up triangles. For example, the values 0,1,2 0,2,3 mean use points 0, 1, and 2 to make up one triangle and use points 0, 2, and 3 to make up another.

The order of the indices is important because Direct3D uses it to determine whether a triangle is visible from the camera. To build an object properly, the points that make up a triangle must be outwardly oriented according to the right-hand rule.

In the right-hand rule, imagine placing your wrist at the first point and pointing your fingers toward the second point. Now curl your fingertips toward the third point. If you can’t do that without curling your fingers backwards, flip your hand over so it’s upside down. Now your thumb points in the triangle’s outward direction. A vector that points perpendicularly away from the triangle in this direction is called a normal.

For example, consider the first triangle described in the previous paragraphs with points (-1, 0, -1), (-1, 0, 1), and (1, 0, 1). Place your wrist at (-1, 0, -1), point your fingers toward (-1, 0, 1) and curl your fingertips toward (1, 0, 1). (Drawing a picture often helps.) If you did it right, your thumb points up.
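If you’d rather let the computer do the finger-curling, the cross product of two of the triangle’s edge vectors gives the same answer. Here’s a quick check in code (a sketch using WPF’s Vector3D type):

// Requires System.Windows.Media.Media3D.
Point3D p0 = new Point3D(-1, 0, -1);
Point3D p1 = new Point3D(-1, 0, 1);
Point3D p2 = new Point3D(1, 0, 1);

// Subtracting Point3Ds gives Vector3Ds; their cross product
// follows the right-hand rule.
Vector3D normal = Vector3D.CrossProduct(p1 - p0, p2 - p0);
// normal is <0, 4, 0>: it points up, just as the thumb test predicts.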

Before it draws a triangle, Direct3D checks its orientation. If a triangle’s normal points toward the camera, then the triangle is visible and Direct3D draws it. If the normal points away from the camera, then the triangle is on the back side of an object and isn’t visible, so Direct3D doesn’t draw it.

For example, if the triangles make up a cube, then each should be oriented so its normal points out away from the cube and not into it. Then, as you rotate the cube, the faces that are visible to the camera have normals pointing more or less toward the camera. Faces on the far side of the cube have normals pointing away from the camera, so Direct3D doesn’t need to draw them.

The following code defines an almost complete three-dimensional scene.
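(A sketch; the light values are illustrative.)

<Viewport3D>
    <ModelVisual3D>
        <ModelVisual3D.Content>
            <Model3DGroup>
                <AmbientLight Color="Gray" />
                <DirectionalLight Color="Gray" Direction="-1,-2,-3" />
                <GeometryModel3D>
                    <GeometryModel3D.Material>
                        <DiffuseMaterial Brush="Blue" />
                    </GeometryModel3D.Material>
                    <GeometryModel3D.Geometry>
                        <MeshGeometry3D
                            Positions="-1,0,-1 -1,0,1 1,0,1 1,0,-1"
                            TriangleIndices="0,1,2 0,2,3" />
                    </GeometryModel3D.Geometry>
                </GeometryModel3D>
            </Model3DGroup>
        </ModelVisual3D.Content>
    </ModelVisual3D>
</Viewport3D>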

Figure 7. Square in the Air: The sample program Square uses a camera, lights, a material, and a mesh to draw a floating square.

A Viewport3D object holds a ModelVisual3D object. That in turn contains a Model3DGroup, which holds lights and a GeometryModel3D. In this case, the GeometryModel3D’s Geometry property defines a square. The GeometryModel3D’s Material property gives the square a blue diffuse material.

The only piece missing from the preceding code is a definition of the Viewport3D object’s camera. I’m not including it here because this example uses a transformation bound to the program’s scroll bars, so it’s kind of messy.

When Direct3D draws a mesh, it considers all of the triangles that share a given point. It calculates the triangles’ normals and uses their average to determine the color at the shared point. This causes adjacent triangles to blend smoothly together and makes curved surfaces look better.

For example, the triangles that make up the spheres shown in Figure 6 share points so that Direct3D draws them smoothly.

However, for a shape such as a cube, you don’t want adjacent sides to blend smoothly together. If they did, you wouldn’t be able to tell where one face ended and the next began. The result would look a bit like the left cube in Figure 5.

Author’s Note: I’ve left code to draw this sort of cube commented out in the Cube example program so you can experiment if you like.

One solution is to use duplicate points as program Cube does. Another is to use a separate mesh for each of the cube’s faces. The first solution is a bit more efficient, but for small examples such as this one (only 24 points and 12 triangles), the difference isn’t noticeable.

The next article continues this exploration with a discussion of textures and transformations, and a dip into more realistic 3D rendering.

Editor’s Note: The downloadable code for this article contains the sample code for both the WPF 3D articles in this series, not just this article.
