WPF Wonders: Transformations (and Robots!)


Windows Presentation Foundation (WPF) gets a lot of mileage out of being layered on top of DirectX, including fast rendering, multimedia support, audio, video, and more. One of the features in the "more" category is the ability to easily use transformations. WPF provides four kinds of 2D transformations and three kinds of 3D transformations. This article briefly summarizes WPF's transformation types and shows how to use them to build useful interfaces (and robots!).

2D Transformations

WPF provides four classes that implement 2D transformations.

  • TranslateTransform: This transformation moves an object horizontally and vertically. Its X and Y properties tell how far the object should be moved in each direction.
  • ScaleTransform: Use this transformation to stretch or shrink an object horizontally and vertically. Set the ScaleX and ScaleY properties to the amounts you want to scale the object in each direction. Values smaller than 1 make the object smaller, while values greater than 1 make it larger.
  • RotateTransform: This transformation rotates another object. The Angle property determines how far (in degrees) to rotate the object.
  • SkewTransform: This transformation skews an object by changing the orientation of its X and Y axes. The AngleX and AngleY properties determine the angles (in degrees) by which the transformation tilts the Y and X axes, respectively. For example, setting AngleY = 0 and AngleX = -30 gives an italic-looking effect.

The ScaleTransform, RotateTransform, and SkewTransform classes also have CenterX and CenterY properties that determine the point around which the transformation takes place. For example, normally a RotateTransform rotates an object around its center, but you can make it rotate around the upper left corner (the point 0, 0) by setting both CenterX and CenterY to 0.
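For example, a minimal sketch (the control, angle, and caption are illustrative) that pins a rotation to a label's upper left corner looks like this:

```xml
<!-- Illustrative only: rotate around (0, 0), the upper left corner -->
<Label Content="Tilted">
    <Label.RenderTransform>
        <RotateTransform Angle="30" CenterX="0" CenterY="0"/>
    </Label.RenderTransform>
</Label>
```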

Using Transformations

Figure 1. Many Modifications: WPF’s transformations make it easy to change a control’s appearance.

The sample program TransformedLabels, shown in Figure 1 and available in both C# and Visual Basic versions in the downloadable code, demonstrates these transformations. It shows labels that have been stretched, squashed, rotated, and skewed.

The following XAML snippet shows how TransformedLabels draws its two stretched labels. It stretches the red label by a factor of 2 horizontally and 6 vertically, so it appears relatively tall and thin. It stretches the top label by a factor of 5 horizontally and 2 vertically, so it appears short and wide.
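The snippet itself was lost in formatting, but a sketch of the idea (the captions and exact markup are assumptions; the scale factors come from the text) looks like this:

```xml
<!-- Red label: 2x wide, 6x tall, so it looks tall and thin -->
<Label Content="Tall" Background="Red">
    <Label.LayoutTransform>
        <ScaleTransform ScaleX="2" ScaleY="6"/>
    </Label.LayoutTransform>
</Label>
<!-- Top label: 5x wide, 2x tall, so it looks short and wide -->
<Label Content="Wide">
    <Label.LayoutTransform>
        <ScaleTransform ScaleX="5" ScaleY="2"/>
    </Label.LayoutTransform>
</Label>
```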

The program draws the yellow skewed label in the bottom right of Figure 1 by setting the SkewTransform object’s AngleX property to 30. In other words, it rotates the control’s Y-axis 30 degrees counterclockwise from vertical around the control’s upper left corner. AngleY is 0, so the X-axis is horizontal.
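In markup, that skew can be sketched as follows (the caption and background are assumptions; the angles come from the text):

```xml
<Label Content="Skewed" Background="Yellow">
    <Label.LayoutTransform>
        <!-- AngleX tilts the Y-axis 30 degrees; AngleY = 0 leaves the X-axis horizontal -->
        <SkewTransform AngleX="30" AngleY="0"/>
    </Label.LayoutTransform>
</Label>
```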

The white label in Figure 1 is rotated 315 degrees clockwise. Here’s the code.
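The code was lost in formatting; a sketch matching the description (caption and colors are assumptions) would be:

```xml
<Label Content="Rotated" Foreground="White">
    <Label.LayoutTransform>
        <!-- positive angles rotate clockwise, so 315 = 45 degrees counterclockwise -->
        <RotateTransform Angle="315"/>
    </Label.LayoutTransform>
</Label>
```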

This example doesn’t show a translated label because it would simply change the object’s location (yawn). In practice, translations are often most useful when you want to combine them with other transformations. For example, you might want to translate an object to center it at the origin, scale and rotate it, and then translate it to its final destination.
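That combine-and-translate pattern can be sketched with a TransformGroup, which applies its children in order (all numeric values are illustrative):

```xml
<TransformGroup>
    <!-- move the object's center to the origin -->
    <TranslateTransform X="-50" Y="-25"/>
    <!-- scale and rotate around the origin -->
    <ScaleTransform ScaleX="2" ScaleY="2"/>
    <RotateTransform Angle="45"/>
    <!-- move the result to its final destination -->
    <TranslateTransform X="200" Y="100"/>
</TransformGroup>
```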

The TransformedLabels program transforms labels only, but you can apply the same transformation concepts to all sorts of things, including other controls, controls that contain other controls, and even brushes. For example, you could apply a transformation to a Grid control containing lots of other controls to make the entire group move, rotate, stretch, or skew.

Transforming Brushes and Transform Properties

The TransformedBrush program example shown in Figure 2 fills a rectangle with a radial gradient brush that has been rotated 30 degrees.

Figure 2. Better Brush: You can produce interesting effects by using WPF transformations on brushes.

Here’s how TransformedBrush draws its rectangle. Notice that the RotateTransform object rotates the brush around the point (150, 50), which is the center of the rectangle.
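The listing was lost in formatting; the following sketch matches the description (the rectangle's size and the gradient colors are assumptions, while the rotation and its center point come from the text):

```xml
<Rectangle Width="300" Height="100">
    <Rectangle.Fill>
        <RadialGradientBrush>
            <RadialGradientBrush.Transform>
                <!-- rotate the brush 30 degrees around the rectangle's center -->
                <RotateTransform Angle="30" CenterX="150" CenterY="50"/>
            </RadialGradientBrush.Transform>
            <GradientStop Color="White" Offset="0"/>
            <GradientStop Color="Blue" Offset="1"/>
        </RadialGradientBrush>
    </Rectangle.Fill>
</Rectangle>
```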


Figure 3. Transformed Table: This program uses RotateTransform objects to display vertical labels.

Drawing interesting graphics is always fun, but the sample program VisualStudioFeatures (see Figure 3) shows a more practical use for transformations. It uses RotateTransform objects to display labels rotated sideways. The same chart with non-rotated labels would take up much more space.

Transform Properties

Objects that support transformations provide two properties that can hold a transformation object: LayoutTransform and RenderTransform.

Figure 4. Layout vs. Render: If you use a LayoutTransform, WPF transforms objects before performing layout.

When you use the LayoutTransform property, WPF transforms the object before applying its layout algorithms. For example, if you stretch an object vertically, it becomes taller, so if it is contained in a StackPanel, the StackPanel creates extra room for the object’s new size.

When you use the RenderTransform property, WPF performs its normal layout chores and applies the transformation only when it is drawing the object. That means controls such as StackPanel won’t take the transformed object’s size into account when they perform layout. That may be a little more efficient but can lead to strange results.

The sample program Transformations (see Figure 4) shows the difference between the LayoutTransform and RenderTransform properties. Both sides of the program display three buttons inside a StackPanel. The buttons on the left rotate using RenderTransforms while those on the right use LayoutTransforms so their StackPanel allows room for them.
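The two sides boil down to the following sketch (button captions and the angle are made up; only the choice of property differs):

```xml
<!-- Left: RenderTransform -- the StackPanel lays out the original, unrotated size -->
<StackPanel>
    <Button Content="One"/>
    <Button Content="Two">
        <Button.RenderTransform>
            <RotateTransform Angle="45"/>
        </Button.RenderTransform>
    </Button>
    <Button Content="Three"/>
</StackPanel>
<!-- Right: LayoutTransform -- the StackPanel makes room for the rotated button -->
<StackPanel>
    <Button Content="One"/>
    <Button Content="Two">
        <Button.LayoutTransform>
            <RotateTransform Angle="45"/>
        </Button.LayoutTransform>
    </Button>
    <Button Content="Three"/>
</StackPanel>
```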

3D Transformations

3D transformations are similar to their two-dimensional counterparts, although as you’d expect, they apply to three-dimensional objects instead of two-dimensional objects, so there are a few differences. This section explains how 3D transformations work, but to really get to know them, you need to build some 3D applications. See my article WPF Wonders: 3D Drawing to get started.

WPF provides three basic types of 3D transformations that perform translation, scaling, and rotation:

  • The TranslateTransform3D class lets you translate an object in 3D space. Its OffsetX, OffsetY, and OffsetZ properties determine how far the object is offset in each dimension. This is pretty simple and similar to the 2D version.
  • The ScaleTransform3D class lets you scale an object in 3D space. Its ScaleX, ScaleY, and ScaleZ properties determine how much the object is scaled in the X, Y, and Z directions. Again no real surprises here.
  • A RotateTransform3D object lets you specify a rotation in three dimensions. In two dimensions, you rotate an object around a point. In three dimensions, you rotate an object around a line or axis. The object’s Rotation property specifies the rotation. This value should be a Rotation3D object. Rotation3D is an abstract class so you must use one of its derived classes: AxisAngleRotation3D or QuaternionRotation3D. I’m going to ignore the second of these and describe only AxisAngleRotation3D because it’s reasonably intuitive.

An AxisAngleRotation3D object represents a rotation around an axis by a specified angle. Its Angle property determines the angle of rotation in degrees and its Axis property determines the vector around which to rotate.

For example, the following XAML snippet rotates an object 30 degrees around the X axis.
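The snippet was lost in formatting; markup matching the description looks like this:

```xml
<RotateTransform3D>
    <RotateTransform3D.Rotation>
        <!-- rotate 30 degrees around the X axis -->
        <AxisAngleRotation3D Axis="1,0,0" Angle="30"/>
    </RotateTransform3D.Rotation>
</RotateTransform3D>
```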


(Yes, it seems somewhat roundabout to create a RotateTransform3D object and set its Rotation property to an AxisAngleRotation3D object but that’s the way it works.)

The axis is specified as a vector so its coordinates are relative. The value 1,0,0 points in the direction of the X-axis but doesn’t indicate where the axis starts. In other words, it could specify the X-axis passing through the origin, or a line parallel to the X-axis that passes through the points (0, 1, 0) and (1, 1, 0).

By default, the axis passes through the origin, but you can change that by setting the RotateTransform3D object’s CenterX, CenterY, and CenterZ properties to specify a point through which the axis of rotation should pass.

Using 3D Transformations

So now that you understand what the 3D transformations do, what good are they? It turns out they’re quite useful for building scenes (and making robots).

Figure 5. Remote Controlled Robot: Example program RobotArm lets you use sliders to control the robot arm.

For example, building a bunch of rectangular blocks scattered around various places is straightforward but rather tedious. If you mess up the points' coordinates or the order in which the points are used to make triangles, you can end up with a mess, or worse, with some sides oriented improperly, so parts of the block disappear when viewed from certain angles.

In contrast, it’s relatively easy to build a single cube that’s one unit wide centered at the origin. With that cube in hand, you can copy and paste it and then apply transformations to move the copy into position to make up your blocks.

Transformations are also quite handy for building robots. The sample program RobotArm shown in Figure 5 displays a robot arm that you can control by using the program’s sliders. The sliders inside the drawing area let you rotate the entire model and zoom in and out.

Each piece of the robot is built from a single cube centered at the origin that has been scaled, translated, and rotated into position.

This arm has a red base that can rotate around the vertical Y-axis. The base ends in a shoulder that can rotate so the green upper arm moves up and down. The elbow also rotates to move the blue forearm. The red hand at the end of the forearm can rotate side-to-side and twist around the axis of the forearm. Finally, the hand’s two green fingers can move together and apart. (In a more elaborate application, the fingers could grab things so the arm could pick them up.)

The tricky part of building this kind of robot is figuring out what transformations to use to put the various pieces in the right positions. What series of transformations do you need to use to get the forearm or wrist positioned correctly?

Although this seems like a daunting problem, it’s actually fairly simple if you think about it properly.

The first step is to think of the robot as being in some initial position. In this case, the arm’s initial position is pointing straight up when all of its joints are rotated by 0 degrees.

Now think about what happens when you rotate the base around the Y-axis. Rotating the base makes the entire arm from that point on rotate. The upper arm, forearm, hand, and fingers all rotate because they are all connected. If you rotate the base, everything attached to it also rotates.

Next, consider what happens if you rotate the shoulder downward. The upper arm moves downward and so do the forearm, hand, and fingers that are attached to it. Again, all of the pieces after the shoulder move because they are all attached.

To keep these related pieces together, you can create a Model3DGroup object containing the related pieces. Then, when you transform the group, all the pieces transform as well.

So here’s how RobotArm works. It starts with a Model3DGroup that holds everything, including the lights and the ground. It doesn’t have a transformation because the ground doesn’t move.

The top-level group contains a group for the base. That group holds the red base itself, built by stretching a cube vertically. The group’s transformation rotates the whole group, which so far contains just the base.

The base group also contains another group that represents the upper arm. This group holds the upper arm, again built by stretching a cube. This time the upper arm’s transformation also translates it so it sits on top of the base. The group uses a transformation to represent the shoulder’s rotation. It moves the upper arm up or down.

The upper arm group contains another group that represents the lower arm. This group contains the lower arm itself and another group representing the hand.

By now you should begin to see the pattern. Each group contains a piece of the arm plus another group that holds everything farther down the chain. Transforming any group moves the entire group, so the whole thing moves as a unit.

I recommend that you download the RobotArm program and see the code for details, but the code contains two little tricks that are worth mentioning.

First, the program lets you control the arm’s joints by moving sliders. Each slider is named and the groups’ transformations refer to those names to get their values.

For example, the following code shows the base group’s transformation. It uses a single RotateTransform3D object to rotate the base and everything attached to it around the Y-axis. The transform’s Angle property is bound to the sliBase slider’s Value property.
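The listing was lost in formatting; a sketch of the base group's transformation (the surrounding element structure is an assumption, while the Y-axis rotation and the binding to sliBase come from the text) looks like this:

```xml
<Model3DGroup.Transform>
    <RotateTransform3D>
        <RotateTransform3D.Rotation>
            <!-- rotate the base (and everything attached) around the Y-axis -->
            <AxisAngleRotation3D Axis="0,1,0"
                Angle="{Binding ElementName=sliBase, Path=Value}"/>
        </RotateTransform3D.Rotation>
    </RotateTransform3D>
</Model3DGroup.Transform>
```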


The other sliders control transformations in similar ways.

The second trick I wanted to mention deals with the fingers. The following code shows how the program builds its left finger.
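The listing was lost in formatting; the following sketch shows the four transformations in the order the text describes (all numeric values and axis choices are assumptions):

```xml
<Model3DGroup.Transform>
    <Transform3DGroup>
        <!-- 1: move the unit cube so it sits on the XY plane -->
        <TranslateTransform3D OffsetZ="0.5"/>
        <!-- 2: stretch the cube into the finger's shape -->
        <ScaleTransform3D ScaleX="0.2" ScaleY="1.5" ScaleZ="0.2"/>
        <!-- 3: move the finger to its place on top of the hand -->
        <TranslateTransform3D OffsetY="6"/>
        <!-- 4: slide the finger by the sliHand slider's value -->
        <TranslateTransform3D
            OffsetZ="{Binding ElementName=sliHand, Path=Value}"/>
    </Transform3DGroup>
</Model3DGroup.Transform>
```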


The first transformation takes a cube centered at the origin and moves it so it sits on the XY plane. The second transformation stretches the cube to have the finger’s shape. The third transformation moves the finger to its position on top of the hand after all the other transformations higher up the arm’s hierarchy are set to their initial values.

The final transformation moves the finger according to the value provided by the sliHand slider. All this follows the previous techniques fairly closely, using transformations to build the basic finger and then binding a final transformation to a slider to move the finger.

The tricky part comes when you try to build the right finger. I wanted the same slider to move both the left and right fingers closer together or farther apart. For that to work, when the left finger moves left, the right finger must move right. If the sliHand slider has value V and the left finger is translated distance V in the Z direction, the right finger needs to move distance -V.

Unfortunately, WPF doesn’t allow you to use any calculations in XAML code, so you cannot multiply the value V by -1. You could handle this in code but it would be nice to have an all-XAML solution.

But there is a workaround. Here’s the trick: Rather than multiplying the value by -1, you can use a scale transformation to scale it by a factor of -1. That essentially flips the sign of the offset and solves the problem.

The following code shows the right finger’s transformations. They mirror the left finger’s transformations, but scaled by a factor of -1 in the Z direction. Now when the left finger moves left, the right finger moves right.
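The listing was lost in formatting; a sketch of that mirrored chain (all numeric values except the -1 scale are assumptions) looks like this:

```xml
<Model3DGroup.Transform>
    <Transform3DGroup>
        <TranslateTransform3D OffsetZ="0.5"/>
        <ScaleTransform3D ScaleX="0.2" ScaleY="1.5" ScaleZ="0.2"/>
        <TranslateTransform3D OffsetY="6"/>
        <TranslateTransform3D
            OffsetZ="{Binding ElementName=sliHand, Path=Value}"/>
        <!-- flip in Z so the right finger mirrors the left finger's motion -->
        <ScaleTransform3D ScaleZ="-1"/>
    </Transform3DGroup>
</Model3DGroup.Transform>
```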


A More Complete Robot Example

Here's a similar example, applied to a more complete figure. The Stickman sample program displays a stick figure robot (see Figure 6). The sliders let you turn the robot's head and move its left and right shoulders, elbows, hips, and knees.

Figure 6. Robot on the Run: The sample program Stickman lets you use sliders to control a stick figure robot.

The basic idea behind Stickman's transformations is the same as in the RobotArm program. The top-level group contains the robot's base (which in this case is its head), spine, shoulder group, and hip group.

The shoulder group contains the shoulders and groups for the left and right upper arms. Each upper arm group contains the actual upper arm plus the lower arm.

Similarly the hip group contains the hips and groups for the left and right upper legs. The upper leg groups contain the upper and lower legs.

Unlike the RobotArm program, this example builds its robot out of spheres and cylinders. All together, those shapes contain thousands of triangles, so building the whole thing in XAML code is impractical; instead, the program uses code-behind to build its shapes. That introduces a new technique: binding a transformation to a slider’s value at run time.

The following code shows how the program builds the robot's right forearm.

// Right forearm.
GeometryModel3D rforearm_model = null;
Material rforearm_material =
    new DiffuseMaterial(medium_blue_brush);
MeshGeometry3D rforearm_mesh = null;
MakeCylinder(TheModel, ref rforearm_model, ref rforearm_mesh,
    rforearm_material,
    0, -6, 3, 0, -9, 3,
    0, 0, 1, 1, 0.5, 0.5, 20);

The code creates GeometryModel3D, Material, and MeshGeometry3D objects for the forearm, and then calls the MakeCylinder method to create a cylinder representing the forearm.

The following code shows how the program bends the robot's right elbow.

// Right forearm transformation.
AxisAngleRotation3D relbow_rotation =
    new AxisAngleRotation3D();
relbow_rotation.Axis = new Vector3D(0, 0, 1);
Binding relbow_binding = new Binding("Value");
relbow_binding.Source = sliRElbow;
BindingOperations.SetBinding(relbow_rotation,
    AxisAngleRotation3D.AngleProperty, relbow_binding);
RotateTransform3D relbow_rot =
    new RotateTransform3D(relbow_rotation, 0, -6, 3);
Transform3DGroup rforearm_transform = new Transform3DGroup();
rforearm_transform.Children.Add(relbow_rot);
rforearm_transform.Children.Add(rshoulder_rot);
rforearm_model.Transform = rforearm_transform;

The code first instantiates an AxisAngleRotation3D object to represent rotation around the Z-axis. (The robot's initial position is standing in the YZ plane facing in the positive X direction, so the shoulders and elbows rotate around axes parallel to the Z-axis.)

The code then creates a Binding object that binds to a Value property, sets the Binding's Source to the appropriate slider, and calls SetBinding to bind the AxisAngleRotation3D object's Angle property to the slider's value.

It then uses the AxisAngleRotation3D object to make a new RotateTransform3D object that rotates around the point (0, -6, 3), the elbow's location in the robot's initial position.

Finally the code makes a Transform3DGroup, adds the elbow and shoulder transformations to it, and applies the result to the forearm?s model.

The Stickman program demonstrates the basic techniques you need to build robots, but you could go a long way toward making it more realistic. For example, you could add textures to improve the realism of the surfaces; or extend the model, adding facial features, hands, feet, fingers, and toes. However, if you did that, the program would become hopelessly cluttered with sliders.

The Stickman example allows only fairly simple transformations. For example, the upper arms bend only forward and backward relative to the shoulders, but a real shoulder is a ball-and-socket joint that allows much more freedom of movement. Similarly, to improve the realism, the hips should allow a greater range of motion, and the lower arms and legs should allow some twist. This robot will never do the Macarena, but it does show how to use simple transformations.

Download the examples and experiment with them. Add some extra degrees of freedom at the joints and see if you can build a more flexible robot that can dance and do yoga. Once you've mastered using transformations to provide joints, you can move on to more complex models such as spiders, snakes, and octopuses.

