The first part of this two-part article explains how you can modify an image without changing its basic content. It shows how to brighten or darken a picture, increase its contrast, or remove red-eye. All of those techniques rely only on the value of the pixel at the point you are manipulating, so they are called point processes. (Part 1 also explains useful ways to load, save, compress, and manipulate images quickly, so you may want to look over that article before you read this one.)
This article explains some useful area processes, techniques that use the values of several pixels to give a pixel a new value. This article groups area processes into three broad categories: basic filters, edge detectors, and artistic processes.
In spatial filtering (sometimes called neighborhood processing), the value of a pixel is determined by the pixels in its neighborhood. For example, a simple blurring filter might set a pixel’s new value to the average of its value and the pixels next to it.
When you apply a spatial filter, you must place the new pixel values in a new image instead of modifying the original image. If you tried to save the new values in the original image, a pixel’s new value would affect the values of its neighbors that had not yet been computed. The result might be interesting but it won’t match what you see in this article.
A filter’s kernel is the area or neighborhood used by a spatial filter.
The result you get from a spatial filter depends on the kernel and the operations that you apply to the pixels under it. For some applications these can be quite involved and can produce all sorts of odd results. Some of the artistic filters described later in this article use some unusual kernels and operations but this section focuses on what is probably the most common type of spatial filtering: linear spatial filters.
A linear spatial filter calculates a pixel’s value from a linear combination of the values of its neighbors.
In your program, you can store information about the filter’s kernel in a square array of numbers. This array is sometimes called the mask, although it’s often simply called the kernel because, for this kind of filter, it basically defines the kernel (in the more general sense) and the operation that you will perform.
To apply the kernel to a pixel, you conceptually center the array over the pixel. You then multiply each neighbor’s value by the corresponding number in the array. You add these products and assign the result to the target pixel.
Note that the target pixel is under a position in the array, too, so it can contribute to its own value. You can also remove the target pixel or any other pixel in the neighborhood from the calculation by setting its kernel value to 0.
For example, consider the following kernel and some pixel values, and suppose you want to apply the kernel to the center pixel with value 6.
To find the center pixel’s new value, multiply the kernel’s values by the corresponding pixel values and add them up. In this example, the center pixel’s new value is:
1/16 * 3 + 2/16 * 4 + 1/16 * 5 + 2/16 * 5 + 4/16 * 6 + 2/16 * 7 + 1/16 * 7 + 2/16 * 7 + 1/16 * 8 = 5.8125
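To check the arithmetic, here is the same calculation in a few lines of Python:

```python
# The kernel entries (each over 16) and the pixel values from the example.
kernel = [[1/16, 2/16, 1/16],
          [2/16, 4/16, 2/16],
          [1/16, 2/16, 1/16]]
pixels = [[3, 4, 5],
          [5, 6, 7],
          [7, 7, 8]]

# Multiply each kernel entry by the corresponding pixel and sum the products.
new_value = sum(kernel[r][c] * pixels[r][c]
                for r in range(3) for c in range(3))
print(new_value)  # 5.8125
```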
To make it easier to write down and work with kernels, people often give the entries integer values. Then, after multiplying the kernel's values by the neighborhood's pixel values and adding them all up, you divide by a value called the kernel's weight.
For example, you can write the previous kernel with integer values and a weight of 16 as follows:
This makes the kernel less cluttered so it’s easier to understand what the kernel does.
The code to apply a filter is relatively straightforward. For each pixel in the image, you simply loop through the kernel’s values multiplying them by the neighboring pixels’ values, add them up, and divide by the kernel’s weight. To handle color, you work with the red, green, and blue color components separately. The only real trick is that you need to be careful near the image’s edges so you don’t try to multiply a kernel value by a neighboring pixel that lies off the edge of the image.
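The downloadable example is written in C# and Visual Basic, but the idea translates directly to other languages. Here is a minimal grayscale sketch in Python (the function and variable names are my own, not the example's):

```python
def apply_filter(image, kernel, weight):
    """Apply a linear spatial filter to a grayscale image (list of rows).

    New values go into a separate result image so freshly computed
    pixels never contaminate neighbors that have not been processed yet."""
    height, width = len(image), len(image[0])
    k = len(kernel) // 2  # kernel "radius"
    result = [row[:] for row in image]

    # Skip pixels near the edges so the kernel never hangs off the image.
    for y in range(k, height - k):
        for x in range(k, width - k):
            total = 0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    total += kernel[dy + k][dx + k] * image[y + dy][x + dx]
            result[y][x] = min(255, max(0, round(total / weight)))
    return result
```

For a color image you would run the same loops once each for the red, green, and blue components.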
The AreaProcesses example program, which is available for download in C# and Visual Basic versions and shown in Figure 1, demonstrates all of the techniques described in this article. It uses the Filter class to represent filters. That class contains a two-dimensional array of floats to represent the kernel. It also has a Weight property to represent the kernel’s weight.
Figure 1. Area of Study: The AreaProcesses example program demonstrates 32 different area processing techniques.
The program uses the Bitmap32 class to manipulate images. Its ApplyFilter method applies a filter to an image. The code is a relatively straightforward series of nested loops so it isn’t shown here to save space. Download the example to see the details.
At this point you know enough about kernels and filtering to see some specific kernels and learn what they do.
The previous kernel produces an averaging or blurring filter. Each pixel’s new value includes some amount taken from the pixels around it so their values spread slightly. This kind of filter is called a Gaussian filter because its coefficients are distributed according to a Gaussian function (or normal distribution).
To see how the filter works, suppose you have a white image with a single black pixel in the middle. When you apply the filter to the black pixel, its new value will include some of the white from its neighbors so it becomes grayer. Similarly when you consider one of the neighboring white pixels, some of the new value comes from the black pixel so those pixels become darker. The result is that the single black pixel becomes a fuzzy gray dot.
This kernel weights the pixels closest to the center most heavily so they have the greatest influence on the outcome. The following kernel places an equal weight on all of the pixels in the neighborhood so the center pixel’s new value depends more heavily on its neighbors. That makes this kernel’s result more blurred than the results of the previous kernel.
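As a sketch, the two 3×3 kernels might be written like this (the 5×5 versions used in Figure 2 follow the same pattern):

```python
# Gaussian-style blur: center-weighted, weight = sum of entries = 16.
gaussian_kernel = [[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]]
gaussian_weight = sum(sum(row) for row in gaussian_kernel)

# Averaging (box) blur: every pixel counts equally, weight = 9.
averaging_kernel = [[1, 1, 1],
                    [1, 1, 1],
                    [1, 1, 1]]
averaging_weight = sum(sum(row) for row in averaging_kernel)
```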
Figure 2 shows the results of 5×5 versions of these filters. The picture on the left shows the original image. The center picture shows the result after applying a 5×5 Gaussian filter. The picture on the right shows the result of a 5×5 averaging filter. You can make the results even more blurred by using larger kernels.
Figure 2. Do I Need Glasses? Gaussian and averaging filters blur an image.
The blurring filters tend to allow low frequency (slowly changing) features to pass through into the final result. For example, in Figure 2 the big features such as large areas of purple remain purple. Smaller, faster changes, such as where the table's grain quickly shifts from one shade of brown to another, are blurred together and lost. Because these filters allow low frequency features to survive, they are called low-pass filters.
In contrast, a high-pass filter emphasizes high frequency features. It highlights areas where colors are changing rapidly and deemphasizes large areas of constant color.
To see how a high-pass filter works, consider the following kernel and suppose for now that the kernel’s weight is 1.
What happens if you apply this kernel to a large area of constant color? If every pixel under the kernel has the same value, then the products cancel out when you add them up so the center pixel’s new value is 0 (black).
Now suppose the middle pixel has a value that is very different from those around it. For example, suppose the middle pixel has a large value (white) and the other pixels are close to 0 (black). In that case the middle pixel survives unscathed.
As a result, when you apply the kernel to an image, large areas of uniform color tend to vanish to 0 and areas with rapidly changing colors tend to survive, often brightened.
Depending on the result you want, you could increase the kernel’s weight to reduce the brightening of high-frequency areas. You could also add an offset to every pixel’s new value to move them away from 0 so the result isn’t mostly black. For example, if you add 127 to the pixel values (assuming pixels are measured from 0 to 255), then the black pixels become a neutral gray.
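The article's kernel appears as a figure, but a common 3×3 high-pass kernel behaves as described. A minimal sketch of the idea, including the offset, might look like this:

```python
def high_pass_value(pixels, offset=127):
    """Apply a common 3x3 high-pass kernel to the center of a 3x3 block.
    (Illustrative; the article's figure shows its own kernel.)"""
    kernel = [[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]]  # entries sum to 0, so the weight is 1
    total = sum(kernel[r][c] * pixels[r][c] for r in range(3) for c in range(3))
    # Add the offset and clamp to the valid pixel range.
    return min(255, max(0, total + offset))

# A uniform area cancels out, leaving only the neutral-gray offset...
print(high_pass_value([[50, 50, 50], [50, 50, 50], [50, 50, 50]]))  # 127
# ...while a bright pixel on a dark background saturates to white.
print(high_pass_value([[0, 0, 0], [0, 200, 0], [0, 0, 0]]))  # 255
```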
Figure 3 shows the results of two high-pass filters. The picture on the upper right was produced by a 3×3 kernel with weight 1 and offset 127. The picture on the lower right was produced by a 5×5 kernel with weight 191 and offset 127.
Figure 3. A High Mountain Pass. High-pass filters emphasize high-frequency changes in color.
Both of the results in Figure 3 suppressed the uniformly colored sky and highlighted the busier parts of the picture. The large kernel weight used for the bottom picture makes it produce a more subtle result.
An embossing filter records differences as pixels change in a certain direction. For example, consider the following kernel with a weight of 1 and an offset of 127.
In areas of uniform color, the pixels cancel out and, after adding the offset, you get a value around 127.
Next consider an area where the pixels to the upper left have lower values (are darker) than those to the lower right. In that case the bigger values dominate so the result is positive. After you add the offset, the center pixel will have a bright value.
Finally consider an area where the pixels on the upper left have higher values than those on the lower right. Then the relatively large values in the upper left multiplied by -1 dominate and the result is negative. After you add the offset, the center pixel will be relatively dark.
The net effect of all this is that much of the image is a neutral gray and areas where the color changes from upper left to lower right appear as bright highlights and shadows. The result looks embossed.
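The behavior described above can be sketched with a typical diagonal embossing kernel (the article's exact kernel appears as a figure; this is an illustrative choice whose entries sum to 0):

```python
def emboss_value(pixels, offset=127):
    """Emboss the center of a 3x3 block with a typical diagonal kernel,
    weight 1 and offset 127. (Illustrative kernel, not the article's.)"""
    kernel = [[-1, -1, 0],
              [-1,  0, 1],
              [ 0,  1, 1]]
    total = sum(kernel[r][c] * pixels[r][c] for r in range(3) for c in range(3))
    return min(255, max(0, total + offset))

# Uniform color cancels out to neutral gray...
print(emboss_value([[80, 80, 80], [80, 80, 80], [80, 80, 80]]))  # 127
# ...a dark upper left and bright lower right gives a bright highlight.
print(emboss_value([[10, 10, 10], [10, 10, 200], [10, 200, 200]]))  # 255
```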
Figure 4 shows the result of two embossing filters. The picture on the left was produced by the previous kernel. The one on the right was produced by the following kernel.
Figure 4: Embossed in the Frost. Embossing filters appear to add highlights and shadows to images.
Blurring filters make an image blurred. In contrast (pun intended), sharpening filters make an image crisper and less fuzzy.
Like high-pass filters, sharpening filters emphasize places where colors change quickly. In fact, one way to make a sharpening filter is to use a high-pass filter to find those places and then add the result to the original image. This technique is sometimes called boosting.
The following kernel with weight 16 does essentially that. The kernel is basically a high-pass filter but where a typical high-pass filter would have 12 in the center position, this kernel holds the value 14. That basically adds two extra copies of the original image to the high-pass result.
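The kernel itself appears as a figure in the original article; based on the description, a plausible reconstruction is a Gaussian-shaped negative ring whose entries would sum to 0 with 12 in the center, but which instead holds 14 there:

```python
# Plausible reconstruction of the boost kernel described in the text:
# a high-pass ring with 14 (instead of 12) in the center, weight 16.
boost_kernel = [[-1, -2, -1],
                [-2, 14, -2],
                [-1, -2, -1]]
boost_weight = 16

# With 12 in the center the entries would sum to 0 (a pure high-pass);
# the extra 2 adds two copies of the original pixel to that result.
print(sum(sum(row) for row in boost_kernel))  # 2
```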
Unfortunately the result of this kernel has low contrast so you may want to follow it with a contrast enhancement.
A less obvious approach to image sharpening is to subtract a blurred image from the original image. The blurred version contains mostly low-frequency information so subtracting it from the original image leaves more of the high-frequency information, making the image sharper. Because this method removes a blurred or unsharp version of the image, it is called unsharp masking.
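For a single pixel, unsharp masking boils down to adding back the difference between the original and the blurred value. A sketch (names are my own):

```python
def unsharp_value(original, blurred, amount=1.0):
    """Unsharp masking for one pixel: add back the difference between
    the original value and a blurred copy. `amount` scales the effect."""
    return min(255, max(0, round(original + amount * (original - blurred))))

# A pixel brighter than its blurred surroundings gets brighter still...
print(unsharp_value(180, 150))  # 210
# ...while a pixel that matches the blur (low frequency) is untouched.
print(unsharp_value(100, 100))  # 100
```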
Figure 5 shows an original image in the upper left and four sharpened versions. The picture on the lower left shows the image boosted. The picture on the lower right shows the image boosted, contrast enhanced, and then darkened slightly. The final picture on the upper right shows the image after unsharp masking with a 5×5 Gaussian filter.
Figure 5: Unsharper Than a Serpent’s Tooth. Unsharp masking (upper right) sharpens an image by subtracting a blurred version from the original.
Edge detectors are filters that highlight abrupt changes in color. Embossing filters highlight changes in color so they make good edge detectors. The embossing filters shown earlier highlight changes in color as the image moves from the upper left to lower right or vice versa so they detect diagonal edges moving from upper right to lower left.
You can move the kernel’s coefficients around to detect edges in other directions. For example, the following kernels detect horizontal and vertical edges.
A Prewitt edge detection filter applies both of these filters and adds them together to highlight both vertical and horizontal edges. Diagonal edges are still highlighted, just not as strongly. Figure 6 shows an original image and the result of Prewitt edge detection.
Figure 6: Detection Perfection. Prewitt edge detection highlights edges and is particularly useful for machine vision applications.
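The combination can be sketched like this. Taking the absolute value of each response before adding is a common variation that lets edges of either polarity register:

```python
def prewitt_value(pixels):
    """Prewitt edge strength at the center of a 3x3 block: apply the
    horizontal and vertical kernels and combine their responses."""
    horiz = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]   # horizontal edges
    vert = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]    # vertical edges
    gy = sum(horiz[r][c] * pixels[r][c] for r in range(3) for c in range(3))
    gx = sum(vert[r][c] * pixels[r][c] for r in range(3) for c in range(3))
    # Adding the absolute responses highlights edges in both directions.
    return min(255, abs(gx) + abs(gy))

# A sharp vertical edge (dark left, bright right) registers strongly...
print(prewitt_value([[0, 0, 255], [0, 0, 255], [0, 0, 255]]))  # 255
# ...while a uniform area produces no response at all.
print(prewitt_value([[7, 7, 7], [7, 7, 7], [7, 7, 7]]))  # 0
```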
The area techniques described so far use kernels to modify images in fairly straightforward ways. Some of the results are impressive but the basic approach is simple and methodical.
However, not all filters are so mechanical. They can do just about anything you can imagine to the image’s pixels. Figure 7 shows an original image and the results of 5 filters.
Figure 7: Filter Fun. Filters can do all sorts of interesting things to images.
In order from top-to-bottom and right-to-left, these filters are:
- Flatten – This filter reduces the number of colors in the image. This example maps red, green, and blue color components into 4 values each. This is also sometimes called posterizing the image.
- Solarize – Solarizing filters shift parts of the image in color and brightness. This version maps bright colors to dark ones and vice versa, and inverts their color components. The result is a little weirder than a simple inversion.
- Randomize – This filter moves each pixel a small random distance.
- Minimum rank – This filter sorts the colors under its kernel and then selects the smallest value for the central pixel. This tends to make dark areas larger. Other rank filters might select the largest or median values.
- Edge Beveling – This filter increases the brightness of the pixels near the image’s top and left edges, and decreases the brightness of the pixels near the bottom and right edges.
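Of these, the rank filter is the easiest to sketch in code. For one 3×3 neighborhood:

```python
def minimum_rank_value(pixels):
    """Minimum rank filter: sort the values under the kernel and keep
    the smallest, which tends to make dark areas grow."""
    flat = sorted(v for row in pixels for v in row)
    return flat[0]  # flat[-1] would give a maximum filter,
                    # flat[len(flat) // 2] a median filter

print(minimum_rank_value([[90, 40, 80], [70, 10, 60], [50, 30, 20]]))  # 10
```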
The next three techniques demonstrated by the AreaProcesses program work with images that have transparent pixels. Figure 8 shows the original image and three modified versions that show beveled edges, outer glow, and drop shadows.
Figure 8: Shaping Up. Bevel, outer glow, and drop shadow effects work with images that have transparent pixels.
This technique creates a bevel beside the edges defined by the image’s transparent pixels. This algorithm is surprisingly tricky so I’ll describe the basic idea here and you can check the code for the details. The program first finds the non-transparent pixels near the transparent edges. To do that, it builds a generation array. It sets the generation value to 1 for the transparent pixels and 0 for the non-transparent pixels.
Next the program examines each pixel with generation 1 and sets the generation of its neighboring pixels to 2 if they don’t already have a generation. The program repeats this step once for each pixel that should be in the bevel. For example, if the bevel should be 10 pixels thick, then the program repeats this step 10 times.
As it identifies the edge pixels, the program keeps track of the transparent pixel closest to each edge pixel. The program needs to know that so it can decide what color to give the edge pixel.
The code assigns the pixel a shade value that depends on the sine of the angle from the edge pixel to the nearest transparent pixel and the light’s direction, which in Figure 8 is coming from the upper left. If the edge faces the light, then the shade is bright. If the edge is roughly parallel to the light, the shade isn’t as bright. If the edge faces away from the light, then the shade is dark.
After it calculates shades for the edge pixels, the code runs the values through an averaging filter to blur them and smooth them out. Finally it multiplies each edge pixel’s color components by the shades to produce the final result.
Compared to the edge bevel filter, the outer glow filter is simple. The code grows generations out from the image's non-transparent pixels. It works much as the bevel process does except it grows the generations into the transparent pixels instead of the other way around. The code calculates a shade for each of these pixels, making the values drop off quickly as they leave the non-transparent objects behind.
The code finishes by applying a blurring filter to the shades and then applying them to the glow's color (red in Figure 8).

Drop Shadow
Making drop shadows is also simple compared to making edge bevels. For each non-transparent pixel in the image, the code creates a shadow pixel offset some distance to the right and down in a new image. It blurs the new image and adds it below the original.
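The core of the idea can be sketched by building a shadow mask from the image's alpha channel (names and offsets here are illustrative; blurring and compositing the shadow under the original are omitted):

```python
def drop_shadow(alpha, dx=2, dy=2):
    """Build a shadow mask from an image's alpha channel by shifting
    every non-transparent pixel right and down by (dx, dy)."""
    height, width = len(alpha), len(alpha[0])
    shadow = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if alpha[y][x] > 0 and y + dy < height and x + dx < width:
                shadow[y + dy][x + dx] = 255  # opaque shadow pixel
    return shadow

# A single opaque pixel casts a shadow offset down and to the right.
mask = [[0, 0, 0, 0], [0, 255, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(drop_shadow(mask)[3][3])  # 255
```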
You can use many of the techniques described in this article to build interesting user interfaces. For example, try adding beveled edges to an image to make it look like a button and use an embossed version when the “button” is disabled. Effects such as drop shadow and outer glow add interest and depth to what might otherwise be a flat list of items.
Bevel, embossing, glow, and drop shadows are all effects provided by WPF and now, using the techniques described here, you can add them to your Windows Forms applications, too.
Download the example program and experiment with its 30+ effects. Then use the program as a starting point for your own effects. By combining different effects and making a few modifications, who knows what interesting results you can come up with!