Interpreting Images with MRDS Services

Yes, you can teach robots to "see." Using the web camera service available with Microsoft Robotics Development Studio, you can program a simulated robot to follow lines autonomously.


Most people do not realize how difficult it can be to process a digital image and extract meaningful data, such as the information required to drive a robot or operate an embedded platform. Humans see the world as a series of concrete images, but a robot using a webcam sees it as an array of red, green, and blue pixel values. To extract meaningful visual data from this array of pixel values, you need special software. That's what the robot example in this article does. To better understand how a robot can process digital images, this article describes how to use Microsoft Robotics Development Studio (MRDS), version 2.0, to create a simulation in which a LEGO NXT Tribot follows a black line. The article code creates a simulated robotics service that processes images from an onboard web camera (webcam). Using the simple technique of computing the center of gravity along the X-axis, you can direct the robot to follow the black line autonomously.
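To make the center-of-gravity idea concrete, the following C# sketch shows one way such a computation could look. It is a minimal illustration rather than the article's actual service code: the method name ComputeLineCenterX, the brightness threshold, and the assumption that the frame arrives as a tightly packed 24-bit BGR byte array are all hypothetical.

    // Minimal sketch (not the article's service code). Given a frame stored as a
    // byte array of blue, green, red triplets (e.g., 320 x 240 pixels), find the
    // average X position of "dark" pixels, which is taken to be the line's center.
    static double? ComputeLineCenterX(byte[] frame, int width, int height)
    {
        const int darkThreshold = 80;   // hypothetical cutoff for "black" pixels
        long sumX = 0;
        long count = 0;

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int i = (y * width + x) * 3;   // 3 bytes per pixel (B, G, R), no row padding
                int brightness = (frame[i] + frame[i + 1] + frame[i + 2]) / 3;
                if (brightness < darkThreshold)   // dark enough to count as "line"
                {
                    sumX += x;
                    count++;
                }
            }
        }

        if (count == 0) return null;      // no dark pixels: the line is not in view

        return (double)sumX / count;      // center of gravity along the X-axis
    }

A steering routine could then compare the returned value against half the image width and turn the robot toward whichever side the line's center of gravity falls on.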

If you are not already familiar with MRDS, you might want to first check out some earlier articles such as "An Introduction to Programming Robots with Microsoft Robotics Studio" and "Delivering Next-Generation SOA Apps with Microsoft Robotics Development".

What You Need
  • Visual Studio 2008
  • Microsoft Robotics Studio 2008 Community Technology Preview (CTP)

Interpreting Digital Images
 
Figure 1. Floor with Line: The figure shows an image of a floor, taken by a webcam mounted on a LEGO NXT Tribot.
Before jumping into the code for the simulated service, here are a few ground rules. This project will process images from a webcam. While that's not the only type of camera you can use for image processing, it is a relatively simple and inexpensive option. A typical webcam returns a 320 x 240 pixel image. That's a relatively narrow view of the world, so you have to start by realizing that the camera view limits how much your robot can see at one time. Figure 1 shows a sample image taken from an actual webcam.

To follow a line on a floor, the camera must necessarily point down toward the floor, giving it a very restricted view of its environment. Additionally, cameras can have different fields of view, or view angles, which work much like the zoom lens on a camera. To get an idea of what this is like for the robot, hold an empty paper towel roll up to your eye, close the other eye, point the roll at the ground, and try walking around without bumping into anything. Another important factor is lighting. The type and amount of lighting available can have a dramatic effect on image processing. For example, the image in Figure 1 was taken from a webcam mounted on top of a LEGO NXT Tribot as it drove over a white surface with a black line. You'll probably notice the bright white blob in the bottom center of the image, which is actually a reflection from the bright office light overhead. When processing an image in which you need to identify a black line on a white surface, light reflections like the one in this image can cause the robot to behave unexpectedly.
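To see why glare matters, consider a simple per-pixel test like the hypothetical one below (the helper name and threshold are illustrative, not taken from the article's code). A bright reflection falling on the line raises those pixels' average channel value above the cutoff, so they are counted as floor rather than line, which skews any center-of-gravity calculation based on the dark pixels.

    // Illustrative only: classify a pixel as "line" (dark) or "floor" (light) by its
    // average channel value. Glare from overhead lights can push pixels that are
    // physically on the black line above the threshold, so they are misclassified.
    static bool IsLinePixel(byte blue, byte green, byte red)
    {
        const int darkThreshold = 80;   // hypothetical fixed cutoff
        int brightness = (blue + green + red) / 3;
        return brightness < darkThreshold;
    }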



 
Figure 2. Lighting Variance: The figure shows two different images of a white floor with a black line from the same webcam. The image on the left was taken in low lighting conditions, and the image on the right in bright lighting.
Alternatively, too little lighting in a room can affect the clarity of the captured image. For example, Figure 2 shows two views from a single webcam, both of a white floor with a black line. The image on the right was taken in a room with adequate lighting; the image on the left was taken in low lighting. Using the center-of-gravity method described in this article, the robot would likely not perform correctly when navigating the room in low lighting. This article does not attempt to cover all the possible problems that can occur when processing digital images; it simply demonstrates how complex digital image processing can be. Humans take for granted the complex processing required for our eyes to capture images and our brains to interpret them; it seems effortless from our perspective. From the robot's perspective, it is anything but easy.

Fortunately, here you'll be working with a simulation in which you can closely control all the environmental variables. That greatly increases your chances of having the robot behave as expected, while still exposing you to some of the issues you might face when controlling a robot using a webcam.


