Interpreting Images with MRDS Services

Yes, you can teach robots to "see." Using the web camera service available with Microsoft Robotics Development Studio, you can program a simulated robot to follow lines autonomously.

Executing the Code
After adding all the code for your service, compile it by clicking Build → Build Solution from the Visual Studio menu. If the service compiles successfully, you can run it by pressing F5 or clicking Debug → Start Debugging. After the command window has loaded, you should see the Visual Simulation Editor appear. The simulation scene should include the LEGO NXT positioned in front of an NXT Pad (see Figure 6). The pad is a simulated version of the actual test pad that comes with the LEGO NXT.

Figure 6. Starting the Simulation: The figure shows the starting position of the line-following simulation as seen from the main camera view.
Figure 7. Robot's Perspective: Here's the simulation scene from the robot's perspective.

If you click the Camera menu and select Show Robocam, you can also see the simulation scene from the robot's perspective (see Figure 7). By selecting "Show Robocam in a separate window," you can see both the main camera view and the Robocam view simultaneously.

Examining the Robot's Behavior
After the simulation scene loads in the Visual Simulation Editor, you will see the robot move forward, turn right, and then follow the line until it gets to the other side of the pad. Surprisingly, it will then continue moving towards the end of the pad and eventually drive off the pad, rather than following the oval black line back to the starting point. Can you guess why?

To explain why this happens, look at Figure 7 again. Notice that the far (right) side of the pad has several colored bars that run along the edge. Recall that the ProcessFrameHandler method first strips the image of all color and then compares the resulting grayscale pixels against a threshold value. After the conversion, the pixels from those colored bars pass the threshold test just like the pixels of the oval line, so the method factors in the pixels located in the top right-hand corner as well as those in the line. Including these additional pixels, although they're really just outliers, elevates the center of gravity (COG) value and thus skews the robot's turning behavior.
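To see the effect concretely, here is a minimal sketch of the threshold-plus-COG idea in Python. This is not the actual MRDS ProcessFrameHandler; the frame data, frame size, and threshold value are all assumptions chosen for the demo.

```python
# Illustrative sketch of the grayscale-threshold / center-of-gravity technique
# the article describes. All values here are assumptions for the demo, not
# the actual MRDS service code.

def center_of_gravity(frame, threshold=128):
    """Return the mean column index of all pixels darker than `threshold`,
    or None if no pixel qualifies."""
    dark_columns = [
        x
        for row in frame
        for x, value in enumerate(row)
        if value < threshold
    ]
    if not dark_columns:
        return None
    return sum(dark_columns) / len(dark_columns)

# A toy 4x8 grayscale frame: 0 = black, 255 = white. The line occupies
# columns 2-3. Adding "outlier" dark pixels in the top-right corner
# (columns 6-7) mimics the colored bars at the edge of the NXT pad.
line_only = [
    [255, 255, 0, 0, 255, 255, 255, 255],
    [255, 255, 0, 0, 255, 255, 255, 255],
    [255, 255, 0, 0, 255, 255, 255, 255],
    [255, 255, 0, 0, 255, 255, 255, 255],
]
with_outliers = [row[:] for row in line_only]
with_outliers[0][6] = with_outliers[0][7] = 0  # simulated colored bars

print(center_of_gravity(line_only))      # 2.5 -> steer straight along the line
print(center_of_gravity(with_outliers))  # 3.3 -> robot steers too far right
```

Just two stray pixels pull the COG from 2.5 to 3.3, which is exactly how the colored bars drag the simulated robot toward the edge of the pad.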

This behavior was left in intentionally to demonstrate a point. To prevent it in a real-world situation, you could take one of two approaches. The first is to rigorously control the environment so that no outliers can negatively influence the robot's behavior. For example, you could edit the NxtPad image used in the simulation and remove the colored bars at the edge of the pad. While this might sound reasonable, it is not a good idea; it misrepresents the real NxtPad and thus negates the point of running a simulation before working with an actual robot.

Extending the Functionality
A better alternative is to tweak the code to ignore the colored-bar outliers so that the robot continues to follow the oval line. How do you do that? I am going to leave that question for you to discover, but I'll give you a hint: there are actually several ways to solve the problem. Consider changing some of the variables in the program, such as the angle of the camera, the field of view, or the position of the robot. You might also consider changing the threshold value used to compute the center of gravity. Finally, you might want to consider integrating a neural network algorithm into the ProcessFrameHandler routine. Have fun.
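As one example of the kind of tweak the hint points at, the sketch below restricts the COG computation to a region of interest so that pixels near the image edges, where the pad's colored bars appear, are simply ignored. This is an illustrative approach of my own, not the article's solution; the frame data and the margin width are assumptions.

```python
# One possible fix (an illustrative sketch, not the article's solution):
# ignore a margin of columns at each edge of the frame before computing
# the center of gravity, so edge outliers cannot skew the result.

def cog_with_roi(frame, threshold=128, margin=2):
    """COG of dark pixels, ignoring `margin` columns at each image edge."""
    width = len(frame[0])
    dark_columns = [
        x
        for row in frame
        for x, value in enumerate(row)
        if margin <= x < width - margin and value < threshold
    ]
    if not dark_columns:
        return None
    return sum(dark_columns) / len(dark_columns)

# Toy frame: the line sits in columns 2-3; the top row also has dark
# "colored bar" pixels at the right edge (columns 6-7).
frame = [
    [255, 255, 0, 0, 255, 255, 0, 0],
    [255, 255, 0, 0, 255, 255, 255, 255],
]
print(cog_with_roi(frame))            # 2.5 -- edge pixels no longer skew the COG
print(cog_with_roi(frame, margin=0))  # > 2.5 -- without the ROI, they do
```

The same idea could be applied by narrowing the simulated camera's field of view or tilting it downward, which crops the outliers out of the frame before the pixels are ever processed.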

Sara Morgan Rea is a 2007 Microsoft MVP for Office Communications Server. Her first book, Building Intelligent .NET Applications, was published in 2005. In addition to co-authoring several Microsoft Training Kits, she recently published Programming Microsoft Robotics Studio. She currently works as a robotic software engineer at CoroWare.com.