Interpreting Images with MRDS Services

Yes, you can teach robots to "see." Using the web camera service available with Microsoft Robotics Development Studio, you can program a simulated robot to follow lines autonomously.


Creating a Simulated Service
To get started, you need Visual Studio 2008 and Microsoft Robotics Development Studio (MRDS) 2.0 installed on your computer. Once they are installed, open Visual Studio and select File, then New, then Project from the menu. To create a simulated service, select Microsoft Robotics as the project type and DSS Service (2.0) as the template. Name the project LineFollowing. You can save the project in your local Projects directory (see Figure 3).

Figure 3. New DSS Service: After installing MRDS, create a new service project in Visual Studio 2008 using the DSS Service (2.0) template.

Figure 4. DSS Service Wizard: The latest version of MRDS includes a wizard template for creating new service projects.
MRDS 2.0 includes a redesigned Visual Studio template that offers a wizard-style interface. The DSS Service (2.0) template simplifies service creation by letting you specify commonly accessed service properties and declare service partners. For the LineFollowing service, uncheck the box labeled "Use subscription manager" on the Service tab. Then select the Partners tab and scroll through the list of services until you find the one named Simulation Engine. Select it and click "Add as partner" (see Figure 4). When you are done, click OK to create the new service project.

Adding the Service Code
The rest of this article walks you through the steps required to create most of the code for this service, but goes into detail only for the image processing code. If you are new to MRDS, you may want to spend time looking through the documentation and tutorials that come with MRDS before continuing. It is important that you understand how services work because some of the syntax in this article may be new to you.

The first thing you will need to do is add references to the following .NET components:

  • RoboticsCommon
  • RoboticsCommon.Proxy
  • PhysicsEngine
  • SimulationCommon
  • SimulationEngine
  • SimulationEngine.Proxy
  • SimulatedWebcam.2006.M09.Proxy
  • SimulatedDifferentialDrive.2006.M06.Proxy
  • System.Drawing
When you create a service project using the built-in template, Visual Studio automatically generates two code files for you. You will need to add the following namespace references to the top of the LineFollowing.cs class file:

using Microsoft.Robotics.Simulation;
using Microsoft.Robotics.Simulation.Engine;
using engineproxy = Microsoft.Robotics.Simulation.Engine.Proxy;
using Microsoft.Robotics.Simulation.Physics;
using simengine = Microsoft.Robotics.Simulation.Engine;
using drive = Microsoft.Robotics.Services.Simulation.Drive.Proxy;
using simcam = Microsoft.Robotics.Services.Simulation.Sensors.SimulatedWebcam.Proxy;
using webcam = Microsoft.Robotics.Services.WebCam;
using Microsoft.Robotics.PhysicalModel;
using Microsoft.Dss.Core;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

You will also need to add the following variable declarations below the service state declaration:

int _centerOfGravity = 128;  // weighted center of the dark pixels, initialized to the midpoint
const int ImageWidth = 320;
const int ImageHeight = 240;
byte[] _processFrame = new byte[ImageWidth * ImageHeight * 4];  // raw camera frame, 4 bytes per pixel
byte[] _grayImg = new byte[ImageWidth * ImageHeight];           // 8-bit grayscale copy of the frame
simengine.SimulationEnginePort _notificationTarget;
simengine.CameraEntity _entity;
float leftWheelPower;
float rightWheelPower;
LegoNXTTribot robotBaseEntity = null;

The _centerOfGravity variable holds a calculated value that can be used to steer the robot in a particular direction. In this article, the center of gravity represents the area of the image along the X axis where most of the black pixels are located. The logic is that if you can find where most of the black pixels are, your robot should have no trouble following a black line on a white surface.

The program initializes the _centerOfGravity field to 128, the halfway point in the 0-255 range of possible pixel values. The lowest value, 0, represents the darkest possible black, while the highest, 255, represents the brightest white.
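To make the idea concrete, here is a minimal sketch of how such a center of gravity could be computed from the _grayImg buffer declared above. The method name and the BlackThreshold cutoff are illustrative assumptions rather than the article's actual listing; the average X position of the dark pixels is scaled into the 0-255 range so that 128 corresponds to a line centered in the image.

private int ComputeCenterOfGravity(byte[] grayImg)
{
    const int BlackThreshold = 100;  // assumed cutoff: pixels darker than this count as "line"
    long weightedSum = 0;
    long darkCount = 0;

    for (int y = 0; y < ImageHeight; y++)
    {
        for (int x = 0; x < ImageWidth; x++)
        {
            if (grayImg[y * ImageWidth + x] < BlackThreshold)
            {
                weightedSum += x;  // accumulate the X positions of dark pixels
                darkCount++;
            }
        }
    }

    // No dark pixels found: report the midpoint so the robot keeps driving straight.
    if (darkCount == 0)
        return 128;

    // Scale the average X position (0 to ImageWidth - 1) into the 0-255 range,
    // so 128 corresponds to a line centered in the image.
    return (int)((weightedSum / darkCount) * 255 / (ImageWidth - 1));
}

With a value computed this way, a result below 128 would indicate the line lies to the robot's left and a result above 128 to its right, which is the basis for adjusting the wheel power values declared earlier.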


