Interpreting Images with MRDS Services

Most people do not realize how difficult it can be to process a digital image and extract meaningful data, such as information required to drive a robot or operate an embedded platform. Humans see the world as a series of concrete images, but robots using a web cam see it as an array of red, green, and blue pixel values. To extract meaningful visual data out of this array of pixel values, you need special software. That’s what the robot example in this article does.

To better understand how a robot can process digital images, this article describes how to use Microsoft Robotics Development Studio (MRDS), version 2.0 to create a simulation in which a LEGO NXT Tribot follows a black line. The article code creates a simulated robotics service that processes images from an onboard web camera (webcam). Using the simple technique of computing the center of gravity along the X-Axis, you can direct the robot to follow the black line autonomously.

If you are not already familiar with MRDS, you might want to first check out some earlier articles such as “An Introduction to Programming Robots with Microsoft Robotics Studio” and “Delivering Next-Generation SOA Apps with Microsoft Robotics Development”.

What You Need
  • Visual Studio 2008
  • Microsoft Robotics Development Studio 2008 Community Technical Preview (CTP)

Interpreting Digital Images

Figure 1. Floor with Line: The figure shows an image of a floor, taken by a webcam mounted on a LEGO NXT Tribot.

Before jumping into the code for the simulated service, here are a few ground rules. This project will process images from a webcam. While that’s not the only type of camera you can use for image processing, it is a relatively simple and inexpensive option. A typical webcam returns a 320 x 240 pixel image. That’s a relatively narrow view of the world, so you have to start by realizing that the camera view limits how much your robot can see at one time. Figure 1 shows a sample image taken from an actual webcam.

To follow a line on a floor, the camera must necessarily point down towards the floor, giving it a very restricted view of its environment. Additionally, cameras can have different fields of view, or view angles, which work like the zoom lens on a camera. To get an idea of what this is like for the robot, hold an empty paper towel roll up to one eye, close the other eye, point the roll at the ground, and try walking around without bumping into anything.
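To get a rough feel for how little floor such a camera sees, you can work out the width of the visible strip from the camera's height and field of view. The following sketch (in Python rather than the article's C#, and assuming for simplicity a camera pointed straight down) uses the height and field-of-view values that appear later in the simulation code:

```python
import math

def ground_footprint_width(height_m, fov_deg):
    """Approximate width of floor visible to a straight-down camera.

    height_m: camera height above the floor, in meters
    fov_deg: horizontal field of view, in degrees
    """
    return 2.0 * height_m * math.tan(math.radians(fov_deg) / 2.0)

# A camera roughly 0.11 m above the floor (the mounting height used
# later in the simulation) with a 72-degree field of view (0.4 * pi
# radians) sees a strip of floor only about 0.16 m wide.
print(round(ground_footprint_width(0.11, 72.0), 2))   # 0.16
```

The exact footprint in the simulation differs because the camera is tilted at 85 degrees rather than pointing straight down, but the order of magnitude is the same: the robot sees only a narrow patch of floor at a time.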

Another important factor is lighting. The type and amount of lighting available can have a dramatic effect on image processing. For example, the image in Figure 1 was taken from a webcam mounted on top of a LEGO NXT Tribot, as it drove over a white surface with a black line. You’ll probably notice the bright white blob in the bottom center of the image, which was actually a reflection from the bright office light overhead. When processing an image in which you need to identify a black line on a white surface, light reflections such as the one in this image may cause the robot to behave unexpectedly.

Figure 2. Lighting Variance: The figure shows two different images of a white floor with a black line from the same webcam. The image on the left was taken in low lighting conditions, and the image on the right in bright lighting.

Alternatively, too little lighting in a room can affect the clarity of the captured image. For example, Figure 2 shows two views from a single webcam. Both images were taken of a white floor with a black line. The image on the right was taken in a room with adequate lighting, but the image on the left was taken in a low lighting condition. Using the method described in this article to compute the center of gravity, the robot would likely not perform correctly when navigating through the room with low lighting.
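The fixed-threshold approach used later in this article is exactly what makes low lighting a problem. The following sketch (in Python, with hypothetical pixel intensities; it is not part of the article's service code) shows how the same scene classifies correctly under bright light but collapses to all-black in dim light:

```python
# The fixed threshold the article's service uses later.
THRESHOLD = 128

def classify(pixels, threshold=THRESHOLD):
    """Label each grayscale pixel (0-255) as 'black' (line) or 'white' (floor)."""
    return ['black' if p < threshold else 'white' for p in pixels]

# Hypothetical intensities: a white floor (~220) with a black line (~40).
bright = [220, 220, 40, 40, 220, 220]
dim = [int(p * 0.4) for p in bright]   # same scene at 40% illumination

print(classify(bright))  # line pixels at positions 2 and 3 found correctly
print(classify(dim))     # every pixel falls below 128: all 'black'
```

In the dim case the robot can no longer distinguish the line from the floor at all, which is why the center-of-gravity method would misbehave in the left-hand image of Figure 2.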

This article does not attempt to cover all the possible problems that can occur when processing digital images; it simply demonstrates how complex digital image processing can be. Humans take for granted the complex processing required for our eyes to capture images and our brains to interpret them; it seems easy from our perspective. From the robot’s perspective, it is anything but easy.

Fortunately, here you’ll be working with a simulation in which you can closely control all the environmental variables, which greatly increases your chances of having the robot behave as expected, while still exposing you to some of the issues you might face when controlling a robot using a webcam.

Creating a Simulated Service
To get started, you will need Visual Studio 2008 and Microsoft Robotics Development Studio (MRDS) 2.0 installed on your computer. After you do, open Visual Studio and select File → New → Project from the menu. To create a simulated service, select Microsoft Robotics as the project type and DSS Service (2.0) as the template. Name the project LineFollowing. You can save the project in your local Projects directory (see Figure 3).

Figure 3. New DSS Service: After installing MRDS, create a new service project in Visual Studio 2008 using the DSS Service (2.0) template.
Figure 4. DSS Service Wizard: The latest version of MRDS includes a wizard template for creating new service projects.

MRDS 2.0 includes a redesigned Visual Studio template that offers a wizard-style interface. The DSS Service (2.0) template, which simplifies the service creation process, allows you to specify commonly accessed service properties and to declare service partners. For the LineFollowing service, you will need to uncheck the box labeled “Use subscription manager” on the Service tab. You will also need to select the Partners tab and scroll through the list of services until you find one named Simulation Engine. Select the service and click “Add as partner” (see Figure 4). When you are done, click OK to create the new service project.

Adding the Service Code
The rest of this article walks you through the steps required to create most of the code for this service, but goes into detail only for the image processing code. If you are new to MRDS, you may want to spend time looking through the documentation and tutorials that come with MRDS before continuing. It is important that you understand how services work because some of the syntax in this article may be new to you.

The first thing you will need to do is add references to the following .NET components:

  • RoboticsCommon
  • RoboticsCommon.Proxy
  • PhysicsEngine
  • SimulationCommon
  • SimulationEngine
  • SimulationEngine.Proxy
  • SimulatedWebcam.2006.M09.Proxy
  • SimulatedDifferentialDrive.2006.M06.Proxy
  • System.Drawing

When you create a service project using the built-in template, Visual Studio automatically generates two code files for you. You will need to add the following namespace references to the top of the LineFollowing.cs class file:

   using Microsoft.Robotics.Simulation;
   using Microsoft.Robotics.Simulation.Engine;
   using engineproxy = Microsoft.Robotics.Simulation.Engine.Proxy;
   using Microsoft.Robotics.Simulation.Physics;
   using simengine = Microsoft.Robotics.Simulation.Engine;
   using drive = Microsoft.Robotics.Services.Simulation.Drive.Proxy;
   using simcam = Microsoft.Robotics.Services.Simulation.Sensors.SimulatedWebcam.Proxy;
   using webcam = Microsoft.Robotics.Services.WebCam;
   using Microsoft.Robotics.PhysicalModel;
   using Microsoft.Dss.Core;
   using System.Drawing;
   using System.Drawing.Imaging;
   using System.Runtime.InteropServices;

You will also need to add the following variable declarations below the service state declaration:

   int _centerOfGravity = 128;
   const int ImageWidth = 320;
   const int ImageHeight = 240;
   byte[] _processFrame = new byte[ImageWidth * ImageHeight * 4];
   byte[] _grayImg = new byte[ImageWidth * ImageHeight];
   simengine.SimulationEnginePort _notificationTarget;
   simengine.CameraEntity _entity;
   float leftWheelPower;
   float rightWheelPower;
   LegoNXTTribot robotBaseEntity = null;

The _centerOfGravity variable holds a calculated value that can be used to guide the robot in a certain direction. In this article, the center of gravity represents the area of the image along the X-Axis where most of the black pixels are located. The logic is that if you can find where most of the black pixels are, it should be easy for your robot to follow a black line on a white surface.

The program initializes the _centerOfGravity field to 128, the halfway point in the 0 to 255 range of possible pixel intensity values. The lowest value in the range, 0, represents the darkest possible shade of black, while the highest value, 255, represents the lightest possible shade of white.
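As a quick illustration of where that 0 to 255 intensity comes from, the service (shown later) averages the red, green, and blue channels of each pixel into a single grayscale value. A minimal sketch of the idea, in Python with made-up pixel values:

```python
def to_gray(r, g, b):
    """Average the three color channels into one 0-255 intensity,
    mirroring the simple averaging used later in ProcessFrameHandler."""
    return (r + g + b) // 3

# A dark pixel from the line and a bright pixel from the floor
# (hypothetical values) land on opposite sides of the 128 midpoint.
print(to_gray(30, 35, 25))     # 30  -> below 128, treated as black
print(to_gray(240, 235, 245))  # 240 -> above 128, treated as white
```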

Creating the Simulation Environment
To see what is happening in the simulation, you must define a main camera. This camera represents a view into the simulated environment. You add it using the SetupCamera method:

   private void SetupCamera()
   {
      // Set up initial view -- hovering above the robot
      CameraView view = new CameraView();
      view.EyePosition = new Vector3(0.0f, 1.52f, 0.11f);
      view.LookAtPoint = new Vector3(0.0f, -110.75f, -1.12f);
      SimulationEngine.GlobalInstancePort.Update(view);
   }

You must define the various entities that will exist in your simulated environment. Entities can represent any physical object within the simulation. For this simulation, the defined entities will include not only the robot, but also the sky, the ground, and the white surface with black lines. The code in Listing 1 adds entities representing the sky, the ground, the robot, and the NXTPad, which is the white object with black lines on it.

The AddLegoNxtRobot method adds a new camera to the LEGO base by defining a variable as a CameraEntity type and then calling the CreateCamera method. The CreateCamera method, shown below, defines the properties for the simulated webcam. Notice that the CameraModelType is defined as an AttachedChild. Also notice that the camera is positioned on top of the robot, towards the front, and oriented so that it points downwards at an 85-degree angle. If you were to modify the property values that define the webcam’s position and orientation, you would see different results in the behavior of the robot.

   private CameraEntity CreateCamera()
   {
      // low resolution, wide field of view, with an
      // attached-child model type
      CameraEntity cam = new CameraEntity(ImageWidth, ImageHeight,
         (float)Math.PI * 0.4f,
         CameraEntity.CameraModelType.AttachedChild);
      cam.State.Name = "robocam";

      // position the camera on the top front of the bot
      cam.State.Pose.Position = new Vector3(0.0f, 0.11f, -0.15f);

      // tilt the camera downwards
      AxisAngle aa = new AxisAngle(new Vector3(1, 0, 0),
         (float)(-85 * Math.PI / 180));
      cam.State.Pose.Orientation = Quaternion.FromAxisAngle(aa);

      // camera renders in an offline buffer at each frame
      cam.IsRealTimeCamera = true;
      cam.UpdateInterval = 100;

      // start the simulated webcam service
      simcam.Contract.CreateService(ConstructorPort,
         Microsoft.Robotics.Simulation.Partners.CreateEntityPartner(
            "http://localhost/" + cam.State.Name));
      return cam;
   }
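If you are curious about what Quaternion.FromAxisAngle actually computes for that -85-degree tilt, the standard axis-angle-to-quaternion formula can be sketched as follows (in Python, for illustration only; MRDS performs this conversion for you):

```python
import math

def quaternion_from_axis_angle(axis, angle_rad):
    """Unit quaternion (x, y, z, w) for a rotation of angle_rad radians
    about a unit axis -- the math behind Quaternion.FromAxisAngle."""
    s = math.sin(angle_rad / 2.0)
    x, y, z = axis
    return (x * s, y * s, z * s, math.cos(angle_rad / 2.0))

# The camera tilt used above: -85 degrees about the X axis.
q = quaternion_from_axis_angle((1.0, 0.0, 0.0), math.radians(-85))
print(tuple(round(c, 3) for c in q))   # (-0.676, 0.0, 0.0, 0.737)
```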

Every service must include a Start method, which is the main entry point for the application. In this case, Visual Studio added a Start method automatically when you created the project from the template. You will need to modify this method to include code that subscribes to the simulation engine and receives notifications from any partners. This method also includes calls to the preceding SetupCamera and PopulateWorld methods (see Listing 1). Here’s the code you need to add to the Start method:

   // Issue Subscribe, which allows us to receive
   // notifications from service partners
   _notificationTarget = new simengine.SimulationEnginePort();
   simengine.SimulationEngine.GlobalInstancePort.Subscribe(
      ServiceInfo.PartnerList, _notificationTarget);

   SetupCamera();
   PopulateWorld();

   Activate(Arbiter.Receive(false, TimeoutPort(100),
      CheckForUpdate));

The last action performed in the Start method schedules a call to the CheckForUpdate method. This method is responsible for requesting an update from the webcam every 100 milliseconds:

   void CheckForUpdate(DateTime time)
   {
      if (_entity == null)
         return; // the entity is gone, no more updates

      _mainPort.Post(new webcam.UpdateFrame());
      Activate(Arbiter.Receive(
         false, TimeoutPort(100), CheckForUpdate));
   }

Processing the Images

Figure 5. Computing Center of Gravity: Here’s an image taken from the simulated webcam before and after image processing has taken place.

Images captured by the simulated webcam are processed in the ProcessFrameHandler method. This method accepts an array of red, green, and blue values as an input parameter and returns the center of gravity (COG) as an integer value. The image array is first stripped of all color to make it easier to search for black and white colors. Each pixel of the resulting array is then compared against a threshold value, 128 in this case, to determine whether the pixel value is closer to white or to black. The COG for the X-Axis is then computed as the sum of the positions of all the black pixels divided by the total number of black pixels.

To get a better understanding of how this works, refer to Figure 5. The image on the left is an original webcam image. The image on the right shows how it looks after stripping the colors and applying the threshold values.

Here’s the code for the ProcessFrameHandler method:

   public void ProcessFrameHandler(byte[] rgbValues, out int result)
   {
      result = 128;

      if (rgbValues == null)
         // nothing to process
         return;

      // Strip out the red, green, and blue colors and
      // make the image gray or monochrome
      for (int i = 0, j = 0; i < _grayImg.Length; i++, j += 4)
      {
         _grayImg[i] = (byte)(((int)rgbValues[j + 1] +
            (int)rgbValues[j + 2] +
            (int)rgbValues[j + 3]) / 3);
      }

      // Compare the values in the gray image against
      // a threshold value of 128, which is half of the
      // range from 0 to 255
      int sumX = 0;
      int pixelCount = 0;
      for (int i = 0; i < _grayImg.Length; i++)
      {
         if (_grayImg[i] < 128)
         {
            sumX += i % ImageWidth;
            pixelCount++;
         }
      }

      // Compute the center of gravity
      if (pixelCount > 0)
      {
         result = sumX / pixelCount;
      }
      else
      {
         result = 128;
      }
   }
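To see the computation in isolation, here is the same center-of-gravity logic applied to a hypothetical 8-pixel-wide, 2-row grayscale image (a Python sketch, not part of the service):

```python
# Hypothetical 8 x 2 grayscale image (values 0-255), flattened row by row.
IMAGE_WIDTH = 8
gray = [
    255, 255, 10, 12, 255, 255, 255, 255,   # row 0: dark pixels at x = 2, 3
    255, 255, 255, 10, 12, 255, 255, 255,   # row 1: dark pixels at x = 3, 4
]

sum_x = 0
count = 0
for i, value in enumerate(gray):
    if value < 128:                  # same threshold the service uses
        sum_x += i % IMAGE_WIDTH     # x coordinate of the dark pixel
        count += 1

# Center of gravity along X; fall back to the midpoint if no dark pixels.
cog = sum_x // count if count else IMAGE_WIDTH // 2
print(cog)   # (2 + 3 + 3 + 4) / 4 = 3
```

The `i % IMAGE_WIDTH` expression is the key trick: it converts a flat array index into an x coordinate, so pixels in different rows but the same column contribute the same value to the sum.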

The UpdateFrameHandler method (see Listing 2) executes every 100 milliseconds. It first retrieves the latest bitmap image from the simulated webcam, and then extracts the frame image using the LockBits method. This locks the image into system memory, but at this point, it is merely a memory pointer and not an actual array of values. To get the array of red, green, and blue values, call the Marshal.Copy method, which is part of the .NET Framework interop services.

After calculating the COG, you can use that value to adjust the amount of power assigned to the left and right motors of the simulated LEGO NXT (causing the robot to follow the black line) using the SetMotorTorque method. The left motor is assigned a value equal to the COG divided by 1020 and the right motor is assigned a value opposite of the left motor. The results are divided by 1020 to reduce the power to each motor. The motors accept a value between 0.0 and 1.0, where 1.0 represents full power. Dividing by 1020 causes the LEGO NXT to travel at a quarter of its maximum speed.
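The article does not spell out exactly how the "opposite" right-motor value is derived, so the following sketch (in Python, with an assumed mirroring of the COG around the 128 midpoint) only illustrates the general idea of steering from the COG:

```python
# Sketch of the steering arithmetic described above. The exact "opposite"
# value for the right motor is not spelled out in the article; here we
# ASSUME it mirrors the COG around the 128 midpoint.
COG_MIDPOINT = 128
SCALE = 1020.0   # the divisor the article uses to cap motor power

def wheel_powers(cog):
    """Map the center of gravity to (left, right) motor power, 0.0-1.0."""
    left = cog / SCALE
    right = (2 * COG_MIDPOINT - cog) / SCALE   # assumed mirror of left
    return left, right

# Line centered: both wheels get equal power and the robot drives straight.
print(wheel_powers(128))
# Line to the left (low COG): the left wheel slows, turning the robot left.
print(wheel_powers(64))
```

Whatever the exact mapping, the design principle is the same: power each wheel in proportion to where the dark pixels sit along the X-Axis, so the robot steers toward the line.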

Executing the Code
After adding all the code for your service, compile it by clicking Build → Build Solution from the Visual Studio menu. If the service compiles successfully, you can run it by pressing F5 or clicking Debug → Start Debugging. After the command window has loaded, you should see the Visual Simulation Editor appear. The simulation scene should include the LEGO NXT positioned in front of an NXT Pad (see Figure 6). The pad is a simulated version of the actual test pad that comes with the LEGO NXT.

Figure 6. Starting the Simulation: The figure shows the starting position of the line-following simulation as seen from the main camera view.
Figure 7. Robot’s Perspective: Here’s the simulation scene from the robot’s perspective.

If you click the Camera menu, and select Show Robocam, you can also see the simulation scene from the robot’s perspective (see Figure 7). By selecting “Show Robocam in a separate window,” you can see both views, the one from the main camera and the one from the Robocam simultaneously.

Examining the Robot’s Behavior
After the simulation scene loads in the Visual Simulation Editor, you will see the robot move forward, turn right, and then follow the line until it gets to the other side of the pad. Surprisingly, it will then continue moving towards the end of the pad and eventually drive off the pad, rather than following the oval black line back to the starting point. Can you guess why?

To explain why this happens, look at Figure 7 again. Notice that the far (right) side of the pad has several colored bars that run along the edge. Recall that the ProcessFrameHandler method first strips the image of all colors and then compares the resulting pixels against a threshold value. The method factors in the pixels located in the top right-hand corner as well as those in the oval line. Including these additional pixels, although they’re really just outliers, elevates the COG value and thus influences the overall turning behavior of the robot.
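You can see the size of this effect with a little arithmetic. This sketch (in Python, with hypothetical pixel counts and positions) shows how a handful of dark outlier pixels near the right edge drags the COG away from the line:

```python
IMAGE_WIDTH = 320

def cog_x(dark_xs):
    """Center of gravity along X of a set of dark-pixel x coordinates."""
    return sum(dark_xs) / len(dark_xs)

# Hypothetical scene: 100 dark line pixels centered at x = 150 ...
line = [150] * 100
print(cog_x(line))            # 150.0 -- robot tracks the line

# ... plus 20 dark outlier pixels from the colored bars near x = 310.
with_bars = line + [310] * 20
print(cog_x(with_bars))       # about 176.7 -- COG pulled toward the edge
```

Even though the outliers are only a sixth of the dark pixels, they shift the COG by more than 25 pixels, which is enough to skew the wheel powers and send the robot off the line.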

This behavior is intentional to demonstrate a point. To prevent this type of behavior in a real-world situation, you would need to do one of two things. The first thing would be to rigorously control the environment so that no outliers can negatively influence the robot’s behavior. For example, you could edit the NxtPad image used in the simulation and remove the colored bars at the edge of the pad. While this might sound reasonable, it is not a good idea; it misrepresents the real NxtPad and thus negates the point of running a simulation before working with an actual robot.

Extending the Functionality
A better alternative would be to tweak the code so that it ignores the colored-bar outliers so that the robot continues to follow the oval line. How do you do that? I am going to leave that question for you to discover, but I’ll give you a hint: There are actually several ways to solve the problem. Consider changing some of the variables in the program, such as the angle of the camera, the field of view, or the position of the robot. You might also consider changing the threshold value used to compute the center of gravity. Finally, you might want to consider integrating a neural network algorithm into the ProcessFrameHandler routine. Have fun.
