Creating the Simulation Environment
To see what is happening in the simulation, you must define a main camera. This camera represents a view into the simulated environment. You add it using the SetupCamera method:
private void SetupCamera()
{
    // Set up the initial view -- hovering above the robot
    CameraView view = new CameraView();
    view.EyePosition = new Vector3(0.0f, 1.52f, 0.11f);
    view.LookAtPoint = new Vector3(0.0f, -110.75f, -1.12f);
    SimulationEngine.GlobalInstancePort.Update(view);
}
You must also define the various entities that will exist in your simulated environment. Entities can represent any physical object within the simulation. For this simulation, the defined entities include not only the robot, but also the sky, the ground, and the white surface with black lines. The code in Listing 1 adds entities representing the sky, the ground, the robot, and the NXTPad, which is the white object with black lines on it.
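Listing 1 itself is not reproduced here, but a minimal PopulateWorld method along those lines might look like the following sketch. It assumes the standard MSRS simulation entity types (SkyDomeEntity, HeightFieldEntity); the texture file names, material properties, and the AddNxtPad helper are illustrative stand-ins, not the article's exact code:

// A minimal sketch of PopulateWorld, assuming the standard MSRS
// entity types. File names, material values, and AddNxtPad are
// illustrative placeholders for the code in Listing 1.
private void PopulateWorld()
{
    // Sky: a dome textured with a sky image and a lighting map
    SkyDomeEntity sky =
        new SkyDomeEntity("skydome.dds", "sky_diff.dds");
    SimulationEngine.GlobalInstancePort.Insert(sky);

    // Ground: an infinite-plane height field with a texture
    // and friction/restitution material properties
    HeightFieldEntity ground = new HeightFieldEntity(
        "simple ground",
        "03RamieSc.dds",
        new MaterialProperties("ground",
            0.8f,   // restitution
            0.5f,   // dynamic friction
            0.8f)); // static friction
    SimulationEngine.GlobalInstancePort.Insert(ground);

    // The white pad with black lines, and the robot on top of it
    AddNxtPad(new Vector3(0, 0, 0));
    AddLegoNxtRobot(new Vector3(0, 0, 0));
}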
The AddLegoNxtRobot method adds a new camera to the LEGO base by declaring a variable of the CameraEntity type and then calling the CreateCamera method. The CreateCamera method, shown below, defines the properties for the simulated webcam. Notice that the CameraModelType is defined as an AttachedChild. Also notice that the camera is positioned on top of the robot, towards the front, and oriented so that it points downwards at an 85-degree angle. If you were to modify the property values that define the webcam's position and orientation, you would see different results in the behavior of the robot.
private CameraEntity CreateCamera()
{
    // Low resolution, wide field of view, with an
    // attached-child model type
    CameraEntity cam = new CameraEntity(
        ImageWidth,
        ImageHeight,
        ((float)Math.PI * 0.4f),
        CameraEntity.CameraModelType.AttachedChild);
    cam.State.Name = "robocam";

    // Position the camera on the top front of the bot
    cam.State.Pose.Position = new Vector3(0.0f, 0.11f, -0.15f);

    // Tilt the camera downwards by 85 degrees
    AxisAngle aa = new AxisAngle(
        new Vector3(1, 0, 0),
        (float)(-85 * Math.PI / 180));
    cam.State.Pose.Orientation = Quaternion.FromAxisAngle(aa);

    // The camera renders to an offline buffer at each frame
    cam.IsRealTimeCamera = true;
    cam.UpdateInterval = 100;

    // Start the simulated webcam service
    simcam.Contract.CreateService(ConstructorPort,
        Microsoft.Robotics.Simulation.Partners.CreateEntityPartner(
            "http://localhost/" + cam.State.Name));
    return cam;
}
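The AddLegoNxtRobot method itself is not shown above. A minimal sketch of how the camera might be attached as a child of the robot entity follows; it assumes the LegoNXTTribot entity type from the MSRS simulation samples, and the entity name and structure are illustrative rather than the article's exact code:

// A hedged sketch of AddLegoNxtRobot, assuming the LegoNXTTribot
// entity type from the MSRS samples. Names are illustrative.
private void AddLegoNxtRobot(Vector3 position)
{
    LegoNXTTribot robot = new LegoNXTTribot(position);
    robot.State.Name = "LegoNXTMotorBase";

    // Attach the webcam as a child so it moves with the robot;
    // this is what makes the AttachedChild camera model work
    CameraEntity cam = CreateCamera();
    robot.InsertEntity(cam);

    SimulationEngine.GlobalInstancePort.Insert(robot);
}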
Every service must include a Start method, which is the main entry point for the service. In this case, Visual Studio added a Start method automatically when you created the project from the template. You need to modify this method to include code that subscribes to the simulation engine and receives notifications from any service partners. The method also calls the SetupCamera and PopulateWorld methods described earlier (see Listing 1). Here's the code you need to add to the Start method:
// Issue Subscribe, which allows us to receive
// notifications from service partners
_notificationTarget =
    new simengine.SimulationEnginePort();
simengine.SimulationEngine.GlobalInstancePort.Subscribe(
    ServiceInfo.PartnerList, _notificationTarget);

SetupCamera();
PopulateWorld();

// Schedule the first webcam update check in 100 ms
Activate(Arbiter.Receive(false, TimeoutPort(100),
    CheckForUpdate));
The last action performed in the Start method activates a receiver that calls the CheckForUpdate method after a 100-millisecond timeout. CheckForUpdate requests an update from the webcam and then re-arms the timer, so a new frame is requested every 100 milliseconds:
void CheckForUpdate(DateTime time)
{
    if (_entity == null)
        return; // the entity is gone; no more updates

    // Request a new frame from the simulated webcam
    _mainPort.Post(new webcam.UpdateFrame());

    // Re-arm the timer for the next check in 100 ms
    Activate(Arbiter.Receive(
        false, TimeoutPort(100), CheckForUpdate));
}
Processing the Images
Figure 5. Computing Center of Gravity: Here's an image taken from the simulated webcam before and after image processing has taken place.
Images captured by the simulated webcam are processed in the ProcessFrameHandler method. This method accepts an array of red, green, and blue values as an input parameter and returns the center of gravity (COG) as an integer value. The image array is first converted to grayscale to make it easier to distinguish black from white. Each pixel of the resulting array is then compared against a threshold value, 128 in this case, to determine whether the pixel is closer to white or to black. The COG along the X-axis is then computed as the sum of the X-positions of all the black pixels divided by the total number of black pixels. For example, if an image contains black pixels at X-positions 10, 12, and 14, the COG is (10 + 12 + 14) / 3 = 12.
To get a better understanding of how this works, refer to Figure 5. The image on the left is an original webcam image. The image on the right shows how it looks after stripping the colors and applying the threshold value.
Here's the code for the ProcessFrameHandler method:
public void ProcessFrameHandler(byte[] rgbValues,
    out int result)
{
    // Default to the center value
    result = 128;
    if (rgbValues == null)
        return; // nothing to process

    // Strip out the color by averaging the blue, green, and red
    // bytes of each 32-bit pixel (assuming the B,G,R,A byte layout
    // that LockBits produces for 32-bpp ARGB images; the fourth
    // byte is alpha), producing a grayscale image
    for (int i = 0, j = 0;
        i < _grayImg.Length; i++, j += 4)
    {
        _grayImg[i] = (byte)(((int)rgbValues[j] +
            (int)rgbValues[j + 1] +
            (int)rgbValues[j + 2]) / 3);
    }

    // Compare the values in the gray image against a threshold
    // value of 128, which is half of the range from 0 to 255
    int sumX = 0;
    int pixelCount = 0;
    for (int i = 0; i < _grayImg.Length; i++)
    {
        if (_grayImg[i] < 128)
        {
            sumX += i % ImageWidth;
            pixelCount++;
        }
    }

    // Compute the center of gravity along the X-axis
    if (pixelCount > 0)
    {
        result = sumX / pixelCount;
    }
    else
    {
        result = 128;
    }
}
The UpdateFrameHandler method (see Listing 2) executes every 100 milliseconds. It first retrieves the latest bitmap image from the simulated webcam, and then extracts the frame image using the LockBits method. This locks the image into system memory, but at this point you have only a memory pointer, not an actual array of values. To get the array of red, green, and blue values, call the Marshal.Copy method, which is part of the .NET Framework interop services (System.Runtime.InteropServices).
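Listing 2 is not reproduced here, but the LockBits/Marshal.Copy step looks roughly like the following sketch. It assumes the frame arrives as a System.Drawing.Bitmap in 32-bits-per-pixel ARGB format; the variable names are illustrative:

// A hedged sketch of the frame-extraction step described above.
// Assumes: using System.Drawing; using System.Drawing.Imaging;
// using System.Runtime.InteropServices;
Bitmap bmp = _cameraImage; // illustrative; the latest webcam frame
BitmapData data = bmp.LockBits(
    new Rectangle(0, 0, bmp.Width, bmp.Height),
    ImageLockMode.ReadOnly,
    PixelFormat.Format32bppArgb);
try
{
    // LockBits yields only a pointer (data.Scan0); copy the raw
    // bytes into a managed array before processing them
    byte[] rgbValues = new byte[data.Stride * data.Height];
    Marshal.Copy(data.Scan0, rgbValues, 0, rgbValues.Length);

    int cog;
    ProcessFrameHandler(rgbValues, out cog);
}
finally
{
    bmp.UnlockBits(data);
}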
After calculating the COG, you can use that value to adjust the amount of power assigned to the left and right motors of the simulated LEGO NXT (causing the robot to follow the black line) using the SetMotorTorque method. The left motor is assigned a value equal to the COG divided by 1020, and the right motor is assigned the opposite value. Dividing by 1020 reduces the power to each motor: the motors accept values between 0.0 and 1.0, where 1.0 represents full power, so dividing an 8-bit COG value (0 to 255) by 1020 yields at most 0.25, causing the LEGO NXT to travel at no more than a quarter of its maximum speed.
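As a rough sketch of that steering step (the article's exact expression is not shown here), the power calculation might look like the following. It assumes an 8-bit COG range, interprets "the opposite value" as the complement (255 minus the COG), and assumes _entity exposes the simulated SetMotorTorque method:

// A hedged sketch of the steering step; names and the
// interpretation of "opposite" are assumptions, not the
// article's exact code.
void AdjustMotors(int cog) // illustrative helper name
{
    float left = cog / 1020.0f;          // 0.0 to 0.25
    float right = (255 - cog) / 1020.0f; // complementary power
    _entity.SetMotorTorque(left, right);
}

With this arrangement, a centered line (COG near 128) powers both motors roughly equally, while a line drifting to one side of the image powers that side's motor harder, steering the robot back toward the line.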