

Ink Recognition and Ink Analysis

Digital Ink is only a collection of lines rendered on the screen, but with Ink recognition and analysis, you can turn it into meaningful information such as text, drawings, gestures, commands, and even the relationship between two shapes. The Tablet PC SDK makes it surprisingly simple to detect all these types of information in Digital Ink.





Being able to take handwritten notes or annotations is nice, but the real power of Tablet PCs comes from the ability to analyze and recognize handwriting written in digital Ink. Handwriting recognition is important because it converts digital Ink into standard text strings, but Ink analysis takes the concept a step further and adds spatial interpretation to the mix so you can apply additional semantics. Gesture recognition enables users to trigger actions in real time.

Applications can interpret and recognize digital Ink in multiple ways. Simple handwriting recognition returns the most likely result as a string, but you can dig a lot deeper and retrieve alternate results, confidence levels, and much more. It is often helpful to divide Ink into individual segments, such as words or paragraphs.
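For segmentation, the Tablet PC SDK provides a Divider class. The following is a hedged sketch, not code from the article: it assumes an InkCollector named collector (as in the form described below) and that the member names match the Microsoft.Ink documentation.

```csharp
// Hedged sketch: segmenting ink with the Tablet PC SDK's Divider.
// Assumes "collector" is an InkCollector holding previously written ink.
using Microsoft.Ink;

Divider divider = new Divider();
divider.Strokes = collector.Ink.Strokes;
DivisionResult division = divider.Divide();

// Enumerate the recognized words; InkDivisionType.Line or
// InkDivisionType.Paragraph work the same way.
foreach (DivisionUnit unit in division.ResultByType(InkDivisionType.Segment))
{
    Console.WriteLine(unit.RecognizedString);
}
```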

But you can retrieve more information from Ink than simple text. Users can perform gestures that your application can interpret separately from the writing itself. For instance, your application could recognize a "scratch out" gesture to erase a section of previously written Ink.
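Wiring up such a scratch-out gesture with an InkCollector might look like the following hedged sketch; the control name inkPanel and the handler name OnGesture are placeholders, not names from the article's listing.

```csharp
// Hedged sketch: listening for the scratch-out gesture with an
// InkCollector (Microsoft.Ink). "inkPanel" is a placeholder control.
using Microsoft.Ink;

InkCollector collector = new InkCollector(inkPanel.Handle);
// Collect both ink and gestures.
collector.CollectionMode = CollectionMode.InkAndGesture;
// Only gestures that are explicitly enabled get reported.
collector.SetGestureStatus(ApplicationGesture.Scratchout, true);
collector.Gesture += new InkCollectorGestureEventHandler(OnGesture);
collector.Enabled = true;

// Elsewhere in the form:
private void OnGesture(object sender, InkCollectorGestureEventArgs e)
{
    if (e.Gestures[0].Id == ApplicationGesture.Scratchout)
    {
        // Erase the strokes underneath the gesture here.
    }
}
```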

Your application may need to analyze Ink based on its location. One example is that your application might recognize Ink written at the side of a document with a line attaching it to an area of the document as an annotation. This sort of functionality also requires recognition of primitives, such as lines, circles, triangles, or rectangles, and recognition of the spatial relationship.

Recognizing Ink as Text
Simple recognition is so straightforward that I would almost call it primitive. This, of course, is only true from the developer's point of view after applying the functionality of the SDK. There is nothing simple or primitive about handwriting recognition at all, but unless you work for Microsoft and/or write custom recognizers, you will not have to worry about the complexity of it.

Figure 1. Simple Ink-enabled Form: This is a simple form with an Ink-enabled panel.
As a first Ink recognition example, I will walk you through a simple form that provides a panel area that is Ink-enabled through a simple InkCollector (or InkOverlay) object (Figure 1). The details of the implementation of the form are not particularly important (see Listing 1 if you want to follow along with the examples). What matters is that you ultimately arrive at a situation where you have an Ink object stored inside the InkCollector or InkOverlay object. In this first simple example, you'll learn to convert that digital Ink to a text string. You'll use a piece of code like this.

string result = collector.Ink.Strokes.ToString();
MessageBox.Show(result);

The preceding code takes all the strokes inside the Ink object and returns the most likely recognition result. A lot of work has to happen under the hood to accomplish that. The system first determines which recognizers are installed; almost all Tablet PC operating systems ship with multiple recognizers, including (in the USA) an English handwriting recognizer and a gesture recognizer (see the sidebar "Foreign Languages and Strange Symbols"). To retrieve the most likely result, the default recognizer is used (on a US Windows Tablet PC operating system, that is likely to be the English handwriting recognizer), and the collection of strokes is passed to the recognizer engine. This returns a result set that includes a whole lot of information (I will explore the details below), but most of it is ignored and only the single most likely result is returned.

You can write code to implement this approach yourself.

Recognizers recs = new Recognizers();
Recognizer reco = recs.GetDefaultRecognizer();
RecognizerContext context = reco.CreateRecognizerContext();
context.Strokes = this.collector.Ink.Strokes;
RecognitionStatus status = RecognitionStatus.NoError;
RecognitionResult res = context.Recognize(out status);
if (status == RecognitionStatus.NoError)
{
    MessageBox.Show(res.TopAlternate.ToString());
}

Using Recognizers
You can use a Recognizers object to determine the default installed recognizer, which replicates the behavior of the ToString() method. (In real-life scenarios, it is more prudent to pick a specific language recognizer explicitly, such as the US-English recognizer, rather than relying on the default being set properly.) Once you have that Recognizer object, you have to create a private context.

Recognizers are busy objects. Imagine the recognizer context as a personal handle into a recognizer. You can change and configure your recognition context in any way you want without influencing other recognition operations. (Note: It is also possible to simply instantiate a RecognizerContext object with a similar result.) Once you have your own context, you can assign it the strokes you want to recognize. Then, all that's left is to call the Recognize() method and retrieve the result set, assuming that the out parameter (similar to a ByRef parameter in Visual Basic terms) returned a success code. To simulate the exact operations of the ToString() method, you simply look at the top alternate, which is the recognition result the recognizer is most confident about.

Recognition results can be improved drastically by providing factoids.
At this point, you've jumped through a lot of hoops to do what the ToString() method did in the first example. I think you need to understand how this setup works because you'll often want to do a lot more with recognition results than just retrieve the top alternate. For instance, you may want to look at all other variations the recognizer considers possible matches and find out how confident the recognizer is about those matches. Here's how to change the previous example to achieve that.

RecognitionResult res = context.Recognize(out status);
if (status == RecognitionStatus.NoError)
{
    string text = "Possible Matches:\r\n\r\n";
    foreach (RecognitionAlternate alt in res.GetAlternatesFromSelection())
    {
        text += alt.ToString() + " [" + alt.Confidence.ToString() + "]\r\n";
    }
    MessageBox.Show(text);
}

Figure 2. Alternate Recognition Results: Finding alternate recognition results with confidence levels is easy.
Figure 3. The TIP uses alternate recognition results as one of the correction options available to users.
Figure 2 shows this example at work. Using this information, it becomes relatively easy to create an interface that lets the user pick among alternate recognition results. The TIP (Tablet PC Input Panel) is a good example of such an interface (see Figure 3).
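As the pull quote notes, recognition results can be improved drastically by providing factoids: hints that constrain recognition to an expected pattern such as digits or e-mail addresses. The following is a hedged sketch, assuming the installed default recognizer supports factoids and coercion; it is not code from the article's listing.

```csharp
// Hedged sketch: constraining recognition with a factoid
// (Microsoft.Ink). Factoid.Digit tells the recognizer to expect
// digits; RecognitionModes.Coerce forces the result to conform.
using Microsoft.Ink;

RecognizerContext context = new RecognizerContext();
context.Strokes = this.collector.Ink.Strokes;
context.Factoid = Factoid.Digit;
context.RecognitionFlags = RecognitionModes.Coerce;
RecognitionStatus status;
RecognitionResult res = context.Recognize(out status);
if (status == RecognitionStatus.NoError)
{
    MessageBox.Show(res.TopAlternate.ToString());
}
```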
