Top Five Touch UI-Related Design Guidelines

As hardware that supports new touch and multi-touch technologies becomes commonplace, developers need to keep the new capabilities in mind when building applications.


As the old saying goes, "The world is changing before your very eyes." Right now, though, it should really be "the world is changing before your very fingers." The touch paradigm for user interfaces has been inching toward prominence for a while now. With the introduction of the Apple iPhone, touch has not only changed the way people interact with mobile devices, it's changing nearly everything, even the way we watch the news.

While touch technology has been around in one form or another for some 30 years, it has only recently reached the mass market—and in yet another new form. The new growth area is multi-touch, a technology invented by Nimish Mehta (University of Toronto) back in 1982 that has progressed steadily in capability ever since. According to USA Today, shipments of touch-enabled devices are projected to grow from a relatively paltry 200,000 units in 2006 to over 21 million by 2012. Recently, HP introduced a new printer with a color touch interface; soon, touch will be everywhere…or perhaps it's everywhere already, and developers have simply not recognized the growth beyond the iPhone and iPod touch.

Implementing Touch

So how do you implement a touch-enabled interface? That is usually the first question developers ask. Touch is fundamentally a hardware-dependent technology that software drives and interprets. While the hardware support for touch has undergone continuous innovation, this article focuses on the software side of implementing touch.



Before diving into the software aspect, note that one of the biggest changes touch brings is the ability for users to interact with a device through gestures. A gesture is a recognizable sequence of movements. Users can make these movements with many different types of input: fingers, hands, a stylus or pen, and so on. Gestures aren't limited to touch-enabled devices, although currently users typically execute gestures via a touch interface. Many people are already familiar with mouse-driven gestures such as those in Opera, where holding the right mouse button and moving the mouse left or right executes back and forward navigation.
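To illustrate the idea, here is a small, self-contained C++ sketch (not from the original article) that classifies a pointer stroke as a "back" or "forward" gesture based on its net horizontal movement, in the spirit of Opera's mouse gestures. The function name and the distance threshold are purely illustrative.

#include <cstdio>
#include <cstdlib>

// Hypothetical gesture classifier: maps a pointer stroke to a command
// based on its net horizontal movement, in the spirit of Opera's
// right-button mouse gestures for back/forward navigation.
enum class Gesture { None, Back, Forward };

Gesture ClassifyHorizontalStroke(int startX, int startY, int endX, int endY,
                                 int minDistance = 50)
{
    int dx = endX - startX;
    int dy = endY - startY;

    // Require a mostly horizontal movement of at least minDistance pixels.
    if (std::abs(dx) < minDistance || std::abs(dx) < 2 * std::abs(dy))
        return Gesture::None;

    return (dx < 0) ? Gesture::Back : Gesture::Forward;
}

int main()
{
    // A stroke that moves 120 pixels to the left is recognized as "back".
    Gesture g = ClassifyHorizontalStroke(300, 200, 180, 210);
    std::printf("%s\n", g == Gesture::Back ? "Back" : "Other");
    return 0;
}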

The general public is probably most aware of gestures as implemented by the Nintendo Wii, where users move a wireless remote controller in 3D space to interact with games. That technology is advancing as well. Recently, Microsoft previewed a new project named "Natal," which uses a single-point camera to capture movements, analyzes them for gestures, and can then execute commands associated with those gestures. Unlike the Wii remote, Natal captures whole-body movements (or even movements from multiple bodies). In contrast, the Wii captures only the remote controller's movement, angle, and velocity. Each approach has pluses and minuses in regard to game interaction.

With that general terminology in place, the rest of this article discusses more specific considerations for software implementations that make touch and multi-touch possible today.

Touch in Windows 7

Focusing on the new Windows 7 release, Microsoft has drawn a line in the sand by stating that Windows 7 applications should be designed with gestures and touch in mind. From the MSDN Touch page: "All Microsoft® Windows® applications should have a great touch experience. And doing so is easier than you think."

This means that Microsoft is betting that customers will quickly come to expect their hardware and software solutions to support touch. Companies like mine (Embarcadero) that support the Microsoft platform have already invested time in making it easy and fun to build these types of applications. To build intuitive touch interfaces, however, you need to know what Microsoft has done in the OS to expose touch functionality.

Windows 7 exposes gesturing at a relatively low level that supports input types other than keyboard and mouse. For example, the new RegisterTouchWindow API lets application designers enable touch on any window; in other words, one window may be touch-enabled while the next may not be. When a window is not touch-enabled, it still supports standard mouse gestures. However, when a window is registered as a touchable window, its interactions go through the OS's touch interfaces. Having this infrastructure in place allows third-party providers to wrap the touch functionality and make it simpler to implement.
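To make that concrete, here is a minimal Win32 sketch (not from the original article) that registers a single window for raw touch input and reads the touch points from WM_TOUCH messages. The window class name and the empty handling branches are placeholders; only RegisterTouchWindow, GetTouchInputInfo, CloseTouchInputHandle, and UnregisterTouchWindow are actual Windows 7 touch APIs.

#define _WIN32_WINNT 0x0601   // target Windows 7 so the touch APIs are declared
#include <windows.h>
#include <vector>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
        // Opt this window (and only this window) into WM_TOUCH messages.
        // Windows that are not registered keep receiving ordinary mouse input.
        RegisterTouchWindow(hwnd, 0);
        return 0;

    case WM_TOUCH:
    {
        UINT count = LOWORD(wParam);                 // number of touch points
        std::vector<TOUCHINPUT> inputs(count);
        if (GetTouchInputInfo(reinterpret_cast<HTOUCHINPUT>(lParam),
                              count, inputs.data(), sizeof(TOUCHINPUT)))
        {
            for (const TOUCHINPUT& ti : inputs)
            {
                // Touch coordinates arrive in hundredths of a screen pixel.
                POINT pt = { ti.x / 100, ti.y / 100 };
                ScreenToClient(hwnd, &pt);
                if (ti.dwFlags & TOUCHEVENTF_DOWN)
                {
                    // A finger (or stylus) landed at pt.x, pt.y -- react here.
                }
            }
        }
        CloseTouchInputHandle(reinterpret_cast<HTOUCHINPUT>(lParam));
        return 0;
    }

    case WM_DESTROY:
        UnregisterTouchWindow(hwnd);
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nCmdShow)
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = L"TouchDemoWindow";           // placeholder class name
    RegisterClassW(&wc);

    HWND hwnd = CreateWindowExW(0, L"TouchDemoWindow", L"Touch-enabled window",
                                WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                                640, 480, nullptr, nullptr, hInst, nullptr);
    ShowWindow(hwnd, nCmdShow);

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0))
    {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}

Windows that are never passed to RegisterTouchWindow continue to receive ordinary mouse messages, which is why touch can be enabled selectively, one window at a time.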


