
Top Five Touch UI-Related Design Guidelines

As the old saying goes, “The world is changing before your very eyes.” Actually, right now it should really be “the world is changing before your very fingers.” The touch paradigm for user interfaces has been inching towards prominence for a while now. With the introduction of the Apple iPhone, touch has not only changed the way people interact with mobile devices, it’s changing nearly everything, even the way we watch the news.

While touch technology has been around in one form or another for some 30 years, it has only recently reached the mass market—but in yet another new form. The new growth area is multi-touch, a technology invented by Nimish Mehta (University of Toronto) back in 1982 that has progressed steadily in capability. According to USA Today, the number of touch-enabled devices is projected to grow from a relatively paltry 200,000 units in 2006 to over 21 million by the year 2012. Recently, HP introduced a new printer with a color touch interface; soon, touch will be everywhere…or perhaps it’s everywhere already, and developers have simply not recognized the growth beyond iPhone and iPod Touch devices.

Implementing Touch

So how do you implement a touch-enabled interface? That is usually the first question developers ask. Touch is fundamentally a hardware-dependent technology that software manipulates. While the hardware support for touch has undergone continuous innovation, this article focuses on the software aspects of implementing touch.

Before diving into the software aspect, it’s worth noting that one of the biggest changes touch provides is the ability for users to interact with a touch-enabled device through gestures. Gestures are a recognizable sequence of movements. Users can make these movements with many different types of input devices, including fingers, hands, a stylus or pen interface, and so on. Gestures aren’t limited to touch-enabled devices, although currently users typically execute gestures via a touch interface. Many people are already familiar with mouse-driven gestures such as those in Opera, where holding the right mouse button and moving the mouse either right or left executes back and forward operations.

The general public is probably most aware of gestures as implemented by the Nintendo Wii, where users move a wireless remote controller in 3D space to interact with games. That technology is advancing as well. Recently, Microsoft previewed a new project named “Natal,” which uses a single-point camera to capture movements, analyzes them for gestures, and can then execute commands associated with those gestures. Unlike the Wii remote, Natal captures whole-body movements (or even movements from multiple bodies). In contrast, the Wii captures only the remote controller’s movement, angle, and velocity. Each approach has pluses and minuses in regard to game interaction.

With that general terminology in place, the rest of this article discusses more specific considerations for software implementations that make touch and multi-touch possible today.

Touch in Windows 7

Focusing on the new Windows 7 release, Microsoft has drawn a line in the sand by stating that Windows 7 applications should be designed with gestures and touch in mind. From the MSDN Touch page: “All Microsoft® Windows® applications should have a great touch experience. And doing so is easier than you think.”

This means Microsoft is betting that customers will quickly come to expect their hardware and software to support touch. Companies like mine (Embarcadero) that support the Microsoft platform have already invested time in making it easy and fun to build these types of applications. To build intuitive touch interfaces, however, you need to know what Microsoft has done to the OS to expose touch functionality.

Windows 7 exposes gesturing at a relatively low level that supports input types beyond the keyboard and mouse. For example, the new RegisterTouchWindow API lets application designers enable touch for any window; in other words, one window may be touch-enabled while the next may not be. A window that is not touch-enabled still supports standard mouse gestures; however, when a window is registered as a touchable window, interactions go through the OS’s touch interfaces. Having this infrastructure in place allows third-party providers to wrap the touch functionality and make it simpler to implement.
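To make that concrete, here is a minimal sketch (Win32/C++ against the Windows 7 SDK, with error handling elided) of a bare window that opts into touch with RegisterTouchWindow and reads raw touch points from WM_TOUCH. This is an illustrative skeleton, not production code:

#define _WIN32_WINNT 0x0601   // target Windows 7 so the touch APIs are declared
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_TOUCH:
    {
        UINT count = LOWORD(wParam);   // number of touch points in this message
        TOUCHINPUT inputs[16] = {};
        if (count <= 16 &&
            GetTouchInputInfo((HTOUCHINPUT)lParam, count, inputs, sizeof(TOUCHINPUT)))
        {
            for (UINT i = 0; i < count; ++i)
            {
                // Touch coordinates arrive in hundredths of a physical pixel.
                long x = inputs[i].x / 100;
                long y = inputs[i].y / 100;
                if (inputs[i].dwFlags & TOUCHEVENTF_DOWN) { /* finger down at (x, y) */ }
                if (inputs[i].dwFlags & TOUCHEVENTF_UP)   { /* finger lifted */ }
            }
            CloseTouchInputHandle((HTOUCHINPUT)lParam);   // release the input handle
        }
        return 0;
    }
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = TEXT("TouchDemo");
    RegisterClass(&wc);

    HWND hwnd = CreateWindow(TEXT("TouchDemo"), TEXT("Touch Demo"),
                             WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                             640, 480, NULL, NULL, hInst, NULL);

    // Opt this particular window into raw touch input. Windows that skip this
    // call keep receiving ordinary mouse messages (and WM_GESTURE) instead.
    RegisterTouchWindow(hwnd, 0);

    ShowWindow(hwnd, nShow);
    MSG m;
    while (GetMessage(&m, NULL, 0, 0)) { TranslateMessage(&m); DispatchMessage(&m); }
    return 0;
}

Note that registration is per-window: a registered window receives low-level WM_TOUCH messages, while unregistered windows receive the higher-level WM_GESTURE messages by default.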

Enabling Touch in Applications

It’s an established fact that hardware is becoming more and more available every single day, and also that operating systems such as Windows (7, Vista, Tablet), Linux, Mac OSX, iPhone, game machines, and others, are adding support for touch-sensitive hardware and a touch interaction model. So, for developers, the next big question is: What do you have to take into consideration when designing for gestures, touch, and multi-touch?

Starting with gestures, it’s important to implement gestures that are natural and at least somewhat logical. For example, creating a gesture that starts at some point and goes to the right or moves forward would make an excellent gesture for moving to the next item in a database. Similarly, using a leftward gesture to navigate backward by one record in the database would make sense. These intuitive types of operations are what I mean by “natural.” If instead you were to create a circle gesture that caused the application to move forward one record in a database, users would be confused, because making a circle to move forward is counter-intuitive. Gestures need to be natural for people to adopt them.
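As a sketch of how that left/right navigation might look on Windows 7, the fragment below belongs in the window procedure of a window that is not registered for raw touch (and therefore receives WM_GESTURE by default); records, NextRecord, and PreviousRecord are hypothetical application objects:

case WM_GESTURE:
{
    GESTUREINFO gi = {};
    gi.cbSize = sizeof(gi);
    if (GetGestureInfo((HGESTUREINFO)lParam, &gi) && gi.dwID == GID_PAN)
    {
        static LONG startX;                      // x where the pan began
        if (gi.dwFlags & GF_BEGIN)
            startX = gi.ptsLocation.x;
        else if (gi.dwFlags & GF_END)
        {
            LONG delta = gi.ptsLocation.x - startX;
            if (delta > 50)        records.NextRecord();      // rightward swipe: forward
            else if (delta < -50)  records.PreviousRecord();  // leftward swipe: back
        }
    }
    CloseGestureInfoHandle((HGESTUREINFO)lParam);
    return 0;
}

The 50-pixel threshold is a small “forgiveness factor”: accidental drags are ignored, while a deliberate swipe in either direction triggers navigation.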

Bear in mind that you can set recognized gestures both for individual windows and for individual controls inside those windows. For example, if you have a textbox containing a person’s name, it may be logical to create a squiggly gesture that erases the contents. When the textbox is empty, it might be logical to use a focus gesture (mouse click or finger flick) in the control, which would set the focus to the control and activate a soft keyboard or equivalent. Such gestures let users enter the information with their fingers, pushing on-screen buttons as the equivalent of typing letters on a keyboard to input characters into the textbox.
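A custom gesture like that squiggle isn’t built into the OS, so you would recognize it yourself. Here is a minimal, self-contained heuristic (illustrative only, not a shipping recognizer): collect the stroke’s points while the finger is down over the textbox, then count horizontal direction reversals; several quick reversals read as a scratch-out:

#include <cstdlib>
#include <vector>

struct Point { long x, y; };

// Returns true when a stroke looks like a back-and-forth "squiggle".
bool LooksLikeScratchOut(const std::vector<Point>& stroke)
{
    int reversals = 0;
    int lastDir = 0;                        // -1 = moving left, +1 = moving right
    for (size_t i = 1; i < stroke.size(); ++i)
    {
        long dx = stroke[i].x - stroke[i - 1].x;
        if (std::labs(dx) < 5) continue;    // ignore jitter under 5 pixels
        int dir = (dx > 0) ? 1 : -1;
        if (lastDir != 0 && dir != lastDir)
            ++reversals;                    // the finger changed horizontal direction
        lastDir = dir;
    }
    return reversals >= 4;                  // four turns reads as a deliberate squiggle
}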

It’s also important to provide features that help users perform tasks through a touch interface. For example, entering text into textboxes can be a tedious and lengthy operation. For complex text-input features, developers should build text completion into their applications, letting users skip much of the tedious letter-by-letter input for expected or repeated values the application can recognize.
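On Windows, one ready-made option is the shell’s autocompletion, attached to an edit control with SHAutoComplete from shlwapi (a sketch; note that it suggests file-system paths and URLs rather than arbitrary application data):

#include <windows.h>
#include <shlwapi.h>
#pragma comment(lib, "shlwapi.lib")

void EnableCompletion(HWND hwndEdit)   // hwndEdit: an existing EDIT control
{
    // Suggest file-system paths as the user types, with the dropdown forced on,
    // so a touch user can tap a suggestion instead of typing the whole path.
    SHAutoComplete(hwndEdit, SHACF_FILESYSTEM | SHACF_AUTOSUGGEST_FORCE_ON);
}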

Even more concepts come into play when the hardware is multi-touch capable, such as panning, zooming, rotating, two-finger taps, and press-and-tap. These raise the interaction level another notch, but they also raise the design complexity level. If you have an iPhone or have looked at one, you probably know that the device can zoom when viewing a picture. With the picture displayed, place two fingers on the middle of the picture and spread them apart while touching the screen; the picture enlarges in proportion to the distance you spread your fingers. Reverse the process, starting with your fingers apart and pulling them together, and the picture shrinks.
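On Windows 7, the arithmetic behind that pinch is exposed through the GID_ZOOM gesture. As a sketch, this branch could sit alongside GID_PAN in the WM_GESTURE fragment shown earlier (imageZoom is a hypothetical piece of application state):

if (gi.dwID == GID_ZOOM)
{
    static double lastDistance = 0;
    // For GID_ZOOM, the low 32 bits of ullArguments hold the current distance
    // between the two touch points, so the ratio of successive distances gives
    // the incremental scale factor.
    double distance = (double)(DWORD)gi.ullArguments;
    if (!(gi.dwFlags & GF_BEGIN) && lastDistance > 0)
        imageZoom *= distance / lastDistance;   // >1 spreads fingers, <1 pinches
    lastDistance = distance;
}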

Again, the gestures associated with multi-touch are going to be massively important. Built-in support for common gestures such as zoom and rotate will help developers create a new generation of multi-touch-enabled applications.

Understanding the underlying operating system capabilities, how those capabilities have been exposed to developers, and how tools can help simplify the development of these applications is important. However, it’s just as important to keep solid design guidelines in mind when developing these types of applications. Here are my top five touch-related design guidelines:

  • Think BIG: Don’t try to put fifty 10-by-10 pixel icon buttons on a toolbar. Instead, use much larger buttons. Keep in mind that fingers vary in size, and that it’s difficult to touch small items.
  • Implement Undo: Because touch input errors are usually far more common than mouse or keyboard input errors, it’s very important for applications to be forgiving and to allow undo operations (see the command-pattern sketch after this list).
  • Layout and spacing: When designing for touch, provide plenty of room between selectable items. Don’t put controls close to the edge of a window, because they become very hard to hit or select.
  • Think naturally: Do the chosen gestures make sense in the context of the application? You can associate tolerances with gestures, so when you have a complex gesture, imagine executing the operation while wearing a glove. If your gesture fails often, consider setting the tolerances higher to increase the “forgiveness factor.”
  • Interaction types are not equivalent: Don’t assume that if the application works well with a mouse it will work well with touch—or that if it works well with touch it will work well with a pen, or any other combination. The key is testing, testing, testing….
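For the undo guideline above, here is a minimal command-pattern sketch (illustrative names only; the guideline doesn’t prescribe an implementation): every gesture-triggered action is recorded as a command that knows how to reverse itself, so a stray touch is one Undo() away from being harmless.

#include <memory>
#include <stack>
#include <string>

struct Command
{
    virtual ~Command() = default;
    virtual void Execute() = 0;
    virtual void Undo() = 0;
};

// Example: the "squiggly erase" from earlier, made reversible by
// remembering the text it deleted.
class EraseTextCommand : public Command
{
public:
    explicit EraseTextCommand(std::string& target) : target_(target) {}
    void Execute() override { saved_ = target_; target_.clear(); }
    void Undo() override    { target_ = saved_; }
private:
    std::string& target_;
    std::string  saved_;
};

class UndoManager
{
public:
    void Run(std::unique_ptr<Command> cmd)
    {
        cmd->Execute();
        done_.push(std::move(cmd));   // keep the command so it can be reversed
    }
    void Undo()
    {
        if (done_.empty()) return;
        done_.top()->Undo();
        done_.pop();
    }
private:
    std::stack<std::unique_ptr<Command>> done_;
};

An “undo” gesture or button then simply calls UndoManager::Undo(), restoring whatever the last accidental touch destroyed.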

As with any new paradigm, there will be issues with touch interfaces, just as there were many bad window designs when graphical windowing systems were first introduced. As developers, the most important thing to remember is to add the capabilities that help users get more out of your software. Keep the guidelines from this article in mind, and you’ll be ready not only to touch the new paradigm, but also to help lead the adoption curve of touch technology.
