Over the past six years, whether we like it or not, we’ve all been retrained by Apple on how to use and design touchscreens. The importance of this type of mass “training” is often underestimated.
In learning how to use a new product, people either leverage their familiarity with similar products or learn from scratch. It is this type of familiarity that sometimes gets confused with products being “intuitive,” and one of the most important skills of being an interface designer is leveraging familiarity.
Touchscreens have been around for a long time. At the very first TED talk in 1984, Nicholas Negroponte of the MIT Media Lab predicted that we would all be using touchscreens in the future. He was right. The AV industry started designing control systems with touchscreens about 10 years later. At the time they were exotic, expensive devices with cloudy, resistive touch overlays and monochromatic screens. Despite the fact that we were routinely designing systems with touchscreens, none of us had the design skills we needed. We were all good video and audio systems designers, but nobody I worked with had learned anything formally about interface design. We were making it up as we went along.
Then we discovered The Design of Everyday Things. Author Donald Norman didn’t teach how to design interfaces, but he did write about a more basic skill—designing products for humans. The Design of Everyday Things (or DOET as it is known in the design community) was first published in 1988. Just a few weeks ago, 25 years after the first publication, Norman published an update to DOET with some interesting new content.
Touchscreen designs continued to evolve between the early ’90s and the early 2000s until an “evolutionary mutation” occurred in 2007—Apple introduced the iPhone. Apple had the focus, the will, and the money to solve many of the preexisting problems with touchscreens, adding multitouch and improving screen quality at a reasonable cost. The newly released iOS 7 is another step forward in the world of interface design, and there are interesting parallels between the new iOS and the new edition of DOET.
In the original DOET, Norman describes many psychological concepts that are the fundamental principles of interaction. One of these, an “affordance,” is the quality of a product that allows someone to perform an action. A chair affords sitting because it has a flat surface about 15 inches off the ground. As interface designers, we twisted this principle to indicate on a touchscreen the difference between a button and a label. At the time it wasn’t uncommon for people to confuse the two because they didn’t know where to press. We determined that if we made a graphic object mimic the look of an actual button, it would “virtually” afford pressing. This is now referred to as digital skeuomorphic GUI design. Apple did the same in the first versions of iOS.
Norman has never agreed that a skeuomorph is a real affordance (because it is not). In the latest edition of DOET he introduces the new concept of a signifier, primarily to address touchscreen design. He now makes a clear distinction between the touchscreen, which truly affords touching (because it is physically designed to do so), and a signifier, which indicates to the user where to touch. This is a subtle distinction, but it was necessary to properly clarify the principles of touchscreen interface design.
Now it appears that Apple’s new iOS 7 has also made the transition from “virtual” affordances to what you might call “pure signifiers.” Newly appointed interface designer Jonathan Ive has taken off the training wheels, as many reviewers have noted, and now indicates pressable areas with minimal thin circular outlines or, in some cases, just text. Since we have all been trained by the previous iOS versions’ “virtual” affordances, Ive can now do what he does best: reduce and simplify. Arguably, Apple has done the most to teach the world how to use a touchscreen, and we would be foolish not to leverage that training in our own designs. Now is the time for everyone to take off the training wheels and create simple, clean, and, most importantly, familiar touchscreen designs.
Paul Chavez (firstname.lastname@example.org) is the director of systems applications for Harman Pro Group. He is a usability evangelist and a futurologist. Chavez has designed a variety of audiovisual systems ranging from themed attractions to super yachts. He has also taught and written on the topics of interaction design, audiovisual design, and networking.
The Seven Stages of Action
One of the most useful concepts in The Design of Everyday Things is the Seven Stages of Action. This is a basic checklist describing the stages a person goes through when attempting to use a product to achieve a goal. Let’s look at these stages as someone tries to turn on an AV system.
1. What do I want to accomplish? “I want to turn on the system.”
2. What are the alternatives for action sequences? “What controls are available? I see a small screen on the table.”
3. What action can I do now? “This screen looks like a touchscreen.”
4. How do I do it? “Maybe I should press the rectangle on the screen that is labeled ‘ON’.”
5. What happened? “A logo appeared on the display, and I heard a fan turn on.”
6. What does it mean? “It appears that the system turned on.”
7. Is this okay? Have I accomplished my goal? “It seems that the system is now on and that I can continue to my next goal of routing my computer to the screen.”
All of this sounds very fundamental, but as you create an interface, you should walk through this series of actions in your mind, playing the role of someone with no knowledge of the system. This kind of empathetic point of view will help you predict whether your design is simple and communicates the system’s capabilities and possible actions to the user. After this predictive design stage, you can test the interface with your customer and find out whether your point of view is correctly aligned.
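If it helps to make the walkthrough concrete, the seven stages can be treated as a simple review checklist: for each stage, record the answer your imagined first-time user would give, and treat an unanswerable stage as a likely breakdown in the design. The sketch below is purely illustrative (the function name and structure are my own, not Norman’s or the article’s):

```python
# Norman's Seven Stages of Action as a design-review checklist.
# The stage wording follows the numbered list above; the walkthrough()
# helper and its behavior are a hypothetical illustration.

SEVEN_STAGES = [
    "What do I want to accomplish?",
    "What are the alternatives for action sequences?",
    "What action can I do now?",
    "How do I do it?",
    "What happened?",
    "What does it mean?",
    "Have I accomplished my goal?",
]

def walkthrough(answers):
    """Pair each stage with the reviewer's answer, in order.

    `answers` is a list of seven strings (use None or "" where the
    imagined user would get stuck). An unanswered stage raises,
    flagging a likely breakdown in the design.
    """
    if len(answers) != len(SEVEN_STAGES):
        raise ValueError("provide exactly one answer per stage")
    report = []
    for stage, answer in zip(SEVEN_STAGES, answers):
        if not answer:
            raise ValueError(f"design breakdown at stage: {stage}")
        report.append((stage, answer))
    return report

# Walking through the 'turn on the AV system' example from above:
report = walkthrough([
    "I want to turn on the system.",
    "I see a small screen on the table.",
    "This screen looks like a touchscreen.",
    "Press the rectangle labeled 'ON'.",
    "A logo appeared and a fan turned on.",
    "It appears that the system turned on.",
    "The system is on; next, route my computer to the screen.",
])
```

Running this with a gap in the answers (for example, if the user would have no idea what happened after pressing ON) raises an error at that stage, which is exactly the point in the design that needs rework.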