Human Computer Interaction

Lesson#24

FRAMEWORK AND REFINEMENTS

The aim of this lecture is to introduce you to the study of Human Computer Interaction,
so that after studying this you will be able to:

. Discuss how to build an interaction framework

. Discuss how to refine the form and behaviour

24.1 Defining the interaction framework

The Requirements Definition phase sets the stage for the core of the design effort:
defining the interaction framework of the product. The interaction framework defines
not only the skeleton of the interaction — its structure — but also the flow and
behavior of the product. The following six steps describe the process of defining the
interaction framework:
1. Defining form factor and input methods
2. Defining views
3. Defining functional and data elements
4. Determining functional groups and hierarchy
5. Sketching the interaction framework
6. Constructing key path scenarios
Like previous processes, this is not a linear effort, but requires iteration. The steps are
described in more detail in the following sections.

STEP 1: DEFINING FORM FACTOR AND INPUT METHODS

The first step in creating a framework is defining the form factor of the product you'll
be designing. Is it a Web application that will be viewed on a high-resolution
computer screen? Is it a phone that must be small, light, low-resolution, and visible in
the dark as well as in bright sunlight? Is it a kiosk that must be rugged to
withstand a public environment with thousands of distracted, novice users? What are
the constraints that each of these imply for any design? Answering these questions
sets the stage for all subsequent design efforts.
After you have defined this basic posture of the product, you should then determine
the valid input methods for the system: Keyboard, mouse, keypad, thumb-board,
touch screen, voice, game controller, remote control, and many other possibilities
exist. Which combination is appropriate for your primary and secondary personas?
What is the primary input method for the product?
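
Purely as an illustrative sketch — the type and value names below are hypothetical, not
taken from any particular product — the decisions made in this step can be written down
so that later design steps can refer back to them:

    // Hypothetical sketch: recording form-factor and input-method decisions.
    type InputMethod =
      | "keyboard" | "mouse" | "keypad" | "thumb-board"
      | "touch screen" | "voice" | "game controller" | "remote control";

    interface ProductPlatform {
      formFactor: string;            // e.g. "handheld phone", "public kiosk", "web application"
      displayConstraints: string[];  // e.g. "low resolution", "readable in bright sunlight"
      primaryInput: InputMethod;     // the main input method for the primary persona
      secondaryInputs: InputMethod[];
    }

    // Example: the small, low-resolution phone described above.
    const phone: ProductPlatform = {
      formFactor: "handheld phone",
      displayConstraints: ["small", "low resolution", "visible in the dark and in sunlight"],
      primaryInput: "keypad",
      secondaryInputs: ["voice"],
    };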

STEP 2: DEFINING VIEWS

The next step, after basic form factor and input methods are defined, is to consider
which primary screens or states the product can be in. Initial context scenarios give
you a feel for what these might be: They may change or rearrange somewhat as the
design evolves (particularly in step 4), but it is often helpful to put an initial stake in
the ground to serve as a means for organizing your thoughts. If you know that a user
has several end goals and needs that don't closely relate to each other in terms of data
overlap, it might be reasonable to define separate views to address them. On the other
hand, if you see a cluster of related needs (for example, to make an appointment, you
need to see a calendar and possibly contacts), you might consider defining a view that
incorporates all these together, assuming the form factor allows it.
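
As a rough sketch only (the view names below are invented to echo the appointment
example), the candidate views can be listed together with the needs and data that each
one brings together:

    // Illustrative only: candidate top-level views and the data each brings together.
    interface ViewDefinition {
      name: string;
      addressesNeeds: string[];  // end goals this view supports
      dataShown: string[];       // data elements visible together in this view
    }

    const candidateViews: ViewDefinition[] = [
      {
        name: "Schedule",
        addressesNeeds: ["make an appointment"],
        dataShown: ["calendar", "contacts"],  // closely related needs share one view
      },
      {
        name: "Messages",
        addressesNeeds: ["read and answer mail"],
        dataShown: ["message list", "message body"],
      },
    ];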

STEP 3: DEFINING FUNCTIONAL AND DATA ELEMENTS

Functional and data elements are the visible representations of functions and data in
the interface. They are the concrete manifestations of the functional and data needs
identified during the Requirements Definition phase. Where those needs were
purposely described in terms of real-world objects and actions, functional and data
elements are described in the language of user interface representations:

. Panes, frames, and other containers on screen

. Groupings of on-screen and physical controls

. Individual on-screen controls

. Individual buttons, knobs, and other physical affordances on a device

. Data objects (icons, listed items, images, graphs) and associated attributes
In early framework iterations, containers are the most important to specify; later, as
you focus on the design of individual containers, you will get to more detailed
interface elements.
Many persona needs will spawn multiple interface elements to meet those needs. For
example, Salman needs to be able to telephone his contacts. Functional elements to
meet that need include:

. Voice activation (voice data associated with contact)

. Assignable quick-dial buttons

. Selecting from a list of contacts

. Selecting the name from an e-mail header, appointment, or memo

. Auto-assignment of a call button in proper context (appointment coming up)
Multiple vectors are often a good idea, but sometimes not all possible vectors will be
useful to the persona. Use persona goals, design principles, and patterns, as well as
business and technical constraints to winnow your list of elements for meeting
particular needs. You will also need to determine data elements. Some of Salman's
data elements might include appointments, memos, to-do items, and messages.
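
As a small illustrative sketch (the structure is invented; the content simply restates the
examples above), one of Salman's needs can be mapped to its candidate functional elements
alongside his data elements:

    // Sketch only: one persona need mapped to candidate functional elements.
    interface NeedMapping {
      need: string;
      functionalElements: string[];  // candidate representations, to be winnowed later
    }

    const telephoneContacts: NeedMapping = {
      need: "telephone his contacts",
      functionalElements: [
        "voice activation (voice data associated with contact)",
        "assignable quick-dial buttons",
        "selection from a list of contacts",
        "selecting the name from an e-mail header, appointment, or memo",
        "auto-assignment of a call button in proper context (appointment coming up)",
      ],
    };

    // Some of Salman's data elements, as listed above.
    const dataElements: string[] = ["appointments", "memos", "to-do items", "messages"];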

STEP 4: DETERMINING FUNCTIONAL GROUPS AND HIERARCHY

After you have a good list of top-level functional and data elements, you can begin to
group them into functional units and determine their hierarchy (Shneiderman, 1998).
Because these elements facilitate specific tasks, the idea is to group elements to best
facilitate the persona's flow both within a task and between related tasks. Some issues to
consider include:

. Which elements need a large amount of real estate and which do not?

. Which elements are containers for other elements?

. How should containers be arranged to optimize flow?

. Which elements are used together and which aren't?

. In what sequence will a set of related elements be used?

. What interaction patterns and principles apply?

. How do the personas' mental models affect organization? (Goodwin, 2002)
The most important initial step is determining the top-level container elements for the
interface, and how they are best arranged given the form factor and input methods that
the product requires. Containers for objects that must be compared or used together
should be adjacent to each other. Objects representing steps in a process should, in
general, be adjacent and ordered sequentially. Use of interaction design principles and
patterns is extremely helpful at this juncture.
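
A top-level arrangement of containers can be sketched as a simple tree. The sketch below
is hypothetical — the container names are invented, and the ordering of siblings stands in
for on-screen adjacency:

    // Illustrative container hierarchy: siblings that are used together are adjacent,
    // and containers for sequential steps appear in task order.
    interface Container {
      name: string;
      children?: Container[];
    }

    const topLevelContainers: Container[] = [
      {
        name: "Schedule pane",
        children: [{ name: "Calendar" }, { name: "Contacts" }],
      },
      {
        name: "Messages pane",
        children: [{ name: "Message list" }, { name: "Message body" }],
      },
      { name: "Global navigation" },
    ];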

STEP 5: SKETCHING THE INTERACTION FRAMEWORK

You may want to sketch different ways of fitting top-level containers together in the
interface. Sketching the framework is an iterative process that is best performed with
a small, collaborative group of one or two interaction designers and a visual or
industrial designer. This visualization of the interface should be extremely simple at
first: boxes representing each functional group and/or container with names and
descriptions of the relationships between the different areas (see Figure).
Be sure to look at the entire top-level framework first; don't let yourself get distracted
by the details of a particular area of the interface. There will be plenty of time to
explore the design at the widget level and, by going there too soon, you risk a lack of
coherence in the design later.

STEP 6: CONSTRUCTING KEY PATH SCENARIOS

Key path scenarios result from exploring details hinted at, but not addressed, in the
context scenarios. Key path scenarios describe at the task level the primary actions
and pathways through the interface that the persona takes with the greatest frequency,
often on a daily basis. In an e-mail application, for example, viewing and composing
mail are key path activities; configuring a new mail server is not.
Key path scenarios generally require the greatest interaction support. New users must
master key path interactions and functions quickly, so they need to be supported by
built-in pedagogy. However, because these functions are used frequently, users do not
remain dependent on that pedagogy for long: They will rapidly demand shortcuts. In
addition, as users become very experienced, they will want to customize daily use
interactions so that they conform to their individual work styles and preferences.
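
Before storyboarding, a key path scenario can be jotted down as an ordered list of steps.
The sketch below is hypothetical and simply echoes the e-mail example, naming the view
and the action for each step:

    // Sketch only: a key path scenario as an ordered list of task-level steps.
    interface ScenarioStep {
      view: string;    // which view or screen the step takes place in
      action: string;  // what the persona does there
    }

    const readAndReplyToMail: ScenarioStep[] = [
      { view: "Messages", action: "open the inbox and scan the new messages" },
      { view: "Messages", action: "select a message and read it" },
      { view: "Compose",  action: "reply and type the response" },
      { view: "Compose",  action: "send the reply and return to the inbox" },
    ];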

SCENARIOS AND STORYBOARDING

Unlike the goal-oriented context scenarios, key path scenarios are more task-oriented,
focusing on task details broadly described and hinted at in the context scenarios
(Kuutti, 1995). This doesn't mean that goals are ignored — goals and persona needs
are the constant measuring stick throughout the design process, used to trim
unnecessary tasks and streamline necessary ones. However, key path scenarios must
describe in exacting detail the precise behavior of each major interaction and provide
a walkthrough (Newman & Lamming, 1995) of each major pathway.
Typically, key path scenarios begin at a whiteboard and reach a reasonable level of
detail. At some point, depending on the complexity and density of the interface, it
becomes useful to graduate to computer-based tools. Many experts are fond of
Microsoft PowerPoint as a tool for aiding in the storyboarding of key path scenarios.
Storyboarding is a technique borrowed from filmmaking and cartooning. Each step in
an interaction, whether between the user and the system, multiple users, or some
combination thereof (Holtzblatt & Beyer, 1998), can be portrayed on a slide, and
clicking through them provides a reality check for the coherence of the interaction
(see Figure). PowerPoint is sufficiently fast and low-resolution to allow rapid drawing
and iterating without the temptation to add excessive detail.
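
The same click-through effect that slides provide can be mimicked in a few lines. The
sketch below is purely illustrative: each frame pairs a caption (what the sketch on that
slide would show) with the scenario step it depicts, and each "click" advances one frame:

    // Hypothetical stand-in for a slide-based storyboard.
    interface Frame {
      stepNumber: number;
      caption: string;   // what the sketch on this slide would show
    }

    const storyboard: Frame[] = [
      { stepNumber: 1, caption: "Inbox view with new messages highlighted" },
      { stepNumber: 2, caption: "Selected message opened for reading" },
      { stepNumber: 3, caption: "Reply composer with the quoted message" },
    ];

    let current = 0;
    function clickThrough(): Frame | undefined {
      // Advance one frame per "click", as when paging through slides.
      const frame = storyboard[current];
      if (frame !== undefined) current += 1;
      return frame;
    }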

PRETENDING THE SYSTEM IS HUMAN

Just as pretending it's magic is a powerful tool for constructing concept-level context
scenarios, pretending the system is human is a powerful tool at the interaction level
appropriate to key path scenarios. The principle is simple: interactions with a digital
system should be similar in tone and helpfulness to interactions with a polite,
considerate human (Cooper, 1999). As you construct your interactions, you should ask
yourself: Is the primary persona being treated humanely by the product? What would a
thoughtful, considerate interaction look like? In what ways can the software offer helpful
information without getting in the way? How can it minimize the persona's effort in
reaching his goals? What would a helpful human do?

PRINCIPLES AND PATTERNS

Critical to the translation of key path scenarios to storyboards (as well as the grouping
of elements in step 4) is the application of general interaction principles and specific
interaction patterns. These tools leverage years of interaction design knowledge —
not to take advantage of such knowledge would be tantamount to re-inventing the
wheel. Key path scenarios provide an inherently top-down approach to interaction
design, iterating through successively more-detailed design structures from main
screens down to tiny subpanes or dialogs. Principles and patterns add a bottom-up
approach to balance the process. Principles and patterns can be used to organize
elements at all levels of the design.

24.2 Prototyping

It is often said that users can't tell you what they want, but when they see something
and get to use it, they soon know what they don't want. Having collected information
about work practices and views about what a system should and shouldn't do, we then
need to try out our ideas by building prototypes and iterating through several versions.
And the more iterations, the better the final product will be.

What is a prototype?

When you hear the term prototype, you may imagine something like a scale model of
a building or a bridge, or maybe a piece of software that crashes every few minutes.
But a prototype can also be a paper-based outline of a screen or set of screens, an
electronic "picture," a video simulation of a task, a three-dimensional paper and
cardboard mockup of a whole workstation, or a simple stack of hyper-linked screen
shots, among other things.
In fact, a prototype can be anything from a paper-based storyboard through to a
complex piece of software, and from a cardboard mockup to a molded or pressed
piece of metal. A prototype allows stakeholders to interact with an envisioned
product, to gain some experience of using it in a realistic setting, and to explore
imagined uses.
For example, when the idea for the PalmPilot was being developed, Jeff Hawkins
(founder of the company) carved up a piece of wood about the size and shape of the
device he had imagined. He used to carry this piece of wood around with him and
pretend to enter information into it, just to see what it would be like to own such a
device (Bergman and Haitani, 2000). This is an example of a very simple (some might
even say bizarre) prototype, but it served its purpose of simulating scenarios of use.
Ehn and Kyng (1991) report on the use of a cardboard box with the label "Desktop
Laser Printer" as a mockup. It did not matter that, in their setup, the printer was not
real. The important point was that the intended users, journalists and typographers,
could experience and envision what it would be like to have one of these machines on
their desks. This may seem a little extreme, but in 1982 when this was done, desktop
laser printers were expensive items of equipment and were not a common sight
around the office.
So a prototype is a limited representation of a design that allows users to interact with
it and to explore its suitability.

Why prototype?

Prototypes are a useful aid when discussing ideas with stakeholders; they are a
communication device among team members, and are an effective way to test out
ideas for yourself. The activity of building prototypes encourages reflection in design,
as described by Schon (1983) and as recognized by designers from many disciplines
as an important aspect of the design process. Liddle (1996), talking about software
design, recommends that prototyping should always precede any writing of code.

Prototypes answer questions and support designers in choosing between alternatives.
Hence, they serve a variety of purposes: for example, to test out the technical
feasibility of an idea, to clarify some vague requirements, to do some user testing and
evaluation, or to check that a certain design direction is compatible with the rest of the
system development. Which of these is your purpose will influence the kind of
prototype you build. So, for example, if you are trying to clarify how users might
perform a set of tasks and whether your proposed device would support them in this,
you might produce a paper-based mockup.

Low-fidelity prototyping

A low-fidelity prototype is one that does not look very much like the final product.
For example, it uses materials that are very different from the intended final version,
such as paper and cardboard rather than electronic screens and metal. Low-fidelity
prototypes are useful because they tend to be simple, cheap, and quick to produce.
This also means that they are simple, cheap, and quick to modify so they support the
exploration of alternative designs and ideas. This is particularly important in early
stages of development, during conceptual design for example, because prototypes that
are used for exploring ideas should be flexible and encourage rather than discourage
exploration and modification. Low-fidelity prototypes are never intended to be kept
and integrated into the final product. They are for exploration only.

Storyboarding

Storyboarding is one example of low-fidelity prototyping that is often used in
conjunction with scenarios. A storyboard consists of a series of sketches showing how
a user might progress through a task using the device being developed. It can be a
series of sketched screens for a GUI-based software system, or a series of scene
sketches showing how a user can perform a task using the device. When used in
conjunction with a scenario, the storyboard brings more detail to the written scenario
and offers stakeholders a chance to role-play with the prototype, interacting with it by
stepping through the scenario.

Sketching

Low-fidelity prototyping often relies on sketching, and many people find it difficult to
engage in this activity because they are inhibited about the quality of their drawing.
Verplank (1989) suggests that you can teach yourself to get over this inhibition. He
suggests that you should devise your own symbols and icons for elements you might
want to sketch, and practice using them. They don't have to be anything more than
simple boxes, stick figures, and stars. Elements you might require in a storyboard
sketch, for example, include "things" such as people, parts of a computer, desks,
books, etc., and actions such as give, find, transfer, and write. If you are sketching an
interface design, then you might need to draw various icons, dialog boxes, and so on.

High-fidelity prototyping

High-fidelity prototyping uses materials that you would expect to be in the final
product and produces a prototype that looks much more like the final thing. For
example, a prototype of a software system developed in Visual Basic is higher fidelity
than a paper-based mockup; a molded piece of plastic with a dummy keyboard is a
higher-fidelity prototype of the PalmPilot than the lump of wood.

If you are to build a prototype in software, then clearly you need a software tool to
support this. Common prototyping tools include Macromedia Director, Visual Basic,
and Smalltalk. These are also full-fledged development environments, so they are
powerful tools, but building prototypes using them can also be very straightforward.
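
The tools named above date the lecture; as a very rough modern stand-in (an illustrative
sketch only, not the lecture's example), a few lines of TypeScript running in a browser
already give a clickable screen element with simulated behaviour:

    // Illustrative only: a minimal clickable prototype element with a simulated action.
    const sendButton = document.createElement("button");
    sendButton.textContent = "Send message";
    sendButton.onclick = () => {
      // No real back end behind the prototype; the response is simulated.
      alert("Message sent (simulated)");
    };
    document.body.appendChild(sendButton);
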
Marc Rettig (1994) argues that more projects should use low-fidelity prototyping
because of the inherent problems with high-fidelity prototyping. He identifies these
problems as:

. They take too long to build.

. Reviewers and testers tend to comment on superficial aspects rather than on content.

. Developers are reluctant to change something they have crafted for hours.

. A software prototype can set expectations too high.
