Human Factors Guidelines
for Terminal Interface Design

D. Verne Morland

Communications of the ACM
July 1983 Volume 26 Number 7


Summary

This paper provides a set of guidelines for the design of software interfaces for video terminals. It describes how to optimize screen layouts, interactive data entry, and error handling, as well as many practical techniques for improving man-machine interaction. Emphasis is placed on factors relating to perceptual and cognitive psychology rather than on gross physiological concerns. Ways in which interfaces can be evaluated to improve their user friendliness are also suggested. The author summarizes many ideas that can be found in other, more comprehensive texts on the subject. These guidelines will provide practicing software designers with useful insights into some of today's principal terminal interface design considerations.

1. INTRODUCTION

The recent proliferation of video terminals has not been accompanied by a comparable increase in user-friendly terminal software. Users of state-of-the-art hardware are often disappointed to find that their productivity is significantly reduced by cumbersome data entry procedures, obscure error messages, intolerant error handling, inconsistent procedures, and confusing sequences of cluttered screens.

For many years computer users accepted these disappointments without complaint. After all, everyone knew that computers needed to have everything just so if they were to be prevented from running amok. Punch cards were not to be "bent, folded, spindled (whatever that meant), or mutilated." The year in a calendar date had to be entered first (because it is more logical); everything was required to fit neatly into its assigned space. When the progressive Ms. Jablonowski married the liberal Mr. Levinowitz, the electric company began to address her as Ms. Jablonowski-Lev, again for very logical reasons:

NAME:

    Last Name (boxes 01-15):    J A B L O N O W S K I - L E V
    First Name (boxes 16-22):   E V A

Since computers were certainly developed "by the people," why were they not developed "for the people," as well? Computers evolved from scientific superbrains to business machines to household appliances; had they started out as video games, they would certainly have been designed with the user interface in mind. Under the circumstances, however, scientists and engineers have been much more concerned with computing solutions than with presenting them.

This bias still pervades many system design and development teams today. The classic formulation of a technical problem suggests that the real challenge is to derive or compute the answer; once this is accomplished, the real problem is solved. Concerns about how this answer should be reported to the people who might actually do something with it are dismissed as inconsequential, mere "cosmetic details." More often than not, the real answer is left buried on a cluttered screen or printout, obscured by irrelevant system drivel.

SYSOUT DK:01A       DM15392.001
GOLOG 1/0 TO DM832 FIN
WIN IN @@@@@@@@@@@@@@003         <-- the real answer
W       M       Q       P
*** 9.8 *.* 1.02E-0005

.NOGOLOG $$3 DDA, %1-002, DK = N
TIME 00:00:02.5       USER NOCAP       ACC 533201A

Gradually, however, designers began to realize that their computers were not only obscuring the answers, but discouraging the questions. As the average user became bold enough to ask questions, he frequently asked the wrong questions and, thus, got the wrong answers. Computers were not doing a very good job of preventing these pitiable human errors. In fact, some systems were actually encouraging them! It is now generally recognized that a formal analysis of the man-machine interface is an essential part of the design process for any good interactive system. The intention here is to provide a convenient checklist for many important interactive features. Some of the recommendations made are based on well-established patterns of human perception and behavior; others, derived from the author's own professional experience, are more tentative.

An interface designer cannot afford to be dogmatic. His job is to provide a basis for accurate and efficient communication between a particular computer system (or program) and its expected users. Only by fully understanding the function of that system and the characteristics of those users can the designer determine if a particular interface feature is really desirable. In this special context, traditional techniques may not always be appropriate, while untested ideas can often offer unique and indispensable advantages. All prospective interface features (including those described in this paper) should take into account the environment in which the system will operate, and most should be tested in field trials before the system is released.

To be most effective, the analysis of prospective interface features must be performed early in the design phase; it cannot be an afterthought. If the initial release of the system is cumbersome or annoying to use, the users' initial negative reaction will retard the eventual acceptance of subsequent, improved versions. Even good operators (those who adapt to difficult procedures) will have difficulty unlearning their old patterns of behavior before learning new ones. Designers and developers should systematically evaluate all the basic human factor issues affecting their systems before those systems are introduced to their ultimate users.

1.1 Design Objectives

The primary interface design objective for interactive systems is not to eliminate all input errors, but to reduce their frequency and to limit their consequences. For terminal interfaces the single most important feature that contributes to this objective is simplicity. Although to some extent the complexity of an interface may be proportional to the complexity of the data processing that takes place behind it, sophisticated systems need not have sophisticated interfaces. For every subtle feature on a terminal screen there are usually several annoyed users who wonder what it is and how it works.

There are two leading causes of interface complexity. The first is the typical programmer's fascination with intricate mechanisms. There is a widespread tendency for programmers, even good programmers, to embellish their work to the point that clean, classical architectures are transformed into baroque monstrosities. This problem can be reduced by encouraging programmer self-discipline, by requiring strict adherence to design specifications, and by fostering a visible concern for simple, direct, easy-to-use systems throughout the entire development team. This can be supported by frequent reviews of the emerging interface design with some of the system's intended end-users.

The second and most prevalent source of complexity is the distribution of interface design and implementation responsibility among several people. On projects with programmer teams it may be possible to assign full responsibility for the interface to one person. When this is not possible, it can be difficult to keep everyone pulling in the same direction. Interface design guidelines must be developed, agreed upon, and written down, and the emerging pieces of the interface must be periodically collected and reviewed for clarity and consistency. Interface control and standardization should be introduced early in the system design process in order to: 1) provide a unified and consistent interface; 2) maintain a focus on operational simplicity and clarity; 3) develop a user-friendly approach to error handling; 4) assure that user interests are properly represented when design trade-offs are evaluated.

Another tactic for improving the man-machine interface is to appoint a knowledgeable third party, neither a user nor a designer, as an independent "users' advocate." Even the best user-oriented designers can benefit from a neutral observer who subjects their interface designs to rigorous tests for clarity, efficiency, and ease of use. Good terminal interface design requires that at least as much attention be paid to user behaviors, attitudes, and proclivities as to system (computer) capabilities. Too many systems have been designed from the inside out; a more balanced approach is now needed.

1.2 Role of Documentation

Some people today tend to confuse good documentation with good man-machine interface design. In reality, good documentation is a necessary but not a sufficient condition for a good system. One cannot improve a poor interface design by carefully documenting its idiosyncrasies. In spite of pages and pages of system documentation, most systems today remain misunderstood. Typical sets of documentation are bewildering. They usually include hardware and software reference manuals, user guides, maintenance manuals, error message dictionaries, and quick reference cards. The average person would be hard pressed to read even a fraction of this material, and if he were able, he would more likely be confused than enlightened.

Writing documentation is anathema to most programmers, and so many manuals are written by technical writers who have only a scant knowledge of a particular system. This is a perfect illustration of what is known in some circles as a "Boyce Compromise": a solution that cleverly enables all parties to lose. Either a programmer is forced to write about something he understands but cannot describe, or a writer is forced to write about something he would describe if he could understand it.

To combat this problem, Brinegar and Farrar [10] have suggested that the user interface description be the first document written for a system. It should take precedence even over the system design specification. Brooks [1] agrees and points out that this ordering of priorities not only gives the system a strong human factors orientation from the beginning, but, since it is the chief system designer's responsibility, it also provides what Brooks considers to be the essential unifying concepts upon which the system will be built: its "conceptual integrity." Using the user interface description as a starting point, all other documents should be prepared by a skilled team of designers, implementers, testers, and writers as needed.

End-user documents, in particular, should be clearly organized and easy to understand. This implies a writing style that is consistent with the knowledge level of the user and one that is structured according to user activities. Functions should be described in the order the user will follow to accomplish a meaningful task, and never according to how they are coded in the software or laid out on a keyboard. User manuals should focus on what the system does, not on how the system does it. In this regard, professional writers usually have an advantage over designers and programmers because writers are less likely to brag about the system's (and, by implication, the technician's) technical cleverness. Writers are more inclined to tell the user what he really needs to know.

2. DATA PREPARATION

2.1 Direct Data Entry

A fundamental axiom of communication system design is that the more frequently data is encoded and decoded, the less reliable it will be. A corollary of this rule in the area of man-machine interface states that as more people are involved in data preparation and entry, the chances that it will be input correctly decrease. Thus, a primary objective of good interactive system design is to produce a system that will communicate directly with the individuals who either generate or use the information, that is, to eliminate the middle man. For example, management information systems should either contain or be closely coupled to the data systems performing operations support so that detailed information (from which the management reports are produced) is entered only once, by the employees who are most familiar with it. This same principle is important for successful error handling. If the people entering the data also understand it, then errors can usually be corrected immediately, and the propagation of errors through the system will be sharply reduced.

2.2 Strong Correlation Between Input Forms and Screens

In addition to reducing the number of people involved in the data entry process, it is also advisable to eliminate all unnecessary data transcriptions. In some cases the volume of alphanumeric data may suggest that entry by a specially trained clerk/typist may be more efficient than direct end user data entry. In these cases, care should be taken that a very strong correlation exists between the layout of the data entry forms and the structure of the associated data entry screens. To require a terminal operator to scan and rescan an input form to locate various data items is to invite error.

2.3 Data Field Design

Individual data fields, whether on paper or on video screens, should be designed to circumvent normal human error. For example, terminal operators frequently:

  1. Omit characters in the middle of long strings (especially in numeric strings, e.g., 12345678 becomes 1234678)
  2. Transpose digits (e.g., 12345678 becomes 12435678)
  3. Improperly "close" fields (e.g., filling in gaps, extending abbreviations, etc.)
  4. Tend to use cultural conventions even when they conflict with system conventions (e.g., dates requested in the YY/MM/DD form are often entered MM/DD/YY)

2.3.1 Long String Reduction. One technique for preventing the omission of characters or the transposition of digits is the visual segregation of long data items into shorter clumps or clusters. For example, the screen format for 8-digit numbers might be changed from

OLD FORMAT:

    [ _ _ _ _ _ _ _ _ ]

to

NEW FORMAT:

    [ _ _ _ ] - [ _ _ _ ] - [ _ _ ]

This segmentation would improve the operator's ability to detect his own errors, as the examples below illustrate.

UNSEGMENTED DATA ENTRY (errors are difficult to spot)

    Actual data on entry form:      63478231
    As entered (Error Type 1):      6348231       (omitted digit)
    As entered (Error Type 2):      63748231      (transposed digits)

SEGMENTED DATA ENTRY (errors are easier to spot)

    Actual data on entry form:      634-782-31
    As entered (Error Type 1):      634-823-1     (omitted digit)
    As entered (Error Type 2):      637-482-31    (transposed digits)
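
To make the idea concrete, here is a minimal sketch, in Python, of a display-formatting helper that applies a fixed 3-3-2 segmentation; the function name and the pattern are illustrative, not part of any system described in this paper.

    def segment(digits, pattern=(3, 3, 2), separator="-"):
        """Break a long numeric string into short clusters for display."""
        if len(digits) != sum(pattern):
            raise ValueError("expected %d digits, got %d" % (sum(pattern), len(digits)))
        pieces, start = [], 0
        for size in pattern:
            pieces.append(digits[start:start + size])
            start += size
        return separator.join(pieces)

    print(segment("63478231"))   # -> 634-782-31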

2.3.2 Mnemonic Structuring. A better technique for circumventing the same problems uses mnemonic structuring rather than simple segmentation. For example, instead of using a string of eight digits for identifying a computer program, the following scheme might be adopted:

Characters    Assignment
1             System identifier
2-4           Function identifier
5-8           Component number

According to this approach, component 27 of the Operator Interface Control function in the FLAGSHIP Reporting System would be entered as

F-OIC-0027

and not as

37670428
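
A small validator can enforce such a scheme at entry time. The Python sketch below simply encodes the 1/3/4 character assignments from the table above; the function name and the regular expression are illustrative assumptions.

    import re

    # 1 system letter, 3 function letters, 4-digit component number
    PROGRAM_ID = re.compile(r"^([A-Z])-([A-Z]{3})-(\d{4})$")

    def parse_program_id(text):
        """Split a mnemonic program identifier into its three parts."""
        match = PROGRAM_ID.match(text.strip().upper())
        if match is None:
            raise ValueError("identifier must look like F-OIC-0027")
        system, function, component = match.groups()
        return system, function, int(component)

    print(parse_program_id("F-OIC-0027"))   # -> ('F', 'OIC', 27)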

3. VIDEO SCREEN LAYOUT DESIGN

3.1 Simplified Screen Formats

Simplify, simplify, simplify! This is the fundamental tenet of good screen design. As James Martin has observed, "Given an expensive screen unit capable of displaying many characters, some programmers have a tendency to want to fill it just because it is there.... [Yet,] most alphanumeric dialogues will proceed faster if the amount of information on the screen is minimized."

3.1.1 Elimination of Social Amenities. Many screens that are designed for a stable group of regular users can be easily improved by two simple changes. First, although it is important that the computer appear civil and not rude or impersonal, many phrases used only to convey social amenities can be eliminated without adverse effects. Phrases like "PLEASE", "DO YOU WISH", and "IF YOU WANT" can be systematically eliminated to improve clarity. Second, in all screens requiring user input, the fact that the system is requesting input can be made implicit rather than explicit. Thus,

"PLEASE SELECT THE CRITERIA FOR YOUR FLAGSHIP REPORT":

becomes simply

"FLAGSHIP REPORT CRITERIA:"

and

"WHAT PM, USM, OR EA ACTIONS DO YOU WISH TO PERFORM?"

becomes

"MAINTENANCE ACTIONS:"

Figure 1 illustrates the effect of these simplifications on a sample screen layout.

Although systems designed for the general public should be as explicit as possible about what the user should do at each stage of the processing, designers should think twice before allowing their progeny to talk down to the humans they are designed to serve. In particular, attempts to be chummy with pseudo-personal greetings, such as, "Hello, Douglas. How was your day?" should be avoided.

3.1.2 Screen Titles and Identification Codes. Another valuable technique for increasing screen clarity and control is the use of screen titles and identification codes. Lengthy opening questions can be reduced to simple titles. In conjunction with screen ID codes this simplifies both technical and training documentation, and facilitates the communication of screen related problems and suggestions between users and user support personnel. Also, a one-to-one correspondence between menu items and the associated subsequent screen titles enables the user to easily perceive the logic of multiscreen function sequences and to detect menu selection errors (Figure 2).

3.2 Multilevel Design

A powerful strategy for permitting a wide variety of users to efficiently use the system is the incorporation of bi- or multi-level presentations. This concept can be applied to basic screen layouts, data entry, error diagnostics, and even the command language used.

3.2.1 Accelerated Paths for Advanced Users. In order to enable very experienced users to work with a system at an accelerated rate, the screen cues and input fields could be abbreviated and concatenated to facilitate rapid scanning. These condensed screens could be invoked automatically after a user is identified as an advanced operator.

This same principle can be extended to actually altering sequences of screens by merging several "novice" screens into fewer, more compact advanced user screens (Figure 3). This approach is more difficult to implement but can be worthwhile if the distribution of user skill levels is skewed so that there are large numbers of novices and experts and relatively few average users.

FIGURE 3. An Example of Screen "Streamlining" for Advanced Users.

3.2.2 Structured Error Diagnostics. Similarly, error diagnostics could be introduced on several levels. At the first occurrence of an error, a very brief diagnostic would signal the user and give him a clue to the nature of the fault. Then, if the error is repeated or if the user fails to understand the first message, a more extensive explanation would be presented to assist the novice user in correcting the problem. Finally, if the user is still uncertain as to the correct response, he would be able to explicitly summon assistance (for example, by typing "HELP"). This command would then produce a full screen diagnostic that would summarize the acceptable responses. Only as a last resort would the user be forced to leave the terminal and seek assistance from the system documentation or from his colleagues.
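
One possible realization, sketched here in Python, keeps a per-field error count and indexes it into a table of progressively fuller messages; the message texts and the three-level table are invented for illustration.

    DIAGNOSTICS = [
        "INVALID DATE",                                        # first occurrence: terse signal
        "INVALID DATE - USE YY/MM/DD, E.G. 83/07/15",          # repeated error: fuller hint
        ("DATES MUST BE ENTERED AS YY/MM/DD.\n"
         "YY = YEAR (00-99), MM = MONTH (01-12), DD = DAY (01-31).\n"
         "TYPE HELP FOR A FULL LIST OF ACCEPTABLE FORMATS."),  # explicit HELP or third failure
    ]

    def diagnostic(error_count):
        """Return a message whose detail grows with repeated failures."""
        level = min(error_count, len(DIAGNOSTICS)) - 1
        return DIAGNOSTICS[max(level, 0)]

    for attempt in (1, 2, 3):
        print(diagnostic(attempt))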

3.2.3 Native Language Presentation. When developing systems for international applications, provisions should be made for screen presentations in the local language. There is no point in increasing the potential for a negative reaction to computer automation by injecting an element of nationalism into what would otherwise be a universal product.

3.3 Standardization

Another important rule for good screen design is the standardization of terminology, abbreviations, and data entry conventions throughout the system. When standardizing screen terminology only one of several equivalent words or phrases should be consistently used throughout the system. For example, terms such as ENTER, INPUT, GIVE, and TYPE should not be scattered here and there across the system.

A common (and undesirable) violation of this principle occurs when system designers create a tenuous or artificial distinction between two or more words that are almost synonymous in everyday speech. For example, a system designer may define "INPUT" to mean "type in" and "ENTER" to mean "point to an option with a light pen." In many cases these subtle distinctions are lost on the average user and the results are predictable: confusion and irritation. Abbreviations should be chosen on the basis of common usage rather than on how many characters will fit on a given line. For example, INFORMATION should be abbreviated INFO and not INFORM or INFORMTN, etc.

Cue string conventions should be established in order to provide a systematic, predictable visual correlation between cues or prompts and their corresponding data entry fields. If data fields are usually located to the right of their cues, they should not occasionally be located below the cues. Special cases, such as tabular or columnated data fields, are justified exceptions to this rule, but for simple, single field data entries, the relationship between cues and input fields should be constant.

3.4 Terminal Optimization

In any system that must support several different terminals with different capabilities, it is easy to reduce all input and output specifications to match only those features that are common to all terminals. This approach has been adopted for a great deal of new microprocessor software in order to maximize transportability. It has a considerable liability, however, in that it penalizes the owners of more advanced computers and peripherals by ignoring their equipment's special features.

Many communication systems can automatically determine the class of the user's display device. Using this or other data stored in a permanent user characteristic block in the system data base, a terminal-specific display controller could be selected that would dynamically substitute special characters or commands into the screen definition buffers to tailor the display to the actual user device. This approach also yields dividends when systems must be upgraded to support new products while at the same time preserving compatibility with older terminals.
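
A sketch of that dispatch, in Python, assuming the terminal class can be read from a stored user characteristic record; the class names and capability flags here are invented for illustration.

    # Capabilities recorded per terminal class; an upgraded terminal gets the
    # richer driver, an older one falls back to the common subset.
    TERMINAL_CLASSES = {
        "VT100":  {"blink": True,  "bell": True,  "colors": 1},
        "TTY":    {"blink": False, "bell": True,  "colors": 1},
        "COLOR8": {"blink": True,  "bell": True,  "colors": 8},
    }

    def select_driver(user_record):
        """Choose display capabilities from the stored user characteristic block."""
        terminal = user_record.get("terminal", "TTY")
        return TERMINAL_CLASSES.get(terminal, TERMINAL_CLASSES["TTY"])

    caps = select_driver({"user": "NOCAP", "terminal": "COLOR8"})
    print(caps["colors"])   # -> 8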

3.4.1 BELL and BLINK Features. Two special features that have been found to be extremely valuable on many data entry systems are audio signals and blinking texts. It is highly desirable to notify the user that an error has occurred as soon after its occurrence as possible. The use of a BELL signal is most effective when the system is being used by operators who are experienced enough to enter data without having to look at the screen. In cases of very rapid data entry, an audio interrupt is almost indispensable for informing the operator of exceptional situations, such as input errors, delayed computer response, processing status, etc. Different BELL tones have even been used to enable the operator to distinguish the severity of the error.

Similarly, BLINK can be used to flag errors or to highlight essential questions. BLINK rapidly draws attention to a particular screen element. If simple BLINK parameters are system-controllable functions, a sense of urgency can even be conveyed by increasing the blink frequency and brightness.
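
On a terminal that honors ANSI control sequences, both signals can be produced with ordinary character output. The Python fragment below is only a sketch of the idea and assumes ANSI support rather than any specific terminal hardware.

    import sys

    BELL = "\a"            # audible signal
    BLINK_ON = "\033[5m"   # ANSI "slow blink" attribute
    RESET = "\033[0m"

    def flag_error(field_text):
        """Ring the bell and redisplay the offending field in blinking video."""
        sys.stdout.write(BELL + BLINK_ON + field_text + RESET + "\n")
        sys.stdout.flush()

    flag_error("DATE: 83/13/45")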

3.4.2 Color Displays. Colors, even two or three different colors, can be very powerful visual cues for 1) linking logically related data; 2) differentiating between required and optional data; 3) highlighting errors; and 4) separating prompts, instructions, and input fields.

3.4.3 Light Pens and Tablets. Two-dimensional light pens on displays and electromagnetic pens on tablets are commonly used for graphic input. They are usually preferable to joysticks, twist knobs (potentiometers), and push buttons because they provide a more natural way for the user to define or identify elements on a two-dimensional surface. Light pens are used to point at areas of the screen, but for detailed graphic input they are clumsier and less precise than two-dimensional electromagnetic tablets.

Note, however, that in many applications for which a keyboard is the primary input device the use of a pen can interrupt the flow of operator actions. For this reason pens should be restricted to those functions requiring a high degree of graphic input. It would be a mistake to specify a light pen for a text processing terminal just because the pen provides a more natural way of selecting items in a menu.

3.4.4 Special Function Keys. Special function keys can be either programmable or hardwired and can significantly accelerate function selection for advanced users. They can, however, also be both confusing for novice users and restricting for future systems developers and should therefore be used advisedly. The flexibility of programmable keys is really a double-edged sword-functions can be added or reassigned easily with each new release of the software. If the designers abuse this capability, however, the system's users will be confused and irritated.

This is in fact but one example of a widespread new danger in systems design. Far from having to struggle to find new features to introduce, today's designers must discipline themselves in order not to sacrifice system continuity on the altar of innovation. We are surrounded by opportunities to compromise conceptual integrity when we try to "enhance" basically sound systems with "frills" of dubious economic value.

Changes to established procedures frequently disrupt patterns of user behavior and result in inefficiencies that can outweigh marginal technical improvements. Unless users are convinced (usually through good documentation and additional training) that new procedures can benefit them, they will often ignore them and the convenience they may offer, in favor of more conventional procedures.

3.4.5 Field Attributes. Many terminals allow application programs to control certain characteristics of the fields defined on the display. Fields for output only can be protected against being overwritten by the user. Conversely, certain areas of the screen can be defined for input only and the cursor will be automatically positioned to these fields by the terminal hardware. Fields can be highlighted by intensifying each character or by reversing the video distinction between the characters and the background (light characters on a dark field and vice versa). By taking advantage of these options, designers can make their screens much simpler and easier to use.

3.5 Application Growth

Another important consideration for good screen design is the degree to which the face of the system (its screens) should be restricted by the original application specification. For example, the layout of several screens in an automated product testing system might assume that the only products to which the tests might be applied are designated X15 and X16. This might lead to screens that contain the construction

    X15 (Y/N): [ ]    X16 (Y/N): [ ]

and others that have:

    X15: [ ]    X16: [ ]

Rather than arbitrarily limiting the system in this fashion, it is preferable to request the same information with prompts such as

TEST PRODUCTS:

For the initial installation these data fields could be designed to default to

    TEST PRODUCTS: X15 X16

Future users would thus have the option of entering responses other than those envisioned in the original system design.

4. COMMAND AND DATA ENTRY

4.1 Positive Response

When a user completes an action, the system should respond within an interval that is commensurate with a reasonable user's expectations. Several levels of response expectations have been identified:

  1. Keystroke Response. When a key is depressed the user expects instantaneous acknowledgment. This typically is accomplished by immediately displaying the corresponding character and/or by providing an audible click.

  2. Simple Request Response. When the user completes a simple request (for example, answers a question in a dialog) he will usually tolerate (and may even want) a brief pause before the system responds.

  3. Complex Request Response. When the user has built up a complex processing request (for example, has completely specified a series of data selection criteria for a large data file), he will usually be content to wait some time for the system to respond. He may even need the time to prepare himself to deal intelligently with the answer or answers or to formulate a new question.

The keystroke response category is of little interest today since most terminals handle this automatically with keyboard and display hardware. Full duplex terminals do wait for a response from a communication processor before echoing a typed character on their screens or printers, and so systems that use these terminals must be designed so that the echo responses are always extremely fast.

All interactive system designers must wrestle with the response times for simple and complex requests since these are of critical importance. Many functionally satisfactory systems which were initially flawed by unacceptable response times have fought an uphill battle to gain user acceptance.

The limits of a "reasonable" user's expectations with a simple request response and with a complex request response were intentionally omitted from the definitions because of the difficulty involved in determining such statistics.

Studies suggest maximum acceptable times on the order of one-quarter second for keystroke response, three seconds for a simple request, and fifteen seconds for a complex request, but tolerance levels may vary greatly from function to function and from system to system. The greatest danger, of course, is that response times will be too long. A system that is always "one step ahead," however, is also unsatisfactory. Most users do not like to feel rushed or pressured and they feel uncomfortable if their system is perpetually waiting for them to act.

Usually a pattern of user response time expectations will emerge during preliminary design reviews which should be considered in the eventual design of the system. Assuring proper system performance, especially during peak periods, is usually beyond the scope of the interface design, and must involve the harmonious functioning of the entire system. One fact that should be remembered, however, is that, whatever the cause, bad performance can usually be either ameliorated or exacerbated by the interface.

It is of vital importance that the interface be designed so that the user is never left to wonder whether or not his command or data entry has been received. When a system's response time exceeds the user's expectations, the interface should be capable of providing one or more interim messages (even if they are meaningless placebos) to advise the user that everything is proceeding normally.
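
One simple way to provide such interim messages, sketched here in Python, is to run the long request on a worker thread and print reassurance whenever the reply is late; the timing constants and message text are illustrative, not recommendations.

    import threading
    import time

    def run_with_progress(request, patience=3.0, interval=5.0):
        """Execute a slow request, reassuring the user while it runs."""
        result = {}

        def worker():
            result["answer"] = request()

        thread = threading.Thread(target=worker)
        thread.start()
        thread.join(patience)                 # quiet period the user will tolerate
        while thread.is_alive():
            print("PROCESSING YOUR REQUEST - PLEASE WAIT")
            thread.join(interval)
        return result["answer"]

    print(run_with_progress(lambda: time.sleep(8) or 42))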

4.2 Default Entry Strategies

Default entry strategies are schemes by means of which the software system tries to anticipate the answers that the users will give to some or all of the system's questions. Rapid, accurate data entry can be facilitated by a clear, logical, and comprehensive default strategy. These three qualities cannot be emphasized too strongly-system-supplied entries that are neither apparent to the user, nor logical in their relationship to other default entries, nor systematically applied are going to be more disruptive than helpful. Accordingly, default entries and procedures must be carefully selected and universally applied.

There are several basic ways in which default values can be assigned. They can be classified according to: 1) how universal they are; 2) how stable they are; and 3) how context dependent they are. One set of default entries can be defined for the entire system or several sets can be defined for different sites, user groups, or individual users.

Default entries can either be static or dynamic; that is, they can either be assigned by the designer or development team when the system is created, or they can be dynamically controlled by a system administrator or even by the system itself. (More on this last possibility in Section 6.2.) They can be associated with a particular input field with no regard for the content of other fields or the way in which the user got there, or they can be context dependent and change based on these same environmental factors.

A basic rule of default assignment is that the default entry should never result in a destructive process (file deletion, dialogue termination, etc.). This is a corollary of the design axiom stating that the user should be protected from the system and the system from the user.

When taken to the limit, the use of default entries results in an "entry-by-exception" dialogue in which the system supplies answers on-screen and the user simply modifies those which are inaccurate or incomplete. While this may be an extreme situation, it is certainly reasonable to assert that the computer should supply all information to which it has access either by virtue of internal system functions (e.g., date and time) or by virtue of previous user responses.
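
The layered assignment of defaults described above amounts to a lookup that falls from the most specific source to the most general. A Python sketch follows; the field names and values are invented for illustration.

    SYSTEM_DEFAULTS = {"REPORT_PERIOD": "MONTHLY", "COPIES": "1"}
    SITE_DEFAULTS   = {"DAYTON": {"COPIES": "2"}}
    USER_DEFAULTS   = {"JABLONOWSKI": {"REPORT_PERIOD": "WEEKLY"}}

    def default_for(field, user, site):
        """Most specific default wins: user, then site, then system-wide."""
        for table in (USER_DEFAULTS.get(user, {}),
                      SITE_DEFAULTS.get(site, {}),
                      SYSTEM_DEFAULTS):
            if field in table:
                return table[field]
        return ""   # no default: leave the field blank, never a destructive value

    print(default_for("REPORT_PERIOD", "JABLONOWSKI", "DAYTON"))   # -> WEEKLY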

4.3 Error Prevention

Limiting the consequences of any user error, which is one of the primary objectives of man-machine interface design, requires first of all the identification of all input fields that contain or might contain values that would have a significant impact on subsequent processing. Among such fields are database keys, program decision variables, and lower and upper range limits. Once identified, these fields should be studied carefully in order to devise: 1) safe default values; 2) reasonable error prevention tactics; and 3) special protection mechanisms to limit the consequences of input mistakes.

The simplest form of error prevention is on-line operator data verification. Options for the implementation of this procedure include 1) delayed repetition (data repetition on separate screens); 2) immediate repetition (data repetition in the same entry field); 3) reverse-order-verification (e.g., 1234554321). Powerful commands (such as DELETE) should usually be confirmed.

Another technique for catching errors is checking redundant data items or commands for consistency. For example, if database records can be uniquely identified by two different, independent attributes, the system could be designed so that delete commands require the user to enter both. Then, if the user supplies a pair of attributes that are not associated with the same record, the system could inform him of this contradiction and ask him to correct his input. In this case the redundancy is desirable since it would reduce the likelihood of serious error.
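
A sketch of such a consistency check in Python, assuming each record carries two independent identifiers; the account numbers, short names, and messages are invented.

    RECORDS = {
        "533201A": {"short_name": "ACME", "balance": 1200.00},
        "533202B": {"short_name": "BETA", "balance":  310.50},
    }

    def confirm_delete(account_no, short_name):
        """Require two independent identifiers to agree before deleting."""
        record = RECORDS.get(account_no)
        if record is None or record["short_name"] != short_name.upper():
            return "IDENTIFIERS DO NOT MATCH - NOTHING DELETED, PLEASE RE-ENTER"
        del RECORDS[account_no]
        return "RECORD %s (%s) DELETED" % (account_no, short_name.upper())

    print(confirm_delete("533201A", "beta"))   # mismatch is caught
    print(confirm_delete("533201A", "acme"))   # both keys agree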

4.4 User Surveys

Data entry can only be optimized when the relevant characteristics of the system user population are known. Naturally, selection of "relevant characteristics" is a difficult task, but there are some traits that have been consistently identified in the professional literature. Among these are: 1) the level of special training; 2) the frequency of use; 3) the general education or intellectual aptitude; 4) the previous experience on similar systems; and 5) the level in the organization or relation to the system (data provider or data user).

Naturally, the more homogeneous the user group is, the more the interface design can concentrate on common user characteristics, but a designer should not disregard user profile analysis because he assumes a particular user community is too diverse to summarize. User profiles should be collected and then sorted by function, by location, by experience level, and by any other common factor that will simplify a designer's perceptions of the user community. This is important since many designers have a tendency to associate their own skills with those of the intended operators. This has caused many interactive systems to be designed by programmers for programmers, rather than for their intended users.

In addition to collecting basic demographic data on the user population, periodic user surveys and user group meetings can be used to solicit users' comments and suggestions on various aspects of a system's design and operation. This is especially useful for terminal interface development and refinement both before and after a system's introduction.

5. ERROR HANDLING

5.1 Error Indications

The first objective of any error control strategy is to provide the users with unmistakable error indicators. As was noted in the introduction, errors will never be completely eliminated and so it is extremely important to inform the user of an error as close to its commission (in time and space) as possible. A system should never allow the user to wonder what (if anything) is wrong; the error diagnostics must be clear enough to preclude his resorting to any "Monte Carlo" attempts to correct his mistake. In this regard, an audio signal (BELL) for interrupting his entry pattern and a visual cue (BLINK) for attracting his attention to the erroneous data is perhaps the best arrangement.

5.2 Error Toleration

Wherever possible, error toleration should also be included in the system design. Most man-machine interfaces are unnecessarily rigid, and it is very desirable from a psychological point of view to "soften" the input requirements if this can be accomplished without introducing either logical ambiguity or user confusion. Indeed, with regard to the last point, it is often advisable to avoid mentioning all the acceptable alternatives to a given prompt in introductory operations literature (training manuals, handbooks, summary cards, etc.), and to reserve a complete description for the technical reference materials. Two simple examples of error toleration are the use of several codes for a given function (e.g., system termination is performed in response to the commands "EXIT" or "HALT" or "STOP" or "TERMINATE"), and the acceptance of several different symbols to delimit separate items in a string of data entries.
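
Both forms of toleration can be implemented by normalizing input before it is interpreted. A Python sketch follows; the synonym table and the set of accepted delimiters are illustrative assumptions.

    import re

    TERMINATE = {"EXIT", "HALT", "STOP", "TERMINATE"}

    def is_termination(command):
        """Accept any of several equivalent codes for ending the session."""
        return command.strip().upper() in TERMINATE

    def split_items(entry):
        """Accept commas, semicolons, slashes, or blanks between data items."""
        return [item for item in re.split(r"[,;/\s]+", entry.strip()) if item]

    print(is_termination("  stop "))            # -> True
    print(split_items("X15, X16 / X17;X18"))    # -> ['X15', 'X16', 'X17', 'X18']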

Another, very important aspect of error toleration suggests that the design axiom, "Report errors immediately," should be qualified by the condition, "to the proper person." Most systems today are not tolerant in this respect. They check every entry on every screen and if an error is detected, they display a diagnostic message and block further processing until the error is corrected. If the operator cannot correct the error, current activity must be terminated and the data set that the operator was attempting to enter must be set aside for future analysis, correction, and reentry into the system.

Friendlier systems try to select dynamically one of several possible error handling procedures, depending on the error correcting capabilities of the current user. This is not nearly as sophisticated as it sounds. For example, the users of an order entry system might be divided into three categories: typists, order expediters, and account executives. A hierarchy of user errors from the most complex, logical errors down to the most simple (mechanical or typographical) ones could then be assigned to three levels of error handling routines. According to this strategy, typists would be expected to correct typing errors, but would not be responsible for spelling errors in customers' names. Order expediters would be expected to know how to spell customers' names, but would not be required to know the details of special pricing formulas. Account executives would be expected to be able to correct any error and might even be allowed to override the system's objections.

This approach to error toleration requires that the system have a way of accepting or storing input data conditionally. If, in our example, a typist correctly types the misspelled version of a customer's name, the system must first ascertain that it is not dealing with a typing error, possibly by asking the typist to re-enter that information once. If the typist is not at fault, the system could then store the data in a "SUSPECT NEW ORDERS" file and add certain key elements of the order's data to an error report addressed to an order expediter. This is advantageous in that the entire order is neither summarily rejected and forced to remain on paper outside the system nor fully accepted by the system. At the designer's discretion, the suspect orders may be included in certain statistics, such as the total number of orders received by date, and excluded from others, such as the accounts receivable summary, until they have been corrected and verified.

5.3 Error Correction by the Operator

Since some errors are inevitable, attention must also be paid to the design of mechanisms for operator error correction. One classification of errors according to their severity is:

  1. Single character error (e.g., "1235" for "1234")
  2. Single field error (e.g., "ABC" for "27.5")
  3. Multiple field error (possibly due to the omission of a field when entering several fields at a time)
  4. Screen error
  5. Function error

Simple, convenient, and, if possible, intuitive procedures should be available for the efficient handling of each of these classes of errors. A list of correction techniques corresponding to the above error list is:

  1. Backspace key (deletes one character per stroke)
  2. Delete/correct field key (backs up one field per stroke)
  3. Same as 2
  4. "Backscreen" key or command (this should be capable of backing up more than one screen).
  5. Resynchronization or abort command (resynchronization would permit the operator to resume possibly several screens back at some predetermined restart point).

5.4 Error Logging and Analysis

On-line systems should have some built-in error logging capabilities. This is as important for release versions of the system as it is for test versions. Automatic error logging is often the only way to obtain an accurate picture of many man-machine interface problems. Relying on users to manually prepare error reports is usually unsatisfactory since such a procedure inevitably eliminates a significant number of small but frequent errors that users do not have time to comment on. In many cases it is the cumulative effect of these individually trivial problems that can destroy user confidence in an otherwise powerful system. This can be critical in prerelease test systems, since the natural inclination to concentrate on "significant bugs" fails to anticipate many of these insidious nuisance factors.
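
The logging call itself can be very small, as the Python sketch below suggests; the file name and field list are assumptions. Note that the entry records a terminal location rather than a person, in keeping with the caution in the next paragraph.

    import csv
    import datetime

    LOG_FILE = "error_log.csv"   # illustrative location

    def log_error(terminal_id, screen_id, field, entered_value, error_code):
        """Append one error event; analysis is done later, offline."""
        with open(LOG_FILE, "a", newline="") as handle:
            csv.writer(handle).writerow([
                datetime.datetime.now().isoformat(timespec="seconds"),
                terminal_id, screen_id, field, entered_value, error_code,
            ])

    log_error("T014", "FS-03", "ORDER_DATE", "83/13/45", "BAD_DATE")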

A further advantage of automatic error logging is that it makes terminal operators more aware of their mistakes, and thus, hopefully, more inclined to prevent them. This is a sensitive area, however, and designers should be careful that their implementation does not give a system a "Big Brother" reputation. If operator errors are posted against a specific person (as opposed to a terminal location or group of users), users may resent the constant scrutiny and even commit more errors because of the emotional pressure it would produce.

6. ADVANCED TECHNIQUES

Since no design is perfect, good system designs should include a formal effort to identify the most probable areas of system growth and change and to provide a flexible system that can, in some ways, compensate for its own deficiencies and for the natural "human frailty" of its users. That is a bold and, some would argue, idealistic assertion. What is not arguable, however, is the fact that a survey should be conducted of recent efforts to provide this kind of adaptability so that system designers will be able to build systems that are adaptable to these techniques, even if they can't actually incorporate them right now.

6.1 System Directories

In many interactive systems with a wide variety of complex functions, the screens presented to the user can be classified into two basic categories: 1) function selection screens ("menus"); and 2) function execution screens ("action screens"). Typically, function selection screens or menus have been designed to reflect a logical decision tree invented by a system's designer (Figure 4). In hierarchical fashion, the user must start at the "top" of the system and then work his way "down" through a series of secondary menus (arrows 1, 2, and 3), until he reaches the necessary action screens. Even in more advanced systems that allow users to jump directly from the main menu to a particular action screen (arrow 4), users are advised to have a mental image of the system's basic decision tree in order to avoid getting lost.

An alternative to this decision tree structure is the system directory. For many complex systems, directories can provide users with a more intuitive, powerful, and personal interface, and can significantly simplify system maintenance and change. A system directory is analogous to a book's table of contents: like a table of contents, it is a linear structure with a logical hierarchy among functions and subfunctions that is expressed by means of highlighting, underlining, capitalization, and indentation.

Associated with each function is a function code. When the user wishes to execute a particular function, he simply enters the appropriate function code. He is then presented with the action screens for that function. When he is finished he returns to the directory; there is no need to tell him where he is relative to other functions or to remind him of how he got there.

The user should be able to move a "viewing window" backward and forward over the directory with a few simple commands. A sample directory screen and its logical relationship to the underlying directory structure is illustrated in Figure 5.

Since directories can be external text files they can easily be created and modified with conventional editors or word processing programs at a fraction of the cost of modifying program modules. They need only be edited and tested; there is no need to compile them or to link them into a program image.

Separate subdirectories can be created for special groups and individuals within the user population. These personalized directories can be designed to restrict or to extend a user's access to various system functions. Restricted directories can be designed to enhance system security or simply to enable users with only limited interest in the system to see only those parts that they regularly use. This can be a very cost-effective way of tailoring the user interface to novice, average, and expert users and of giving the system administrator a special interface without expensive additional software (Figure 6).
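
Because the directory can literally be an external text file, building and filtering it is simple. The Python sketch below parses a made-up "CODE  TITLE" file format and restricts the view to a user's permitted codes; the format, codes, and titles are assumptions, not the structure shown in Figure 5.

    SAMPLE_DIRECTORY = """\
    OE    ORDER ENTRY
    OE1     Enter new order
    OE2     Modify pending order
    RP    REPORTS
    RP1     Flagship report criteria
    AD    ADMINISTRATION
    AD1     User profiles
    """

    def load_directory(text):
        """Parse 'CODE  TITLE' lines from the external directory file."""
        entries = []
        for line in text.splitlines():
            if line.strip():
                code, title = line.split(None, 1)
                entries.append((code, title.strip()))
        return entries

    def personal_view(entries, allowed_codes):
        """Show a user only the functions he is permitted to reach."""
        return [(c, t) for c, t in entries if c in allowed_codes]

    for code, title in personal_view(load_directory(SAMPLE_DIRECTORY),
                                     {"OE", "OE1", "OE2"}):
        print(code, title)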

FIGURE 4. Logical Decision Tree Structure.

6.2 Statistically Generated Default Values

For systems in which the user expects default values to be constants, it is of vital importance that they be constants. One should consciously avoid altering any default value even if one has a good, logical reason for its modification. In such systems the power of default values is predicated on their always being what the user expects them to be.

A new concept that is worthy of further investigation advocates the evolution of default values on the basis of an ongoing statistical analysis of user responses. The responses can be compiled and evaluated by a host computer for the entire user population or, if cost and convenience so mandate, satellite processors can maintain statistics for separate locations. Whatever the processing distribution, the technique would be applied to an initial set of default values that would be updated by the system administrator whenever the system perceived that one member of that set was being systematically overridden by another value. That new value could then be incorporated into the default set (for that particular user or location or for the entire system) until such time as it, too, falls to a new, more statistically significant response.

For this approach to be successful, there are several critical control parameters that need to be rigorously evaluated and experimentally optimized (for example, the threshold frequency required for a new value to replace an existing default). A powerful new tool for "tailoring" the default strategy to the current user group, this mechanism enables the set of default values to keep pace with evolving patterns of system use. Since default values will change under this scheme, it is imperative that the current default values be displayed in their respective data fields as each screen is introduced.
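
The threshold test might look like the Python sketch below: responses are tallied, and a challenger replaces the standing default only after it clearly dominates. The 60 percent threshold and the minimum sample size are exactly the kind of invented control parameters that would have to be tuned experimentally.

    from collections import Counter

    def revise_default(current_default, responses, threshold=0.60, minimum=50):
        """Promote the most frequent response once it passes the threshold."""
        if len(responses) < minimum:
            return current_default                  # not enough evidence yet
        value, count = Counter(responses).most_common(1)[0]
        if value != current_default and count / len(responses) >= threshold:
            return value                            # candidate becomes the new default
        return current_default

    history = ["WEEKLY"] * 70 + ["MONTHLY"] * 30
    print(revise_default("MONTHLY", history))       # -> WEEKLY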

6.3 Error Analysis

This section is an extension of the error logging function described in Section 5.4. It is conceptually straightforward to link the system's error logging routines and its associated database to a series of standard statistical analysis and graphical output control routines in order to provide a variety of novel and, perhaps, illuminating correlations between the recorded variables. For example, if a problem is consistently occurring at one location but not at others, this might indicate that there is some important user characteristic missing at that site or that the training at that site failed to cover that function in sufficient detail. In either case, this data can then be checked against related user survey data or training logs to determine the best course of action for correcting the problem. Clearly, it is important to be able to differentiate between system deficiencies that require software or hardware solutions and user deficiencies that require personnel solutions.

6.4 Error Detection and Correction by the System

This is a rather involved topic with many esoteric methodologies for increasing reliability in critical systems, such as communications and life support. The most common software techniques involve the computation of special check digits, known as Error Detecting or Correcting Codes, that are appended to selected bit strings. If the data is subsequently corrupted in certain ways (e.g., if two bits are transposed or one bit value is reversed), the system can either simply detect the change or both detect and correct it.

Usually these code digits are handled entirely within the system and are transparent to the users. Occasionally, for certain critical pieces of information such as identification numbers, the code digits are computed and appended to the data before it is entered into the system. In such cases, the system can immediately identify erroneous entries and thus protect the integrity of the data it contains.
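
As one concrete illustration (not the scheme used by any particular system described here), a single modulus-10 check digit of the Luhn type can be appended to an identification number so that single-digit errors and most adjacent transpositions are caught at the terminal. A Python sketch:

    def luhn_check_digit(digits):
        """Compute a modulus-10 (Luhn) check digit for a numeric string."""
        total = 0
        for position, char in enumerate(reversed(digits)):
            value = int(char)
            if position % 2 == 0:        # double every second digit from the right
                value *= 2
                if value > 9:
                    value -= 9
            total += value
        return str((10 - total % 10) % 10)

    def is_valid(identifier):
        """Verify an identifier whose last character is its check digit."""
        return luhn_check_digit(identifier[:-1]) == identifier[-1]

    account = "53320159"
    account += luhn_check_digit(account)
    print(account, is_valid(account))    # appended digit makes the number self-checking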

6.5 Integrated Games

Many people who are bored and uninspired by the million dollar systems they use in their offices are often completely captivated by the hundred dollar systems they play with at home. While no one would argue that a payroll application will ever have the appeal of a "Pac-Man," nevertheless, as Edward Lawrence observed in a recent letter to DATAMATION [7], designers should actively seek to dramatize humdrum applications. Lawrence goes on to suggest that system "participants" (a term he prefers to "users") should be encouraged to explore their systems through "windows" controlled by management.

Clearly this strategy is oriented toward professional users, yet in the right environment, systems that can stimulate a sense of intellectual curiosity and provide useful and relevant information at the same time are quite valuable. For example, an order control system might occasionally compute some interesting statistics and then challenge its users with questions, such as: "Based on this week's orders, who are our top three customers?" or "How many orders has customer X placed in the last six months?"

Taking this approach one step further, the system could remember the most frequently asked questions and then supply some answers without being prompted by the users: "By the way, did you know that customer Y hasn't placed an order yet this month?" Properly implemented, this type of approach could transform a dull slave into a helpful junior colleague.

7. INTERFACE EVALUATION

7.1 Screen Frequency Analysis

A valuable insight that can be acquired during the evaluation of test systems has to do with the frequency with which each screen is displayed and used. With this information alone, all of the design efforts discussed above can be focused on the principal interfaces. These displays can be extensively analyzed to improve the speed and accuracy with which data is entered into and extracted from the system. Furthermore, if some of the more costly design procedures are deemed too expensive to be applied to the entire system, certainly many of them can be applied to these principal interfaces for the optimum return on investment.
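
Collecting these frequencies requires nothing more elaborate than a counter keyed by screen ID. A Python sketch with an invented display log:

    from collections import Counter

    def screen_frequencies(display_log):
        """Rank screens by how often they were presented during the test period."""
        return Counter(display_log).most_common()

    log = ["FS-01", "FS-03", "FS-03", "OE-02", "FS-03", "OE-02"]
    for screen_id, count in screen_frequencies(log):
        print(screen_id, count)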

7.2 Test Systems

A significant part of good human factors design must, of necessity, take place after data has been collected from carefully organized test situations. The control that must be exercised over a system's test release is analogous to that required for an experiment in any branch of the social sciences. When trying to evaluate the contingencies of man-machine interaction, any test becomes useless if the number of variables prevents the test analyst from being able to identify cause and effect relationships.

7.2.1 Multiple Tests. In good system tests, several trial systems are exposed to a carefully selected subset of the anticipated user population. It is important that several test systems be evaluated in order to allow for several stages of system improvements before the first official release. It is characteristic of human nature to be more critical of a system feature when actually confronted with two or more alternative approaches than when confronted with only one. In the latter case, the evaluator (user) will often tacitly accept a cumbersome procedure rather than burden himself with the task of inventing a better one.

7.2.2 Design Specification and System Tests. In summary, one of the key aspects of terminal interface design that most distinguishes it from other system design activities is that many requirements cannot be satisfactorily defined in a design specification written prior to system fabrication. Due to the qualitative nature of the disciplines that attempt to explain and to predict human behavior, a certain portion of any reasonable, user-friendly interface specification should be developed in cooperation with users during a simulation of the final operating environment. This fact does not diminish the importance of preparing an interface design specification to guide the programming team during system development, but it does emphasize the importance of thorough product testing with reasonable samples from the user population before the first customer release.

Acknowledgments. The author is indebted to T. Gilb, J. Martin, and G. M. Weinberg for their extensive work on terminal interface design. In addition, the author thanks his colleagues P. Boyce and R. W. Wedwick for their review and comments.

REFERENCES

[1] Brooks, F. P., The Mythical Man-Month. Addison-Wesley Publishing Company, Reading, Mass., 1979.

[2] Gilb, T. and Weinberg, G. M., Humanized Input. Winthrop Publishers, Inc., Cambridge, Mass., 1977.

[3] Martin, J., Design of Man-Computer Dialogues. Prentice-Hall, Inc., Englewood Cliffs, N. J., 1973.

[4] Martin, J., Design of Real-Time Computer Systems. Prentice-Hall, Inc., Englewood Cliffs, N. J., 1973.

[5] Martin, J., Systems Analysis for Data Transmission. Prentice-Hall, Inc., Englewood Cliffs, N. J., 1973.

[6] Morland, D. V., Friendly (A simple User-friendliness index). Datamation (Readers' Forum), Feb. 1982.

[7] Lawrence, E. R., The human side of software. Datamation (Readers' Forum), July 1982.

[8] Reiles, N., Price, L., A user interface for online assistance. IEEE, 1981.

[9] Peterson, D. E., Screen design guidelines. Small Systems World, Feb. 1979.

[10] Brinegar, J. P., Farrar, B. D., Quality Software and the Technical Writer. Proceedings of the 28th Annual STC Technical Conference, Pittsburgh, Pa., 1981.

[11] Ledgard, H., Singer, A., Whiteside, J., Directions in Human Factors for Interactive Systems. Springer-Verlag, New York, 1981.

[12] Moody, W. E., Jr., How Humans Read and Understand. (Workshop Notes), IBM Corporation, 1982.

CR Categories and Subject Descriptors: H.1.2 [Models and Principles]: User/Machine Systems-human factors

General Terms: Human Factors, Design, Documentation

Additional Key Words and Phrases: Interactive terminals, display terminals, error tolerance, user friendliness, data entry, error prevention, system directories, interface evaluation, online systems

Received 12/81; revised 10/82; accepted 12/82.

D. Verne Morland is currently managing the acquisition, storage, and distribution of NCR's strategic business information. He has contributed articles to leading management and information industry publications on such topics as managerial problem solving and the management of information as a critical business resource.


 Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

© 1983 ACM 0001-0782/83/0700-0484 75¢