The Evolution of Software Architecture

D. Verne Morland

DATAMATION
February 1, 1985

In her best-selling book PASSAGES, Gail Sheehy suggests that throughout our lives we all experience a series of predictable periods of psychological equilibrium punctuated by times of often traumatic transition. Sheehy emphasizes that while the experts have studied child development extensively, adult development has been largely ignored.

To some extent, when we analyze the computer industry we suffer from the same preoccupation with dramatic, swift, and easily observable phenomena that induced leading psychologists to focus their attention on children. New products, new technologies, and new applications hold our interest until they become established fixtures in the industry landscape. Then, as we find them too diverse and too complex to classify, or too stable to hold our interest, we turn away from them and focus our attention on more exciting things.

While it may be true that the child is father of the man, it is the adult who directs the course of civilization. Similarly, while wafer-scale integration, voice recognition, and artificial intelligence may presage the shape of future products, we still have much to learn from the many more mature products that form the basis of our industry. Let's step back then, not one step but many, and reexamine the basic relationship that has evolved between computer hardware and software since the mid-1940s. Charting this evolution will uncover some trends that tell us a great deal about the future of software architecture in interactive systems.

The first stored program computers were designed by electrical engineers and mathematicians for whom the intricacies of hardware systems and binary arithmetic became an easily acquired second nature. The control programs for these systems were simply soft extensions of the hardware's logic. This software, written first in the language of the machine (codified binary notation) and later in an assembly language, exercised a very direct and hence primitive control over each of the computer's hardware components - card reader, drum storage, arithmetic and logic unit, and printer.

The computers themselves were massive devices that performed one job at a time. Trained operators mounted magnetic tapes, loaded cards, and ultimately flipped banks of toggle switches to load programs and get things started. Once under way, these early computer applications required little or no human intervention. User interface hardware, if we stretch our modern-day meaning of the term, consisted of card or punched paper tape readers, magnetic tape drives, and rows of switches and flashing lights.

The relationships among the user interface hardware, the software, and the computational hardware are illustrated in Fig. 1.

Figure 1: Stage I - Primitive, Monolithic Software

Gradually, as computers became more popular and sophisticated, a profession that once consisted entirely of scientists and engineers began attracting many eager but less knowledgeable people. These new programmers could not be expected to cope with the ever-changing complexities of hardware. Schemes for addressing core memory cells, storing bits on magnetic drums and disk tracks, and decoding holes in the columns of Hollerith cards were always changing and each improvement in hardware design left the new breed of software specialists farther behind.

A buffer of some sort was needed: a buffer for the people writing software and a buffer for the software programs themselves, because whenever the hardware changed, the software had to be changed too.

To solve these problems a relatively small group of interdisciplinary specialists began developing such a buffer. First came the Input/Output Control Systems (IOCS) with which an application program could indirectly request a variety of input or output data services. Later, as systems were improved to handle more than one job at a time, an executive or master program was created to schedule jobs and to mediate among jobs contending for scarce system resources. Upon this base - the operating system - designers built libraries of device control and communication instruction sets (macros) and a host of utility programs and development tools.

This buffering layer of software (see Fig. 2) solved two problems. First, it simplified programming by enabling software developers to define tasks without having to know all the details of the hardware system on which the task would run. Second, it encouraged the development of portable software that could be written once, yet run on several different hardware systems.

Figure 2: Stage II - The Application System/Operating System Dichotomy
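To make the dichotomy concrete, here is a minimal sketch, expressed in modern terms (Python), of an application requesting an output service from a buffering layer rather than driving a device directly. The class names, device names, and record sizes are invented for illustration and do not describe any particular operating system.

```python
class DeviceDriver:
    """Knows the hardware-specific details the application no longer needs to."""
    def __init__(self, name, record_size):
        self.name = name
        self.record_size = record_size
        self.records = []

    def write(self, data):
        # Split the data into the fixed-size records this device requires.
        for start in range(0, len(data), self.record_size):
            self.records.append(data[start:start + self.record_size])


class OperatingSystem:
    """The buffering layer: applications request services, not devices."""
    def __init__(self, drivers):
        self.drivers = drivers

    def write(self, logical_unit, data):
        # The application names a logical unit; the layer finds the real device.
        self.drivers[logical_unit].write(data)


def application(services):
    # Written once against the service interface, with no device knowledge.
    services.write("output", "QUARTERLY PAYROLL SUMMARY")


# The same application runs whether "output" happens to be a drum or a disk.
application(OperatingSystem({"output": DeviceDriver("drum", 16)}))
application(OperatingSystem({"output": DeviceDriver("disk", 256)}))
```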

For obvious economic reasons, true software portability was not enthusiastically endorsed by the major hardware manufacturers. Instead, they concentrated on limited, vendor-specific versions of portability described by terms like "upward and downward compatibility." This allowed users to purchase ever more powerful machines without having to rewrite all of their programs, and yet it gave them a strong incentive to stay within the vendor's fold. Proprietary operating systems were the order of the day.

Today the industry seems obsessed with two ideas: user friendliness and industry compatibility. Especially as most mini and micro systems become high-volume commodity products, vendors recognize that it is important to get customers to buy the product in the first place; the key to sales in today's market is application software. Worries about retaining customers are secondary - if you don't sell them in the first place, you needn't worry about keeping them.

In previous years, most companies using computers found it relatively easy to make a central decision on which manufacturer's equipment to buy. The choice of manufacturers may not have been easy, but the need to make the decision centrally was clear. Computers were large, expensive, required extensive internal support, and ran a fairly limited, well-defined set of applications. Everyone knew what computers could do. They were employed as administrative support systems and every major vendor offered a standard set of application packages: personnel and payroll administration, accounts receivable and payable, customer records, and so on.

Now the picture is changing as several distinct forces converge to radically redefine the role of computers in business and in society at large. Most significantly, computers are becoming

  • smaller,
  • less expensive,
  • more powerful,
  • more flexible, and
  • more functional.

Now these trends are certainly not new, but in each area important thresholds have been crossed and the cumulative effect is far greater than the sum of the contributing parts. When vacuum tubes gave way to transistors, an important engineering breakthrough was achieved. But computers were still very large, very expensive, and the market was restricted to the large organizations that could afford them. When magnetic core memories were replaced by solid-state devices, the engineers could cut another notch in their slide rules, but the market impact was again comparatively minor.

Today, however, the synergy that has developed as a result of simultaneous advances in all levels of computer design and manufacture - from microelectronic components to complete systems - has changed the very nature of the business. End users are suddenly the important customers. Systems professionals have been relegated to the status of an important but small special interest group. Computer applications now focus on ways to improve customers' competitive effectiveness; the drive for administrative efficiency is secondary.

Large mainframes will still be needed as network controllers, central database managers, and super problem solvers (such as complex system simulation and control), but micro systems are invading offices, factories, and homes. Although statistics indicate that there are already several microprocessors for every man, woman, and child in the United States, these "computers" are typically microchip controllers embedded in other products, like electronic automobile ignition systems, so that most of us are unaware of their presence.

Far more significant is the burgeoning market for microcomputers as personal computers, for in this capacity they must deal directly with human users and not with other electronic systems. And if you think computers are hard to talk to, think of it from the computer's perspective. Just how do we humans acquire information? How do we process it and what do we do with the results?

These questions are very much on the minds of PC application software systems designers. They have been told time and again that if their products are not user friendly they will not succeed. At the same time, the designers are reminded that their programs must be able to run on many machines to address the largest possible market.

In many respects these two goals - true user friendliness and maximum portability - are mutually exclusive. To be truly user friendly a program must be tailored to the individual likes and dislikes of the users, and most importantly to their preferred methods of working. Let those who like to type, type; let those who like to point, point; let those who like to speak, speak; and so on. Similarly, when presenting results, friendly systems let readers read, let listeners listen, and let graphically oriented people see graphs, tables, and other illustrations.

Software portability as practiced today, on the other hand, requires adherence to one cardinal rule: you can't write software code for input or output devices that aren't there. Assume for a moment that you are designing an application for two different personal computers. One machine has a keyboard, a touch screen, and a color monitor. The other has a keyboard, a mouse, and a speaker device for simulated voice response. Which of these devices can you use in your application? You guessed it: only the keyboard!
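A tiny sketch makes the arithmetic of this rule plain. The machine configurations below are hypothetical; the point is simply that portable code is confined to the intersection of the device sets.

```python
# Hypothetical device inventories for the two personal computers described above.
machine_a = {"keyboard", "touch screen", "color monitor"}
machine_b = {"keyboard", "mouse", "voice output"}

# A portable application may only address devices found on every target machine:
common_devices = machine_a & machine_b
print(common_devices)   # {'keyboard'} - every other device goes unused
```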

Now some designers would argue that you could devise a clever scheme to allow the users of these two different systems to dynamically tailor generic versions of the application to their own unique environments. But is such an approach viable when an application is to run on 20 to 50 different systems, some of which haven't even been fully specified when the software is being written? Obviously not.

Fig. 3 illustrates the parallels between this apparent paradox and the problem that led to the development of operating systems years ago. We have a complex and constantly changing interface problem (user friendliness) that non-specialists cannot hope to master. We also have an economic imperative to build systems for the broadest possible market (portable software). Already we are seeing signs that the machine interface system approach (operating system) is beginning to be applied to the human interface problem.

Issue: Complexity
  Machine - Hardware devices store and retrieve information in increasingly complex ways, and engineers are constantly modifying their specifications.
  User - Humans provide and receive information in complex ways, and scientists are constantly discovering new principles of human information processing.

Issue: Portability
  Machine - Software packages should be able to run on many different systems with many different hardware features.
  User - Software packages should be able to run on many different systems with many different human interface features.

Issue: Efficiency
  Machine - The software should adapt to the hardware environment in order to take full advantage of the hardware's capabilities.
  User - The software should conform to each individual user's interface preferences so that he or she can use the system comfortably and efficiently.

Figure 3: Parallels Between Machine Interfaces and User Interfaces

As recently as the summer of '83, I was still exhorting software designers to pay attention to the fundamentals of human information processing so that they might provide friendlier systems (see Communications of the ACM, July 1983). Now I realize that another approach is needed. The human interface is simply too important and too complex to be left in the hands of your average, garden-variety application programmer. Instead, user information systems must be designed by true human factors experts in much the same way that operating systems have been designed by a relatively small number of system specialists. Human factors are no longer viewed as mere cosmetic details; indeed, many analysts hold that the ultimate success or failure of most personal computer products will be decided on the basis of the user interfaces they provide. With commercial viability hanging in the balance, we cannot rely on our application programmers to master the intricacies of human perception and response to design good systems.

In Fig. 4, we see the third major stage in the evolution of software architecture and how it differs from the first two. Note that at this level of abstraction the concept of a user interface system is independent of many other contemporary industry debates. Electrons may or may not give way to photons, closely coupled chip sets may or may not be replaced by wafer-scale integration, and linear, von Neumann processes may or may not yield to parallel, non-von Neumann processes. Whatever happens on these levels, one fact remains: people must work through an interface to computers and computers must work through an interface to people.

Figure 4: Stage III - The Emergence of User Interface Systems

I have chosen to illustrate the user interface system not as a layer on the operating system, but as a separate, complementary, and conceptually independent entity lodged between the user and the application program. This is important for it encourages us to think of users as fundamentally different from hardware devices. While this may seem painfully obvious, there is a tendency among application programmers to write software for user interface devices, not for users. For example, if a particular screen can display 24 lines of information, many programmers will use all 24 regardless of what the user can best handle.

With an advanced user interface system, a programmer could request that N items of data be presented to the user, just as he now requests the operating system to store N items on a logical storage unit. With such a system the programmer needn't be concerned that in certain hardware environments the user interface system will display the items five at a time in two columns on a liquid crystal panel, just as in the case of data storage he needn't know that the operating system will write his data in 256-byte records in contiguous blocks of an 81-megabyte disk. Of course, many ambitious programmers may decide to study human perceptual mechanisms in detail and this extra effort will probably be rewarded by friendlier programs, just as programmers who understand how various peripheral devices work are often able to write more efficient code. The main point is that we shouldn't have to be human factors experts to write friendly systems. Again the analogy to operating systems holds: we need not be hardware experts to write efficient programs.
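A brief sketch of such a request, with all names assumed for illustration: the program asks an interface layer to present a list of items, and the layer, not the programmer, decides how many fit a page on the device at hand.

```python
class UserInterfaceSystem:
    def __init__(self, rows, columns):
        # Characteristics of whatever display happens to be attached.
        self.rows = rows
        self.columns = columns

    def present(self, items):
        # Show only as many items per page as this device can handle;
        # a fuller system would also weigh the user's own preferences.
        per_page = self.rows * self.columns
        for start in range(0, len(items), per_page):
            page = items[start:start + per_page]
            print(f"--- page showing {len(page)} of {len(items)} items ---")
            for item in page:
                print(item)


items = [f"Item {n}" for n in range(1, 13)]

UserInterfaceSystem(rows=5, columns=2).present(items)   # small LCD panel: two pages
UserInterfaceSystem(rows=24, columns=1).present(items)  # full-size terminal: one page
```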

Furthermore, an equally significant advantage of the user interface system is the increased portability of truly friendly systems. With appropriate levels of interface device independence, generic input and output requests could be written into the application software and the user interface system would translate those requests in ways that would best take advantage of each particular machine on which the package runs. Users of more advanced systems would no longer be penalized by software that was written only for those interface devices common to all systems.
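The same idea applies on the input side. In the sketch below, again with invented device names and a deliberately simplistic preference order, the application issues one generic request for a choice and the interface layer routes it to the most capable device actually present.

```python
# Hypothetical preference order; a real interface system would weigh the
# user's own preferences as well as the device inventory.
PREFERRED_ORDER = ["voice", "touch screen", "mouse", "keyboard"]


def get_choice(available_devices, options):
    # One generic request; the interface layer picks the best device present.
    for device in PREFERRED_ORDER:
        if device in available_devices:
            print(f"Prompting via {device} for one of: {', '.join(options)}")
            return device   # a real system would now run that device's dialog
    raise RuntimeError("no usable input device on this machine")


get_choice({"keyboard", "touch screen"}, ["Save", "Print", "Quit"])   # uses the touch screen
get_choice({"keyboard", "mouse"}, ["Save", "Print", "Quit"])          # uses the mouse
```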

This concept provides for maximum flexibility and at the same time it minimizes the impact on an installed base of equipment when new technologies are introduced. Future user interface systems could be made artificially intelligent to dynamically tailor I/O and take full advantage of the system's interface sophistication while simultaneously accounting for the users' abilities and preferences. Automatic program generation could also be used to a greater extent if the many judgments involving screen design and device selection were moved out of the domain of the application program and into a standardized user interface system. Both of these advances could be introduced without disturbing a large base of existing applications if the interface control routines were external to the application programs themselves.

At this point I should probably mention that we in the computer industry have never really used the word "standard" properly. For all the talk about standardized operating systems, none really are. Most systems that purport to be standards are simply more standardized than their competitors, meaning that they are more openly specified and more widely used. Lack of functional compatibility among versions of the same system is the rule rather than the exception.

Undoubtedly, this will also be true in the case of user interface systems for many years to come. But even if no true standard does emerge, the formal separation of user interface control and application processing will, in itself, bring about many of the benefits I've just described.

The commercial implications of extending popular applications to many or most machines are enormous. Just as the emergence of an ad hoc standard for PCs based on the IBM PC encouraged thousands of programs to be written for it and its many look-alikes, the existence of one or more standard user interface systems could make compatibility much easier to achieve. Application programmers could again concentrate on the information management functions of their programs, leaving both the nuances of complicated hardware and the idiosyncrasies of humans to the experts who devote their lives to such concerns.

At the first USA-Japan Conference on Human-Computer Interaction held in Honolulu in August 1984, several speakers suggested that we must find ways to modularize human interface routines. Researchers reported on their efforts to classify a broad range of human-computer interactions into formal categories, and on national efforts to standardize user interface devices and protocols. These are clearly the first steps toward user interface function libraries and, ultimately, generalized user interface systems.

The earliest user interface systems will be closely associated with certain operating system environments and will focus on providing common interface protocols for several related applications. Today's integrated PC software packages, like Lotus 1-2-3 from Lotus Corp., Cambridge, Mass.; VisiOn from VisiCorp, San Jose; DesQ from Quarterdeck, Santa Monica, Calif.; Windows from Microsoft, Bellevue, Wash.; and most recently IBM's TopView for the PC AT, all provide common interfaces into which the most often requested applications - text processing, spreadsheets, and graphics - are or can be linked. Application developers are encouraged to tie their programs into these interface systems, but they are still required to know a great deal about their target systems' configurations and their target users' abilities and preferences.

As user interface systems are improved, they will begin to offer programmers increased support in areas like:

  • video screen layout,
  • input and output device selection,
  • input validation,
  • diagnostic dialogs (to correct errors and resolve ambiguities),
  • interaction analysis,
  • default value definition, and
  • on-line training.

And, whereas many of today's application development packages and programming environments may help programmers define these services quickly, tomorrow's user interface systems will perform many of them automatically. Developers will only need to specify the broad context of the activity and the system will handle the details.
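One way to picture that shift, as a rough sketch only: the developer declares what is to be collected, and a hypothetical interface layer supplies the defaults, the type checking, and the diagnostic response. The field names and structure here are invented for illustration.

```python
# The application declares the broad context: which fields it needs.
FIELDS = [
    {"name": "quantity", "type": int, "default": 1},
    {"name": "ship date", "type": str, "default": "today"},
]


def collect(fields, raw_values):
    # The interface layer applies types, defaults, and a diagnostic response;
    # the application only declared the fields above.
    record = {}
    for field in fields:
        raw = raw_values.get(field["name"], "")
        try:
            record[field["name"]] = field["type"](raw) if raw else field["default"]
        except ValueError:
            print(f"'{raw}' is not a valid {field['name']}; using the default.")
            record[field["name"]] = field["default"]
    return record


print(collect(FIELDS, {"quantity": "three"}))   # {'quantity': 1, 'ship date': 'today'}
```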

Eventually, as other technologies such as artificial intelligence, voice recognition, and holographic imaging become available, user interface systems will assume a more dynamic character. Instead of rigidly prescribing interface operations when the applications are linked to the interface or when the applications are loaded for use, the interface system will continually adjust itself to match the varying skills, attitudes, and levels of interest of individual users.

This is not to suggest that the interface will be so dynamic that it becomes unpredictable; some degree of consistency is essential, especially in repetitious tasks. We must recognize, however, that the classical system development process - analyze, design, program, test, then use - is woefully inadequate when task definitions change from day to day and from user to user. In our dealings with other people we sometimes want to go slowly and carefully, and at other times we're in a hurry and don't want to bother with details. Sometimes we expect complete explanations and at other times we take things on faith. Sometimes we concentrate on one aspect of a problem and at other times we ignore that aspect entirely. It is logical to expect that our dealings with computers will eventually permit this kind of moment to moment, situational flexibility, but this can only be accomplished and controlled with advanced, modularized user interface systems.

A political pundit has said that among economists the real world is often a special case. Now we in the computer industry must stop viewing the human interface as a special case. The user interface system is an idea whose time has come and we must develop a framework within which our application programs can smoothly evolve as our knowledge of ourselves improves.


D. Verne Morland is an internal consultant for NCR Corp. He is responsible for monitoring technology and market developments in many information industry sectors and he provides training and counsel on the application of strategic management techniques in high technology enterprises.