Experience with Contextual Field Research (Panel)

Moderator: Michael Good, Digital Equipment Corporation
Panelists: Robert Campbell, IBM
Gene Lynch, Tektronix
Peter Wright, University of York

Originally published in Proceedings of CHI ’89 Human Factors in Computing Systems (Austin, TX, April 30 – May 4, 1989), ACM, New York, pp. 21-24.  Copyright © 1989 by Michael Good, Robert Campbell, Gene Lynch, and Peter Wright.  All rights reserved.


Contextual field research is a growing area of interest in human-computer interaction. This panel is an outgrowth of a CHI ’88 Special Interest Group on “Field Research Techniques for Building Usable Products.” The panelists and audience will share experiences in using contextual field research methods both in research and product development, and discuss issues which have come out of their experience.

Contextual field research differs from more traditional methods used in human-computer interaction in several ways. It emphasizes the importance of context in understanding usability issues — including the work context, the social context, the motivational context, the organizational context, and the physical context of computer use. Experimental research often abstracts away the effect of context; contextual field research focuses on context. Field studies are emphasized over laboratory studies, though contextual methods can also be applied to laboratory research.

One key issue in contextual field research is its relationship to other techniques, such as experimental and laboratory testing, used in human-computer interaction. Contextual field research often focuses on broad usability issues, while experimental laboratory research often focuses on specific, narrower issues. All the panelists continue to do laboratory work in one form or another, and will address the relationship between contextual and laboratory methods in the context of their work.

In keeping with the spirit of contextual research, this panel will be structured differently from many CHI panels of the past two years. Instead of having each panelist address a pre-determined set of issues related to contextual field research, the panelists’ presentations will focus on the work as a whole. Specific issues will emerge from these more holistic views of research and development work in human-computer interaction.

The Panelists

Robert Campbell joined the User Interface Institute at IBM as a Research Staff Member in 1985. He received his Ph.D. in Developmental Psychology from the University of Texas at Austin. For the past two years, he has been exploring qualitative research methods to assess the usability of online help systems. His other research interests include the relationship between human-computer interaction and psychology (joint work with Jack Carroll) and the development of expertise in programming.

Gene Lynch is a principal scientist in the Tektronix User Interface Laboratory. His background includes a Ph.D. in Engineering Science from the University of Notre Dame and positions in software engineering, applications development, human factors, applied research, and teaching. He chaired the HFS/ANSI committee during the development of the American National Standard for Human Factors Engineering of Video Display Terminal Workstations. His current interests are in developing design methods which lead to advanced instruments and systems that fit the application domain and are both learnable and usable. Gene is active in SIGCHI and invites you all to CHI ’90 in Seattle where he is the General Co-Chair.

Peter Wright graduated in psychology from the University of York. He went on to study Cognitive Science at the Edinburgh School of Epistemics, where he completed his Ph.D. on the development of expertise in well-structured problem-solving domains. His first post-doctoral position was in the Linguistics Department at Essex University, where he researched discourse comprehension. He is currently back at York, working in the HCI group on a research project dedicated to combining formal methods with empirical evaluation and rapid prototyping.

Michael Good is a principal software engineer in Digital’s Software Usability Engineering group. He received his B.S. and M.S. degrees in computer science from MIT. He was one of the designers of the XUI (X User Interface) style for the DECwindows system, and contributed to the user interface design of many DECwindows products. His interests include developing methods for building more usable software systems.

Robert L. Campbell: Extending the Scope of Field Research in HCI

A major concern in our work at IBM Research is developing research methods that will increase our understanding of users’ needs and their experiences of usability. Quantitative measures, like keystroke counts and performance times on benchmark tasks, leave out most of what is important about usability: the kinds of errors that users make, their satisfaction or dissatisfaction with features of the interface, etc. The information we need can only be obtained through qualitative measures, such as various forms of interviews, thinking out loud, and video observation. Because systems need to be usable by users working on real tasks in real work settings, we are devoting increasing attention to field research, and to improving the methods that we use in the field.

Field studies of online help

One area in which we are currently developing our field methods is my research on task-oriented online assistance. This work began with a laboratory evaluation of the help system for a command-based text editor. In the laboratory study, we found that thinking out loud, in conjunction with monitoring of help use, provided rich and extensive information about the kinds of difficulties users have with existing help systems. Where a help system has to be usable, however, is in users’ offices when they are working on tasks of their own choosing. Users in the laboratory study in fact remarked that the tasks were contrived and that they were learning at an unnatural pace, moving on to new functions without having time to practice the old ones.

Online help is a challenging subject for field studies because it forces them to be longitudinal. Learning to use a system takes weeks or months. Spot observations cannot convey the nature of the learning process. Even repeated interviews miss the detail of satisfactory and unsatisfactory interactions with the help system, as users quickly forget these. Monitoring help use produces detailed records over time, but fails to convey the meaning of interactions with the help: what problem the user was trying to solve, and whether the help provided relevant information or not [1]. In a study now under way, we are using keyboard-activated tape recorders into which users can make spoken comments while using the help for our text editor. In effect, we are asking for localized thinking out loud. To the extent that users are willing to make spoken comments, we will have a method for tracking users over time in the field, which could be applied to other features of the interface as well. Of course, we are concerned not only with the quality of information we obtain in this way, but also with the additional time and expense of such field studies in the course of a usability engineering approach to product design.

Field studies of professional work

Field studies of professional work have been an important element of the empirical design approach at IBM Research for many years [4]. In past work [5, 6], and again in work in progress by Bob Mack and Joan Roemer, we have used surveys and interviews to understand the computer support needs of business professionals, and to develop requirements for integrated office systems.

Our field work has been influenced by real-world constraints, most notably the simple fact that we have very limited access to business professionals. We cannot spend much time with them in face-to-face conversation or observation. Moreover, no one data-gathering instrument provides all the information we need. Mack and Nielsen [5] found that surveys were more valuable when complemented by follow-up interviews, guided by the survey results. The surveys provided a background for carrying out a semi-structured interview. More recently we have developed interpretations from multiple sources, e.g., staff and secretaries of target business professionals. We present this evolving interpretation in narrative form to each interviewee for comment and elaboration. Although this tends to structure the interview in terms of existing work activities, it is useful in the face of the limited access, and it provides converging evidence for our interpretations. Developing this composite picture of a professional’s work also proceeds iteratively between different informants and different levels of analysis.

A key question in the face of these limitations is, of course, are we learning anything useful about our interviewees? The answer to this question depends on our goals. We aim at developing requirements for providing software to support and improve the work of those whom we interview. We assume that we understand the professionals’ work well enough to provide useful design guidance in the form of requirements, or anticipated problems with the current design. A challenge we face working within a usability engineering framework is developing measurable objectives, and assessments for these objectives, that are ecologically valid, and that make contact with the information and understanding we gain from field studies. Standard laboratory approaches to specifying objectives conflict with the fact that our users are unlikely to participate in laboratory studies. Even if they would, we don’t know to what extent defining and evaluating measurable objectives in the lab can provide valid indicators of the usefulness and usability of an evolving system. Should objectives for usability engineering be defined and measured in the field? Or can objectives defined and measured in the lab be shown to be reasonable stand-ins for aspects of usability in the field?

Gene Lynch: Ecologically Valid Usability Design

The objective of our research is to develop engineering methods which deliver the appropriate functionality to meet application demands in a manner that is both learnable and usable. Our recent qualitative methods have focused on the learnability of systems by domain experts both in laboratory settings and in the normal work context. These methods have been applied to systems that range from unique combinations of hardware and software with embedded computers to software applications running on generic workstations.

Contextual interviews reveal the actual usage of existing systems that are being used to solve real problems. This highlights the strengths and shortcomings of these systems and points toward areas in need of innovation and improvement. Used at or before the start of a product design effort, they can provide both overall and specific design directions. The results of the contextual interviews define the usability issues for the domain in question and indicate the areas to be tested in the laboratory.

The next step is a more focused laboratory investigation of the existing systems to uncover the winners and losers in functionality and interface designs. This can be cast as an analysis of alternatives. It addresses questions such as: What about the existing systems meets the expectations of the domain experts, and what elements run counter to these expectations? Synthesis of these findings leads to component design strategies and hypotheses.

The qualitative assessments of learnability and usability are next tested with domain experts interacting with varying levels of simulated systems, performing tasks indicated in the contextual interviews. In this iterative design process, domain experts who have never worked with the system in question are first asked to perform atomic tasks (e.g., selection of the proper tool in a drawing application). The ease or difficulty of these tasks, and the additional knowledge required to accomplish them, can also be quantified. The experts are then given a composite task from their application domain. In both scenarios the user’s expectations, behaviors, strategies, reactions, and expressions are probed and recorded on videotape. We were surprised at the quality of the information that users provided with even the simplest levels of simulation.

In addition to yielding a more learnable and usable system, the iterative design process involves the documentation and training teams in the analysis of the data, giving added value to these aspects of the whole system.

The directed dialogs and the contextual interviews expose design concepts and products to the most critically qualified testers: users trying to get their work done. The raw data is rich in user behaviors, expectations, model building, and hypothesis development and testing. The edited tapes are powerful motivators to the design team working to provide the best quality product to their customers.

Peter Wright: Eliciting and Interpreting Contextual Data for Iterative Evaluation

Our research at York considers usability problems as forming a hierarchy of different levels, with each level requiring more contextualised knowledge for its diagnosis. By contextualised knowledge we mean knowledge of the particular user’s goals and priorities at the time the problems arise. This hierarchy is apparent in the following examples: (i) keystroke level errors which can be identified and diagnosed by reference to system logs from unknown users, (ii) keystroke errors based on mismatches between the application concept and the user’s model, and (iii) problems which are not manifest as keystroke errors, such as task-action mappings whose complexity is unacceptable to the user. Following Winograd and Flores [8], we refer to this broad spectrum of usability problems as “breakdowns” since their effect is to produce failures of transparency in the interface. It is problems at the higher levels, e.g. (ii) and (iii), which require the evaluator to have contextualised information for their identification and diagnosis.

We have experimented with several verbal protocol techniques as means of eliciting this contextual information, and in this panel we shall describe the results of working with one such method, a question-answer dialogue method. The users are encouraged to consider themselves not as experimental subjects but rather as co-evaluators. They are asked to think aloud as they carry out a series of tasks with the interface. They may also ask the evaluator for assistance where necessary. In response, the evaluator adopts a kind of clinical/tutorial role: when users ask direct questions about what to do next, the evaluator responds with questions designed to find out about their model of the system, their understanding of the operations available, and their interpretation of screen information.

This method was applied to the study of one user of a prototype bibliographic database. Sections of dialogue were categorised as “attempts” followed by success, or failure and recovery. Significant usability problems were identified in both attempts and recovery. Breakdowns resulting from failure, such as typographical and mode errors, were generally considered by the user to be less important than those identified in attempts, such as awkward chains of commands. This is consistent with the view that usability problems at higher levels of the hierarchy will generally be more serious even when they are not manifest at the behavioural level. In addition, the user was able to make useful recommendations about how the system might be improved.

A possible concern about the use of question-answer dialogue, or contextual methods in general, is that they may require a high degree of training. To investigate the ease with which the method can be learned, 14 computer science graduate students with no previous experience of evaluation work received a brief period of training (under 6 hours) and then, working in 7 teams of 2, evaluated the same bibliographic database. Each team had the help of a naive user. The teams identified an average of 4.3 of the 9 most important usability problems known to exist with this system.

In this panel presentation I shall describe the question-answer dialogue method in more detail and explain how it fits into an incremental evaluation procedure. The value of contextual field research and what we consider to be the real problems and difficulties for its further development will also be discussed. In particular, it will be argued that many conventional criticisms are straw men and that the more genuine and immediate problem is to relate these methods to a broader context of research and in so doing clarify the process and role of qualitative analysis.

Michael Good: Contextual Field Research Contributions to the DECwindows Program

Over the past few years, Digital’s Software Usability Engineering group has adapted engineering techniques to the design of usable computer systems. These usability engineering techniques have evolved from an emphasis on laboratory testing of user interfaces to an emphasis on contextual field techniques [7].

Two main factors motivated us to change our focus to contextual field research. First, our laboratory work was often not sufficient to produce usable, competitive products. Although the lab tests often met their goals, the goals were usually not grounded in customer experience. Second, we became aware of theoretical and philosophical work which argues for the importance of context in understanding human behavior [3, 8].

As we were changing our paradigm for developing usable systems, Digital began work on the DECwindows program for workstation software. This was the largest software engineering project in Digital’s history. A major part of this program was the development of the XUI (X User Interface) style: a consistent, modern interface for workstation software. Our Software Usability Engineering group was heavily involved in the design of the XUI style and the user interfaces of DECwindows applications from the inception of the program.

We have used data from contextual interviews in three ways. First, contextual research helped build an understanding within the company of why this major new software engineering effort was needed. Second, we used contextual data to begin building a theory of usability, grounded in user experience, that is incorporated into the XUI design philosophy. Third, contextual data provided much global and detailed information for the design of specific DECwindows products.

Early in the DECwindows program, we interviewed some of our customers who were using both Digital and competitive workstations. Those interviews tended to be very harsh on our older workstation software. One interview was conducted in response to a letter written to one of our senior managers. From this interview, we produced a tape that showed about a dozen of the major interface deficiencies in our old workstation product when compared to one of our major competitors. This tape was shown many times throughout the company both to illustrate the need for the DECwindows program, and to show some of the fundamental issues that the DECwindows architecture had to address to be successful.

We began to develop a theory of usability grounded in user experience based on contextual data collected during the first year of the DECwindows program. This theory was used to guide many parts of the XUI design, and served as the basis for the opening philosophy chapter in the XUI Style Guide [2]. For example, one subsection describes how to support user hypothesis testing through techniques like using an Apply button in modeless dialog boxes. We derived this recommendation from contextual interviews which showed the importance of hypothesis testing in workstation applications as diverse as desktop publishing and astrophysics research.

Contextual interviews also provide many design ideas for specific products, from high-level issues to low-level details. The pop-up menus in the XUI Style Guide and Toolkit, and their usage in several DECwindows products, came from contextual data which showed how pop-up menus can help keep users in the flow of their work.

We believe that our contextual approach to interface design has greatly increased our ability to produce more usable systems. We also find contextual field work to be more enjoyable and fulfilling than our earlier laboratory work. We are continuing to develop our techniques to build even better systems which enrich human experience.


The views expressed in this paper are those of the authors and do not necessarily reflect the views of Digital Equipment Corporation, IBM, Tektronix, or the University of York. XUI and DECwindows are trademarks of Digital Equipment Corporation.


  1. Campbell, R. L. Evaluating online assistance empirically. IBM Research Report RC 13410, Yorktown Heights, NY, 1988.
  2. Digital Equipment Corp. XUI Style Guide. Order No. AA-MG20A-TE, Maynard, MA, Dec. 1988.
  3. Dreyfus, H. L. and Dreyfus, S. E. Mind over Machine. The Free Press, New York, 1986.
  4. Gould, J. D. How to design usable systems. In M. Helander (Ed.), Handbook of Human-Computer Interaction, North-Holland, Amsterdam, 1988, pp. 757-789.
  5. Mack, R. and Nielsen, J. Software integration in the professional work environment: Observations on requirements, usage and interface issues. IBM Research Report RC 12677, Yorktown Heights, NY, 1987.
  6. Nielsen, J., Mack, R. L., Bergendorff, K. H. and Grischkowsky, N. L. Integrated software usage in the professional work environment: Evidence from questionnaires and interviews. In Proceedings of the CHI ’86 Conference on Human Factors in Computing Systems (Boston, April 13-17), ACM, New York, 1986, pp. 162-167.
  7. Whiteside, J., Bennett, J. and Holtzblatt, K. Usability engineering: Our experience and evolution. In M. Helander (Ed.), Handbook of Human-Computer Interaction, North-Holland, Amsterdam, 1988, pp. 791-817.
  8. Winograd, T. and Flores, F. Understanding Computers and Cognition: A New Foundation for Design. Ablex, Norwood, NJ, 1986.