Finding the Shape of Information

by Mary Hrovat

If you have ever searched the World Wide Web with Netscape or used a reference work on CD-ROM, you might be familiar with the experience of feeling "lost in cyberspace." You might have found yourself asking questions such as "Where am I?" or "I know I've seen something about this, but where?" Andrew Dillon, an associate professor since 1994 in the School of Library and Information Science (SLIS) at Indiana University Bloomington, studies human-computer interaction (HCI), including the ways in which users navigate through multimedia, what sort of structure they perceive in information space, and how that structure helps them use multimedia. He seeks to understand and minimize navigation problems by asking, "Can we learn anything about how humans navigate in the physical world, and, by extension, in the world of electronic information?"

Dillon, who holds bachelor's and master's degrees in psychology and a doctorate in human factors, describes his work as drawing from, and ultimately feeding back into, cognitive science (in which he is a core faculty member at IUB). In bringing cognitive science to bear on the potentials and problems of multimedia, he seeks to reach two goals. The first is to increase the success and satisfaction of users of multimedia by providing software designers with information on how people find their way through multimedia resources. The second goal is to understand the cognitive processes involved in information gathering and processing.

Imagine reading a book electronically. Multimedia technology allows users to search quickly and accurately for any portion of the text and to follow links to information sources combining sound, animation, film, and text. This presentation obviously differs from the traditional printed book, and the skills required to use it effectively and maneuver through it mix those required for print media with some totally new ones. Dillon mentions an "electronic book" whose screen display mimicked the printed page, down to requiring users to click on the corner of the page and watch the sheet of paper turn over. This shows what happens when the analogy to paper is carried too far; at the same time, studies of how people use paper texts yield valuable results that can shed light on how people use multimedia. A good example is a recent study Dillon carried out involving articles from academic journals. He examined several variables, including familiarity with the discipline and the presence or absence of nontextual cues (such as graphic material), as factors in people's ability to place isolated paragraphs in the correct section of an article. The results of this study are currently in preparation, but they have the potential to offer insights into readers' use of structure, which could prove useful in cyberspace as well.

Dillon emphasizes the importance of empirical evidence in understanding how people locate things in an electronic environment. Multimedia is new enough that many assumptions being made about its use have yet to be tested or quantified. For example, when a user opens a folder on a Macintosh, the folder appears to "zoom" out, and the new location is thus graphically linked, via the animation effect of the zoom, to the original location of the folder. This is thought to be a valuable location cue for users, yet Dillon cites results from the work of doctoral student Michael Chui showing that the presence or absence of this feature does not seem to affect whether users remember which folder went where. Chui is also investigating aspects of scrolling and movement on the screen.

Models of physical navigation serve as springboards for sketching an understanding of how people perceive structure in information space. Dillon works with a schema-instantiation model of navigation through physical space: people have a general schema of what a city, for example, is like. When they go to a new city, they instantiate this general schema by getting to know the actual conditions of that particular city--where the landmarks are, how to follow a route to a particular destination, and how the city's features fit together as a whole (this third level of instantiation, survey knowledge, usually follows the other two). How applicable is this model to the electronic environment, and how does it need to be adapted for that environment? The SLIS Usability Laboratory provides a powerful tool for studies that will begin to answer these and other questions.
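To make the three levels concrete, here is a minimal sketch, in Python with invented data, that treats an information space as a graph: the full map stands in for survey knowledge, a handful of recognizable nodes for landmark knowledge, and a remembered path between two points for route knowledge. This is an illustration of the distinctions only, not a model drawn from Dillon's work.

```python
# Illustrative sketch only: a toy rendering (not Dillon's model) of landmark,
# route, and survey knowledge over an information space treated as a graph.
from collections import deque

# Survey knowledge: the overall map of nodes (screens) and the links between them.
site_map = {
    "home":     ["recipes", "search"],
    "recipes":  ["home", "soups", "desserts"],
    "soups":    ["recipes"],
    "desserts": ["recipes"],
    "search":   ["home"],
}

# Landmark knowledge: a few salient nodes the user recognizes on sight.
landmarks = {"home", "recipes"}

def route(start, goal, links):
    """Route knowledge: one remembered path from start to goal (breadth-first search)."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("home", "desserts", site_map))  # ['home', 'recipes', 'desserts']
```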

The Usability Laboratory is located adjacent to the SLIS Library in the basement of the Main Library. It consists of two rooms with a one-way glass window between them; in one room, users can work at either a Macintosh or an IBM workstation, watched by two cameras. In the other room, the experimenter operates a sophisticated bank of video equipment. The cameras record the facial expressions, mouse and keyboard motions, and other activities of the user (such as consulting documentation or conferring with other users). The screen is captured directly to a third video feed, so the experimenter sees exactly what the user sees. Microphones record the verbal element of the session. The experimenter can zoom in and out with the cameras and can move them to focus on important details as an experiment progresses. The three video sources (two cameras and the direct screen feed) are synchronized, and there is a time stamp on the tapes. For academic purposes, the tapes constitute a complete record of each experiment and combine to form a rich source of data for analysis. In addition, a session can be edited down to one tape, with a split-screen feature showing details from any or all of the three sources; the single tape can then be used to make a presentation at, for example, a conference.

Three partners operate the Usability Laboratory: the School of Library and Information Science, the Indiana University Libraries, and University Computing Services. The partners began assembling the laboratory in late 1995 after receiving a grant from the Office of Information Technology. Two types of work are envisioned for the laboratory. Students--primarily SLIS and cognitive science students, but also those from the Instructional Systems Technology program in the School of Education, the Department of Computer Science in the College of Arts and Sciences, and the School of Business--can learn how to gather and analyze data for usability studies, thus acquiring a skill set that is valuable in the marketplace. Software developers, both those within the university and those from industry, can contract to use the laboratory for usability studies on new products. The Philadelphia-based Institute for Scientific Information is currently using the laboratory to evaluate World Wide Web-based versions of its citation indexes. Dillon points out that a common mistake in industry is to assume that simply gathering data will solve problems; the university can provide a valuable service in helping to analyze as well as gather data.

Two types of analysis can be done in usability studies. One type, outcome analysis, involves measuring discrete variables such as speed of performance, correctness of identification of screens, or accuracy of search results. The laboratory's synchronized, time-stamped recordings allow sequences of events and the times between them to be accurately measured, even, Dillon says, at the microanalysis level of "establishing speed changes that occur over several keystrokes."
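As a rough illustration of outcome measurement at that level of detail, the sketch below, in Python, computes inter-keystroke intervals from a time-stamped keystroke log and compares typing speed early and late in a sequence. The log format and the numbers are invented for illustration; they are not the laboratory's actual data format.

```python
# Illustrative sketch: microanalysis of typing speed from a time-stamped
# keystroke log (timestamps in seconds). The log format and values are
# invented; they are not the Usability Laboratory's actual data format.
keystrokes = [
    ("f", 10.00), ("i", 10.18), ("n", 10.35), ("d", 10.51),
    (" ", 11.40), ("s", 11.62), ("o", 11.85), ("u", 12.30), ("p", 12.82),
]

# Inter-keystroke intervals: the gaps between successive key presses.
intervals = [t2 - t1 for (_, t1), (_, t2) in zip(keystrokes, keystrokes[1:])]

# A simple "speed change" measure over several keystrokes: compare the mean
# interval in the first half of the sequence with the mean in the second half.
half = len(intervals) // 2
early = sum(intervals[:half]) / half
late = sum(intervals[half:]) / (len(intervals) - half)
print(f"mean interval, early: {early:.2f}s  late: {late:.2f}s")
```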

The second type of analysis is aimed at studying processes involved in user interaction with the system. The most important method used in the study of these processes is concurrent verbal protocol analysis. A protocol is a subject's verbal account of search strategies and procedures, elicited in real time; the more detailed the protocol, the better for researchers. The protocol, captured on the videotape, contains a great deal of complex data, and protocol analysis is correspondingly time intensive; Dillon estimates that a single hour of tape requires ten hours of work to analyze. This process data, though, can be among the most valuable information gathered in the laboratory. Outcome data treats the user as a black box into which goes the information on the computer screen and out of which come decisions and actions. Protocol analysis attempts to get inside the black box to find out what's going on in it. The laboratory's time-stamped, synchronized tapes and the multiple data sources allow for highly detailed analysis of the process of interaction. Dillon points out that it is possible to have "systems with similar outcome measures but different ways people utilize the system and differing levels of satisfaction."
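To suggest the general shape of the bookkeeping involved, here is a minimal sketch, in Python, of coded protocol data: the transcript is segmented into time-stamped utterances, each assigned a coding category, so that activities can be tallied and aligned with the screen record. The transcript, timestamps, and categories are invented; they do not represent Dillon's actual coding scheme.

```python
# Illustrative sketch: tallying coded segments of a think-aloud protocol.
# The transcript, timestamps (seconds), and coding categories are invented
# examples, not Dillon's actual coding scheme.
from collections import Counter

protocol = [
    (12.4, "plan",     "I'll start from the index page"),
    (25.1, "navigate", "clicking the soups link"),
    (41.8, "evaluate", "this isn't the recipe I saw before"),
    (55.0, "navigate", "going back to the index"),
    (70.3, "evaluate", "okay, this one looks right"),
]

# How often does each kind of activity occur?
counts = Counter(code for _, code, _ in protocol)
print(counts)  # Counter({'navigate': 2, 'evaluate': 2, 'plan': 1})

# Because segments are time-stamped, they can be aligned with the screen
# record: find what the subject said around a given moment on the tape.
def utterances_near(t, window=10.0):
    return [text for ts, _, text in protocol if abs(ts - t) <= window]

print(utterances_near(50.0))
```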

Dillon is working with SLIS graduate student Jennifer Heffron on a study intended to discover what constitutes a memorable landmark in the electronic world. The experiment gives subjects a set of tasks that requires them to assemble menus for dinner guests with various dietary requirements, using the World Wide Web to view a lengthy online vegetarian cookbook. After they complete their tasks, they are shown a series of screens and asked to identify whether they saw each one in the course of their work. Analysis of the "false hits" (screens identified as having been seen when in fact they were not) can reveal whether those screens resembled, in some discernible feature, screens that actually were seen. If there is a pattern to those similarities, it could provide clues about what features are salient for people when they find something memorable--whether they are navigating at a conceptual level and lumping together everything that has to do with food, for example, or whether they are relying primarily on visual cues, such as a graphic image in a particular location, colors, or the size or position of graphics. As this and future longitudinal experiments begin to yield data, it should become clearer how well the navigation model described above applies to the electronic domain and what electronic navigation by landmark, route, and survey looks like in practice.
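A minimal sketch of this kind of false-hit analysis, in Python: screens are described as sets of features, and the features that unseen-but-"recognized" screens share with actually seen screens are tallied. The screens and feature labels below are invented for illustration, not materials from the Dillon-Heffron study.

```python
# Illustrative sketch: looking for patterns in recognition-test "false hits."
# Screens are described by made-up feature sets; the data are assumptions
# for illustration, not the actual study materials.
from collections import Counter

seen = {
    "s1": {"topic:soup", "image:top-left", "color:green"},
    "s2": {"topic:dessert", "image:top-left", "color:white"},
}
false_hits = {
    "f1": {"topic:soup", "image:top-left", "color:white"},
    "f2": {"topic:dessert", "image:top-left", "color:green"},
}

# For each false hit, count how often each of its features also appears on a
# screen the subject actually saw. Features that recur are candidates for
# what made the unseen screen feel familiar (here, a visual cue: the image
# position recurs far more often than any topic or color).
shared = Counter()
for feats in false_hits.values():
    for seen_feats in seen.values():
        shared.update(feats & seen_feats)

print(shared.most_common())
# [('image:top-left', 4), ('topic:soup', 1), ('color:white', 1), ...]
```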

The ViewFinder database offers another tool for understanding what types of cues people look for when they are moving around in cyberspace. The database, developed by Javed Mostafa, an assistant professor of library and information science at IUB, contains one hundred brief clips from movies. The database initially displays a random selection of still shots from the clips. The user can select any one of them to watch a brief moving clip and can then search for other clips based on characteristics of the selected clip (in essence telling the search engine: find me more like that, or find me more from this movie). There is also a text menu that allows the user to search by actor, film title, director, or subject. In addition, users can manipulate the images by zooming in or out, pausing a moving image, or moving forward or backward one frame at a time. This database will be used, Mostafa explains, to try to understand which of the given cues people use to find their way around and what sorts of paths they trace through this complex information space using the features provided.
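A minimal sketch, in Python, of the "find me more like that" idea: rank the other clips by how many metadata attributes they share with the one selected. The records and field names here are invented; ViewFinder's actual metadata scheme may differ.

```python
# Illustrative sketch of "find me more like that": rank the other clips by how
# many metadata attributes they share with a selected clip. The records and
# field names are invented; ViewFinder's actual scheme may differ.
clips = [
    {"id": 1, "film": "Metropolis", "director": "Lang",      "subject": "city"},
    {"id": 2, "film": "Metropolis", "director": "Lang",      "subject": "robot"},
    {"id": 3, "film": "M",          "director": "Lang",      "subject": "crime"},
    {"id": 4, "film": "Vertigo",    "director": "Hitchcock", "subject": "city"},
]

def more_like(selected, clips):
    """Return the other clips, most similar (most shared attributes) first."""
    def overlap(clip):
        return sum(clip[k] == selected[k] for k in ("film", "director", "subject"))
    others = [c for c in clips if c["id"] != selected["id"]]
    return sorted(others, key=overlap, reverse=True)

for clip in more_like(clips[0], clips):
    print(clip["id"], clip["film"], clip["subject"])
# 2 Metropolis robot   (same film and director)
# 3 M crime            (same director)
# 4 Vertigo city       (same subject)
```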

Dillon's work explores issues that must be addressed as we all come to spend more time in electronic information space. Results of his work are presented in academic settings, to further cognitive science research, and to practitioners in the software industry who are responsible for designing new products. Potential applications are wide-ranging: office software, educational software, and browsers and information managers for the World Wide Web. As designers gain a better understanding of what users need, we can expect to put the capabilities of multimedia to better and better use.
