Design Plan for an Online Advising System

Final Exam, by Zach Tomaszewski

for ICS 667, Fall 2002, taught by Dr. Dan Suthers


Introduction

To develop an online advising system for the University of Hawaii, we must first develop a viable design plan. I propose the following usability engineering methods and schedule.

Clarify Goals

The first step is to clarify the expectations for the system. An excellent way to do this is to produce a root concept document, which is essentially a mission statement for the project. We will later collect a great deal of information, and we need a guiding document to refer back to in order to stay on track. The root concept is not set in stone, but it should be well-researched enough ahead of time that it does not change much during the project. We should determine who the stakeholders are and what they have in mind when they speak of an "online advising system." We should determine who is responsible for design decisions. How will this new system integrate with current operations--will it be supported by the same people who support the current system, or will it be handled by a new, separate department? We already know we have one year to complete the project. By answering questions like these before we start, we can reach a basic consensus about what we are trying to do, who the users and stakeholders are, what limitations we face, and what assumptions we are making about the finished product. These findings are summed up in the root concept. Indeed, until this first step is done, even this general description of a design methodology may be misguided.

Overall Design Approach

Next, we should determine a general design approach. Most design processes fall into either a structured design or a rapid prototyping paradigm. Structured design follows a "waterfall model": the design team progresses sequentially through the design steps--planning, data gathering, design, prototyping, implementation--logically producing the finished product from a top-down design. Though in theory there should be a number of iterations between these steps, in practice the process is largely sequential. Once a team spends weeks researching needs, it rarely has the time, resources, or inclination to fully review and revise them after implementing a prototype.

In the rapid prototyping paradigm, the focus is on fast production. Here, instead of a linear sequence of design steps, there is a "star model" of steps. The steps are essentially the same as in structured design, but the design team alternates between any of the star's points as needs dictate. Though perhaps not obvious at first, there are advantages to such an unplanned, free-flowing design method. Frequently, stakeholders' needs are clarified by interacting with the system itself. Rapid prototyping allows many design cycles to work out bugs and gather further information. However, without an overarching design plan or a focus on researching users' needs, the resulting system can be a bit of a hodgepodge.

Of course, different projects require different approaches. Transferring an existing system to a new medium is usually best undertaken with a linear, structured design plan. In this case, there is no need to reinvent the system itself. Instead, there is only the need to understand how the tasks are currently done and support those same tasks in the new medium.

On the other hand, innovative new designs are well-suited to rapid prototyping. Users rarely understand innovative projects until they can interact with them. In this case, creating a model with which users can interact and provide feedback should be the first step. Additionally, some systems are deeply enmeshed within a social network and so affect a number of different users. Though structured design can get a good sense of these effects through data flow and social context mappings, sometimes introducing new technology changes the system itself. Rapid prototyping can better adapt to this sort of change.

With these tradeoffs in mind, I propose we use a two-pronged attack for the UH online advising system. This project seems primarily structured in nature: there already exists a system of advising that merely needs to be transferred online. By collecting data and modeling the current system, we should be able to fulfill those same needs online. However, since online is a different environment, there may be new or additional needs to be met. These could go unnoticed under a solely structured approach because users cannot use the system until it is completed.

Thus, I believe the design team should be composed of two sub-groups engaged in simultaneous but largely independent design processes. The majority of the design team should work from a structured approach. As mentioned above, this approach best fits the project of transferring an existing system to an online version, as we hope to do. It may be slow at times, with limited iteration, but it is generally effective. It logically produces software based on documented needs.

Meanwhile, a small group of one to three people should be engaged in rapid prototyping. While I think this approach alone would be inadequate for the project, in moderation it will allow us to shore up some of the deficiencies of a purely structured approach: we can build the system in response to real use by real users. It also takes some of the pressure off the data collection phase of design, since stakeholders can see results from very early on.

Structured Design Steps

We should begin designing by gathering data. A great deal of advising already happens offline, and we will have to support those same activities online.

First, we should review the formal position of the university regarding advising and its role in a student's scholastic career.

Then we should interview the stakeholders identified in the root concept. Presumably, these primarily consist of students who receive advising, faculty who advise, and staff who handle any of the paperwork generated. Such interviews should contain both general and specific questions. Generally, we want an idea of how the current system works, what users like and dislike about it, and what features they would need or would like to see in an online system. We should ask for a verbal walk-through of the advising process. We want a broad view of the current process from each of the viewpoints involved, as well as a list of the possible features we could include in an online system.

Specific questions in the interviews should focus on more concrete concerns: how many students are advised, how frequently they are advised, what abilities users have with computers and online systems, how users feel about an online system, and whether they have used any other online systems to date.

A common occurrence in interviews is that workers report the official description of the process rather than what they actually do. We should therefore sit in on a few example advising sessions and watch advisors deal with advisees' emails. This contextual inquiry gives us a chance to see how advising is really done, provides an opportunity to ask about the details of the process as it happens, and helps detect any tasks not documented in the interviews.

This is also the time to collect artifacts for artifact analysis. What sorts of tools are used during advising? We should note any forms used, notes jotted down, diaries or calendars consulted, etc. Also, how and where is any generated information recorded? How is it processed later? If both online and offline advising will be processed by the same staff, the two will need similar output formats.

Now that we have the raw information about the process, it needs to be consolidated through activity analysis. First, we need to generate essential use cases, the basic tasks involved in advising. (The tasks could be analyzed further into their component parts using hierarchical task analysis, but I think that is too specific at this point. We are more concerned with the general process; the specific details of how a task is accomplished will likely be different in an online system.) Essential use cases are related to each other by a use case map. We need to delineate the different user roles involved in the system, as well as any social contexts that affect how those users interact with the system. Finally, a data flow diagram can show us how information moves through the system--what input comes from whom, how it is processed, and where it is stored.
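To make this step concrete, here is a minimal sketch, in Python, of how essential use cases and a use case map might be recorded as the analysis proceeds. The roles, task names, and fields are invented for illustration; they are assumptions, not findings from the actual data gathering.

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class EssentialUseCase:
        name: str
        roles: list[str]                     # user roles that perform this task
        steps: list[tuple[str, str]]         # (user intention, system responsibility)
        leads_to: list[str] = field(default_factory=list)  # edges in the use case map

    request_session = EssentialUseCase(
        name="request advising session",
        roles=["student"],
        steps=[
            ("identify self", "verify student record"),
            ("describe advising need", "route request to an appropriate advisor"),
        ],
        leads_to=["conduct advising session"],
    )

    conduct_session = EssentialUseCase(
        name="conduct advising session",
        roles=["student", "advisor"],
        steps=[
            ("discuss degree requirements", "display the student's course history"),
            ("agree on a course plan", "record the plan for staff processing"),
        ],
    )

    # A trivial use case map: which cases each case leads to.
    use_case_map = {uc.name: uc.leads_to for uc in (request_session, conduct_session)}

Note that the steps record only user intentions and system responsibilities, not interface details; those are deliberately deferred, as discussed above.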

We now know what an online advising system must do. This would be a great time to review any existing solutions. There may already be software close enough to what we need; buying it could prove cheaper than implementing our own. Even if not, a product review can reveal how other designs have met or neglected our discovered user needs, as well as reveal possibly undiscovered needs.

For reporting to people not on the design team, a small number of problem and activity scenarios would be valuable. Scenarios are simple "stories" describing how a single, specific example user interacts with the system. Problem scenarios describe the current system, and activity scenarios describe the future system, though not at a level specific enough to dictate the interface or implementation. Scenarios are useful for describing a design since they are neither overly abstract nor reliant on special notation. They would help us get feedback from users as a "reality check" that the designers really understand the current system and the needs of an online version.

Now we can begin the interface and interaction design. We can rely primarily on context models generated from the essential use cases. Context models can be used to describe an abstract interface based on which essential uses need to be supported when, and what sorts of interface elements are required to provide that support. Claims analyses would be helpful to compare the tradeoffs of possible implementations of the context models.
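As a hypothetical illustration of what an abstract interface derived from context models might look like, the sketch below maps each essential use to the abstract interface elements needed to support it. Every task and element name here is made up for the example.

    # Hypothetical sketch: an abstract interface derived from context models.
    # Each essential use maps to the abstract interface elements that must be
    # available when that use is performed. All names are illustrative.
    abstract_interface = {
        "request advising session": ["identity prompt", "request form",
                                     "advisor directory"],
        "conduct advising session": ["shared course-history view",
                                     "discussion channel", "course plan editor"],
        "process advising records": ["plan queue",
                                     "export matching existing staff formats"],
    }

    # A claims analysis then weighs concrete implementations of each element,
    # e.g. a "discussion channel" as live chat (+ immediacy, - scheduling burden)
    # versus threaded email (+ asynchrony, - slower feedback).

The point of keeping the interface abstract at this stage is that each element can still be realized several different ways; the claims analyses mentioned above decide among them.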

Meanwhile... Rapid Prototype Steps

While the bulk of the design team works on gathering and structuring data about the current offline advising system, a small contingent should be engaged in building a working system with whatever tools they can find. Such prototypers must be creative, innovative, and technically savvy.

The rapid prototype project relies primarily on participatory design, which means that only a subset of users will be involved. Specifically, they must be users with a strong desire to work with an online system who are willing to put up with the technical hardships of a low-fidelity but working prototype. They will also need to be quite competent with existing online technology and willing to deal with frequent interface changes. They may also need compensation for the time and effort they spend helping with the design.

The prototyping team will start with a very basic design and add features based on feedback from users. For example, they might start with a web page containing links to the UH catalog and an embedded, existing chat program. By watching users interact through the system--via log file analysis or observed chat sessions--and by acting on users' suggestions, they should add further features as required. Prototypers may need to do some Wizard of Oz work--that is, providing some of the background processing themselves. For example, they may need to manually reformat some of the information gathered before passing it on to other users.
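The log file analysis itself can stay very lightweight. The following is a minimal sketch of the kind of tally a prototyper might run; the one-event-per-line log format, the file name, and the feature names are all assumptions for illustration.

    from collections import Counter

    def feature_usage(log_path: str) -> Counter:
        """Tally how often each prototype feature appears in the log."""
        usage = Counter()
        with open(log_path) as log:
            for line in log:
                parts = line.split()
                if len(parts) >= 4:          # assumed fields: date, time, user, feature
                    usage[parts[3]] += 1
        return usage

    if __name__ == "__main__":
        # "advising_prototype.log" is a made-up file name.
        for feature, count in feature_usage("advising_prototype.log").most_common():
            print(f"{feature}: {count}")

Even a crude count like this tells the prototypers which features are actually used, which is exactly the feedback needed to decide what to build next.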

Like the structured design team, the prototype team is guided by the root concept. Prototypers can use short surveys or rating scales to elicit user feedback on added features. They should also follow usability design heuristics as much as possible, since these prototypes will be too short-lived to warrant extensive usability studies. Instead, the focus is on general feedback and constant change.
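Summarizing such rating-scale feedback need not take more than a few lines. A small sketch, with invented features and ratings on a hypothetical 1-to-5 scale:

    # Invented 1-5 ratings collected from short surveys, keyed by added feature.
    ratings = {
        "embedded chat": [4, 5, 3, 4],
        "catalog links": [5, 5, 4],
        "course plan form": [2, 3, 3, 2],
    }

    # Report features from best- to worst-received.
    for feature, scores in sorted(ratings.items(),
                                  key=lambda kv: sum(kv[1]) / len(kv[1]),
                                  reverse=True):
        print(f"{feature}: mean {sum(scores) / len(scores):.1f} "
              f"from {len(scores)} responses")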

While the structured design team carefully builds a solid design based on research, the prototypers should build the best system they can, as fast as they can, based on intuition and direct user feedback. The two teams should meet occasionally to compare notes. If a prototyper recognizes a need for a feature but cannot implement it, he can still pass that information on to the structured team. Additionally, prototypers may gain ideas for new features or unmet needs from the findings of the structured team.

As with all prototypes, it must be emphasized to all users that this is not the final design, but merely a stepping-stone in that direction.

Synthesis

At the end of the design period, the two teams should again come together as one. At this point, the structured design team has an abstract interface based on essential uses; the prototypers have experience with a few different designs and with what actually gets used in an online advising system. These two views should be synthesized into a final design. This is the last chance to pause for reflection--to verify that all the user needs to be supported are included in the design, and included in a logical manner. A couple of interaction and interface scenarios could be used to elicit outside user feedback on any debatable design choices.

Implementation of the final design should, to the extent possible, begin with those features that have already been prototyped with success. (These should be the most essential features if the prototyping team succeeded in following user calls for features.) Additional features or questionable aspects can be prototyped first to get some level of user feedback. Whenever possible, the implementation of the final design should be merged with the continuing prototype.

This will produce the final, completed system. There may be a short "beta" period to fix small bugs and make minor changes. However, the combination of research and prototyping should have caught the big problems by this stage; it is doubtful that there will be time or resources left to fix any that remain.

Schedule

I conclude this proposal with a schedule showing the approximate timeline for this design.

             Structured Team                           Prototyping Team
-----------  ----------------------------------------  --------------------------
Month 1      meetings with stakeholders, root concept document, assign teams
             user interviews                           find prototype users
Month 2      contextual inquiry, artifact analysis     prototyping, guided by
Month 3      activity analysis: essential use cases,     participatory design and
             use case maps, user roles, social           design heuristics
             context maps, data flow diagram             (ongoing through Month 5)
Month 4      existing product review, user feedback
             from problem and activity scenarios
Month 5      context models and abstract interface
             design
Month 6      synthesis to produce final design (both teams)
Months 7-11  review interaction and interface scenarios with users; implement
             prototype and test questionable aspects of final design before
             final implementation (both teams)
Month 12     beta test and unleash


Thanks be to Orson Scott Card's Ender's Game, for the idea that a small, independent, creative unit can generate the solutions that keep a larger, static unit from failing.