Explain about the DECIDE framework to guide evaluation.

To guide our evaluations we use the DECIDE framework, which provides the following checklist to help novice evaluators:

  1. Determine the overall goals that the evaluation addresses.
  2. Explore the specific questions to be answered.
  3. Choose the evaluation paradigm and techniques to answer the questions.
  4. Identify the practical issues that must be addressed, such as selecting participants.
  5. Decide how to deal with the ethical issues.
  6. Evaluate, interpret, and present the data.

1. Determine the goals

Goals should guide an evaluation, so determining what these goals are is the first step in planning an evaluation. For example, general goal statements can be restated more clearly as:

  • Check that the evaluators have understood the users' needs.
  • Identify the metaphor on which to base the design.
  • Check to ensure that the final interface is consistent.
  • Investigate the degree to which technology influences working practices.
  • Identify how the interface of an existing product could be engineered to improve its usability.

2. Explore the questions

  • To make goals operational, the questions that must be answered to satisfy them have to be identified. For example, the goal of finding out why many customers prefer to purchase paper airline tickets over the counter rather than e-tickets can be broken down into a number of relevant questions for investigation.
  • Questions can be broken down into very specific sub-questions to make the evaluation even more specific.
  • For example, asking "Is the user interface poor?" invites sub-questions such as "Is the system difficult to navigate?" Sub-questions can, in turn, be decomposed into even finer-grained questions, and so on, as sketched in the example after this list.
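This decomposition can be captured as a small tree of goals, questions, and sub-questions. The Python sketch below is purely illustrative: the goal and question strings are hypothetical examples, not something the DECIDE framework prescribes.

```python
# Illustrative sketch only: one way to record a goal broken down into
# questions and finer-grained sub-questions. All strings below are
# hypothetical examples, not prescribed by the DECIDE framework.

evaluation_plan = {
    "goal": "Find out why customers prefer paper tickets over e-tickets",
    "questions": [
        {
            "question": "Is the user interface poor?",
            "sub_questions": [
                "Is the system difficult to navigate?",
                "Is the terminology confusing?",
            ],
        },
    ],
}

def print_plan(plan):
    """Print the question tree so the team can review its coverage."""
    print("Goal:", plan["goal"])
    for q in plan["questions"]:
        print(" -", q["question"])
        for sq in q["sub_questions"]:
            print("    *", sq)

print_plan(evaluation_plan)
```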

3. Choose the evaluation paradigm and techniques

The evaluation paradigm determines the kinds of techniques that are used. Practical and ethical issues (discussed next) must also be considered and trade-offs made. For example, what seems to be the most appropriate set of techniques may be too expensive, or may take too long, or may require equipment or expertise that is not available, so compromises are needed.

4. Identify the practical issues

Some issues that should be considered include users, facilities and equipment, schedules and budgets, and evaluators' expertise. Depending on the availability of resources, compromises may involve adapting or substituting techniques.

  • Users It goes without saying that a key aspect of an evaluation is involving appropriate users. For laboratory studies, users must be found and screened to ensure that they represent the user population to which the product is targeted. For example, usability tests often need to involve users with a particular level of experience (a simple screening sketch follows this list).
  • Facilities and equipment There are many practical issues concerned with using equipment in an evaluation. For example, when using video you need to think about how you will do the recording: how many cameras are needed, and where should they be placed?
  • Schedule and budget constraints Time and budget constraints are important considerations to keep in mind. It might seem ideal to have 20 users test your interface, but if you need to pay them, it could get costly. Planning evaluations that can be completed on schedule is also important, particularly in commercial settings.
  • Expertise Does the evaluation team have the expertise needed to do the evaluation? For example, if no one has used models to evaluate systems before, then basing an evaluation on this approach is not sensible.
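To make the screening point concrete, here is a minimal sketch that filters a candidate pool by an assumed experience profile. The candidate records and the one-year threshold are invented for illustration, not requirements of DECIDE.

```python
# Illustrative sketch: screening candidates so that participants
# represent the target user population. The records and the one-year
# threshold are invented assumptions, not part of DECIDE.

candidates = [
    {"id": "C1", "years_experience": 0.5, "uses_product_weekly": True},
    {"id": "C2", "years_experience": 3.0, "uses_product_weekly": True},
    {"id": "C3", "years_experience": 2.0, "uses_product_weekly": False},
]

def screen(pool, min_years=1.0):
    """Keep only candidates matching the assumed target profile."""
    return [c for c in pool
            if c["years_experience"] >= min_years and c["uses_product_weekly"]]

print([c["id"] for c in screen(candidates)])  # -> ['C2']
```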

5. Decide how to deal with the ethical issues.

The Association for Computing Machinery (ACM) and many other professional organizations provide ethical codes that they expect their members to uphold, particularly if their activities involve other human beings. For example, people's privacy should be protected, which means that their names should not be associated with data collected about them or disclosed in written reports. The following guidelines will help ensure that evaluations are done ethically and that adequate steps have been taken to protect users' rights.

  • Tell participants the goals of the study and exactly what they should expect if they participate.

  • Be sure to explain that any demographic, financial, health, or other sensitive information that users disclose, or that is discovered during the tests, is confidential.

  • Make sure users know that they are free to stop the evaluation at any time.

  • Pay users when possible because this creates a formal relationship.

  • Avoid including quotes or descriptions that inadvertently reveal a person's identity (one possible anonymization sketch follows this list).

  • Ask users' permission in advance to quote them, promise them anonymity, and offer to show them a copy of the report before it is distributed.
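As one way of carrying out the privacy guideline above, the sketch below replaces names with generated participant codes before the data is analyzed or reported. The record format is a made-up assumption for this example.

```python
# Illustrative sketch: dissociating participants' names from collected
# data, as the privacy guideline requires. The record format is a
# made-up assumption for this example.

raw_records = [
    {"name": "Alice", "task_time_s": 74},
    {"name": "Bob", "task_time_s": 91},
]

def anonymize(records):
    """Replace names with codes; keep the name-to-code key separate."""
    key = {}  # store this mapping securely; never include it in reports
    anonymized = []
    for record in records:
        if record["name"] not in key:
            key[record["name"]] = f"P{len(key) + 1:02d}"
        anonymized.append({"participant": key[record["name"]],
                           "task_time_s": record["task_time_s"]})
    return anonymized, key

anon_records, name_key = anonymize(raw_records)
print(anon_records)  # names no longer appear in the analysis data
```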

6. Evaluate, interpret, and present the data

Decisions are also needed about what data to collect, how to analyze it, and how to present the findings to the development team. To a great extent the technique used determines the type of data collected, but there are still some choices. For example, should the data be treated statistically? If qualitative data is collected, how should it be analyzed and represented? Some general questions also need to be asked:

  • Reliability The reliability or consistency of a technique is how well it produces the same results on separate occasions under the same circumstances. Different evaluation processes have different degrees of reliability (a small statistical sketch follows this list).
  • Validity Validity is concerned with whether the evaluation technique measures what it is supposed to measure. This encompasses both the technique itself and the way it is performed.
  • Biases Bias occurs when the results are distorted. For example, expert evaluators performing a heuristic evaluation may be much more sensitive to certain kinds of design flaws than others.
  • Scope The scope of an evaluation study refers to how far its findings can be generalized. For example, some modeling techniques, like the keystroke model, have a narrow, precise scope.
  • Ecological validity Ecological validity concerns how the environment in which an evaluation is conducted influences or even distorts the results. For example, laboratory experiments are strongly controlled and quite different from workplace, home, or leisure environments. Laboratory experiments therefore have low ecological validity because the results are unlikely to represent what happens in the real world.
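Where data is treated statistically, even a simple check can make the reliability question concrete. The sketch below compares task-completion times from two sessions with a Pearson correlation using only the standard library (Python 3.10+); the numbers are invented for illustration, and a real study would need a properly designed analysis.

```python
# Illustrative sketch: a crude test-retest reliability check on task-
# completion times from two separate sessions. The values are invented.

from statistics import correlation, mean, stdev  # correlation needs Python 3.10+

session_1 = [74, 91, 68, 102, 85]  # seconds, one value per participant
session_2 = [70, 95, 71, 98, 88]   # same participants, second occasion

print(f"Session 1: mean={mean(session_1):.1f}s, sd={stdev(session_1):.1f}s")
print(f"Session 2: mean={mean(session_2):.1f}s, sd={stdev(session_2):.1f}s")

# A high positive correlation suggests the measure ranks participants
# consistently across occasions (one facet of reliability).
print(f"Test-retest correlation: r = {correlation(session_1, session_2):.2f}")
```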