Evaluation Patterns


Evaluation Patterns for Design Science Research Artefacts as described by Sonnenberg and vom Brocke[1]

To exemplify their approach, Sonnenberg and vom Brocke distinguish four evaluation types (Eval 1 to Eval 4), which are derived from typical DSR activities and are characterized by the input and output of each activity as well as by specific evaluation criteria and evaluation methods. The four types and their criteria are listed below; a short illustrative sketch follows the list.

  • Eval 1: Evaluating the problem identification; criteria include importance, novelty, and feasibility.
  • Eval 2: Evaluating the solution design; criteria include simplicity, clarity, and consistency.
  • Eval 3: Evaluating the solution instantiation; criteria include ease of use, fidelity with real-world phenomenon, and robustness.
  • Eval 4: Evaluating the solution in use; criteria include effectiveness, efficiency, and external consistency.
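
Read together, the four types form a simple mapping from each DSR activity to the criteria used to evaluate it. The following Python sketch is purely illustrative: the class, field, and variable names (EvaluationPattern, evaluated_activity, EVAL_PATTERNS) are assumptions made for this example and are not part of Sonnenberg and vom Brocke's framework; only the activities and criteria are taken from the list above.

 from dataclasses import dataclass

 @dataclass
 class EvaluationPattern:
     # Illustrative container; only the activity and criteria values
     # come from Sonnenberg and vom Brocke (2012).
     name: str
     evaluated_activity: str
     criteria: list[str]

 EVAL_PATTERNS = [
     EvaluationPattern("Eval 1", "problem identification",
                       ["importance", "novelty", "feasibility"]),
     EvaluationPattern("Eval 2", "solution design",
                       ["simplicity", "clarity", "consistency"]),
     EvaluationPattern("Eval 3", "solution instantiation",
                       ["ease of use", "fidelity with real-world phenomenon", "robustness"]),
     EvaluationPattern("Eval 4", "solution in use",
                       ["effectiveness", "efficiency", "external consistency"]),
 ]

 # Example: look up the criteria to check when evaluating the solution design.
 for pattern in EVAL_PATTERNS:
     if pattern.evaluated_activity == "solution design":
         print(pattern.name, pattern.criteria)
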
  1. Sonnenberg, C., and vom Brocke, J. 2012. “Evaluation Patterns for Design Science Research Artefacts,” in Practical Aspects of Design Science, Communications in Computer and Information Science, M. Helfert and B. Donnellan (eds.), Berlin, Heidelberg: Springer, pp. 71–83. (https://doi.org/10.1007/978-3-642-33681-2_7).