A Semi-Automated Usability Evaluation Framework for Interactive Image Segmentation Systems

Bibliographic Details
Published in: International Journal of Biomedical Imaging, Vol. 2019, pp. 1–21
Main Authors: Maier, Andreas; Weingarten, Markus; Strumia, Maddalena; Kortekaas, Reinier; Steidl, Stefan; Amrehn, Mario; Kowarschik, Markus
Format: Journal Article
Language: English
Published: Hindawi Publishing Corporation, Cairo, Egypt, 2019

Summary: For complex segmentation tasks, the achievable accuracy of fully automated systems is inherently limited. Specifically, when a precise segmentation result is desired for a small number of given data sets, semi-automatic methods exhibit a clear benefit for the user. The optimization of human-computer interaction (HCI) is an essential part of interactive image segmentation. Nevertheless, publications introducing novel interactive segmentation systems (ISS) often lack an objective comparison of HCI aspects. It is demonstrated that even when the underlying segmentation algorithm is the same across interactive prototypes, their user experience may vary substantially. As a result, users prefer simple interfaces as well as a considerable degree of freedom to control each iterative step of the segmentation. In this article, an objective method for the comparison of ISS is proposed, based on extensive user studies. A summative qualitative content analysis is conducted via abstraction of visual and verbal feedback given by the participants. A direct assessment of the segmentation system is performed by the users via the system usability scale (SUS) and AttrakDiff-2 questionnaires. Furthermore, an approximation of the usability findings of those studies is introduced, derived solely from the system-measurable user actions during usage of the interactive segmentation prototypes. The prediction of all questionnaire results has an average relative error of 8.9%, which is close to the expected precision of the questionnaire results themselves. This automated evaluation scheme may significantly reduce the resources necessary to investigate each variation of a prototype’s user interface (UI) features and segmentation methodologies.
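
Note: The record above does not describe the paper's prediction model, so the following Python sketch only illustrates the two quantities named in the abstract: the standard SUS scoring rule (odd-numbered items contribute answer − 1, even-numbered items contribute 5 − answer, and the sum is scaled by 2.5 to a 0–100 range) and the average relative error behind the 8.9% figure. The interaction-log features and the least-squares predictor are hypothetical placeholders, not the authors' actual feature set or model.

import numpy as np

def sus_score(answers):
    # Standard SUS scoring: `answers` holds the ten Likert responses (1-5).
    a = np.asarray(answers, dtype=float)
    odd = a[0::2] - 1         # items 1, 3, 5, 7, 9
    even = 5 - a[1::2]        # items 2, 4, 6, 8, 10
    return 2.5 * (odd.sum() + even.sum())

def mean_relative_error(y_true, y_pred):
    # Average relative error between predicted and observed scores.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true) / np.abs(y_true)))

# Fictitious SUS questionnaire answers (ten items per participant).
answers = np.array([
    [5, 1, 5, 2, 4, 1, 5, 2, 4, 1],
    [2, 4, 3, 4, 2, 3, 2, 4, 3, 4],
    [4, 2, 4, 2, 3, 2, 4, 3, 4, 2],
    [3, 3, 2, 4, 3, 3, 3, 4, 2, 3],
    [4, 2, 3, 3, 4, 2, 3, 2, 4, 2],
])
y = np.array([sus_score(a) for a in answers])   # observed SUS scores

# Hypothetical per-participant interaction-log features (e.g. seed-point
# count, interaction time in seconds, undo count) -- placeholders only.
X = np.array([[12,  95.0, 1],
              [30, 240.0, 6],
              [18, 130.0, 2],
              [25, 200.0, 5],
              [22, 160.0, 3]])

# Least-squares linear predictor as a simple stand-in model.
A = np.hstack([X, np.ones((X.shape[0], 1))])    # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_pred = A @ coef
print("predicted SUS scores:", np.round(y_pred, 1))
print("mean relative error: %.1f%%" % (100 * mean_relative_error(y, y_pred)))

With only five fictitious participants the fit is purely illustrative; in the study, such a predictor would be trained on actions logged during the user studies and validated against the participants' questionnaire scores.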
Academic Editor: Anne Clough
ISSN: 1687-4188, 1687-4196
DOI: 10.1155/2019/1464592