
Explainable AI and User Experience. Prototyping and Evaluating an UX-Optimized XAI Interface in Computer Vision

Georg Dedikov

PDF, 39,99 €

GRIN Verlag

Natural Sciences, Medicine, Computer Science, Technology / Computer Science

Description

Master's thesis from 2023 in the subject Computer Science - Commercial Information Technology, grade: 1,0, University of Regensburg (Professur für Wirtschaftsinformatik, insb. Internet Business & Digitale Soziale Medien), language: English.

Abstract: This thesis presents a toolkit of 17 user experience (UX) principles, categorized according to their relevance to Explainable AI (XAI). In the literature, the goal of Explainable AI is widely associated with the dimensions of comprehensibility, usefulness, trust, and acceptance. Moreover, researchers argue that work should focus on developing holistic explanation interfaces rather than single visual explanations, and consequently more on potential users and their needs than on purely technical aspects of XAI methods. Based on these observations, the thesis brings insights from User Interface (UI) and User Experience design into XAI research. In essence, UX is concerned with the design and evaluation of the pragmatic and hedonic aspects of a user's interaction with a system in a given context. The derived principles inform the prototyping of a custom XAI system called the Brain Tumor Assistant (BTA). A pre-trained EfficientNetB0 serves as a Convolutional Neural Network that classifies x-ray images of the human brain into four classes with an overall accuracy of 98%. To generate factual explanations, Local Interpretable Model-agnostic Explanations (LIME) are then applied as the XAI method. The evaluation of the BTA is based on the User Experience Questionnaire (UEQ) by Laugwitz et al. (2008), with individual items adapted to the specific context of XAI. Quantitative data from a study with 50 participants in each of the control and treatment groups is used to present a standardized way of quantifying the dimensions of usability and UX specifically for XAI systems. Furthermore, an A/B test provides evidence that visual explanations have a significant (α=0.05) positive effect on the dimensions of attractiveness, usefulness, controllability, and trustworthiness. In summary, the thesis shows that explanations in computer vision have a significantly positive effect not only on trustworthiness but also on these other dimensions.
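To make the described pipeline concrete, the sketch below shows how a pre-trained EfficientNetB0 with a small four-class head can be combined with LIME to produce the kind of visual explanation the BTA presents. This is a minimal illustration under stated assumptions, not the thesis' actual implementation: only the backbone (EfficientNetB0) and the XAI method (LIME) come from the abstract, while the class labels, input size, placeholder image, and training step are illustrative.

```python
# Minimal sketch: EfficientNetB0 classifier + LIME explanation for one brain scan.
# Class names, input size, and the placeholder image are assumptions, not the
# thesis' exact setup.
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0
from tensorflow.keras.applications.efficientnet import preprocess_input
from lime import lime_image
from skimage.segmentation import mark_boundaries

CLASS_NAMES = ["glioma", "meningioma", "pituitary", "no_tumor"]  # assumed labels

# Pre-trained backbone with a small classification head for the four classes.
base = EfficientNetB0(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(len(CLASS_NAMES), activation="softmax"),
])
# ... compile and fine-tune the model on the labelled scan dataset here ...

def predict_fn(images: np.ndarray) -> np.ndarray:
    """Batch prediction in the form LIME expects: images in, class probabilities out."""
    return model.predict(preprocess_input(images.astype("float32")), verbose=0)

# LIME perturbs superpixels of the input image and fits a local surrogate model
# to identify the regions that support the predicted class.
explainer = lime_image.LimeImageExplainer()
image = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder for a loaded scan
explanation = explainer.explain_instance(image, predict_fn, top_labels=1, num_samples=1000)
overlay, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
highlighted = mark_boundaries(overlay / 255.0, mask)  # superpixel overlay for the UI
```

The resulting overlay, highlighting the superpixels that most support the predicted class, corresponds to the kind of factual visual explanation that, per the abstract, the treatment group saw in the BTA interface and the control group did not.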

More titles in this category
Cyber Operations
Jerry M. Couretas


Keywords

Brain Tumor, XAI, Cohen's d, Deep Learning, LIME, Literature Review, high-stakes, Cronbach's alpha, Master Thesis, DL, Healthcare, Convolutional Neural Networks, X-ray images, Prototyping, ML, User Experience Questionnaire, Mann-Whitney U Test, UX principles, Figma, AI, UEQ, Hypothesis Test, EfficientNetB0, Machine Learning, Prototype, Medicine, CNN, X-ray, Explainable AI, UX, Computer Vision, UI, Local interpretable model-agnostic explanations, User-centered Design