
HEC-Ecole de gestion de l'Université de Liège
MASTER THESIS

Demystifying AI: understanding stakeholder interactions in explaining how AI works

Permalink : http://hdl.handle.net/2268.2/24370
Details
Title : Demystifying AI: understanding stakeholder interactions in explaining how AI works
Author : Guennoun, Omar ULiège
Date of defense : 1-Sep-2025/5-Sep-2025
Advisor(s) : Steils, Nadia ULiège
Committee member(s) : El Midaoui, Youssra ULiège
Language : English
Keywords : Artificial Intelligence (AI); Explainable Artificial Intelligence (XAI); Trust; Transparency; Explainability; Decision-making; Users; Consumers; Technology Acceptance Model (TAM); Perceived Usefulness
Discipline(s) : Business & economic sciences > Marketing
Target public : Researchers; Professionals of the domain; Students
Institution(s) : Université de Liège, Liège, Belgique
Degree : Master in Management Sciences, specialized focus in international strategic marketing
Faculty : HEC-Ecole de gestion de l'Université de Liège

Abstract

This thesis examines the interaction between consumers and companies in explaining how AI functions, addressing two research questions. The first asks how consumer and firm expectations for AI explainability differ, and how these differences affect consumer trust and engagement. The second asks how companies can most effectively convey the rationale behind AI-based decisions to enhance transparency and build consumer trust. To answer these questions, the research adopts a qualitative approach: semi-structured interviews with consumers and company representatives, followed by thematic analysis of the transcripts in NVivo. The results reveal stark differences in stakeholder demands: consumers seek clear, transparent explanations that give them a sense of control and build trust, whereas company representatives approach explainability through the lenses of operational viability, regulatory compliance, and brand image. The thesis highlights the importance of customized, context-sensitive explanations to support heterogeneous user requirements, as well as co-creation practices in which users collaborate with providers to develop explanation features; such practices help align AI systems with user expectations and foster trust. Academically, the study contributes to the explainable AI (XAI) literature by adopting a stakeholder-interaction perspective, framing explainability as a complex construct and integrating stakeholder-specific knowledge into models of trust in AI. Managerially, it offers organizational strategies for demystifying AI, including multi-level and user-specific explanation interfaces, proactive transparency in AI communications, and user involvement in explanation design, thereby fostering greater trust in AI-based services.


File(s)

Thesis Guennoun Omar.pdf (Adobe PDF, 991.73 kB)

Author

  • Guennoun, Omar ULiège Université de Liège > Master sc. gest., fin. spéc. int. strat. mark.

Promotor(s)

  • Steils, Nadia ULiège
Committee member(s)

  • El Midaoui, Youssra ULiège Université de Liège - ULiège > HEC Liège : UER > UER Management : Marketing et intelligence stratégique

All documents available on MatheO are protected by copyright and subject to the usual rules for fair use.
The University of Liège does not guarantee the scientific quality of these students' works or the accuracy of all the information they contain.