Demystifying AI: understanding stakeholder interactions in explaining how AI works
Guennoun, Omar
Promotor(s) : Steils, Nadia
Date of defense : 1-Sep-2025/5-Sep-2025 • Permalink : http://hdl.handle.net/2268.2/24370
Details
| Title : | Demystifying AI: understanding stakeholder interactions in explaining how AI works |
| Author : | Guennoun, Omar |
| Date of defense : | 1-Sep-2025/5-Sep-2025 |
| Advisor(s) : | Steils, Nadia |
| Committee's member(s) : | El Midaoui, Youssra |
| Language : | English |
| Keywords : | [en] Artificial Intelligence (AI); Explainable Artificial Intelligence (XAI); Trust; Transparency; Explainability; Decision-making; Users; Consumers; Technology Acceptance Model (TAM); Perceived Usefulness |
| Discipline(s) : | Business & economic sciences > Marketing |
| Target public : | Researchers; Professionals of domain; Students |
| Institution(s) : | Université de Liège, Liège, Belgique |
| Degree : | Master en sciences de gestion, à finalité spécialisée en international strategic marketing |
| Faculty : | Master thesis of the HEC-Ecole de gestion de l'Université de Liège |
Abstract
[en] This thesis examines the interaction between consumers and companies in explaining how AI functions, addressing two research questions. The first investigates how consumer and firm expectations of AI explainability differ, and how these differences affect consumer trust and engagement. The second investigates how companies can most effectively convey the rationale behind AI-based decisions to enhance transparency and build consumer trust. To answer these questions, the research adopts a qualitative approach, conducting semi-structured interviews with consumers and company representatives, followed by thematic analysis of the transcripts in NVivo. The results reveal stark differences in stakeholder demands: consumers seek clear, transparent explanations that give them a sense of control and build trust, whereas company representatives approach explainability primarily through the lenses of operational viability, regulatory compliance, and brand image. The thesis highlights the importance of tailored, context-sensitive explanations that accommodate heterogeneous user requirements, as well as co-creation practices in which users collaborate with providers to develop explanation features; such practices help align AI systems with user expectations and foster trust. Academically, the study contributes to the explainable AI (XAI) literature by adopting a stakeholder interaction perspective, revealing explainability as a multi-faceted construct and integrating stakeholder-specific knowledge into models of trust in AI. Managerially, it offers organizational strategies for demystifying AI, including multi-level and user-specific explanation interfaces, proactive transparency in AI communications, and involving users in explanation design, thereby fostering greater trust in AI-based services.
File(s)
Thesis Guennoun Omar.pdf