
HEC-Ecole de gestion de l'Université de Liège
Master's thesis

Demystifying AI: understanding stakeholder interactions in explaining how AI works

Permanent URL: http://hdl.handle.net/2268.2/24370
Details
Title: Demystifying AI: understanding stakeholder interactions in explaining how AI works
Author: Guennoun, Omar ULiège
Defense date: 1-sep-2025/5-sep-2025
Supervisor(s): Steils, Nadia ULiège
Jury member(s): El Midaoui, Youssra ULiège
Language: English
Keywords: Artificial Intelligence (AI); Explainable Artificial Intelligence (XAI); Trust; Transparency; Explainability; Decision-making; Users; Consumers; Technology Acceptance Model (TAM); Perceived Usefulness
Discipline(s): Economics & management sciences > Marketing
Target audience: Researchers, professionals in the field, students
Institution(s): Université de Liège, Liège, Belgium
Degree: Master in management sciences, with a specialized focus in international strategic marketing
Faculty: Theses from the HEC-Ecole de gestion de l'Université de Liège

Abstract

This thesis examines the interaction between consumers and companies in explaining how AI
functions, addressing two research questions. The first asks how consumer and firm expectations
of AI explainability differ, and how these differences affect consumer trust and engagement. The
second asks how companies can most effectively convey the rationale behind AI-based decisions
to enhance transparency and build consumer trust. To answer these questions, the research takes
a qualitative approach, conducting semi-structured interviews with consumers and company
representatives followed by thematic analysis of the transcripts in NVivo. The results reveal stark
differences in stakeholder demands: consumers want clear, transparent explanations that give
them a sense of control and build trust, whereas company representatives view explainability
primarily through the lenses of operational viability, regulatory compliance, and brand image. The
thesis highlights the importance of customized, context-sensitive explanations that accommodate
heterogeneous user requirements, as well as co-creation practices in which users collaborate with
providers to develop explanation features; such practices help align AI systems with user
expectations and foster trust. Academically, the study contributes to the explainable AI (XAI)
literature by adopting a stakeholder interaction view, revealing explainability as a complex
construct, and integrating stakeholder-specific knowledge into models of trust in AI. Managerially,
it offers organizational strategies for demystifying AI, including multi-level and user-specific
explanation interfaces, proactive transparency in AI communications, and involving users in
explanation design, thereby fostering greater trust in AI-based services.


File(s)

Thesis Guennoun Omar.pdf
Size: 991.73 kB
Format: Adobe PDF

Author

  • Guennoun, Omar ULiège, Université de Liège > Master sc. gest., fin. spéc. int. strat. mark.

Supervisor(s)

  • Steils, Nadia ULiège

Jury member(s)

  • El Midaoui, Youssra ULiège, Université de Liège > HEC Liège : UER > UER Management : Marketing et intelligence stratégique
