Towards Transparent and Interpretable Predictions of Student Performance Using Explainable AI

Prof Doc Thesis


Kwakye, S. F. 2025. Towards Transparent and Interpretable Predictions of Student Performance Using Explainable AI. Prof Doc Thesis. University of East London, School of Architecture, Computing and Engineering. https://doi.org/10.15123/uel.8zx64
Authors: Kwakye, S. F.
Type: Prof Doc Thesis
Abstract

Artificial Intelligence (AI) is increasingly being adopted in educational contexts to support data-driven decision-making, particularly in predicting student outcomes. However, the opaque nature of many high-performing models raises concerns around fairness, accountability, and interpretability, factors that are especially critical in high-stakes environments such as GCSE examinations. This study investigates how Explainable AI (XAI) techniques can enhance the transparency and interpretability of machine learning models used to predict GCSE English Language and Mathematics performance.

Using a real-world dataset from a secondary school in England, this research developed and evaluated predictive models, including Histogram-based Gradient Boosting (HGB) and a Multi-Layer Perceptron (MLP), to estimate student achievement outcomes. To address class imbalance and maximise performance, the pipeline incorporated data pre-processing, feature engineering, and fairness-aware resampling strategies. The final HGB model achieved strong predictive accuracy while maintaining robustness across subgroups.
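
The thesis's code is not reproduced on this page. As a rough illustration only, the sketch below shows how such a pipeline might be assembled with scikit-learn and imbalanced-learn: an HGB classifier with resampling applied to the training split. The file name, column names, and target are hypothetical placeholders, not the study's actual data schema.

```python
# Minimal sketch of the kind of pipeline described above, assuming
# scikit-learn and imbalanced-learn. All names below are hypothetical
# placeholders, not the thesis's actual schema.
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTENC

df = pd.read_csv("students.csv")  # hypothetical file
features = ["attendance", "cat3_score", "sen_status", "eal_status"]
X = df[features].copy()
y = df["passed_gcse"]  # hypothetical binary target

# Integer-encode categorical columns (assumes no missing values here);
# HGB can then treat them natively via `categorical_features`.
cat_cols = ["sen_status", "eal_status"]
cat_idx = [features.index(c) for c in cat_cols]
for c in cat_cols:
    X[c] = X[c].astype("category").cat.codes

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Resample only the training split so the test set stays untouched.
smote = SMOTENC(categorical_features=cat_idx, random_state=42)
X_res, y_res = smote.fit_resample(X_tr, y_tr)

model = HistGradientBoostingClassifier(categorical_features=cat_idx)
model.fit(X_res, y_res)
print(classification_report(y_te, model.predict(X_te)))
```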

To ensure interpretability, four XAI techniques, namely SHAP (Shapley Additive Explanations), LIME (Local Interpretable Model-Agnostic Explanations), PDP (Partial Dependence Plots), and ALE (Accumulated Local Effects), were applied. These methods provided insight into the most influential features driving predictions, including attendance, CAT3 scores, SEN (special educational needs) status, and EAL (English as an additional language) status. Novel explainability metrics, such as a transparency score, explainability ratio, and interpretability ratio, were proposed to systematically evaluate explanation quality and model clarity.
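
Purely as an illustration of how two of these techniques are commonly invoked, the sketch below applies SHAP and a partial dependence plot to the model fitted in the previous snippet. This is not the thesis's published code, and the novel transparency and explainability metrics it proposes are defined in the thesis itself and are not reproduced here.

```python
# Illustrative use of two of the four XAI techniques on the fitted
# `model` and resampled training frame `X_res` from the previous sketch.
import shap
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay

# SHAP: model-agnostic explainer over the predicted probability of the
# positive class, with X_res serving as the background dataset.
explainer = shap.Explainer(lambda d: model.predict_proba(d)[:, 1], X_res)
shap_values = explainer(X_res.iloc[:200])  # explain a sample of rows
shap.plots.beeswarm(shap_values)           # global feature-importance view

# PDP: marginal effect of attendance and CAT3 score on the prediction.
PartialDependenceDisplay.from_estimator(
    model, X_res, ["attendance", "cat3_score"]
)
plt.show()
```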

In addition to technical evaluation, the study employed a stakeholder-centred design to assess how teachers, school leaders, and students interact with and interpret model explanations. Mixed-methods user studies revealed that personalised, context-sensitive explanations improved stakeholders’ decision confidence, supported intervention planning, and prompted critical reflection. Concerns were also raised about fairness, overreliance, and the ethical implications of demographic profiling.

The research demonstrates that explainable models can enhance trust, transparency, and pedagogical utility when appropriately designed and evaluated in real-world educational settings. By integrating technical rigour with ethical and user-centred evaluation, this work contributes to the development of responsible, interpretable AI systems that align with the values and needs of educators and learners. The study offers both methodological innovations and practical recommendations for the responsible deployment of XAI in education.

Year: 2025
Publisher: University of East London
Digital Object Identifier (DOI): https://doi.org/10.15123/uel.8zx64
Publication dates
Online: 10 Jul 2025
Publication process dates
Completed: 30 Jun 2025
Deposited: 10 Jul 2025
Copyright holder: © 2025 The Author. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) Licence (https://creativecommons.org/licenses/by-nc-nd/4.0). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms.
Permalink: https://repository.uel.ac.uk/item/8zx64

Download files
File: 2025_D.DataSc_Kwakye.pdf
License: CC BY-NC-ND 4.0
File access level: Anyone

