Gramegna, Alex and Giudici, Paolo (2021) SHAP and LIME: An Evaluation of Discriminative Power in Credit Risk. Frontiers in Artificial Intelligence, 4. ISSN 2624-8212
Abstract
In credit risk estimation, the key objective is to obtain a probability of default as close as possible to the true risk. This goal has prompted new, powerful algorithms, such as gradient boosting and ensemble methods, that reach far higher accuracy, but at the cost of losing intelligibility. These models are usually referred to as “black boxes”: the inputs and the output are known, but there is little way to understand what is going on under the hood. In response, several Explainable AI models have flourished in recent years, with the aim of letting the user see why the black box gave a certain output. In this context, we evaluate two very popular eXplainable AI (XAI) models, SHAP and LIME, on their ability to discriminate observations into groups, through the application of both unsupervised and predictive modeling to the weights these XAI models assign to features locally. The evaluation is carried out on real Small and Medium Enterprise data, obtained from official Italian repositories, and may form the basis for the use of such XAI models for post-processing feature extraction.
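The core idea the abstract describes, applying unsupervised modeling to the local feature weights produced by an XAI method, can be illustrated with a minimal sketch. This is not the paper's code: it uses synthetic data, a logistic regression instead of gradient boosting, and the closed-form Shapley values of a linear model (coefficient times the feature's deviation from its mean) as a stand-in for library-computed SHAP or LIME weights, then clusters observations by their explanation profiles.

```python
# Sketch only: cluster per-observation explanation weights, mimicking the
# paper's idea of applying unsupervised learning to local XAI attributions.
# For a linear model, exact Shapley values are coef_j * (x_j - mean(x_j)),
# so no external SHAP library is needed for this illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # synthetic borrower features
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Local attribution matrix: one row of feature contributions per observation.
phi = model.coef_[0] * (X - X.mean(axis=0))

# Unsupervised step: group observations by their explanation profiles.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(phi)
print(phi.shape, len(set(labels)))
```

In the paper's setting, `phi` would instead hold SHAP or LIME weights from the fitted black-box model, and the resulting clusters (or a predictive model fit on `phi`) are what is evaluated for discriminative power.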
| Item Type: | Article |
| --- | --- |
| Subjects: | Science Repository > Multidisciplinary |
| Depositing User: | Managing Editor |
| Date Deposited: | 27 Feb 2023 05:04 |
| Last Modified: | 25 Jul 2024 07:12 |
| URI: | http://research.manuscritpub.com/id/eprint/997 |