Wrapper boxes for increasing model interpretability via example-based explanations
dc.contributor.advisor | Lease, Matthew A.
dc.contributor.committeeMember | Li, Jessy
dc.creator | Su, Yiheng
dc.date.accessioned | 2023-08-14T20:22:09Z
dc.date.available | 2023-08-14T20:22:09Z
dc.date.created | 2023-05
dc.date.issued | 2023-04-21
dc.date.submitted | May 2023
dc.date.updated | 2023-08-14T20:22:10Z
dc.description.abstract | We propose wrapper boxes to provide interpretability in deep learning that is model-, training-, and dataset-agnostic. The prediction model is trained as usual on some dataset(s), typically optimizing a predetermined loss function. At inference time, the prediction model is augmented by a simpler model that makes forecasts by leveraging learned representations from the former. Hence, any black box model, such as a deep neural network, can be made more interpretable by "wrapping" it with white box auxiliaries that are explainable by design. We demonstrate the effectiveness of wrapper box approaches across two datasets and three large pre-trained language models, showing that performance is not noticeably different from that of the original model across various configurations, even for simple augmentations like k-nearest neighbors, support vector machines, decision trees, and k-means. In particular, we present quantitative evidence that representations retrieved from the penultimate layer alone are sufficient for white boxes to achieve performance not noticeably different from the original model. Finally, we illustrate the additive explainability of white box augmentations by showcasing intuitive and faithful example-based explanations. We hypothesize that any minor degradation in predictive performance is justified by enhanced interpretability for human users, enabling the combined human-AI partnership to be more performant than possible with a black box model alone.
dc.description.department | Computer Science
dc.format.mimetype | application/pdf
dc.identifier.uri | https://hdl.handle.net/2152/121132
dc.identifier.uri | http://dx.doi.org/10.26153/tsw/47962
dc.language.iso | en
dc.subject | Model interpretability
dc.subject | Model explainability
dc.title | Wrapper boxes for increasing model interpretability via example-based explanations
dc.type | Thesis
dc.type.material | text
thesis.degree.department | Computer Sciences
thesis.degree.discipline | Computer Science
thesis.degree.grantor | The University of Texas at Austin
thesis.degree.level | Masters
thesis.degree.name | Master of Science in Computer Sciences
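
The abstract above describes the wrapper-box pipeline at a high level: a black box encoder is trained as usual, its penultimate-layer representations are extracted at inference time, and a simple white box makes the final prediction while surfacing training examples as the explanation. Below is a minimal sketch of that idea using a k-nearest-neighbors white box; the encoder choice (bert-base-uncased), the [CLS] pooling, the toy labeled data, and k=3 are illustrative assumptions, not details taken from the thesis itself.

```python
# Sketch of a wrapper box: a pre-trained black box encoder supplies
# penultimate-layer representations, and a k-nearest-neighbors white box
# predicts from them, returning nearest training examples as explanations.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.neighbors import KNeighborsClassifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def embed(texts):
    """Return penultimate-layer [CLS] vectors for a list of texts."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch, output_hidden_states=True)
    # hidden_states[-2] is the penultimate transformer layer; take the [CLS] position.
    return out.hidden_states[-2][:, 0, :].numpy()

# Toy examples standing in for the task's labeled training set.
train_texts = ["great movie", "terrible plot", "loved it", "waste of time"]
train_labels = [1, 0, 1, 0]

white_box = KNeighborsClassifier(n_neighbors=3)
white_box.fit(embed(train_texts), train_labels)

def predict_with_examples(text):
    """Predict a label and return the nearest training examples as the explanation."""
    vec = embed([text])
    label = white_box.predict(vec)[0]
    _, idx = white_box.kneighbors(vec)
    return label, [train_texts[i] for i in idx[0]]

print(predict_with_examples("what a fantastic film"))
```

The same pattern applies to the other white boxes named in the abstract (support vector machines, decision trees, k-means): only the classifier fitted on the extracted representations changes, while the black box encoder and the feature-extraction step stay fixed.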