Constraint based approaches to interpretable and semi-supervised machine learning

dc.contributor.advisor: Ghosh, Joydeep
dc.contributor.committeeMember: Koyejo, Oluwasanmi
dc.contributor.committeeMember: Sanghavi, Sujay
dc.contributor.committeeMember: Vikalo, Haris
dc.contributor.committeeMember: Mooney, Raymond
dc.creator: Joshi, Shalmali Dilip
dc.date.accessioned: 2019-04-03T16:20:54Z
dc.date.available: 2019-04-03T16:20:54Z
dc.date.created: 2018-12
dc.date.issued: 2018-12
dc.date.submitted: December 2018
dc.date.updated: 2019-04-03T16:20:54Z
dc.description.abstract: Interpretability and explainability of machine learning algorithms are becoming increasingly important as machine learning (ML) systems are widely applied in domains like clinical healthcare, social media, and governance. A related major challenge in deploying ML systems is learning reliably when expert annotation is severely limited. This dissertation proposes a common framework for addressing these challenges, based on constraints that can make an ML model more interpretable, lead to novel methods for explaining ML models, or help a model learn reliably with limited supervision. In particular, we focus on the class of latent variable models and develop a general learning framework that constrains realizations of latent variables and/or model parameters. We propose specific constraints that yield identifiable latent variable models, which in turn produce interpretable outcomes.

The proposed framework is first applied to Non-negative Matrix Factorization and Probabilistic Graphical Models. For both models, algorithms are developed that incorporate such constraints while augmenting the associated learning and inference procedures seamlessly and tractably. The utility of the proposed methods is demonstrated on our working application domain: identifiable phenotyping using Electronic Health Records (EHRs). Evaluation by domain experts reveals that the proposed models are indeed more clinically relevant (and hence more interpretable) than existing counterparts. The work also shows that, while constraining models to encourage interpretability may involve inherent trade-offs, quantitative performance on downstream tasks remains competitive.

We then focus on constraint-based mechanisms for explaining the decisions or outcomes of supervised black-box models. We propose an explanation model based on generating examples whose nature is constrained: they must be sampled from the underlying data domain. To do so, we train a generative model that characterizes the data manifold in a high-dimensional ambient space; constrained sampling then allows us to generate naturalistic examples that lie along the data manifold, and we propose ways to summarize model behavior using such constrained examples.

In the last part of the contributions, we argue that heterogeneity of data sources is useful in situations where very little or no supervision is available, and the thesis leverages such heterogeneity (via constraints) for two important but widely different machine learning tasks. In each case, a novel algorithm is developed in the sub-class of co-regularization, a framework that constrains latent variables and/or latent distributions in order to leverage heterogeneity, to combine information from heterogeneous sources. The proposed algorithms are applied to clustering, where the intent is to produce a partition or grouping of observed samples, and to Learning to Rank, where a set of observed samples is ranked in order of preference with respect to a specific search query. The methods are evaluated on clustering web documents and social network users, and on information retrieval applications that rank results for search queries.
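A minimal sketch can make the constrained latent variable idea above concrete. The code below augments the standard multiplicative updates for Non-negative Matrix Factorization with a projection step after each iteration; the sparsity threshold and column normalization are illustrative stand-ins for the identifiability constraints developed in the thesis, not the thesis's actual algorithm, and all names here are assumptions.

    import numpy as np

    def constrained_nmf(X, k, n_iter=200, sparsity=0.01, seed=0):
        """Factor a non-negative matrix X (m x n) as W @ H with W, H >= 0,
        projecting W onto a simple constraint set after each update."""
        rng = np.random.default_rng(seed)
        m, n = X.shape
        W = rng.random((m, k)) + 1e-3
        H = rng.random((k, n)) + 1e-3
        eps = 1e-9
        for _ in range(n_iter):
            # Standard Lee-Seung multiplicative updates for ||X - WH||_F^2.
            H *= (W.T @ X) / (W.T @ W @ H + eps)
            W *= (X @ H.T) / (W @ H @ H.T + eps)
            # Constraint step (illustrative): zero out near-zero loadings
            # and normalize columns so the factor scaling is identifiable.
            W[W < sparsity * W.max()] = 0.0
            W /= W.sum(axis=0, keepdims=True) + eps
        return W, H

    if __name__ == "__main__":
        X = np.abs(np.random.default_rng(1).random((50, 30)))
        W, H = constrained_nmf(X, k=5)
        print("reconstruction error:", np.linalg.norm(X - W @ H))

In an EHR phenotyping setting of the kind the abstract describes, rows of X would be patients and columns clinical events; each row of H then reads as a candidate phenotype (a loading over events), and the constraints are what make those factors stable enough for review by domain experts.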
dc.description.department: Electrical and Computer Engineering
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/2152/74127
dc.identifier.uri: http://dx.doi.org/10.26153/tsw/1259
dc.subject: Interpretable machine learning
dc.subject: Semi-supervised machine learning
dc.title: Constraint based approaches to interpretable and semi-supervised machine learning
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.grantor: The University of Texas at Austin
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
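
The co-regularization framework mentioned in the abstract also admits a compact illustration. The sketch below follows the pairwise co-regularized spectral clustering idea in the style of Kumar and Daume (2011), alternating per-view spectral embeddings that are each pulled toward the other view's subspace; it is a generic stand-in under assumed names, not the specific algorithms contributed by the thesis.

    import numpy as np

    def spectral_embedding(K, k):
        """Eigenvectors of symmetric similarity K for its k largest eigenvalues."""
        _, vecs = np.linalg.eigh(K)   # eigh returns eigenvalues in ascending order
        return vecs[:, -k:]

    def coreg_spectral(K1, K2, k, lam=0.5, n_iter=10):
        """Co-regularize two views: each view's embedding is recomputed on its
        similarity augmented with the other view's current subspace."""
        U1 = spectral_embedding(K1, k)
        U2 = spectral_embedding(K2, k)
        for _ in range(n_iter):
            U1 = spectral_embedding(K1 + lam * (U2 @ U2.T), k)
            U2 = spectral_embedding(K2 + lam * (U1 @ U1.T), k)
        # Concatenated joint embedding; any standard clusterer (e.g. k-means)
        # can be run on the rows to produce the final partition.
        return np.hstack([U1, U2])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.random((40, 8))
        K1 = np.exp(-np.square(X[:, None] - X[None]).sum(-1))  # Gaussian kernel view
        K2 = X @ X.T                                           # linear kernel view
        Z = coreg_spectral(K1, K2, k=3)
        print(Z.shape)  # (40, 6)

The agreement term lam * U_other @ U_other.T is the constraint at work: it raises the similarity between points that the other view already embeds close together, so two heterogeneous sources regularize one another without any labels, which is the sense in which heterogeneity substitutes for supervision.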

Original bundle:
- JOSHI-DISSERTATION-2018.pdf (6.62 MB, Adobe Portable Document Format)

License bundle:
- PROQUEST_LICENSE.txt (4.45 KB, Plain Text)
- LICENSE.txt (1.84 KB, Plain Text)