Optimizing for Interpretability in Deep Neural Networks with Tree Regularization

Mike Wu
Sonali Parbhoo
Michael C. Hughes
Volker Roth
Finale Doshi-Velez

Abstract

Deep models have advanced prediction in many domains, but their lack of interpretability remains a key barrier to adoption in many real-world applications. There exists a large body of work aiming to help humans understand these black-box functions to varying levels of granularity, for example through distillation, gradients, or adversarial examples. These methods, however, all tackle interpretability as a separate process after training. In this work, we take a different approach and explicitly regularize deep models so that they are well-approximated by processes that humans can step through in little time. Specifically, we train several families of deep neural networks to resemble compact, axis-aligned decision trees without significant compromises in accuracy. The resulting axis-aligned decision functions uniquely make tree-regularized models easy for humans to interpret. Moreover, for situations in which a single, global tree is a poor estimator, we introduce a regional tree regularizer that encourages the deep model to resemble a compact, axis-aligned decision tree in predefined, human-interpretable contexts. Using intuitive toy examples, benchmark image datasets, and medical tasks for patients in critical care and with HIV, we demonstrate that this new family of tree regularizers yields models that are easier for humans to simulate than those trained with L1 or L2 penalties, without sacrificing predictive power.
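As a rough illustration of the idea the abstract describes, the sketch below measures how tree-like a trained network's decision function is: it fits a compact, axis-aligned decision tree to the network's own predictions and scores the network by the tree's average decision-path length. This quantity is not differentiable as written, so it is only logged per epoch here; the paper's full method goes further and makes such a penalty usable during training. The helper names (tree_regularization_penalty, tree_depth, apl) and the toy data are assumptions for illustration, not the authors' code or API.

# Minimal sketch, assuming PyTorch and scikit-learn are available.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

def average_path_length(tree, X):
    """Mean number of decision nodes traversed by the samples in X."""
    node_indicator = tree.decision_path(X)  # sparse (n_samples, n_nodes)
    return node_indicator.sum(axis=1).mean()

def tree_regularization_penalty(model, X, tree_depth=5):
    """Fit a compact, axis-aligned tree to the model's predictions on X and
    return its average path length. Used here only to score checkpoints;
    the full method would replace this with a differentiable approximation."""
    with torch.no_grad():
        y_hat = model(torch.as_tensor(X, dtype=torch.float32)).argmax(dim=1).numpy()
    tree = DecisionTreeClassifier(max_depth=tree_depth).fit(X, y_hat)
    return average_path_length(tree, X)

if __name__ == "__main__":
    # Toy binary classification problem (illustrative only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(512, 2)).astype(np.float32)
    y = (X[:, 0] + X[:, 1] > 0).astype(np.int64)

    model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(torch.from_numpy(X)), torch.from_numpy(y))
        loss.backward()
        optimizer.step()
        # Lower average path length suggests the network is easier to
        # approximate by a shallow, human-simulable tree.
        apl = tree_regularization_penalty(model, X)
        print(f"epoch {epoch:2d}  loss {loss.item():.3f}  APL {apl:.2f}")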
