Unpacking the Machine Learning Black Box in Healthcare Applications

Expertise  |  December 20, 2018

By May Shen, PhD, Research Data Scientist

Machine learning models are being rapidly adopted across industries because they often outperform more traditional approaches like linear regression. However, machine learning models are also known as ‘black boxes’ because they provide little explanation for their predictions.

The black-box nature of many machine learning models frequently casts doubt on their usability in practice, especially in healthcare: “The doctor just won’t accept that,” as Zachary Lipton notes [1]. When health or life is at stake, it is hard to trust a black-box algorithm that does not sufficiently explain its decisions.

In response to this demand for explanations, there has been a surge of interest in interpretable machine learning, a field that attempts to unpack the machine learning black box. The goal of such efforts is not to understand the detailed inner workings of the models, the feasibility of which is debated, but to understand them well enough for downstream tasks [2].

Interpretable Machine Learning
The benefits of interpretable machine learning are manifold [3]. For developers and researchers, understanding a model’s internal mechanisms can help them improve the model after unexpected failures, guard against potential discrimination in the algorithms, and increase system safety.

Amazon, for example, reportedly scrapped a machine learning recruiting tool that showed bias against women [4]. Had the developers known how the model processed gender information, this type of bias might have been prevented. For end users like doctors, interpretable outputs can reduce discomfort with machine learning and establish trust. Additionally, the explanations themselves present opportunities for generating actionable recommendations.

The Techniques
There are a number of different techniques for interpreting models. When appropriate, we can deliberately build interpretable models by choosing simple algorithms such as linear regression or by enforcing interpretability constraints during modeling [5]. A pitfall of such approaches is that the interpretability constraints can compromise model performance on many types of problems.
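As a toy illustration of the interpretable-by-design route, the sketch below fits an ordinary least-squares model to synthetic data. The feature names and effect sizes here are invented for illustration, not drawn from any real clinical model:

```python
# A hypothetical sketch of an interpretable-by-design model: plain
# linear regression, where the fitted coefficients ARE the explanation.
import numpy as np

rng = np.random.default_rng(0)
feature_names = ["age", "num_prior_visits", "days_supply"]  # invented names
X = rng.normal(size=(200, 3))

# Synthetic outcome with known effects: +0.5, -1.0, +2.0, plus small noise.
y = X @ np.array([0.5, -1.0, 2.0]) + 0.05 * rng.normal(size=200)

# Least-squares fit (intercept omitted for brevity).
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Sign and magnitude of each coefficient directly convey how the
# feature moves the prediction -- no post-hoc tooling required.
for name, c in zip(feature_names, coefs):
    print(f"{name}: {c:+.2f}")
```

Because the model is linear, no extra machinery is needed: each coefficient is itself the explanation of that feature’s influence, which is exactly the interpretability such simple models buy at the cost of flexibility.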

To achieve better performance, we can instead use more complex approaches (e.g., boosting or deep learning) and interpret the models post hoc. For example, using local interpretable model-agnostic explanations (LIME), we can mimic the local behavior of a complex model with a linear model that is easy to interpret [6]. Another tool, SHAP (SHapley Additive exPlanations), determines the contribution of each feature to a prediction based on game theory [7].
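To make the game-theoretic idea behind SHAP concrete, here is a from-scratch sketch (not the shap library’s API) that computes exact Shapley values for a toy additive model; the “model,” feature names, and effect sizes are all invented for illustration:

```python
# A from-scratch sketch of the Shapley-value idea that SHAP builds on.
from itertools import permutations

def predict(present):
    # Toy "model": a fixed baseline plus a contribution for each feature
    # that is present. Interactions are deliberately omitted so the
    # exact attributions are easy to verify by hand.
    effects = {"age": 1.0, "dose": 3.0, "visits": -0.5}  # invented values
    return 10.0 + sum(effects[f] for f in present)

def shapley_values(features):
    # Average each feature's marginal contribution over all orderings --
    # the cooperative-game attribution that SHAP approximates efficiently.
    totals = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        present = []
        for f in order:
            before = predict(present)
            present.append(f)
            totals[f] += predict(present) - before
    return {f: t / len(orderings) for f, t in totals.items()}

# For this additive toy model the attributions recover each feature's
# effect exactly: age 1.0, dose 3.0, visits -0.5.
print(shapley_values(["age", "dose", "visits"]))
```

Enumerating all orderings is exponential in the number of features, which is why practical tools like SHAP rely on sampling or model-specific shortcuts rather than this brute-force computation.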

These techniques have been actively developed and maintained in various software packages. For example, in Python, there are tools such as skater [8], SHAP [9], and lime [10]. In R, there are packages such as lime [11], DALEX [12], and iml [13].

Applying Machine Learning to Our Solutions
At axialHealthcare, we are continuously developing machine learning models to assist with clinical decision making. For example, one of our models predicts a patient’s risk of an opioid use disorder diagnosis using numerous features drawn from his or her history. Another model focuses on predicting opioid overdose incidents.

For the opioid overdose model, we used LIME via the iml R package to generate local, individual explanations for selected patients’ predictions, an important step in our model evaluation. For example, when the model correctly predicted that a patient would overdose on opioids in the next month, LIME listed recent drug abuse incidents, the number of unique diagnoses, and the number of drug tests as the most prominent contributors to this patient’s prediction.
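The LIME procedure behind such an explanation can be sketched from scratch in a few lines: perturb the instance, query the black box, weight the samples by proximity, and fit a local linear surrogate. The black-box function, feature names, and kernel width below are invented for illustration and are not our production model:

```python
# A minimal from-scratch sketch of the LIME idea for one prediction.
import numpy as np

def black_box(X):
    # Stand-in for a complex model: a nonlinear scoring function.
    return 1.0 / (1.0 + np.exp(-(X[:, 0] * X[:, 1] + 2.0 * X[:, 2])))

rng = np.random.default_rng(2)
x0 = np.array([0.5, 1.0, 1.5])  # the single instance to explain
feature_names = ["abuse_incidents", "unique_diagnoses", "drug_tests"]

# 1) Perturb the instance, 2) query the black box on the perturbations,
# 3) weight each sample by its proximity to x0 (Gaussian kernel),
# 4) fit a weighted linear surrogate in that neighborhood.
Z = x0 + 0.1 * rng.normal(size=(500, 3))
preds = black_box(Z)
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.02)

A = np.column_stack([np.ones(len(Z)), Z])       # design matrix w/ intercept
W = np.diag(weights)
coefs = np.linalg.solve(A.T @ W @ A, A.T @ W @ preds)

# The surrogate's slopes are the local explanation of this prediction.
for name, c in zip(feature_names, coefs[1:]):
    print(f"{name}: {c:+.3f}")
```

The surrogate is only valid near x0: it explains why the model scored this one instance as it did, which is exactly the kind of patient-level output described above.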

This kind of explanation not only helps providers trust the results, but can also be used as a basis for individualized treatment recommendations. In this case, the provider could have intervened by referring the patient to drug abuse treatment, as abuse was listed as one of the prediction explanations.

 

References
[1] Lipton, Z. C. (2017). The Doctor Just Won’t Accept That!. arXiv preprint arXiv:1711.08037.
[2] Doshi-Velez, F., & Kim, B. (2017a, August). Interpretable machine learning. Talk presented at the International Conference on Machine Learning, Sydney, Australia.
[2] Doshi-Velez, F., & Kim, B. (2017b). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
[3] Du, M., Liu, N., & Hu, X. (2018). Techniques for Interpretable Machine Learning. arXiv preprint arXiv:1808.00033.
[4] Dastin, J. (2018, October 9). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, p. 1. Retrieved from https://www.reuters.com
[5] Freitas, A. A. (2014). Comprehensible classification models: a position paper. ACM SIGKDD Explorations Newsletter, 15(1), 1-10.
[6] Ribeiro, M. T., Singh, S., & Guestrin, C. (2016, August). Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144). ACM.
[7] Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (pp. 4765-4774).
[7] Štrumbelj, E., & Kononenko, I. (2014). Explaining prediction models and individual predictions with feature contributions. Knowledge and information systems, 41(3), 647-665.
[8] Kramer A., Choudhary P., DataScience.com Team, & contributors. (2018). Skater: Model Interpretation Library. Python package version 1.1.2. Retrieved from https://pypi.org/project/skater/
[9] Lundberg, S. M. (2018). shap. Python package version 0.25.0. Retrieved from https://pypi.org/project/shap/
[10] Ribeiro, M. T. (2018). lime. Python package version 0.1.1.32. Retrieved from https://pypi.org/project/lime/
[11] Pedersen, T. L., & Benesty, M. (2018). lime: Local Interpretable Model-Agnostic Explanations. R package version 0.4.0. Retrieved from https://cran.r-project.org/web/packages/lime/index.html
[12] Biecek P. (2018). DALEX: Descriptive mAchine Learning EXplanations. R package version 0.2.4. Retrieved from https://cran.r-project.org/web/packages/DALEX/index.html
[13] Molnar, C., Casalicchio, G., & Bischl, B. (2018). iml: An R package for Interpretable Machine Learning. The Journal of Open Source Software, 3, 786.
