Explainable Machine Learning

Abstract: 

As machine learning applications become more specialized, the models become increasingly opaque and harder to interpret. The ability to interpret black-box nonlinear models is critical in fields such as finance, healthcare, and self-driving technology. Bill.com leverages our data to build fraud-risk and credit-risk models using machine learning, which helps unlock capital for small and medium businesses (SMBs).

A credit decision requires both an accurate assessment of risk and a plain-English explanation. For example, it is not enough to reject a potential client; we must also give a reason. We explore a solution that uses what-if scenarios to calculate the marginal influence of each feature on each prediction: at prediction time, we independently perturb each feature and measure its marginal impact on the response variable. We compare this approach to standardized methods such as Local Interpretable Model-agnostic Explanations (LIME) and report the computational efficiency and accuracy of the resulting explanations. We then deliver an accompanying plain-English explanation, such as, “Client A is rejected because their months since most recent diluted payment is 2 (1.8 above median), and the USD amount requested is $72K ($57K above median).”
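The per-prediction what-if idea can be sketched as follows. This is a minimal illustration, not Bill.com's actual method: the `predict` model, the median-replacement perturbation, and the feature values are all hypothetical, chosen only to show how a single instance's features can be independently neutralized and the resulting change in score attributed to each one.

```python
import numpy as np

def marginal_impacts(predict, x, medians):
    """For one instance x, replace each feature in turn with its
    training-set median (a what-if counterfactual) and record how the
    prediction changes. The difference from the original score is that
    feature's marginal impact on this prediction."""
    base = predict(x.reshape(1, -1))[0]
    impacts = {}
    for j, med in enumerate(medians):
        x_cf = x.copy()   # counterfactual copy of the instance
        x_cf[j] = med     # neutralize feature j only
        impacts[j] = base - predict(x_cf.reshape(1, -1))[0]
    return base, impacts

# Toy risk model: weighted sum squashed to a (0, 1) score.
w = np.array([0.8, 0.05])
predict = lambda X: 1.0 / (1.0 + np.exp(-(X @ w)))

# Hypothetical features: months since event, amount requested ($K).
medians = np.array([0.2, 15.0])
x = np.array([2.0, 72.0])

base, impacts = marginal_impacts(predict, x, medians)
# impacts[j] > 0 means feature j pushed the risk score upward.
```

Ranking the impacts then yields the kind of sentence quoted above: the features with the largest positive impact, together with their deviation from the median, become the stated reasons for the decision.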

Bio: 

Eitan Anzenberg is the Director of Data Science at Bill.com. He has 7+ years of experience in data science with a background in machine learning, applied statistics, modeling, and engineering. Eitan was a Postdoctoral Scholar at Lawrence Berkeley National Lab, received his PhD in Physics from Boston University, and holds a B.S. in Astrophysics from the University of California, Santa Cruz. He holds 2 patents, has 11 publications, and has spoken at various international conferences.