Deep Learning: Guide with Challenges and Solutions

Introduction to Deep Learning

AI adoption is changing the way every industry works. AI can be implemented through various approaches, such as machine learning, deep learning, and natural language generation. Deep learning algorithms can produce highly accurate outputs, yet their adoption in the finance industry remains limited. The primary reason is not the algorithms’ performance; deep learning often gives more precise and accurate results than other ML models. The reason is its nature: a deep learning model works as a black box, meaning the system cannot account for its decisions. Because of this opacity, users hesitate to adopt it.


The banking and finance industry uses artificial intelligence to make its tasks more efficient, reliable, productive, and fast.


What are the Challenges of Deep Learning?

This is especially true in credit risk assessment, where the “Equal Credit Opportunity Act” and the “Fair Credit Reporting Act” require a financial institution to give a reason when it declines a credit or loan application. If the institution uses a deep learning model here, the model’s opaque nature means it cannot give the reason behind the decision, because the model does not explain itself. Nor can it detect bias or assess fairness.

For neural networks, some techniques can explain the model through feature importance, such as LIME (Local Interpretable Model-Agnostic Explanations), DeepLift, and Integrated Gradients.
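As an illustration, the sketch below shows how local feature attributions of this kind might be computed with the Captum library’s IntegratedGradients and DeepLift implementations. The small network, the feature values, and the feature names are made-up placeholders for a credit-scoring model, not the model discussed later in this article.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, DeepLift

# A small hypothetical credit-scoring network (illustrative only).
model = nn.Sequential(
    nn.Linear(5, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
    nn.Sigmoid(),
)
model.eval()

# One applicant described by five features, e.g. income, debt ratio,
# credit history, age, open accounts (names and values are assumptions).
applicant = torch.tensor([[0.7, 0.3, 0.5, 0.8, 0.2]])
baseline = torch.zeros_like(applicant)  # all-zero reference input

# Local attributions: how much each feature pushed this one prediction.
ig = IntegratedGradients(model)
ig_attr = ig.attribute(applicant, baselines=baseline)

dl = DeepLift(model)
dl_attr = dl.attribute(applicant, baselines=baseline)

print("Integrated Gradients:", ig_attr.detach().numpy())
print("DeepLift:            ", dl_attr.detach().numpy())
```

LIME works similarly but is model-agnostic; it perturbs the input and fits a simple local surrogate model, and is available in the separate `lime` package.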

Though these techniques provide explanations, the finance industry still does not use them in credit assessment because some questions remain difficult to answer. The following two questions keep financial organizations from using deep learning models in some of their systems:

  • Trust: Do these methods provide an accurate and interpretable explanation?
  • Reliability: How consistently does a method produce trustworthy explanations?



What are the Solutions for Deep Learning?

To answer these questions and increase users’ trust and satisfaction, the following sections discuss approaches that can address them, and check the trustworthiness and reliability of these approaches so that lenders can adopt deep learning.

Users can then apply the same approaches to check the trust and reliability of their own neural networks.

Trustworthiness of Deep Learning

To check the trustworthiness of the approaches used to provide interpretability, an ML model is used to predict credit risk. First, the global feature importance is calculated from the model’s weights. Then the local feature importance is found using LIME, Integrated Gradients, DeepLift, and similar methods.
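One simple way to read “global feature importance from the model’s weights” is sketched below, assuming a transparent logistic regression as the credit-risk model; the data, feature names, and coefficients are entirely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical credit-risk data: 5 features, binary default label
# (random values stand in for a real dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X @ np.array([1.5, -1.0, 0.8, 0.2, 0.05]) + rng.normal(size=500) > 0).astype(int)

feature_names = ["income", "debt_ratio", "credit_history", "age", "open_accounts"]

# A transparent model: the absolute value of each coefficient serves as a
# simple proxy for global feature importance derived from the model weights.
clf = LogisticRegression().fit(X, y)
global_importance = dict(zip(feature_names, np.abs(clf.coef_[0])))

for name, score in sorted(global_importance.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")
```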

The feature importance produced by these methods is then compared with the global feature importance to check trustworthiness. The method whose results are most similar to the global importance is considered more accurate and hence more trustworthy.
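A minimal sketch of this comparison could count how many of each method’s top-ranked features also appear among the top global features. All scores below are invented for illustration; in practice they would come from the transparent model and from LIME, Captum, or similar tools.

```python
# Made-up global and local importance scores for one applicant.
global_importance = {"income": 0.9, "debt_ratio": 0.8, "credit_history": 0.6,
                     "age": 0.4, "open_accounts": 0.1}

local_importance = {
    "integrated_gradients": {"income": 0.70, "debt_ratio": 0.60, "credit_history": 0.50,
                             "age": 0.30, "open_accounts": 0.05},
    "deeplift":             {"income": 0.65, "debt_ratio": 0.55, "credit_history": 0.45,
                             "age": 0.35, "open_accounts": 0.10},
    "lime":                 {"open_accounts": 0.50, "age": 0.40, "income": 0.30,
                             "debt_ratio": 0.10, "credit_history": 0.05},
}

def top_k(scores, k=3):
    """Return the k features with the largest absolute importance."""
    return set(sorted(scores, key=lambda f: abs(scores[f]), reverse=True)[:k])

global_top = top_k(global_importance)
for method, scores in local_importance.items():
    overlap = len(top_k(scores) & global_top)
    print(f"{method}: {overlap} of {len(global_top)} top features match the global ranking")
```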


Figure 1.1: Global feature importance of the model


Figure 1.2: Local feature importance using LIME, DeepLift, and Integrated Gradients

Figure 1.1 shows the global importance of the top five features obtained with a transparent method. Figure 1.2 shows the local feature importance computed using LIME, DeepLift, and Integrated Gradients.

In this case, DeepLift and Integrated Gradients give feature importance similar to the global explanation: each shares four features with it, while LIME shares only one. This result was extracted for a single local observation. Therefore, Integrated Gradients and DeepLift are more likely to be accepted than LIME.

Reliability in Deep Learning

Reliability measures how consistently a method produces trustworthy results. To check reliability, we look at the baseline. In our use case there is no natural baseline, so we take a reference point for justification by selecting a random baseline. The more consistently an approach responds to these random observations, the more reliable it is.
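One way to read this check, sketched below under assumptions, is to recompute the attributions for the same observation against several randomly drawn baselines and see how stable they stay. The network and input are the same made-up placeholders used in the earlier sketch, and Integrated Gradients stands in for any baseline-dependent method.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# The same kind of small, made-up scoring network as in the earlier sketch.
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
model.eval()
applicant = torch.tensor([[0.7, 0.3, 0.5, 0.8, 0.2]])

ig = IntegratedGradients(model)

# Attribute the same prediction against several random reference points and
# check how stable the explanation stays across them.
attributions = []
for seed in range(10):
    torch.manual_seed(seed)
    random_baseline = torch.rand_like(applicant)
    attr = ig.attribute(applicant, baselines=random_baseline)
    attributions.append(attr.detach())

stacked = torch.stack(attributions)  # shape: (10, 1, 5)
print("Mean attribution per feature:", stacked.mean(dim=0).numpy())
print("Std across random baselines: ", stacked.std(dim=0).numpy())
```

A method whose feature ranking barely changes across the random baselines is, in this sense, the more reliable one.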


Conclusion

Interpretability allows us to build fair and trustworthy systems. LIME, DeepLift, and Integrated Gradients are the three approaches we discussed for explaining a neural network’s decisions, along with methods for comparing and checking their trustworthiness and reliability. The best way to select an approach is to first identify the properties the system must always satisfy. SHAP (SHapley Additive exPlanations), one of the explainable AI libraries, provides multi-framework implementations of DeepLift and Integrated Gradients.
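For instance, SHAP’s DeepExplainer builds on DeepLIFT and its GradientExplainer builds on Integrated Gradients (expected gradients), and both accept PyTorch as well as TensorFlow/Keras models. The sketch below uses a placeholder PyTorch network and random data; exact behaviour may vary across SHAP versions.

```python
import torch
import torch.nn as nn
import shap

# A placeholder PyTorch network standing in for the credit-risk model.
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
model.eval()

# Background data acts as the reference distribution; a real use case would
# sample it from the training set rather than draw random values.
background = torch.rand(100, 5)
applicants = torch.rand(3, 5)

# DeepExplainer builds on DeepLIFT; GradientExplainer builds on Integrated
# Gradients. Both work with PyTorch and TensorFlow/Keras models.
deep_explainer = shap.DeepExplainer(model, background)
deep_shap_values = deep_explainer.shap_values(applicants)

grad_explainer = shap.GradientExplainer(model, background)
grad_shap_values = grad_explainer.shap_values(applicants)

print(deep_shap_values)
print(grad_shap_values)
```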