Understanding, Visualizing, and Interpreting Deep Learning Models: A Path to Explainable Artificial Intelligence
DOI: https://doi.org/10.70705/ppp.bioai.2024.v03.i01.pp16-21

Keywords: Artificial intelligence, Deep neural networks, Black box models, Interpretability, Sensitivity analysis, Layer-wise relevance propagation

Abstract
Thanks to the availability of large datasets and recent advances in deep learning techniques, AI systems now handle increasingly complex tasks at a level that matches or exceeds human performance. Fields that have made remarkable strides in this direction include image classification, sentiment analysis, speech understanding, and strategic game playing. Unfortunately, because of their layered non-linear structure, these highly effective AI and machine learning models are usually applied in a way that does not reveal how they arrive at their predictions. Since this lack of transparency can be a significant drawback, for example in medical applications, the development of methods for visualizing, explaining, and interpreting deep learning models has recently attracted increasing interest. This study makes the case for greater interpretability in AI and summarizes recent advances in the field. It also presents two approaches for decomposing a model's decision into the contributions of its input variables: sensitivity analysis, which computes how strongly the prediction reacts to changes in the input, and layer-wise relevance propagation, which decomposes the prediction itself by propagating it backward through the network. Both approaches are evaluated on three classification tasks.
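As an illustration of the first approach, here is a minimal sketch of gradient-based sensitivity analysis, assuming a PyTorch classifier; the names model, x, and target_class are hypothetical placeholders, not taken from the paper:

    import torch

    def sensitivity_map(model, x, target_class):
        # Sensitivity analysis: the relevance of each input dimension
        # is the (squared) partial derivative of the class score with
        # respect to that dimension.
        x = x.clone().detach().requires_grad_(True)
        score = model(x)[0, target_class]  # scalar score, batch of one
        score.backward()
        return x.grad.pow(2)

And a sketch of layer-wise relevance propagation for a plain ReLU multilayer perceptron, using the standard LRP-epsilon rule; the paper may use a different LRP variant, and weights/biases (lists of NumPy arrays, one per layer) are again hypothetical:

    import numpy as np

    def lrp_epsilon(weights, biases, x, eps=1e-6):
        # Forward pass, storing the input activation of every layer.
        activations = [x]
        for W, b in zip(weights, biases):
            activations.append(np.maximum(0.0, W @ activations[-1] + b))

        # Backward pass: redistribute the output activations layer by
        # layer via R_j = a_j * sum_k W_kj * R_k / z_k (epsilon rule).
        # In practice one would start from a single class score.
        relevance = activations[-1]
        for W, b, a in zip(reversed(weights), reversed(biases),
                           reversed(activations[:-1])):
            z = W @ a + b
            z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabilizer
            s = relevance / z
            relevance = a * (W.T @ s)
        return relevance  # one relevance score per input feature

Both sketches return one score per input variable, which is the kind of per-variable decomposition of the decision that the abstract describes.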

