Learn About Interpretable and Explainable Deep Learning
When: Dec. 15th, 2021 11:00 am - 12:00 pm
Where: Online Webinar
Organizer: NCI
About this Class
In this seminar, Mrs. Aya Abdelsalam Ismail will give an overview of interpreting neural networks, with a particular focus on the use of Deep Neural Networks (DNNs) to track and predict changes over time. DNNs are proving to be highly accurate alternatives to conventional statistical and analytical methods, especially when modeling numerous variables (genes, RNA molecules, proteins, etc.) and their many interactions. Still, practitioners in scientific fields such as bioinformatics are often hesitant to use DNN models because they can be difficult to interpret. During the event, Mrs. Ismail will:
- highlight the limitations of existing saliency-based interpretability methods for Recurrent Neural Networks and offer methods for overcoming these challenges (a basic saliency computation is sketched after this list).
- describe a framework that uses multiple metrics to evaluate how well a given saliency method detects importance over time in time series data (see the masking-style metric sketched after this list).
- show how to apply that evaluation framework to different saliency-based methods across diverse models.
- offer solutions for improving the quality of saliency methods in time series data using a two-step temporal saliency rescaling (TSR) approach, which first calculates the importance of each time step and then the importance of each feature within those steps (a simplified sketch follows this list).
- talk about how interpretations can be further improved using a novel training technique known as saliency-guided training (a sketch of the training loss appears at the end of the examples below).
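The techniques above are easiest to see in code. The sketches below are illustrative, not the speaker's implementations: they assume PyTorch, a toy LSTM classifier, and made-up input shapes. First, a basic gradient saliency map for a time-series model, the kind of method whose limitations the first point concerns:

```python
# A minimal sketch of gradient-based saliency for a time-series model.
# The LSTM classifier and all shapes here are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features, hidden_size, n_classes):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # classify from the last time step

def gradient_saliency(model, x, target_class):
    """Return |d output[target] / d input|: one score per (time, feature)."""
    x = x.clone().requires_grad_(True)
    model(x)[:, target_class].sum().backward()
    return x.grad.abs()                    # (batch, time, features)

model = LSTMClassifier(n_features=5, hidden_size=32, n_classes=2)
x = torch.randn(1, 50, 5)                  # one series: 50 steps, 5 features
saliency = gradient_saliency(model, x, target_class=1)
print(saliency.shape)                      # torch.Size([1, 50, 5])
```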
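One common way to evaluate such a map on time series, loosely in the spirit of the framework mentioned in the second point, is to occlude the cells it marks as important and measure how much the model's confidence drops; a faithful map should cause a large drop. A minimal sketch reusing the model and helper above (the specific metric and the choice of k are assumptions, not the talk's exact benchmark):

```python
# Hedged sketch of a masking-style evaluation metric: zero out the top-k
# most salient (time, feature) cells and record the drop in the model's
# confidence for the target class. Reuses model, x, saliency from above.
def confidence_drop(model, x, saliency, target_class, k=25):
    flat = saliency[0].flatten()
    topk = flat.topk(k).indices            # indices of the k most salient cells
    mask = torch.ones_like(flat)
    mask[topk] = 0.0                       # occlude those cells
    x_masked = x * mask.view_as(x[0])
    with torch.no_grad():
        before = model(x).softmax(-1)[0, target_class]
        after = model(x_masked).softmax(-1)[0, target_class]
    return (before - after).item()         # large drop = faithful map

print(confidence_drop(model, x, saliency, target_class=1))
```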
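The two-step TSR idea can be sketched in the same setting: first score each time step by how much the overall saliency map changes when that step is masked, then rescale the per-feature saliency within each step by that score. This is a simplified reading that assumes the gradient helper above as the base saliency method:

```python
# Simplified two-step Temporal Saliency Rescaling sketch.
def tsr(model, x, target_class, baseline=0.0):
    base_map = gradient_saliency(model, x, target_class)      # (1, T, F)
    T = x.shape[1]
    time_score = torch.zeros(T)
    for t in range(T):                     # step 1: time-step importance
        x_masked = x.clone()
        x_masked[:, t, :] = baseline       # mask out time step t
        masked_map = gradient_saliency(model, x_masked, target_class)
        time_score[t] = (base_map - masked_map).abs().sum()
    # step 2: rescale per-feature saliency by its step's importance
    return time_score.unsqueeze(-1) * base_map[0]              # (T, F)

scores = tsr(model, x, target_class=1)
print(scores.shape)                        # torch.Size([50, 5])
```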
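Finally, a hedged sketch of a saliency-guided training loss: mask the lowest-saliency input cells and add a KL-divergence penalty on any resulting change in the model's output distribution, so that features the model calls unimportant really are unimportant. The hyperparameters and the exact form of the penalty are illustrative, not necessarily the formulation presented in the talk:

```python
# Illustrative saliency-guided training loss: task loss plus a penalty
# for sensitivity to the k lowest-saliency (time, feature) cells.
import torch.nn.functional as F

def saliency_guided_loss(model, x, y, k=25, lam=1.0):
    logits = model(x)
    ce = F.cross_entropy(logits, y)        # usual classification loss
    # Input-gradient saliency, computed without touching parameter grads.
    x_req = x.clone().requires_grad_(True)
    score = model(x_req).gather(1, y.unsqueeze(1)).sum()
    sal = torch.autograd.grad(score, x_req)[0].abs()
    # Mask the k *lowest*-saliency cells in each sample.
    flat = sal.flatten(1)                  # (batch, T*F)
    low = flat.topk(k, largest=False, dim=1).indices
    mask = torch.ones_like(flat)
    mask.scatter_(1, low, 0.0)
    x_masked = x * mask.view_as(x)
    # Masked output should match the original output distribution.
    kl = F.kl_div(model(x_masked).log_softmax(-1),
                  logits.detach().softmax(-1), reduction="batchmean")
    return ce + lam * kl

y = torch.tensor([1])
loss = saliency_guided_loss(model, x, y)
loss.backward()                            # an optimizer step would follow
```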