Bioinformatics Training and Education Program

AttentiveChrome: Deep-learning for Predicting Gene Expression from Histone Modifications

When: Sep. 22, 2021, 11:00 am - 12:00 pm

This class has ended.
To Know
  • Where: Online Webinar
  • Organized By: CBIIT

About this Class

Registration is required. During this webinar, Dr. Yanjun Qi will demonstrate AttentiveChrome, an attention-based deep-learning approach that uses a unified architecture to model and interpret the interactions and dependencies among the chromatin factors underlying gene regulation.

The past decade has seen a deluge of genomic technologies and the genome-wide profiling data they produce. To understand gene expression and its regulation, most studies to date have relied on information from DNA sequencing together with other chromatin features (such as histones, the proteins that help organize and compact DNA). Charting the locations and intensities of chromatin modifications, known as "marks," with machine learning could aid in modeling and interpreting these data. However, two fundamental challenges exist: (1) genome-wide chromatin signals are spatially structured, high-dimensional, and highly modular, and (2) the core aim is to understand all of the relevant factors and how they work together. Models from earlier studies have either failed to capture the complex dependencies among input signals or have relied on separate, per-mark analyses to explain their decisions rather than considering the wide variety of marks that influence gene regulation.

AttentiveChrome uses a hierarchy of long short-term memory (LSTM) modules to encode the input signals, allowing users to model how various chromatin marks interact and cooperate. It trains two levels of attention simultaneously, so it can weigh all of the relevant marks and identify the important positions within each individual mark. It has been used to model gene expression across 56 different cell types (tasks) in humans. Studies show that this architecture is not only more accurate, but that its attention scores also yield interpretations that are more accurate than other state-of-the-art visualization methods, such as saliency maps.

Presenter: Yanjun Qi, Ph.D.

Dr. Yanjun Qi is an associate professor in the Department of Computer Science at the University of Virginia and currently serves as a Data and Technology Advancement (DATA) National Service Scholar at NIH. Dr. Qi has been recognized by the National Science Foundation (NSF) and NeurIPS for her contributions to the field, receiving an NSF CAREER Award and a Best Paper Award at a NeurIPS workshop for "Transparent and Interpretable Machine Learning."
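To make the two-level attention idea concrete, below is a minimal PyTorch sketch of the kind of hierarchical architecture described above: a bin-level LSTM with attention summarizes each mark's signal around a gene, and a mark-level LSTM with a second attention layer combines the marks into a prediction. This is an illustrative approximation, not the authors' released implementation; the class names, hidden sizes, number of marks, and number of bins are assumptions chosen for readability.

```python
# Sketch of a two-level (bin- and mark-level) attention model over chromatin marks.
# All hyperparameters here are illustrative, not the paper's exact settings.
import torch
import torch.nn as nn


class Attention(nn.Module):
    """Soft attention: collapses a sequence of hidden states into one context vector."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                                  # h: (batch, seq_len, dim)
        alpha = torch.softmax(self.score(h), dim=1)        # attention weights over positions
        return (alpha * h).sum(dim=1), alpha.squeeze(-1)   # context vector, weights


class AttentiveChromeSketch(nn.Module):
    def __init__(self, n_marks=5, hidden=32):
        super().__init__()
        # Level 1: a bidirectional LSTM encodes the binned signal of each mark.
        self.bin_lstm = nn.LSTM(1, hidden, batch_first=True, bidirectional=True)
        self.bin_attn = Attention(2 * hidden)              # important positions per mark
        # Level 2: an LSTM over per-mark summaries models how marks interact.
        self.mark_lstm = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.mark_attn = Attention(2 * hidden)             # important marks per gene
        self.out = nn.Linear(2 * hidden, 1)                # gene expression logit (high/low)

    def forward(self, x):                                  # x: (batch, n_marks, n_bins)
        batch, n_marks, n_bins = x.shape
        h, _ = self.bin_lstm(x.reshape(batch * n_marks, n_bins, 1))
        mark_vec, bin_w = self.bin_attn(h)
        g, _ = self.mark_lstm(mark_vec.reshape(batch, n_marks, -1))
        gene_vec, mark_w = self.mark_attn(g)
        return self.out(gene_vec), bin_w.reshape(batch, n_marks, n_bins), mark_w


model = AttentiveChromeSketch()
signal = torch.rand(8, 5, 100)            # 8 genes, 5 marks, 100 bins around each gene
logit, bin_w, mark_w = model(signal)
print(logit.shape, bin_w.shape, mark_w.shape)
```

The two returned weight tensors are what make the model interpretable in the sense discussed above: `mark_w` scores how much each mark contributed to a gene's prediction, and `bin_w` highlights which positions within each mark mattered most.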