NIST - National Institute of Standards and Technology

Examining the Effects of Deep Learning Model Structure on Model Interpretability for Time-Series Classifications in Fire Research

Published
September 13, 2024

Author(s)

Wai Cheong Tam, Fan Linhao, Qi Tong, Fang Hongqiang

Abstract

The present work utilizes an interpretability model to understand and explain the decisions of deep learning models. The use of DeepLIFT is proposed, and attributions for a case study are obtained. Benchmarking against two other interpretability models, namely Grad-CAM and dCAM, is conducted. Results show that DeepLIFT can provide precise attributions to the model inputs in both the temporal and spatial directions. A parametric study is also carried out to understand the effects of deep learning model structure on the attributions obtained from the interpretability model. Ten different convolutional neural network model structures are considered. Three important observations are made: 1) changes in the model structure have minor effects on the attributions in the temporal direction; 2) they have negligible effects on the attributions in the spatial direction; and 3) the convolutional layers need to be fixed to avoid attribution discrepancies. It is hoped that this work can contribute to the development of trustworthy deep learning models for the fire research community.
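To illustrate the kind of attribution the abstract describes, the sketch below applies DeepLIFT's Rescale rule to a toy one-layer network (a hypothetical example for exposition only; the paper's models are convolutional networks over time series, and its weights and inputs are not reproduced here). The key property shown is completeness: the per-input attributions sum to the difference between the model's output on the input and on a reference input.

```python
import numpy as np

# Hypothetical toy example of DeepLIFT's Rescale rule on y = ReLU(w . x + b).
# All weights, inputs, and the all-zeros reference are illustrative choices,
# not values from the paper.

def relu(z):
    return np.maximum(z, 0.0)

def deeplift_attributions(w, b, x, x_ref):
    """Attribute y(x) - y(x_ref) to each input via the Rescale rule."""
    z, z_ref = w @ x + b, w @ x_ref + b
    # Linear layer: each input's contribution to the pre-activation change.
    delta_z_i = w * (x - x_ref)
    # Rescale rule for the ReLU: multiplier = (output change) / (input change).
    dz = z - z_ref
    m = (relu(z) - relu(z_ref)) / dz if dz != 0 else 0.0
    return m * delta_z_i

w = np.array([0.5, -1.0, 2.0])
b = 0.1
x = np.array([1.0, 0.5, 0.25])
x_ref = np.zeros(3)  # an all-zeros reference input

attr = deeplift_attributions(w, b, x, x_ref)
y, y_ref = relu(w @ x + b), relu(w @ x_ref + b)
# Completeness: attributions sum to the output difference y(x) - y(x_ref).
print(attr, attr.sum(), y - y_ref)
```

For the paper's setting, the same idea is applied per time step and per sensor channel of the input series, which is what yields attributions in both the temporal and spatial directions.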
Journal
Journal of Physics: Conference Series
Pub Type
Journals


Keywords

Explainable Artificial Intelligence, Trustworthy AI, Smart Firefighting, Flashover Prediction

Citation

Tam, W., Linhao, F., Tong, Q. and Hongqiang, F. (2024), Examining the Effects of Deep Learning Model Structure on Model Interpretability for Time-Series Classifications in Fire Research, Journal of Physics: Conference Series, [online], https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=957741 (Accessed September 14, 2024)


Issues

If you have any questions about this publication or are having problems accessing it, please contact [email protected].