Interpretability in Convolutional Neural Networks for Building Damage Classification in Satellite Imagery

Abstract

Natural disasters ravage the world’s cities, valleys, and shores on a monthly basis. Precise and efficient mechanisms for assessing infrastructure damage are essential to channel resources and minimize the loss of life. Using a dataset of labeled pre- and post-disaster satellite imagery, we train multiple convolutional neural networks to assess building damage on a per-building basis. We use the xBD dataset, the most comprehensive dataset to date for this purpose owing to its geographic coverage and diversity of disaster types. To investigate how best to classify building damage, we present a highly interpretable deep-learning methodology that seeks to explicitly convey the most useful information required to train an accurate classification model. We also examine which loss functions best optimize these models. We find that ordinal cross-entropy loss is the most effective loss function, and that providing the type of disaster that caused the damage together with the pre- and post-disaster image pair best predicts the level of damage. We further produce qualitative visualizations of which parts of an input image the model relies on for its prediction, using gradient-weighted class activation maps (Grad-CAM). Future work can build on our interpretability approach by studying additional input modalities. Our research seeks to contribute computationally to addressing this ongoing and growing humanitarian crisis, heightened by climate change.
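The intuition behind the ordinal cross-entropy finding can be illustrated with a minimal sketch: unlike standard cross-entropy, it penalizes misclassifications more heavily the farther the predicted damage class is from the true one (so confusing "destroyed" with "no damage" costs more than confusing adjacent classes). The exact formulation used in this work is not given in the abstract, so the distance-based weighting below is a common variant, assumed for illustration; the four-class damage scale follows the xBD labels (no damage / minor / major / destroyed).

```python
import numpy as np

def ordinal_cross_entropy(logits, labels, num_classes=4):
    """Cross-entropy weighted by the ordinal distance between the
    predicted and true damage class (a common variant, not necessarily
    the paper's exact formulation).

    logits: (n_buildings, num_classes) raw scores
    labels: (n_buildings,) integer class indices in [0, num_classes)
    """
    # Numerically stable softmax over the class dimension
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=1, keepdims=True)

    n = len(labels)
    preds = probs.argmax(axis=1)
    # Weight grows with |predicted - true| class distance, normalized
    # so it ranges from 1 (correct class) to 2 (maximally wrong).
    weights = 1.0 + np.abs(preds - labels) / (num_classes - 1)

    # Weighted negative log-likelihood of the true class
    nll = -np.log(probs[np.arange(n), labels] + 1e-12)
    return float(np.mean(weights * nll))
```

Under this weighting, a confident prediction of "no damage" for a building that was actually destroyed incurs a larger loss than the same logits scored against standard cross-entropy, which is the behavior the abstract credits for improved damage-level prediction.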

Presenters

Thomas Chen
Student, High School Degree, The Academy for Mathematics, Science, and Engineering, United States

Details

Presentation Type

Poster Session

Theme

Human Impacts and Responsibility

Keywords

Remote Sensing, Damage, Climate Change, Interpretability, Satellite Imagery
