Article: T3-69

 

Rapid, Automated Post-Event Image Classification and Documentation

 

Chul Min Yeum1, Shirley J. Dyke1, Bedrich Benes2, Thomas Hacker3, Julio Ramirez1, Alana Lund1, Santiago Pujol1

1 Lyles School of Civil Engineering, Purdue University
West Lafayette, IN, 47907, United States
cyeum@purdue.edu
sdyke@purdue.edu
ramirez@purdue.edu
alund15@purdue.edu
spujol@purdue.edu
2 Computer Graphics Technology, Purdue University
West Lafayette, IN, 47907, United States
bbenes@purdue.edu
3 Computer and Information Technology, Purdue University
West Lafayette, IN, 47907, United States
tjhacker@purdue.edu

 

 

Abstract. After a natural disaster, reconnaissance teams collect large volumes of perishable data related to the condition of buildings and other infrastructure. Each event is an opportunity to evaluate the performance of our structures under circumstances that cannot be fully reproduced in the laboratory or through numerical simulation. In the field, engineers typically prefer to record such information through images. For each building, images readily document the visual appearance of damage to the structure and its components. Each team follows a similar procedure that includes capturing views of the structure, from both outside and inside, at various distances and angles. During this process, engineers frequently capture metadata in the form of images, photographing structural drawings, GPS devices, watches, and even measurements (e.g., an image of a structural column with a measuring tape). Large quantities of images with a wide variety of content are collected within a short period, and their timely organization and documentation are important. Engineers need to generate accurate and rich descriptions of the images before the details are forgotten.
To distil such information efficiently and rapidly, we developed an automated approach that uses computer vision techniques to classify and organize the large volume of images collected from each building. Deep convolutional neural network (CNN) algorithms are implemented to extract robust features representing the key visual contents of the images. This capability is demonstrated using data collected by various reconnaissance teams from buildings damaged during past earthquakes. A schema is developed based on the real needs of field teams examining buildings: several categories and associated information are defined and used to annotate the images, supporting an organization of the data that parallels the procedure the engineer follows in the field. A large volume of images from past earthquakes is used to train robust classifiers that automatically classify new images. These classifiers are then used to automatically generate individual reports for buildings damaged in past earthquakes.
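As a rough illustration of the classification step described above, the sketch below fine-tunes a pretrained CNN on labeled reconnaissance images using PyTorch and torchvision. The category names, directory layout, backbone, and hyperparameters are assumptions made for illustration only; they do not reflect the actual schema, training data, or pipeline used in this work.

    # Minimal sketch (assumed setup): fine-tune a pretrained CNN to classify
    # reconnaissance images into a few hypothetical categories such as
    # "overview", "drawing", "gps_watch", and "damage_closeup".
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    # Hypothetical directory layout: images/<category_name>/*.jpg
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder("images", transform=transform)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    # Start from an ImageNet-pretrained backbone and replace the final layer
    # with one output per image category in the (assumed) schema.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for epoch in range(5):  # small epoch count, for illustration only
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

    # The trained classifier can then label new field images so that a
    # per-building report can be assembled from the predicted categories.

In practice, the predicted category for each image would be written alongside its metadata (building ID, time, location) so that the per-building report generation described above can group and order the images automatically.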
 
Keywords: Post-disaster evaluation, Convolutional neural networks, Image classification, Building reconnaissance.

 
