Simplified object-based deep neural network for very high resolution remote sensing image classification

Pan, Xin; Zhang, Ce; Xu, Jun; Zhao, Jian. 2021 Simplified object-based deep neural network for very high resolution remote sensing image classification. ISPRS Journal of Photogrammetry and Remote Sensing, 181. 218-237.

N531265PP.pdf - Accepted Version. Restricted to NORA staff only until November 2023.

For the object-based classification of high resolution remote sensing images, many people expect that introducing deep learning methods can improve the classification accuracy. Unfortunately, the input shape for deep neural networks (DNNs) is usually rectangular, whereas the shapes of the segments output by segmentation methods usually conform to the corresponding ground objects; this inconsistency can lead to confusion among different types of heterogeneous content when a DNN processes a segment. Currently, most object-based methods utilizing convolutional neural networks (CNNs) adopt additional models to overcome the detrimental influence of such heterogeneous content; however, these heterogeneity suppression mechanisms introduce additional complexity into the whole classification process, and the resulting methods are usually unstable and difficult to use in real applications. To address the above problems, this paper proposes a simplified object-based deep neural network (SO-DNN) for very high resolution remote sensing image classification. In SO-DNN, a new segment category label inference method is introduced, in which a deep semantic segmentation neural network (DSSNN) is used as the classification model instead of a traditional CNN. Since the DSSNN can obtain a category label for each pixel in the input image patch, different types of content are not mixed together; therefore, SO-DNN does not require an additional heterogeneity suppression mechanism. Moreover, SO-DNN includes a sample information optimization method that allows the DSSNN model to be trained using only pixel-based training samples. Because only a single model is used and only a pixel-based training set is needed, the whole classification process of SO-DNN is relatively simple and direct.
In experiments, we use very high-resolution aerial images from Vaihingen and Potsdam from the ISPRS WG II/4 dataset as test data and compare SO-DNN with 6 traditional methods: O-MLP, O+CNN, OHSF-CNN, 2-CNN, JDL and U-Net. Compared with the best-performing method among these traditional methods, the classification accuracy of SO-DNN is improved by up to 7.71% and 10.78% for single images from Vaihingen and Potsdam, respectively, and the average classification accuracy is improved by 2.46% and 2.91% for the Vaihingen and Potsdam images, respectively. SO-DNN relies on fewer models and easier-to-obtain samples than traditional methods, and its stable performance makes SO-DNN more valuable for practical applications.
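The abstract does not spell out how a segment's category label is inferred from the DSSNN's per-pixel output. A plausible minimal sketch (an assumption for illustration, not the authors' exact method) is a majority vote over the per-pixel class predictions falling inside each segment:

```python
import numpy as np

def infer_segment_labels(segment_map, pixel_labels):
    """Assign each segment the majority class among its pixels.

    segment_map:  2-D int array; each pixel holds its segment id
                  (e.g. from an image segmentation step).
    pixel_labels: 2-D int array of per-pixel class predictions
                  (e.g. the argmax of a semantic segmentation network's output).
    Returns a 2-D array in which every pixel carries its segment's label.
    """
    out = np.empty_like(pixel_labels)
    for seg_id in np.unique(segment_map):
        mask = segment_map == seg_id
        # Majority vote over the per-pixel predictions inside this segment.
        counts = np.bincount(pixel_labels[mask])
        out[mask] = counts.argmax()
    return out

# Toy example: two segments; predictions disagree inside segment 0.
segments = np.array([[0, 0, 1],
                     [0, 1, 1]])
pixels   = np.array([[2, 2, 5],
                     [3, 5, 5]])
labels = infer_segment_labels(segments, pixels)
# Segment 0 contains predictions {2, 2, 3} -> majority class 2;
# segment 1 contains {5, 5, 5} -> class 5.
```

Because every pixel in a segment receives the same final label, heterogeneous content inside the rectangular input patch does not contaminate neighbouring segments, which is the property the paper attributes to its label inference step.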

Item Type: Publication - Article
Digital Object Identifier (DOI):
UKCEH and CEH Sections/Science Areas: Soils and Land Use (Science Area 2017-)
ISSN: 0924-2716
Additional Keywords: CNN, very high resolution, semantic segmentation, classification, OBIA
NORA Subject Terms: Electronics, Engineering and Technology
Computer Science
Data and Information
Date made live: 19 Oct 2021 10:31 +0 (UTC)
