The experiments use the ENVI 5.3 remote sensing data processing software together with Matlab 2017 for image processing. The deep learning server is equipped with an Intel Xeon E5-2678 v3 CPU (maximum clock 2.5 GHz) and an NVIDIA GTX TITAN XP GPU with 12 GB of memory, a maximum core clock of 1582 MHz and 3840 CUDA cores; the server has 64 GB of RAM and 4 TB of storage. The model uses the Adam optimizer, which adaptively adjusts the learning rate during training; the initial learning rate is set to 0.01, the batch size to 16 and the number of training epochs to 100, under the TensorFlow deep learning framework. After the model is built, the 9735 training images and 2433 validation images are stored in NumPy arrays for convenient use in the experiments.
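The choice of Adam matters because it adapts the effective step size per parameter during training. As a minimal illustration only (not the paper's actual TensorFlow setup), the update rule Adam applies at each step can be sketched in NumPy; the initial learning rate of 0.01 matches the experiment, while the toy objective f(x) = x² and the default β/ε values are assumptions:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.01,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and
    its square, bias correction, then a per-parameter scaled step."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy run: minimise f(x) = x**2, whose gradient is 2x.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    x, m, v = adam_step(x, 2 * x, m, v, t)
```

Because the step is normalised by the second-moment estimate, each update moves roughly `lr` in parameter space regardless of the raw gradient scale, which is why a single initial learning rate works across the network's layers.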
To analyze in more depth the specific strengths and weaknesses of the improved network in crop-type extraction, the widely used U-Net, SegNet and FCN algorithms are selected for comparison of recognition performance. For one region of the study area, extraction tests are carried out with the four methods, and the precision, completeness and accuracy of their extraction results are tallied and compared.
An effective and reasonable evaluation technique assesses classification results more comprehensively and objectively and helps verify the accuracy and validity of the classification method. This study adopts overall accuracy and intersection over union as its classification accuracy metrics.
Because remote sensing images differ in classification object and standard, the results are quantified separately with overall accuracy (OA), Accuracy (A), Precision (P) and Recall (R) [16].
Based on the convolutional neural network's (CNN's) classification of each image patch, the numbers of correctly and incorrectly classified target pixels are counted, and OA, a commonly used accuracy metric for remote sensing land-cover classification, gives the probability that a pixel is classified correctly:

$$OA = \frac{N_{KK}}{N_{total}}$$

where $N_{KK}$ is the number of correctly classified pixels in the image and $N_{total}$ is the total number of pixels. Taking the field parcels produced by the satellite image classification as the basic unit, the result can be evaluated quantitatively with Accuracy, Precision and Recall:

$$A = \frac{TP + TN}{TP + TN + FP + FN},\qquad P = \frac{TP}{TP + FP},\qquad R = \frac{TP}{TP + FN}$$

where TP, FP, FN and TN denote, in turn, the number of parcels of the target class correctly identified, the number of parcels of other classes wrongly identified as the target class, the number of target-class parcels identified as other classes, and the number of parcels of other classes correctly identified. The mean intersection over union (MIoU) [17] is the standard metric of semantic segmentation. It measures the ratio of the intersection to the union of two sets, which for semantic segmentation are the ground truth and the predicted segmentation; per class, this ratio equals TP (the intersection) over the sum of TP, FP and FN (the union). IoU is computed for each class and then averaged:

$$MIoU = \frac{1}{k}\sum_{i=1}^{k}\frac{TP_i}{TP_i + FP_i + FN_i}$$
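All of the metrics above can be read off a pixel-level confusion matrix. A minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: ground-truth class, columns: predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)
    return cm

def overall_accuracy(cm):
    # OA = N_KK / N_total: correctly classified pixels over all pixels
    return np.trace(cm) / cm.sum()

def precision_recall(cm, k):
    # For class k: P = TP / (TP + FP), R = TP / (TP + FN)
    tp = cm[k, k]
    fp = cm[:, k].sum() - tp
    fn = cm[k, :].sum() - tp
    return tp / (tp + fp), tp / (tp + fn)

def mean_iou(cm):
    # IoU_k = TP_k / (TP_k + FP_k + FN_k), averaged over the k classes
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    return np.mean(tp / (tp + fp + fn))

# Tiny two-class example: one class-0 pixel is misclassified as class 1.
cm = confusion_matrix(np.array([0, 0, 1, 1]), np.array([0, 1, 1, 1]), 2)
```

For this example, OA is 3/4 and MIoU is the mean of 1/2 (class 0) and 2/3 (class 1).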
To further explore the applicability of the proposed method to crop classification, it is compared with U-Net, SegNet and FCN in recognition experiments. The comparison with the U-Net model tests whether the proposed improvements hold, while FCN and SegNet, as classic and representative image recognition networks, make the results more convincing. The characteristics of the models are listed in Table 1, with overall classification accuracy and MIoU as the evaluation metrics. To verify the stability of the improved U-Net network, the overall classification accuracy of each method is compared in Table 2.
Table 1. Characteristics of the different deep learning network models

| Deep learning segmentation model | Model characteristics |
|---|---|
| FCN | First fully convolutional network built on the end-to-end concept; removes the fully connected layers and upsamples in the deconvolutional layers |
| SegNet | Reuses the pooling-layer results during decoding, introducing a large amount of encoding information |
| U-Net | End-to-end standard network structure; the decoder concatenates the results of each encoder layer, giving better results |
| Improved U-Net | Semantic recognition ability is enhanced and feature extraction is more sensitive |
Table 2. Experimental results of crop recognition with different methods

| Experimental network | OA | MIoU |
|---|---|---|
| U-Net | 85.41% | 0.39 |
| SegNet | 84.86% | 0.39 |
| FCN | 86.44% | 0.45 |
| Improved U-Net | 88.33% | 0.52 |
Through the crop classification experiments with the deep learning models and the subsequent accuracy evaluation, the trained FCN, SegNet, U-Net and improved U-Net models were applied to the test set images to produce crop classification results, which were then evaluated quantitatively with the two metrics, MIoU and overall classification accuracy. The results show that, with all models trained on the same sample library, the improved U-Net reaches an overall classification accuracy of 88.83% and an MIoU of 0.52, both higher than those of the traditional machine learning algorithms, indicating that the improved U-Net model can be applied effectively to crop classification and recognition.
The classification results also show that the planting area of coix seed (Job's tears) is highly concentrated and many of its samples were collected, so all four models classify it with high accuracy; the number and distribution of samples per class are therefore also important factors affecting crop classification accuracy. The improved U-Net is a deep learning model that classifies on image semantics, and its feature recognition capability has been strengthened: it uses both the intrinsic image features of coix seed and the surrounding pixels for recognition and classification, so its accuracy and reliability are higher.
Figure 5 compares part of the experimental results of the proposed method with those of the other algorithms.
Research on the classification of typical crops in remote sensing images by improved U-Net algorithm
doi: 10.3788/IRLA20210868
- Received Date: 2021-11-22
- Rev Recd Date: 2021-12-11
- Accepted Date: 2021-12-16
- Publish Date: 2022-09-28
Key words:
- deep learning
- crop classification
- drone remote sensing
- improved U-Net model
Abstract: Aiming at the problems that the classification features extracted from remote sensing images by traditional algorithms are incomplete and that crop classification accuracy is low, we use drone remote sensing images as the data source and propose an improved U-Net model to classify and recognize crops such as barley and corn in the study area. First, the remote sensing images are preprocessed and the data set is labeled and augmented. Second, the algorithm is improved by deepening the U-Net network structure, introducing the SFAM and ASPP modules, and using a multi-level, multi-scale feature aggregation pyramid to construct the improved U-Net algorithm. Finally, model training and refinement are completed. The experimental results show that the overall classification accuracy (OA) reaches 88.83% and the mean intersection over union (MIoU) reaches 0.52; compared with the traditional U-Net, FCN and SegNet models, the classification metrics and accuracy are significantly improved.