-
根据参考文献[11],单演信号可用于SAR图像分解和特征提取。对于待分解的SAR图像
$f(z)$,其中 $z = (x,y)^{\rm T}$ 为坐标位置,其单演信号 $f_M(z)$ 为:
$$f_M(z) = f(z) - (i,j)f_R(z) \tag{1}$$
式中:
$f_R(z)$ 为输入图像的 Riesz 变换;$i$ 和 $j$ 均为虚数单位。根据上式,分别定义三类单演信号特征:
$$A(z) = \sqrt{f(z)^2 + \left| f_R(z) \right|^2}$$
$$\varphi(z) = {\rm atan2}\left(\left| f_R(z) \right|, f(z)\right) \in (-\pi, \pi]$$
$$\theta(z) = \arctan\left(f_y(z)/f_x(z)\right) \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right] \tag{2}$$
式中:
$f_x(z)$ 和 $f_y(z)$ 分别为单演信号在两个坐标轴上的分量;$A(z)$ 为幅度信息;$\varphi(z)$ 和 $\theta(z)$ 分别为相位成分和方位信息。三类单演信号特征能够有效反映图像的多层次特征,包括局部幅度、相位以及方向特性。通过结合这三类特征可为图像分析提供更为充分的信息。为此,文中采用单演信号描述各个子块的图像特征,具体按照参考文献[11]的思路对分解得到的三类特征进行矢量化串接以及降采样,获得低维度特征矢量。最终,基于单演信号特征对各个子块进行分类。
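上述单演信号分解可在频域借助 Riesz 变换实现。以下给出一个示意性 Python 片段(文中未提供代码,函数名与实现细节均为笔者假设),按公式(1)、(2)的定义计算三类特征:

```python
import numpy as np

def monogenic_features(f):
    """按公式(1)、(2)的思路计算单演信号三类特征(示意实现)。

    f: 二维实数图像数组
    返回 (幅度 A, 相位 phi, 方位 theta)
    """
    rows, cols = f.shape
    u = np.fft.fftfreq(cols)[None, :]    # 水平方向频率
    v = np.fft.fftfreq(rows)[:, None]    # 垂直方向频率
    norm = np.sqrt(u ** 2 + v ** 2)
    norm[0, 0] = 1.0                     # 避免直流分量处除零
    F = np.fft.fft2(f)
    # Riesz 变换的频域形式:H1 = -j*u/|w|, H2 = -j*v/|w|
    fx = np.real(np.fft.ifft2(-1j * u / norm * F))
    fy = np.real(np.fft.ifft2(-1j * v / norm * F))
    fR = np.sqrt(fx ** 2 + fy ** 2)      # Riesz 分量的模 |f_R(z)|
    A = np.sqrt(f ** 2 + fR ** 2)        # 局部幅度 A(z)
    phi = np.arctan2(fR, f)              # 局部相位 phi(z)
    theta = np.arctan(fy / (fx + 1e-12)) # 局部方位 theta(z)
    return A, phi, theta
```

实际特征提取时,可按文中思路对三类特征矩阵分别降采样后矢量化串接。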
-
稀疏表示分类根据不同类别对未知样本进行线性拟合,通过拟合误差大小实施判决。现阶段,稀疏表示分类已经在SAR目标识别方法中得到广泛运用并得到验证[8, 18-21]。对于测试样本
$y$,其稀疏表示为:
$$\begin{array}{c} \hat\alpha = \mathop{\arg\min}\limits_{\alpha} \left\| \alpha \right\|_0 \\ {\rm s.t.}\;\; \left\| y - D\alpha \right\|_2^2 \leqslant \varepsilon \end{array} \tag{3}$$
式中:
$D = [{D^1},{D^2}, \cdots ,{D^C}] \in {{\rm{R}}^{d \times N}}$ 表示C个训练类别构成的全局字典;$\alpha $ 为稀疏表示系数。在${\ell _0}$ 范数的约束下,$\alpha $ 呈现稀疏特性。根据稀疏表示系数的求解结果
$\hat\alpha$,分别计算不同类别的重构误差:
$$r(i) = \left\| y - D_i \hat\alpha_i \right\|_2^2 \quad (i = 1, 2, \cdots, C) \tag{4}$$
式中:
${\hat \alpha _i}$ 表示第$i$ 类的稀疏矢量;$r(i)\;$ 则为第$i$ 类的重构误差。传统分类策略即根据最小误差的类别进行决策。对于与测试样本一致的训练类别,其对应的重构误差显著小于其他类别,因此可以据此进行类别判定。
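公式(3)的稀疏系数求解可用正交匹配追踪(OMP)等贪婪算法近似实现,再按公式(4)逐类计算重构误差。以下为一个示意性 Python 实现(非文中原始代码,字典规模、稀疏度等参数均为假设):

```python
import numpy as np

def omp(D, y, sparsity):
    """正交匹配追踪近似求解公式(3)的稀疏系数(示意实现)。"""
    residual = y.astype(float).copy()
    support = []
    alpha = np.zeros(D.shape[1])
    coef = np.zeros(0)
    for _ in range(sparsity):
        if np.linalg.norm(residual) < 1e-10:   # 残差足够小即停止
            break
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx in support:
            break
        support.append(idx)
        # 在当前支撑集上最小二乘求解系数
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha[support] = coef
    return alpha

def class_errors(D, labels, y, sparsity=5):
    """按公式(4)计算各类别重构误差 r(i);labels 给出字典各原子所属类别。"""
    alpha = omp(D, y, sparsity)
    errors = {}
    for c in np.unique(labels):
        mask = labels == c
        errors[c] = np.linalg.norm(y - D[:, mask] @ alpha[mask]) ** 2
    return errors
```

传统 SRC 即取 `class_errors` 中误差最小的类别作为判决结果;文中则将 4 个子块的误差进一步做随机加权融合。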
-
文中采用SRC对原始SAR图像的4个子块分别进行分析处理,据此获得它们相应的重构误差矢量。对不同子块的结果,以线性加权为基本手段进行融合。但考虑到单一固定权值矢量的局限性,文中采用多组随机权值矢量进行处理,设计的权值矩阵如下:
$$W = \left[ \begin{array}{cccc} w_{11} & w_{12} & \cdots & w_{1N} \\ w_{21} & w_{22} & \cdots & w_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ w_{K1} & w_{K2} & \cdots & w_{KN} \end{array} \right] \tag{5}$$
其中,每一列对应一组随机权值矢量,满足约束:
$$\sum\limits_{k = 1}^{K} w_{ki} = 1 \;\;{\text{且}}\;\; w_{ki} \geqslant 0 \tag{6}$$
在公式(6)的约束下,每一组权值矢量随机确定,共计得到 $N$ 组随机权值矢量。每一组权值矢量可对不同的子块赋予不同的权值,综合多组权值下的结果可更为有效地获得融合结果,有效规避了传统固定权值可能带来的不稳定性。
记第
$i$ 类对第 $k$ $(k = 1, 2, \cdots, K;\; K = 4)$ 个子块的重构误差为 $r_k^i$,随机加权融合描述为:
$$R_n^i = [r_1^i \; r_2^i \; \cdots \; r_K^i] \left[ \begin{array}{c} w_{n1} \\ w_{n2} \\ \vdots \\ w_{nK} \end{array} \right] \tag{7}$$
以公式(7)为基础,采用公式(5)中的所有权值矢量重复操作,第
$i$ 类则有 $N$ 个结果 $R^i = \left[ R_1^i \; R_2^i \; \cdots \; R_N^i \right]$,记为融合误差矢量。若第
$i$ 类为实际类别,则各个子块相应的重构误差都较小。此时,在随机权值矢量下,融合误差矢量中各个元素的数值较小且变化较为平缓。反之,若当前测试样本并非来自第 $i$ 类,则各个子块的重构误差相对较大,最终在随机权值下融合误差矢量的均值和方差都相对较大。因此,根据以上统计特征,定义决策变量为:
$$J = m + \lambda \sigma \tag{8}$$
式中:
$m$ 和$\sigma $ 对应任一类别融合误差矢量的均值及方差;$\lambda $ 为大于零的调节参数。按照公式(8)可分别计算各个类别对应的决策变量${J_1},{J_2}, \cdots ,{J_C}$ ,具有最小值的类别即被判断为测试样本真实目标类别。图2显示了提出方法的基本实施流程。采用图像分块算法对所有训练样本进行处理,并对每个子块进行单演特征矢量提取。在此基础上,形成各个子块的字典。对于测试样本,相应进行分块操作并获得相应的单演特征矢量。基于SRC计算重构误差并进行随机加权融合处理,最终根据不同类别的决策变量获得目标所属类别。
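公式(5)~(8)所述的随机加权融合与统计决策过程可简要实现如下(示意性 Python 片段,$K$、$N$、$\lambda$ 的取值均为假设,$\sigma$ 此处取标准差):

```python
import numpy as np

def random_weights(K, N, seed=0):
    """按公式(5)、(6)生成 K 行 N 列随机权值矩阵:每列元素非负且和为 1。"""
    rng = np.random.default_rng(seed)
    W = rng.random((K, N))
    return W / W.sum(axis=0, keepdims=True)

def decision_value(r, W, lam=1.0):
    """公式(7)、(8):r 为某类别 K 个子块的重构误差,返回决策变量 J。"""
    R = r @ W                       # N 个随机加权融合结果,即融合误差矢量
    return R.mean() + lam * R.std() # J = m + lambda * sigma
```

对每个候选类别分别计算 $J$,取最小者即为识别结果。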
-
MSTAR数据集由美国DARPA和AFRL发布,包括图3中的10类目标,图像分辨率0.3 m,可据此设置场景对方法进行分析。共设置5类对比方法,包括参考文献[2]基于Zernike矩的方法;参考文献[11]中采用单演信号的方法(记为Mono);参考文献[13]中基于属性散射中心匹配的方法(记为ASC Matching);参考文献[22]中的全卷积神经网络(A-ConvNet,记为CNN1)方法以及参考文献[24]中设计的平移、旋转不变网络(记为CNN2)。前三类对比方法侧重于特征提取,通过不同类型的特征提高SAR目标识别性能;后两种对比方法则是当前最为流行的深度学习方法。
-
测试条件1为标准操作条件,整体难度相对较小。表1为测试条件1的相关设置,包括了图3所有目标。训练和测试样本分别为17°和15°下的SAR图像集。测试样本和训练样本来自相同的目标及型号。图4显示了文中方法在标准操作条件下的分类混淆矩阵。10类目标对应的正确识别率对应于对角线元素,均高于98.5%。定义平均识别率Pav为正确识别样本数占总样本数的比例,统计10类目标的Pav=99.32%,充分显示了提出方法的有效性。表2对比各类方法在标准操作条件下的识别性能。各类方法在标准操作条件下的平均识别率均高于98%,也反映了标准操作条件下的识别问题相对简单。所提方法在当前条件下可以取得最高的识别率,显示了方法的优势。
表 1 测试条件1相关设置
Table 1. Relevant setup for test condition 1
Type      Training              Test
          Depression   Scale    Depression   Scale
BMP2      17°          232      15°          193
BTR70     17°          232      15°          194
T72       17°          231      15°          194
T62       17°          297      15°          271
BRDM2     17°          296      15°          272
BTR60     17°          255      15°          193
ZSU23/4   17°          297      15°          272
D7        17°          297      15°          273
ZIL131    17°          297      15°          272
2S1       17°          297      15°          271
表 2 测试条件1下的结果
Table 2. Results under test condition 1
Method        Pav
Proposed      99.32%
Zernike       98.12%
Mono          98.64%
ASC Matching  98.32%
CNN1          99.08%
CNN2          99.12%
-
由于实际场景的复杂性,目标自身、背景环境以及传感器等要素都可能发生变化,因此扩展操作条件更为常见。后续设置3类典型扩展操作条件进行测试。
(1)测试条件2
测试条件2为型号差异,表3为相关试验数据和设置。其中,BMP2和T72采用不同型号的样本分别用于训练和分类。表4列出了不同方法在此时的识别率。对比测试条件1,各方法受到型号差异影响,性能出现下降。对于基于深度学习的CNN1和CNN2方法,由于存在的型号差异,最终平均识别率下降最为显著。文中方法对测试SAR图像进行分块处理,并且分区进行局部分析,因此有利于充分考察由于型号差异带来的局部图像变化。从识别结果上可以看出,文中方法的识别率更高,表明其对于型号差异的稳健性。
表 3 测试条件2相关设置
Table 3. Relevant setup for test condition 2
Type    Training                              Test
        Depression/(°)  Configuration  Scale  Depression/(°)  Configuration  Scale
BMP2    17              9563           233    15              9566           196
                                              15              c21            196
BTR70   17              c71            233    15              c71            196
T72     17              132            232    15              812            195
                                              15              s7             191
表 4 测试条件2下的结果
Table 4. Results under test condition 2
Method        Pav
Proposed      98.46%
Zernike       96.82%
Mono          97.82%
ASC Matching  98.02%
CNN1          96.54%
CNN2          97.02%
(2)测试条件3
测试条件3为俯仰角差异,表5给出了相应的训练和测试集。采用17°俯仰角样本对算法进行训练,分别采用30°和45°俯仰角作为测试样本,考察不同俯仰角差异条件下的影响。表6显示不同方法的结果。在30°时,俯仰角差异造成的影响相对不大,各方法仍能保持94%以上的识别率。但在45°时,俯仰角差异带来的影响十分显著,各方法平均识别率大幅度降低,说明此时俯仰角差异带来了较大的SAR图像差异。与型号差异的情形类似,CNN1和CNN2方法的性能下降最为显著。文中方法通过图像分块匹配以及多权值的融合以及统计分析,可以更好地分析目标的局部变化,进而通过统计分析获得可靠的决策结果。
表 5 测试条件3相关设置
Table 5. Relevant setup for test condition 3
Type      Training             Test
          Depression  Scale    Depression/(°)  Scale
2S1       17°         288      30              285
                               45              302
BRDM2     17°         289      30              284
                               45              302
ZSU23/4   17°         289      30              287
                               45              302
表 6 测试条件3下的结果
Table 6. Results under test condition 3
Method        Pav(30°)   Pav(45°)
Proposed      97.12%     73.63%
Zernike       94.82%     68.24%
Mono          96.35%     70.92%
ASC Matching  96.72%     71.36%
CNN1          95.82%     66.74%
CNN2          96.24%     67.56%
-
测试条件4为噪声干扰。当测试样本处于较低的信噪比(SNR)时,其与高信噪比的测试样本会出现较大的差异,导致识别问题难度显著增加。原始MSTAR测试样本与训练样本来自相近的信噪比,不能直接用于测试SAR目标识别方法的噪声稳健性。为此,文中首先通过噪声生成的方式获得含噪测试集,验证方法的噪声稳健性。图5显示不同方法的结果,所提方法随着噪声增强可以保持更为稳定的性能。随着测试样本的信噪比降低,基于深度学习的CNN1和CNN2方法性能下降较为显著。散射中心匹配方法在噪声干扰条件下性能相对稳健,主要是属性散射中心提取过程中有效剔除了噪声的影响。图5所示结果充分验证了文中方法在噪声干扰下的性能优势。
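文中通过噪声生成构造含噪测试集;一个常见做法是按图像能量与目标信噪比添加高斯白噪声,示意如下(假设性实现,与文中具体噪声模型未必完全一致):

```python
import numpy as np

def add_noise(img, snr_db, seed=0):
    """按给定信噪比(dB)向图像添加零均值高斯白噪声,用于构造含噪测试集。"""
    signal_power = np.mean(img.astype(float) ** 2)         # 图像平均能量
    noise_power = signal_power / (10 ** (snr_db / 10.0))   # 由 SNR 反推噪声功率
    noise = np.random.default_rng(seed).normal(
        0.0, np.sqrt(noise_power), img.shape)
    return img + noise
```

对同一测试集在多个信噪比下重复识别实验,即可得到类似图5的噪声稳健性曲线。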
SAR target recognition based on image blocking and matching
-
摘要:提出基于分块匹配的合成孔径雷达(Synthetic Aperture Radar,SAR)目标识别方法。对待识别SAR图像进行4分块处理,每个分块描述目标的局部区域。对于每个分块,基于单演信号构造特征矢量,描述其时频分布以及局部细节信息。单演信号从幅度、相位以及局部方位3个层次对图像进行分解,可有效描述图像的局部变化情况,对于扩展操作条件下的目标变化分析具有重要的参考意义。对于构造得到的4个特征矢量,分别采用稀疏表示分类(Sparse Representation-based Classification,SRC)进行分类,获得相应的重构误差矢量。在此基础上,按照线性加权融合的基本思想,通过构造随机权值矩阵进行分析。对于不同权值矢量下获得的结果,经统计分析构造有效的决策变量,通过比较不同训练类别的结果,判定测试样本的类别。所提方法在特征提取和分类决策过程中充分考虑SAR图像获取条件的不确定性,通过统计分析获得最优决策结果。实验在MSTAR数据集上设置和开展,包含了1类标准操作条件和3类扩展操作条件。通过与现有几类方法的对比,有效证明了所提方法的有效性。

Abstract: A synthetic aperture radar (SAR) target recognition method based on image blocking and matching was proposed. The test SAR image was divided into four patches, each describing a local region of the target. For each patch, the monogenic signal was employed to construct a feature vector describing its time-frequency distribution and local details. The monogenic signal decomposed the input image into amplitude, phase, and local orientation components; therefore, it could reflect local variations in the image, providing more reference information for the analysis of target changes under extended operating conditions. For the four feature vectors, sparse representation-based classification (SRC) was used to produce the corresponding reconstruction error vectors. Then, based on linear weighted fusion, a random weight matrix was constructed for analysis. For the results from different weight vectors, an effective decision variable was defined through statistical analysis. By comparing the decision values of different classes, the target label of the test sample was determined. The proposed method fully considered the uncertainties of the operating conditions during SAR image measurement, and an optimal decision was made based on statistical analysis. Experiments were set up and conducted on the MSTAR dataset, including one standard operating condition and three extended operating conditions.
Compared with several existing methods, the results confirmed the validity of the proposed method.
-
Key words:
- synthetic aperture radar /
- target recognition /
- image blocking /
- monogenic signal /
- random weight
-
[1] Wu Wenda, Zhang Bao, Hong Yongfeng, et al. Design of co-aperture antenna for airborne infrared and synthetic aperture radar [J]. Chinese Optics, 2020, 13(3): 595-604. (in Chinese)
[2] Amoon M, Rez-radai G A. Automatic target recognition of synthetic aperture radar (SAR) images based on optimal selection of Zernike moment features [J]. IET Computer Vision, 2014, 8(2): 77-85. doi: 10.1049/iet-cvi.2013.0027
[3] Fu Fancheng. SAR target recognition based on target region matching [J]. Electronics Optics & Control, 2018, 25(4): 37-40. (in Chinese)
[4] Xie Qing, Zhang Hong. Multi-level SAR image enhancement based on regularization with application to target recognition [J]. Journal of Electronic Measurement and Instrumentation, 2018, 32(9): 157-162. (in Chinese)
[5] Anagnostopoulos G C. SVM-based target recognition from synthetic aperture radar images using target region outline descriptors [J]. Nonlinear Analysis, 2009, 71(2): 2934-2939.
[6] Papson S, Narayanan R M. Classification via the shadow region in SAR imagery [J]. IEEE Transactions on Aerospace and Electronic Systems, 2012, 40(8): 969-980.
[7] Mishra A K, Motaung T. Application of linear and nonlinear PCA to SAR ATR[C]//Radioelektronika, 2015: 1-6.
[8] Han Ping, Wang Huan. Research on the synthetic aperture radar target recognition based on KPCA and sparse representation [J]. Journal of Signal Processing, 2013, 29(13): 1696-1701. (in Chinese)
[9] Cui Z Y, Cao Z J, Yang J Y, et al. Target recognition in synthetic aperture radar via non-negative matrix factorization [J]. IET Radar, Sonar and Navigation, 2015(9): 1376-1385. doi: 10.1049/iet-rsn.2014.0407
[10] Li Shuai, Xu Yuelei, Ma Shiping, et al. SAR target recognition using wavelet transform and deep sparse autoencoders [J]. Video Engineering, 2014, 38(13): 31-35. (in Chinese)
[11] Dong G G, Kuang G Y, Wang N, et al. SAR target recognition via joint sparse representation of monogenic signal [J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2015, 8(7): 3316-3328. doi: 10.1109/JSTARS.2015.2436694
[12] Chang M, You X, Cao Z. Bidimensional empirical mode decomposition for SAR image feature extraction with application to target recognition [J]. IEEE Access, 2019, 7: 135720-135731.
[13] Ding Baiyuan, Wen Gongjian, Yu Liansheng, et al. Matching of attributed scattering center and its application to synthetic aperture radar automatic target recognition [J]. Journal of Radars, 2017, 6(2): 157-166. (in Chinese)
[14] Ding B Y, Wen G J, Zhong J R, et al. A robust similarity measure for attributed scattering center sets with application to SAR ATR [J]. Neurocomputing, 2017, 219: 130-143. doi: 10.1016/j.neucom.2016.09.007
[15] Liu Yang. Target recognition of SAR images based on multi-level matching of attributed scattering centers [J]. Journal of Electronic Measurement and Instrumentation, 2019, 33(11): 192-198. (in Chinese)
[16] Hao Yan, Bai Yanping, Zhang Xiaofei. Synthetic aperture radar target recognition based on KNN [J]. Fire Control & Command Control, 2018, 43(9): 113-115, 120. (in Chinese)
[17] Liu Changqing, Chen Bo, Pan Zhouhao, et al. Research on target recognition technique via simulation SAR and SVM classifier [J]. Journal of CAEIT, 2016, 11(3): 257-262. (in Chinese)
[18] Liu H C, Li S T. Decision fusion of sparse representation and support vector machine for SAR image target recognition [J]. Neurocomputing, 2013, 113: 97-104. doi: 10.1016/j.neucom.2013.01.033
[19] Xing X W, Ji K F, Zou H X, et al. Sparse representation based SAR vehicle recognition along with aspect angle [J]. The Scientific World Journal, 2014, 834140: 1-10.
[20] Zhang L, Tao Z W, Wang B J. SAR image target recognition using kernel sparse representation based on reconstruction coefficient energy maximization rule[C]//IEEE ICASSP, 2016: 2369-2373.
[21] Tan Cuimei, Xu Tingfa, Ma Xu, et al. Graph-spectral hyperspectral video restoration based on compressive sensing [J]. Chinese Optics, 2018, 11(6): 949-957. (in Chinese) doi: 10.3788/co.20181106.0949
[22] Chen S Z, Wang H P, Xu F, et al. Target classification using the deep convolutional networks for SAR images [J]. IEEE Transactions on Geoscience and Remote Sensing, 2016, 54(8): 4806-4817. doi: 10.1109/TGRS.2016.2551720
[23] Zhang Panpan, Luo Haibo, Ju Moran, et al. An improved capsule and its application in target recognition of SAR images [J]. Infrared and Laser Engineering, 2020, 49(5): 20201010. (in Chinese)
[24] Xu Ying, Gu Yu, Peng Dongliang, et al. SAR ATR based on disentangled representation learning generative adversarial networks and support vector machine [J]. Optics and Precision Engineering, 2020, 28(3): 727-735. (in Chinese) doi: 10.3788/OPE.20202803.0727