The parabolic scaling matrix A_a and the shear matrix S_s are defined as:

$$ {A_a}=\left(\begin{array}{cc}a& 0\\ 0& \sqrt{a}\end{array}\right),\quad {S_s}=\left(\begin{array}{cc}1& s\\ 0& 1\end{array}\right) $$ (1)

where a∈R⁺ and s∈R. Applying dilation, shearing, and translation to any $\psi \in {L^2}\left( {{R^2}} \right)$ yields the continuous shearlet function $ {\psi _{a,s,t}}(x) $:

$$ {\psi _{a,s,t}}\left( x \right) = {a^{ - \frac{3}{4}}}\psi \left( {A_a^{ - 1}S_s^{ - 1}\left( {x - t} \right)} \right) $$ (2)

Taking the two-dimensional Fourier transform of Eq. (2) gives the frequency-domain expression and the Parseval identity of the continuous shearlet transform:

$$ {\hat{\psi }}_{a,s,t}\left(\omega \right)={a}^{\frac{3}{4}}{{\rm{e}}}^{-2\pi i\langle \omega ,t\rangle }\hat{\psi }\left(a{\omega }_{1},\sqrt{a}\left(s{\omega }_{1}+{\omega }_{2}\right)\right) $$ (3)

$$ S{H_\psi }\left( f \right)\left( {a,s,t} \right) = \left\langle {f,{\psi _{a,s,t}}} \right\rangle = \left\langle {\hat f,{{\hat \psi }_{a,s,t}}} \right\rangle $$ (4)

where $f \in {L^2}\left( {{R^2}} \right)$. The wavelet function $ {\hat{\psi }}_{1}({\omega }_{1}) $ and the bump function $ {\hat{\psi }}_{2}({\omega }_{2}) $ are defined as:

$$ {\hat{\psi }}_{1}({\omega }_{1})=\sqrt{{b}^{2}(2{\omega }_{1})+{b}^{2}({\omega }_{1})} $$ (5)

$$ {\hat{\psi }}_{2}({\omega }_{2})=\left\{\begin{array}{l}\sqrt{v\left(1+{\omega }_{2}\right)},\;{\omega }_{2}\leqslant 0\\ \sqrt{v\left(1-{\omega }_{2}\right)},\;{\omega }_{2} > 0\end{array}\right. $$ (6)

where $ v\left( x \right) $ and $ b\left( x \right) $ are given by:

$$ v\left( x \right) = \left\{ {\begin{array}{ll} 0, & x < 0\\ 35{x^4} - 84{x^5} + 70{x^6} - 20{x^7}, & 0 \leqslant x \leqslant 1\\ 1, & x > 1 \end{array}} \right. $$ (7)

$$ b\left( x \right) = \left\{ {\begin{array}{ll} \sin \left( {\dfrac{\pi }{2}v\left( {\left| x \right| - 1} \right)} \right), & 1 \leqslant \left| x \right| \leqslant 2 \\ \cos \left( {\dfrac{\pi }{2}v\left( {\dfrac{1}{2}\left| x \right| - 1} \right)} \right), & 2 < \left| x \right| \leqslant 4 \\ 0, & \text{otherwise} \end{array}} \right. $$ (8)

$ {\hat{\psi }}_{1}({\omega }_{1}) $ and $ {\hat{\psi }}_{2}({\omega }_{2}) $ are then used to partition the frequency domain into four parts: the horizontal cone ${C^h}$, the vertical cone ${C^v}$, the seam lines between the cones ${C^ \times }$, and the low-frequency region ${C^0}$, as shown in Fig. 1. For computational convenience, the scaling parameter a, the shear parameter s, and the translation parameter t are discretized as follows:
$$ \left\{ {\begin{array}{ll} {a_j} = {2^{ - 2j}} = \dfrac{1}{{{4^j}}}, & j = 0, \cdots ,{j_0} - 1 \\ {s_{j,k}} = k{2^{ - j}}, & - {2^j} \leqslant k \leqslant {2^j} \\ {t_m} = \left( {\dfrac{{{m_1}}}{M},\dfrac{{{m_2}}}{N}} \right), & m \in \varsigma \end{array}} \right. $$ (9)

where N is the number of samples, the number of decomposition scales is ${j_0}=\left\lfloor {\left( {\log_2 N} \right)}/2 \right\rfloor$, and $\varsigma =\{ ( {m_1},{m_2}): {m_i} = 0,1, \cdots ,N - 1,\;i = 1,2\}$. The discrete shearlet $ {\hat \psi _{j,k,m}} $ can be written in the frequency domain as:

$$ {\hat \psi _{j,k,m}}\left( \omega \right) = {\hat \psi _1}\left( {{4^{ - j}}{\omega _1}} \right){\hat \psi _2}\left( {{2^j}\dfrac{{{\omega _2}}}{{{\omega _1}}} + k} \right){{\rm{e}}^{ - 2\pi i\left\langle {\omega ,m} \right\rangle / N}} $$ (10)

where $ \omega \in \{\left({\omega }_{1},{\omega }_{2}\right):{\omega }_{i}=-\left[\dfrac{N}{2}\right],\cdots ,\left[\dfrac{N}{2}\right]-1,\;i=1,2\} $. On the seam lines, where $ \left| k \right| = {2^j} $, the shearlet in the frequency domain is defined as:

$$ \hat \psi _{j,k,m}^{h \times v} = \hat \psi _{j,k,m}^h + \hat \psi _{j,k,m}^v + \hat \psi _{j,k,m}^ \times $$ (11)

so the discrete shearlets over the different regions of the full frequency domain are:

$$ SH\left( f \right)\left( {\tau ,j,k,m} \right) = \left\{ \begin{array}{ll} \left\langle {f,{\phi _m}} \right\rangle, & \tau = 0 \\ \left\langle {f,\hat \psi _{j,k,m}^\tau } \right\rangle, & \tau \in \left\{ {h,v} \right\} \\ \left\langle {f,\hat \psi _{j,k,m}^{h \times v}} \right\rangle, & \tau = \times \end{array} \right. $$ (12)

where $ j=0,\cdots ,{j}_{0}-1 $; $ -{2^j} + 1 \leqslant k \leqslant {2^j} - 1 $; $ m \in \varsigma $; and $ {\phi _m} $, $ \hat \psi _{j,k,m}^\tau $, and $ \hat \psi _{j,k,m}^{h \times v} $ denote the discrete shearlets of the low-frequency region, of the horizontal and vertical cones, and of the seam lines, respectively [16]. To illustrate the decomposition and reconstruction process of the FDST more intuitively, a test was carried out on a concentric-ring image; the procedure is shown in Fig. 2. The original image in Fig. 2 is processed with the two-dimensional fast Fourier transform, the resulting spectrum is decomposed into high- and low-frequency sub-bands by an $ l $-level non-subsampled pyramid, and the high-frequency sub-bands are further decomposed by shearlets into K levels, yielding $ \displaystyle \sum\nolimits_{k = 0}^{K - 1} {{2^{l + 2}}} $ directional sub-bands; finally, the inverse transform gives the reconstructed image. Subtracting the reconstructed image from the original gives a reconstruction error of only $8.73\times {10^{ - 17}}$, i.e., the information loss is negligible, which verifies that every stage of FDST decomposition and reconstruction preserves the information of the source image to the greatest possible extent.
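The window functions of Eqs. (7)-(8) and the wavelet window of Eq. (5) are simple to implement directly; a minimal Python sketch (function names are ours):

```python
import numpy as np

def v(x):
    """Smooth polynomial ramp of Eq. (7): 0 for x < 0, 1 for x > 1."""
    x = np.asarray(x, dtype=float)
    p = 35*x**4 - 84*x**5 + 70*x**6 - 20*x**7
    return np.where(x < 0, 0.0, np.where(x > 1, 1.0, p))

def b(x):
    """Frequency window of Eq. (8), supported on 1 <= |x| <= 4."""
    ax = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(ax)
    rise = (ax >= 1) & (ax <= 2)     # sine ramp on [1, 2]
    fall = (ax > 2) & (ax <= 4)      # cosine decay on (2, 4]
    out[rise] = np.sin(np.pi/2 * v(ax[rise] - 1))
    out[fall] = np.cos(np.pi/2 * v(ax[fall]/2 - 1))
    return out

def psi1_hat(w1):
    """Wavelet window of Eq. (5)."""
    w1 = np.asarray(w1, dtype=float)
    return np.sqrt(b(2*w1)**2 + b(w1)**2)
```

Note that b is continuous: the two branches agree at |x| = 2 (both equal 1), and b vanishes at |x| = 1 and |x| = 4, matching the zero extension outside [1, 4].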
First, the infrared image I and the visible image V are decomposed by the FDST into low-frequency sub-bands $ L_J^I $ and $ L_J^V $ and high-frequency sub-bands $ H_{l,k}^I $ and $ H_{l,k}^V $ (the k-th directional sub-band at level $ l $) at different scales and directions. The high- and low-frequency sub-bands are then processed with different strategies according to their characteristics to obtain the fused sub-bands at each scale and direction. Finally, the inverse FDST is applied to the fused low- and high-frequency sub-bands $ L_J^F $ and $ H_{l,k}^F $, yielding a fused image F with richer content and clearer, brighter details. The fusion process is shown in Fig. 3.
The low-frequency sub-bands carry the main contour and edge information of an image. Most traditional fusion methods analyze and process individual pixels while ignoring the overall characteristics of the image, which easily leads to blurred contours and missing edge information [17-18]. The basic unit of a pulse coupled neural network (PCNN) is the neuron, composed of a receptive field, a modulation field, and a pulse generator; a large number of interconnected neurons forms a feedback network. The PCNN iteration model is:

$$ \left\{ {\begin{array}{l} {F_{ij}}(n) = {I_{ij}}(n) \\ {L_{ij}}(n) = \exp ( - {\alpha _L}){L_{ij}}(n - 1) + \displaystyle \sum\limits_{p,q} {{{\boldsymbol{W}}_{ij,pq}}{Y_{pq}}} \\ {U_{ij}}(n) = {F_{ij}}(n)\left( {1 + \beta {L_{ij}}(n)} \right) \\ {\theta _{ij}}(n) = \exp ( - {\alpha _\theta }){\theta _{ij}}(n - 1) + {V_\theta }{Y_{ij}}(n) \end{array}} \right. $$ (13)

where $ n $ is the iteration index; $ {F_{ij}}(n) $ is the feedback input at the n-th iteration; $ {U_{ij}}(n) $ is the internal activity; $ {L_{ij}}(n) $ is the linking input; $ {Y_{ij}}(n) $ is the pulse output; $ {I_{ij}}(n) $ is the external stimulus; $ {\theta _{ij}}(n) $ is the dynamic threshold; $ \beta $ is the linking strength; W and $ {V_\theta } $ are the linking weight matrix and the threshold gain; and $ {\alpha _L} $ and $ {\alpha _\theta } $ are the decay time constants of the linking input and the threshold, respectively. During the iteration, whenever $ {U_{ij}}(n) > {\theta _{ij}}(n) $, the corresponding neuron $ \left( {i,j} \right) $ emits a pulse, i.e., fires once, and $ {Y_{ij}}(n) = 1 $; otherwise $ {Y_{ij}}(n)=0 $:

$$ {Y_{ij}}(n) = \left\{ {\begin{array}{l} 1,\;\;{U_{ij}}(n) > {\theta _{ij}}(n) \\ 0,\;\;{U_{ij}}(n) \leqslant {\theta _{ij}}(n) \end{array}} \right. $$ (14)

After the iterations end, the firing count $ {T_{ij}}(n) $ of each neuron is accumulated as:

$$ {T_{ij}}(n) = {T_{ij}}(n - 1) + {Y_{ij}}(n) $$ (15)

Since the PCNN exhibits spatial proximity and similarity clustering, and adjacent pixels are correlated, the gradient energies along the row and column directions obtained from a spatial-frequency function are used as the input stimulus of the PCNN, which effectively suppresses pseudo-Gibbs artifacts.
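The iteration of Eqs. (13)-(15) can be sketched as follows; the linking kernel W and all parameter values here are illustrative assumptions, not the paper's tuned settings:

```python
import numpy as np

def pcnn_fire_counts(img, n_iter=10, alpha_l=0.7, alpha_theta=0.2,
                     beta=0.2, v_theta=20.0):
    """Run the simplified PCNN of Eqs. (13)-(15); return firing counts T."""
    img = img.astype(float) / max(img.max(), 1e-12)  # normalized stimulus I
    w = np.array([[0.5, 1.0, 0.5],                   # assumed 3x3 linking kernel W
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    L = np.zeros_like(img); Y = np.zeros_like(img)
    theta = np.ones_like(img); T = np.zeros_like(img)
    for _ in range(n_iter):
        # linking input: decayed previous L plus weighted neighbor pulses
        pad = np.pad(Y, 1)
        link = sum(w[a, b] * pad[a:a + img.shape[0], b:b + img.shape[1]]
                   for a in range(3) for b in range(3))
        L = np.exp(-alpha_l) * L + link
        U = img * (1.0 + beta * L)            # modulation, Eq. (13)
        Y = (U > theta).astype(float)         # pulse output, Eq. (14)
        theta = np.exp(-alpha_theta) * theta + v_theta * Y
        T += Y                                # firing count, Eq. (15)
    return T
```

On a blank image with a single bright pixel, only that pixel's neuron fires before its threshold jumps by V_θ, so T simply records which stimuli were strong enough to trigger pulses.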
To further improve the fusion quality, this paper proposes a modified spatial frequency (MSF) strategy: the gradient energies along the main and secondary diagonals of the rectangular window are added to enrich the measured rate of change of pixel gray levels, thereby increasing the total gradient energy. Let $ C\left( {i,j} \right) $ be a low-frequency sub-band obtained by applying the FDST to the source images I and V, and let the neighborhood be a rectangular window of size $ M \times N $. The MSF is defined as:

$$ MSF = \dfrac{1}{{M \times N}}\displaystyle \sum\limits_{i = 1}^M {\displaystyle \sum\limits_{j = 1}^N {\left( {RF + CF + D{F_1} + D{F_2}} \right)} } $$ (16)

where RF, CF, DF1, and DF2 are the row, column, main-diagonal, and secondary-diagonal frequencies of the (infrared or visible) low-frequency sub-band:

$$ \left\{ {\begin{array}{l} {RF = {{\left[ {C\left( {i,j} \right) - C\left( {i,j - 1} \right)} \right]}^2}} \\ {CF = {{\left[ {C\left( {i,j} \right) - C\left( {i - 1,j} \right)} \right]}^2}} \\ {D{F_1} = {{\left[ {C\left( {i,j} \right) - C\left( {i - 1,j - 1} \right)} \right]}^2}} \\ {D{F_2} = {{\left[ {C\left( {i,j} \right) - C\left( {i - 1,j + 1} \right)} \right]}^2}} \end{array}} \right. $$ (17)

In this work the window size is $ M = N = 4 $. The linking strength $ \beta $ characterizes the variation of gray levels within the corresponding window. Most traditional fusion methods use a fixed $ \beta $, but the linking strength differs from pixel to pixel, so a fixed $ \beta $ cannot improve image quality. Since the average gradient $ \overline{G}\left(i,j\right) $ is an important measure of detail differences in an image, it is used here to adjust the linking strength $ {\beta _{ij}} $ dynamically, so that it adapts to the image features and the edge information is fully preserved. The adaptive linking strength $ {\beta _{ij}} $ is:

$$ {\beta }_{ij}=\dfrac{1}{1+\exp\left(-\overline{G}\left(i,j\right)\right)} $$ (18)

where the average gradient $\overline{G}\left(i,j\right)$ is given by:

$$\begin{split} &\overline{G}\left(i,j\right)=\dfrac{1}{\left(M-1\right)\left(N-1\right)} \displaystyle \sum _{i=1}^{M-1}\displaystyle \sum _{j=1}^{N-1}\\ &{\sqrt{\dfrac{{\left(C\left(i+1,j\right)-C\left(i,j\right)\right)}^{2}+{\left(C\left(i,j+1\right)-C\left(i,j\right)\right)}^{2}}{2}}} \end{split} $$ (19)

Hence, in the low-frequency fusion strategy, a larger average gradient $\overline{G}\left(i,j\right)$ indicates a clearer window region; by Eq. (18), the linking strength $ {\beta _{ij}} $ is then correspondingly larger, which promotes PCNN firing.
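The MSF of Eqs. (16)-(17) and the adaptive linking strength of Eqs. (18)-(19) can be sketched over a single M × N window as follows (function names are ours; boundary handling inside the window is a simplifying assumption):

```python
import numpy as np

def msf(c):
    """Modified spatial frequency of Eqs. (16)-(17) over one window c."""
    m, n = c.shape
    rf  = (c[:, 1:] - c[:, :-1]) ** 2        # row direction
    cf  = (c[1:, :] - c[:-1, :]) ** 2        # column direction
    df1 = (c[1:, 1:] - c[:-1, :-1]) ** 2     # main diagonal
    df2 = (c[1:, :-1] - c[:-1, 1:]) ** 2     # secondary diagonal
    return (rf.sum() + cf.sum() + df1.sum() + df2.sum()) / (m * n)

def avg_gradient(c):
    """Average gradient of Eq. (19) over one window c."""
    m, n = c.shape
    gx = c[1:, :-1] - c[:-1, :-1]            # vertical differences
    gy = c[:-1, 1:] - c[:-1, :-1]            # horizontal differences
    return np.sqrt((gx**2 + gy**2) / 2).sum() / ((m - 1) * (n - 1))

def link_strength(c):
    """Adaptive linking strength beta of Eq. (18)."""
    return 1.0 / (1.0 + np.exp(-avg_gradient(c)))
```

Since the average gradient is nonnegative, Eq. (18) maps it into [0.5, 1): a flat window gives β = 0.5, and sharper windows push β toward 1, strengthening the linking and promoting firing.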
The high-frequency sub-bands carry the texture and detail information of the image and mainly determine the visual effect. The traditional maximum-absolute-value rule is easily affected by noise, and this kind of single-pixel processing directly degrades the fusion result. When evaluating fusion quality, the target regions deserve more attention, and pixels or regions with large gray values carry more texture and detail. Based on the characteristics of infrared and visible light, the high-frequency sub-bands are therefore selected with a region average energy contrast strategy, as follows:

(1) Let $ {N_X}\left( {i,j} \right) $ be the region centered at pixel $ \left( {i,j} \right) $, where $ X \in \left( {I,V,H_{l,k}^I,H_{l,k}^V} \right) $. The regional energy of $ X $ over $ {N_X}\left( {i,j} \right) $ is:

$$ {E_X}\left( {i,j} \right) = \displaystyle \sum\limits_{\left( {m,n} \right) \in {N_X}\left( {i,j} \right)} {\dfrac{{{{\left| {X\left( {i + m,j + n} \right)} \right|}^2}}}{{N \times N}}} $$ (20)

where $ \left| {X\left( {i + m,j + n} \right)} \right| $ is the gray value of $ X $ at point $ \left( {i + m,j + n} \right) $.

(2) The region average energy contrasts of the infrared and visible images, $ C_{l,k}^I $ and $ C_{l,k}^V $, are computed as:

$$ C_{l,k}^I\left( {i,j} \right) = \displaystyle \sum\limits^{{N_{l,k}}} {\displaystyle \sum\limits_{m = - \left( {N - 1} \right)/2}^{\left( {N - 1} \right)/2} {\displaystyle \sum\limits_{n = - \left( {N - 1} \right)/2}^{\left( {N - 1} \right)/2} {\dfrac{{{E_{H_{l,k}^I}}\left( {i,j} \right)}}{{{E_I}\left( {i,j} \right)}}} } } $$ (21)

$$ C_{l,k}^V\left( {i,j} \right) = \displaystyle \sum\limits^{{N_{l,k}}} {\displaystyle \sum\limits_{m = - \left( {N - 1} \right)/2}^{\left( {N - 1} \right)/2} {\displaystyle \sum\limits_{n = - \left( {N - 1} \right)/2}^{\left( {N - 1} \right)/2} {\dfrac{{{E_{H_{l,k}^V}}\left( {i,j} \right)}}{{{E_V}\left( {i,j} \right)}}} } } $$ (22)

where $ {N_{l,k}} = \displaystyle \sum\limits_k {\left( {{2^{{k_l}}} + 2} \right)} $, $ l $ is the decomposition level, and $ {k_l} $ is the number of localization levels in direction $ k $ at level $ l $.

(3) The high-frequency sub-bands of the fused image are selected according to the magnitudes of the region average energy contrasts of the infrared and visible images:

$$ {F}_{l,k}\left(i,j\right)=\left\{\begin{array}{ll}{H}_{l,k}^{I}\left(i,j\right), & {C}_{l,k}^{I}\left(i,j\right)>{C}_{l,k}^{V}\left(i,j\right)\\ {H}_{l,k}^{V}\left(i,j\right), & {C}_{l,k}^{I}\left(i,j\right)\leqslant {C}_{l,k}^{V}\left(i,j\right)\end{array}\right. $$ (23)
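The selection rule can be sketched as follows. This is a simplified per-band version of Eqs. (20) and (23): the regional energy is a local mean of squared gray values, the contrast is the ratio of the high-band energy to the source-image energy (without the sum over directions in Eqs. (21)-(22)), and the window handling and names are our assumptions:

```python
import numpy as np

def region_energy(x, n=3):
    """Regional energy of Eq. (20): local mean of squared gray values."""
    r = n // 2
    pad = np.pad(x.astype(float) ** 2, r, mode='edge')
    out = np.zeros(x.shape, dtype=float)
    for dm in range(-r, r + 1):
        for dn in range(-r, r + 1):
            out += pad[r + dm:r + dm + x.shape[0], r + dn:r + dn + x.shape[1]]
    return out / (n * n)

def fuse_highband(h_ir, h_vis, src_ir, src_vis, n=3, eps=1e-12):
    """Select each coefficient by region average energy contrast, Eq. (23)."""
    c_ir  = region_energy(h_ir, n)  / (region_energy(src_ir, n)  + eps)
    c_vis = region_energy(h_vis, n) / (region_energy(src_vis, n) + eps)
    return np.where(c_ir > c_vis, h_ir, h_vis)
```

Because the contrast normalizes the high-band energy by the local energy of its own source image, a strong detail in a dark infrared region can still win over a weak detail in a bright visible region.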
To verify the effectiveness and superiority of the proposed fusion algorithm, two groups of strictly registered infrared and visible images were selected from the TNO image database for comparative fusion experiments. Experimental conditions: AMD 7502P 2.5 GHz processor, 16 GB RAM, 480 GB SSD, Windows 10, and Matlab R2020b. The results were evaluated from multiple perspectives, including subjective visual quality, objective indicators [19], and running time.
To verify the superiority of the FDST decomposition used in this work, the source images were decomposed with four transforms: the wavelet transform (DWT) of Ref. [11], the NSCT of Ref. [12], the NSST of Ref. [13], and the FDST used here. The sub-bands were fused by averaging the low-frequency sub-bands and taking the maximum absolute value of the high-frequency sub-bands, and the fused sub-bands were then reconstructed with the corresponding inverse transforms. The fusion results and four objective evaluation indicators are shown in Fig. 4 and Table 1.

Table 1. Results of objective indicators and running time

Objective indicators    DWT      NSCT     NSST     FDST
EN                      6.549    6.703    6.885    6.926
MI                      1.891    1.929    1.977    2.154
QAB/F                   0.502    0.543    0.569    0.5804
AG                      13.655   13.748   13.926   14.351
Running time/s          4.512    4.987    4.724    4.173

As Fig. 4 shows, the visible image (Fig. 4(a)) fails to capture the distant mountains and the lights on the boat, while the infrared image (Fig. 4(b)) does not render the hull clearly. Applying the four transforms with the same fusion rules to these two images produces markedly different reconstructions. The DWT result is severely distorted (with artifacts) and blurred overall, mainly because the shift variance of the DWT introduces a large amount of spurious information during decomposition. The NSCT result improves noticeably on image details, and the outlines of the sea surface and mountains begin to emerge, because the shift invariance of the NSCT preserves multi-scale and multi-directional information, but the image is still not sharp overall. The NSST result improves further: the blurred regions essentially disappear, and the outlines of the mountains and the hull are basically visible. By comparison, the FDST used in this work gives the best fusion result, presenting more details, even the overlapping mountains and the waves at the stern, although there is still room for improvement, limited by the simple fusion rules.

The objective indicators in Table 1 corroborate the subjective analysis: the fused image obtained with the FDST is the best on all four indicators (EN, MI, QAB/F, and AG), showing that it retains the most information and outperforms the other three decomposition methods. In terms of running time, the FDST is also the fastest, because it employs the fast Fourier transform and avoids the subsampling step.
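Two of the indicators in Table 1, information entropy (EN) and average gradient (AG), can be computed as follows. These are common formulations; the paper does not restate its exact indicator definitions, so this is a sketch under that assumption:

```python
import numpy as np

def entropy(img, levels=256):
    """Information entropy EN in bits, from the gray-level histogram."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())

def average_gradient(img):
    """Average gradient AG over the whole image."""
    img = img.astype(float)
    gx = img[1:, :-1] - img[:-1, :-1]  # vertical differences
    gy = img[:-1, 1:] - img[:-1, :-1]  # horizontal differences
    return float(np.sqrt((gx**2 + gy**2) / 2).mean())
```

A constant image has EN = 0 and AG = 0; an image split evenly between two gray levels has EN = 1 bit, which matches the intuition that higher EN and AG indicate richer content and sharper detail.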
To verify the superiority of the proposed fusion algorithm, another group of infrared and visible images was fused with the algorithms of Refs. [14], [16], and [20] and with the proposed algorithm. The reconstructed results and the four objective evaluation indicators are shown in Fig. 5 and Table 2, respectively.

Figure 5. Fusion results of different algorithms in different transform domains

Table 2. Results of objective indicators and running time

As Fig. 5 shows, the visible image (Fig. 5(a)) fails to capture the person and the ground light source, while the infrared image (Fig. 5(b)) does not render the sky, clouds, branches, and smoke clearly. Applying the FDST with the four different fusion algorithms to these two images yields clearly different reconstructions. The result of Ref. [14] contains blocky blur and ghosting, especially in the sky region, with severe loss of contours and details. The result of Ref. [16] improves somewhat on the former, but the overall tone is dull, the clouds appear muddled, and the person is not prominent. The result of Ref. [20] improves markedly: the ghosting essentially disappears, the contours become clear, and the person stands out, although details of the treetops, sky, houses, and clouds remain slightly blurred. The fused image obtained with the proposed method has the best subjective appearance: the contrast is vivid, the person is sharper, and the details of the treetops, houses, and clouds are richer.

The objective indicators in Table 2 show that, on the basis of the FDST, the proposed improved fusion algorithm achieves the best indicators among the compared algorithms, extracting more complementary information from the infrared and visible source images, and it also requires the shortest running time, which fully verifies its superiority.
Image fusion algorithm based on improved PCNN and average energy contrast
Abstract: To improve the visual effect and time efficiency of infrared and visible image fusion, the source images were decomposed into a series of high- and low-frequency sub-bands of the same size and different scales by the Finite Discrete Shearlet Transform (FDST). Then, in the fusion of the low-frequency sub-bands, the improved spatial frequency was used as the input excitation of a Pulse Coupled Neural Network (PCNN), and the linking strength was adjusted dynamically to adapt to the image features, fully preserving feature information such as contours and edges. In the fusion of the high-frequency sub-bands, a region average energy contrast strategy was used to highlight texture and detail information as much as possible. Finally, the processed high- and low-frequency sub-bands were reconstructed by the inverse FDST into an image with a clear background and prominent targets. Experimental results show that the proposed improved fusion method presents the background and targets in the image more clearly and comprehensively; compared with several other algorithms, it performs best on both subjective visual quality and objective indicators, and it has higher computational efficiency.
Key words:
- image fusion /
- infrared /
- visible /
- spatial frequency /
- link strength /
- average energy contrast
[1] Zhou W Z, Fan C, Hu X P, et al. Multi-scale singular value decomposition polarization image fusion defogging algorithm and experiment [J]. Chinese Optics, 2021, 14(2): 298-306. (in Chinese) doi: 10.37188/CO.2020-0099
[2] Gu M H, Wang M M, Li L Y, et al. Color image multi-scale fusion graying algorithm [J]. Computer Engineering and Applications, 2021, 54(4): 209-215. (in Chinese) doi: 10.3778/j.issn.1002-8331.1912-0319
[3] Shen Y, Huang C H, Huang F, et al. Research progress of infrared and visible image fusion technology [J]. Infrared and Laser Engineering, 2021, 50(9): 20200467. (in Chinese)
[4] Zhu H R, Liu Y Q, Zhang W Y. Night-vision image fusion based on intensity transformation and two-scale decomposition [J]. Journal of Electronics & Information Technology, 2019, 41(3): 640-648. (in Chinese) doi: 10.11999/JEIT180407
[5] Lin S, Chi K C, Li W T, et al. Underwater optical image enhancement based on dominant feature image fusion [J]. Acta Photonica Sinica, 2020, 49(3): 203-215. (in Chinese)
[6] Wang W, Zhang J E. A remote sensing image fusion algorithm based on guided filtering and shearlet sparse base [J]. Computer Engineering & Science, 2018, 40(8): 1453-1458. (in Chinese) doi: 10.3969/j.issn.1007-130X.2018.08.016
[7] Cai L M, Li X F, Tian X D. Virtual viewpoint rendering algorithm based on hierarchical image fusion [J]. Computer Engineering, 2021, 47(4): 204-210. (in Chinese)
[8] Feng X, Zhang J H, Hu K Q, et al. The infrared and visible image fusion method based on variational multiscale [J]. Acta Electronica Sinica, 2018, 46(3): 680-687. (in Chinese) doi: 10.3969/j.issn.0372-2112.2018.03.025
[9] Jiao J, Wu L D, Yu S B, et al. Image fusion method using multi-scale analysis and improved PCNN [J]. Journal of Computer-Aided Design & Computer Graphics, 2019, 31(6): 988-996. (in Chinese)
[10] Wang F, Chen Y M, Li H. Image fusion algorithm of focal region detection and TAM-SCM based on SHT domain [J]. Journal of Northwestern Polytechnical University, 2019, 37(1): 114-121. (in Chinese) doi: 10.3969/j.issn.1000-2758.2019.01.017
[11] Che M, Zhang H M, Tuo M F. Spectral image fusion based on wavelet transform and edge information [J]. Laser Journal, 2019, 40(11): 71-75. (in Chinese)
[12] Liu Z, Xu T, Song Y Q, et al. Image fusion technology based on NSCT and robust principal component analysis model with similar information [J]. Journal of Jilin University (Engineering and Technology Edition), 2018(5): 1614-1620. (in Chinese)
[13] Lin J P, Liao Y P. A novel image fusion method with fractional saliency detection and QFWA in NSST [J]. Optics and Precision Engineering, 2021, 29(6): 1406-1419. (in Chinese) doi: 10.37188/OPE.20212906.1406
[14] Bai Y J, Xiong S H, Wu X Q, et al. Infrared and visible images fusion based on FDST and MSS [J]. Science Technology and Engineering, 2017, 17(6): 215-219. (in Chinese) doi: 10.3969/j.issn.1671-1815.2017.06.038
[15] Liu Z W, Li H, Zhao Z K. Improving multi-focus image fusion algorithm with finite discrete shearlet domain [J]. Electronics Optics & Control, 2019, 26(10): 49-53, 105. (in Chinese) doi: 10.3969/j.issn.1671-637X.2019.10.11
[16] Zhou Y, Zhou Y, Wang X H. Grayscale image fusion based on finite discrete shearlet transform [J]. Computer Engineering, 2016, 42(12): 222-227. (in Chinese) doi: 10.3969/j.issn.1000-3428.2016.12.038
[17] Wang Y, Liu F, Chen Z H. Image fusion algorithm based on improved weighted method and adaptive pulse coupled neural network in shearlet domain [J]. Computer Science, 2019, 46(4): 261-267. (in Chinese) doi: 10.11896/j.issn.1002-137X.2019.04.041
[18] Wang J, Wu X S. Image fusion based on the improved sparse representation and PCNN [J]. CAAI Transactions on Intelligent Systems, 2019, 14(5): 922-928. (in Chinese)
[19] Xie W, Wang L M, Hu H J, et al. Adaptive multi-exposure image fusion with guided filtering [J]. Computer Engineering and Applications, 2019, 55(4): 193-199. (in Chinese) doi: 10.3778/j.issn.1002-8331.1711-0196
[20] Dai J D, Liu Y D, Mao X Y, et al. Infrared and visible image fusion based on FDST and dual-channel PCNN [J]. Infrared and Laser Engineering, 2019, 48(2): 0204001. (in Chinese)