Feature spaces f_1 and f_2 are established from each subband of GRWT of the multi-modality images.
where u_1 and u_2 denote the strengths of the feature spaces of image phase and coherence, respectively, which are extracted from the input multi-modality images in each subband of GRWT. C_v denotes the fusion coefficient, which is calculated for the visual
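Phase and coherence features of this kind are typically obtained from the first-order Riesz (monogenic) signal. The sketch below, a numpy/FFT implementation under the assumption that the paper uses the standard monogenic-signal construction (the function name and the direct use of a raw subband are illustrative, not the paper's exact pipeline), shows how a local-phase map such as u_1 could be computed from a subband:

```python
import numpy as np

def monogenic_phase(subband):
    """First-order Riesz transform via FFT, yielding local amplitude and
    local phase of a 2-D subband (monogenic signal). A hedged sketch only;
    GRWT's internal feature extraction may differ in detail."""
    rows, cols = subband.shape
    wy = np.fft.fftfreq(rows)[:, None]   # vertical frequencies
    wx = np.fft.fftfreq(cols)[None, :]   # horizontal frequencies
    mag = np.sqrt(wx**2 + wy**2)
    mag[0, 0] = 1.0                      # avoid division by zero at DC
    F = np.fft.fft2(subband)
    # Riesz components: frequency responses -j*wx/|w| and -j*wy/|w|.
    r1 = np.real(np.fft.ifft2(F * (-1j * wx / mag)))
    r2 = np.real(np.fft.ifft2(F * (-1j * wy / mag)))
    amplitude = np.sqrt(subband**2 + r1**2 + r2**2)
    phase = np.arctan2(np.sqrt(r1**2 + r2**2), subband)  # in [0, pi]
    return amplitude, phase
```

The phase map is contrast-invariant, which is why phase-based feature strengths are attractive for fusing multi-modality images whose intensity ranges differ.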
In this section, the proposed fusion process based on GRWT is described. The flow chart of the proposed fusion framework is presented in Fig.
Choosing the number of decomposition levels of GRWT, which determines the number of subbands.
Analyzing the multi-modality images A and B by GRWT according to Eq.(12) and Eq.(13):
DC_Ai = GRWT(A), DC_Bi = GRWT(B), (17)
where DC_Ai and DC_Bi denote the GRWT decomposition coefficients of images A and B, respectively, at scale i.
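The decomposition step of Eq.(17) can be sketched as follows. Since GRWT is not available in standard libraries, a plain 2-D Haar decomposition (implemented from scratch with numpy) stands in for it here, purely to illustrate the multi-scale coefficient structure DC_Ai, DC_Bi; the filter bank and the Riesz part of the real GRWT are of course different:

```python
import numpy as np

def haar_decompose(img, levels):
    """Multi-level 2-D Haar decomposition, a simple stand-in for GRWT.
    Returns one (LH, HL, HH) detail triple per scale, followed by the
    coarsest approximation subband."""
    coeffs = []
    approx = img.astype(float)
    for _ in range(levels):
        # Haar analysis along rows: averages a and differences d.
        a = (approx[0::2, :] + approx[1::2, :]) / 2.0
        d = (approx[0::2, :] - approx[1::2, :]) / 2.0
        # Then along columns, producing the four subbands.
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        coeffs.append((lh, hl, hh))
        approx = ll
    coeffs.append(approx)
    return coeffs

# Analyze both source images, mirroring Eq.(17).
A = np.random.rand(64, 64)
B = np.random.rand(64, 64)
DC_A = haar_decompose(A, levels=4)
DC_B = haar_decompose(B, levels=4)
```

Each call returns four detail scales plus the approximation; the fusion rule is then applied subband by subband across DC_A and DC_B.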
In our experiments, the proposed GRWT-based fusion method achieves its best performance when the order of the Riesz transform is 1 and the decomposition level is 4.
It can be seen that the GRWT-based method constructs a more complete representation of the perceived scene than the other fusion methods. The GRWT-based fusion method also strikes a balance between fusion performance and computational cost. Although the visual contrast of the Shearlet-based method is higher than that of GRWT, the numerical evaluation indicates that the Shearlet-based fusion process may destroy local structures of the scene, resulting in lower fusion performance in terms of EN, Qab/f, SSIM and FSIM.
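Of the metrics listed, the entropy (EN) of the fused image is the simplest to state: it is the Shannon entropy of the gray-level histogram, with higher values suggesting that the fused image carries more information. A minimal sketch (the bin count of 256 assumes 8-bit images):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy EN of an image in bits, computed over its
    gray-level histogram (assumed 8-bit range [0, 256))."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # discard empty bins (0*log 0 := 0)
    return -np.sum(p * np.log2(p))
```

For example, a constant image has EN = 0, while an image split evenly between two gray levels has EN = 1 bit. Qab/f, SSIM and FSIM are structural metrics and require reference comparisons beyond a single histogram.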