Chinese Journal of Information Fusion, 2024, Volume 1, Issue 1: 33-49

Author's Talk | Free to Read | Research Article | Feature Paper | 28 May 2024

A Mimic Fusion Algorithm for Dual Channel Video Based on Possibility Distribution Synthesis Theory

1 School of Information and Communication Engineering, North University of China, Taiyuan 030051, China
* Corresponding Author: Fengbao Yang, [email protected]
Received: 12 February 2024, Accepted: 24 May 2024, Published: 28 May 2024
Cited by: 7 (Source: Web of Science), 8 (Source: Google Scholar)
Abstract
Practical fusion of infrared and visible videos often requires collaborative fusion of difference-feature information, yet existing models cannot dynamically adjust the fusion strategy according to the differences between the videos, which degrades fusion performance. To address this, a mimic fusion algorithm for infrared and visible videos based on possibility distribution synthesis theory is proposed. First, the difference features of the region of interest, together with their attributes, are quantitatively described for each frame of the dual-channel video sequence, and the main difference features of each frame are selected. Second, the Pearson correlation coefficient is used to measure the correlation between any two features, yielding a feature correlation matrix. Then, based on a similarity measure, the fusion effective degree distribution of the variables at each layer is constructed for the different difference features, and the difference-feature distributions are associated and synthesized according to possibility distribution synthesis theory. Finally, the selection of mimic variables is optimized to achieve mimic fusion of the infrared and visible videos. Experimental results show that the proposed method preserves targets and details well and is significantly superior to single fusion methods in both subjective evaluation and objective analysis.
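As an illustration of the correlation-measurement step described above, the following minimal Python sketch quantifies a few per-frame difference features between the infrared and visible channels and builds the Pearson feature correlation matrix across a short sequence. The specific feature choices (mean-brightness, contrast, and gradient-energy differences) and function names are illustrative assumptions, not the exact features used in the paper.

    import numpy as np

    def difference_features(ir_frame, vis_frame):
        """Quantify three illustrative difference features between an infrared
        and a visible frame: mean-brightness difference, contrast (standard
        deviation) difference, and gradient-energy difference. These are
        hypothetical stand-ins for the paper's difference features."""
        ir = ir_frame.astype(np.float64)
        vis = vis_frame.astype(np.float64)
        brightness = abs(ir.mean() - vis.mean())
        contrast = abs(ir.std() - vis.std())
        gy_ir, gx_ir = np.gradient(ir)
        gy_vis, gx_vis = np.gradient(vis)
        gradient_energy = abs(np.hypot(gx_ir, gy_ir).mean()
                              - np.hypot(gx_vis, gy_vis).mean())
        return np.array([brightness, contrast, gradient_energy])

    def feature_correlation_matrix(ir_frames, vis_frames):
        """Stack the per-frame difference features and return the Pearson
        correlation matrix between any two features across the sequence."""
        feats = np.array([difference_features(ir, vis)
                          for ir, vis in zip(ir_frames, vis_frames)])
        return np.corrcoef(feats, rowvar=False)

    # Example with random stand-in frames for an 8-frame dual-channel sequence.
    rng = np.random.default_rng(0)
    ir_seq = rng.integers(0, 256, size=(8, 64, 64))
    vis_seq = rng.integers(0, 256, size=(8, 64, 64))
    print(feature_correlation_matrix(ir_seq, vis_seq))

In this sketch the correlation matrix is computed over the whole sequence; in the algorithm described above, the correlation between difference features guides how the fusion effective degree distributions are associated and synthesized frame by frame.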


Keywords
Image processing
Video fusion
Mimic fusion
Possibility distribution synthesis theory

Funding
This work was supported in part by the National Natural Science Foundation of China under Grants 61972363 and 61672472, and in part by the Fundamental Research Program of Shanxi Province under Grant 202203021221104.

Cite This Article
APA Style
Guo, X., Yang, F., & Ji, L. (2024). A Mimic Fusion Algorithm for Dual Channel Video Based on Possibility Distribution Synthesis Theory. Chinese Journal of Information Fusion, 1(1), 33–49. https://doi.org/10.62762/CJIF.2024.361886


Article Metrics
Citations: Crossref 4 | Scopus 7 | Web of Science 7
Article Access Statistics: Views: 6953 | PDF Downloads: 736

Publisher's Note
IECE stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
IECE or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Chinese Journal of Information Fusion

ISSN: 2998-3371 (Online) | ISSN: 2998-3363 (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/iece/

Copyright © 2024 Institute of Emerging and Computer Engineers Inc.