IECE Journal of Image Analysis and Processing, Volume 1, Issue 1, 2025: 27-35

Open Access | Research Article | 14 March 2025
High-Quality Multi-Focus Image Fusion: A Comparative Analysis of DCT-Based Approaches with Their Variants
1 Department of Computer Science, IQRA National University, Swat 19200, Pakistan
2 Department of Computer Science, University of Engineering and Technology Mardan, Mardan 23200, Pakistan
* Corresponding Author: Sarwar Shah Khan, [email protected]
Received: 21 November 2024, Accepted: 24 December 2024, Published: 14 March 2025  
Abstract
Image fusion, especially multi-focus image fusion, plays a crucial role in digital image processing by enhancing the clarity and detail of visual content through the combination of multiple source images. Traditional spatial domain methods often suffer from issues like spectral distortion and low contrast, which have led researchers to explore frequency domain techniques such as the Discrete Cosine Transform (DCT). DCT-based methods are particularly valued for their computational efficiency, making them a strong alternative, especially in applications like image compression and fusion. This study focuses on DCT-based approaches, including variants that incorporate Singular Value Decomposition (SVD) and a combination of Correlation Coefficient with Energy-Correlation (Corr_Eng), both with and without Consistency Verification (CV). Extensive testing on multi-focus image datasets revealed that the DCT + SVD + CV method consistently delivers better results in both qualitative and quantitative assessments, indicating that integrating DCT + SVD + CV provides a powerful approach for effective and efficient image fusion.

Keywords
multi-focused
image fusion
discrete cosine transform
spatial domain and frequency domain approaches

1. Introduction

Over the past two decades, image fusion has become one of the most significant research fields in digital image processing, leading to the development of numerous approaches aimed at enhancing accuracy. The process merges two source images into a single resultant image that conveys more meaningful and informative content than any individual source image could provide on its own. In some cases, the required information can only be revealed by merging multiple images. High-quality visuals are essential in many fields, such as security, computer vision [1], medicine, the military [2], remote sensing [3], navigation guidance for pilots, and weather forecasting [4].

Multi-focus imaging is one of the essential types of image fusion and has seen extensive research interest over the last few decades. Optical lenses are limited by their depth of field, meaning that only objects at a specific distance from the lens appear sharp and in focus. As a result, in any given image, only one object will be in focus, while another object at a different distance from the lens will be out of focus and hence blurred. Several factors contribute to the extent of this blurring, including the distance from the object, the focal length, the number of lenses used, and the distance between the lens and the sensor plane [5].

Multi-focus image fusion draws on a variety of traditional and advanced approaches to produce a more informative resultant image. These approaches fall broadly into two classes: spatial domain and frequency domain approaches. Spatial domain approaches deal with the image in its original form, directly manipulating pixel values based on the scene. Frequency domain approaches, on the other hand, operate on the rate at which pixel values change across the spatial domain.

Spatial domain approaches include averaging methods, Principal Component Analysis (PCA), simple maximum and minimum methods [6], and Intensity Hue Saturation (IHS) [7]. However, these approaches often produce subpar results due to spectral distortions, leading to low-contrast images with less information [4]. Additionally, the spatial domain does not provide enough robustness or perceptual quality [8]. Frequency domain approaches, in contrast, encompass techniques like the pyramid transform, Discrete Wavelet Transform (DWT), Stationary Wavelet Transform (SWT) [9], and Discrete Cosine Transform (DCT) [10]. The benefits of the frequency domain are minimal processing complexity, ease of viewing, the ability to manipulate the frequency composition, and the easy applicability of special transform-domain properties [8].

Frequency domain methods like DWT are effective at extracting frequency information from images but fall short in providing detailed directional information. The DWT approach has limitations such as ringing artifacts, longer processing times, problems with shift variance and additive noise, and higher energy consumption [10]. To resolve these issues, the SWT was introduced: a fully shift-invariant transform that eliminates the down-sampling step of the decimated transform by up-sampling the filters, inserting zeros between the filter coefficients. This approach has a simpler architecture and provides higher time-frequency localization [11].

However, due to the aforementioned issues with frequency methods, researchers have become increasingly interested in using the DCT for multi-focus image fusion [10]. DCT-based methods are particularly efficient for transmitting and archiving images encoded in the JPEG standard. In the compressed domain, these approaches have performed better, avoiding the complex and time-consuming decoding and encoding operations required by spatial-based algorithms [12]. As a result, DCT-based multi-focus image fusion algorithms consume considerably less energy and time [10].

In this article, the focus is on DCT-based approaches, which come in many variations. These variations are thoroughly analyzed to deepen understanding of how different DCT approaches function. Moreover, the article provides a comparative analysis with advanced approaches, including DWT, SWT, and the DCT-based variations, highlighting their respective advantages and disadvantages in Table 1. The purpose of this comparison is to help new readers grasp the basic concepts and explore potential modifications for new approaches. The DCT-based methods examined are DCT + SVD, DCT + SVD + CV, DCT + Corr_Eng, and DCT + Corr_Eng + CV.

Table 1 Advantages and disadvantages of different multi-focus image fusion methods.

DWT
  Advantages:
  • DWT is an effective multi-focus image fusion approach.
  • DWT decomposes an image into various frequency sub-bands, enabling detailed analysis at different scales.
  • DWT decreases the spectral distortion in an image.
  • It is particularly effective in representing image edges and textures.
  Disadvantages:
  • Artifacts may be introduced during the decomposition and reconstruction process.
  • Because DWT is sensitive to changes in the input image, the fused image may be inaccurate or misaligned.
  • DWT preserves only the vertical and horizontal properties.
  • It suffers from ringing artifacts, which reduce the resolution of the resulting image.

SWT
  Advantages:
  • SWT produces more accurate fusion results than DWT because it is shift-invariant.
  • In multi-focus image fusion, SWT is better at preserving image edge and detail information.
  Disadvantages:
  • In real-time applications, the redundancy in SWT can be undesirable since it increases memory usage and computational cost.
  • The redundancy can sometimes lead to overfitting.
  • SWT is less efficient.

DCT
  Advantages:
  • DCT efficiently compacts most of the signal's energy into a few low-frequency components.
  • DCT is computationally inexpensive.
  • It is faster and easier to implement.
  Disadvantages:
  • In block-based DCT, block artifacts may occur.
  • DCT may be less successful at preserving high-frequency elements.

DCT + SVD
  Advantages:
  • By combining the advantages of both methods, the energy compaction of DCT and the principal feature extraction of SVD, detail preservation is improved.
  • This approach is resistant to noise and small image distortions.
  Disadvantages:
  • Adding SVD to DCT increases the computational complexity.
  • The performance heavily depends on the selection of singular values.

DCT + SVD + Consistency Verification (CV)
  Advantages:
  • Consistency verification helps ensure that the fused image maintains structural similarity to the original images.
  • This combination is effective in fusing images with complementary information.
  Disadvantages:
  • The fusion process is made more difficult by the requirement to optimize several parameters.
  • As with any multi-stage procedure, overfitting is a possibility.

DCT + Correlation Coefficient and Energy-Correlation
  Advantages:
  • By balancing the correlation and energy information between images, this approach creates a fused image that preserves structural details as well as energy properties.
  • The technique is flexible for a range of fusion tasks.
  Disadvantages:
  • The overall complexity increases when energy and correlation metrics are combined with DCT.
  • This technique may require substantial processing power.

DCT + Correlation Coefficient and Energy-Correlation + CV
  Advantages:
  • This method maximizes the retention of structural and statistical information from the input images.
  • The fused image is likely to be highly accurate, maintaining the essential features and details of the original images.
  Disadvantages:
  • The integration of multiple techniques leads to a significant increase in computational requirements.
  • The complexity of integrating and tuning multiple techniques can make the implementation challenging.

2. Multi-focus Image Fusion Approaches

Various approaches have been developed in both the spatial and frequency domains for multi-focus image fusion, with the frequency domain offering more advantages over the spatial domain [8]. Hence, this study mainly focuses on the DCT approach. Its purpose is to highlight the most effective approaches by analyzing their characteristics and quality, as well as presenting experimental results obtained from image sets.

2.1 Discrete Cosine Transform

The DCT facilitates the transformation from the spatial domain to the frequency domain, making it possible to extract detail and outline information from an image based on pixel frequencies. DCT is an effective approach for handling frequencies, offering a fast and straightforward solution by utilizing only cosine functions for the transformation. The Inverse Discrete Cosine Transform (IDCT) can then be used to reconstruct the original pixel values from the frequencies derived through DCT [20]. The DCT represents a finite sequence of data points as a sum of cosine functions oscillating at different frequencies [21]. The DCT is evaluated as follows.

The two-dimensional DCT of an N×N (usually 8×8) block x(m,n) of an image and the inverse DCT (IDCT) are defined in Eq. (1) and Eq. (3), respectively:

$$d(k,l) = \frac{2\,a(k)\,a(l)}{N} \sum_{m=0}^{N-1} \sum_{n=0}^{N-1} x(m,n) \cos\!\left[\frac{(2m+1)\pi k}{2N}\right] \cos\!\left[\frac{(2n+1)\pi l}{2N}\right] \tag{1}$$

where $k, l = 0, 1, \ldots, N-1$ and

$$a(k) = \begin{cases} \frac{1}{\sqrt{2}}, & \text{if } k = 0 \\ 1, & \text{otherwise} \end{cases} \tag{2}$$

$$x(m,n) = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} \frac{2\,a(k)\,a(l)}{N}\, d(k,l) \cos\!\left[\frac{(2m+1)\pi k}{2N}\right] \cos\!\left[\frac{(2n+1)\pi l}{2N}\right] \tag{3}$$

where $m, n = 0, 1, \ldots, N-1$.

In Eq. (1), $d(0,0)$ is the DC coefficient, i.e., the coefficient with zero frequency in both dimensions, and the remaining $d(k,l)$ values are the AC coefficients of the block, which carry the non-zero frequencies.
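
To make the block transform concrete, here is a minimal Python sketch (not from the paper) of the 8×8 block DCT and IDCT, using SciPy's orthonormal DCT-II, whose normalization matches the $a(k)$ scaling of Eq. (2):

```python
# Minimal sketch of the blockwise 2-D DCT/IDCT of Eqs. (1)-(3).
# SciPy's norm='ortho' option reproduces the a(k) normalization.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """2-D DCT-II of an image block (Eq. 1)."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    """2-D inverse DCT (Eq. 3)."""
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

# Transform a random 8x8 block and verify perfect reconstruction.
block = np.random.rand(8, 8)
d = dct2(block)                      # d[0, 0] is the DC coefficient
assert np.allclose(idct2(d), block)  # IDCT recovers the original block
```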

2.2 Singular Value Decomposition (SVD) in the DCT Domain

SVD is a mathematical technique used to factorize a matrix into three component matrices. In the context of the DCT domain for image fusion, SVD helps combine the important features of multiple images into a single, more informative image [13]. Each DCT-transformed block from Eq. (1) is decomposed using SVD:

$$A = U \Sigma V^{T} \tag{4}$$

where $A$ is the DCT-transformed matrix, $U$ and $V$ are orthogonal matrices, and $\Sigma$ is a diagonal matrix containing the singular values. The singular values from the corresponding matrices of the input images are combined using a fusion rule, such as averaging or maximum selection:

$$\Sigma_f = \frac{\Sigma_1 + \Sigma_2}{2} \tag{5}$$

where $\Sigma_f$ is the resultant singular value matrix, and $\Sigma_1$ and $\Sigma_2$ are the singular value matrices of the two input images. Inverse SVD is applied to the resultant matrices, and the IDCT of Eq. (3) then transforms the result back to the spatial domain [14]:

$$A_f = U \Sigma_f V^{T} \tag{6}$$
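
As an illustration, the following sketch fuses two DCT-coefficient blocks by averaging their singular values as in Eq. (5). The section does not specify which $U$ and $V$ are used for reconstruction; reusing those of the first block is an assumption of this sketch:

```python
# Minimal sketch of per-block SVD fusion in the DCT domain (Eqs. 4-6).
import numpy as np

def fuse_block_svd(d1, d2):
    """Fuse two DCT-coefficient blocks by averaging singular values."""
    U1, s1, V1t = np.linalg.svd(d1)   # Eq. (4): A = U Sigma V^T
    _,  s2, _   = np.linalg.svd(d2)
    s_f = (s1 + s2) / 2.0             # Eq. (5): averaging fusion rule
    return U1 @ np.diag(s_f) @ V1t    # Eq. (6): A_f = U Sigma_f V^T

# Example on two synthetic 8x8 coefficient blocks.
rng = np.random.default_rng(0)
d_fused = fuse_block_svd(rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
```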

2.3 Singular Value Decomposition (SVD) + Consistency Verification (CV) in the DCT Domain

SVD and CV are applied together in the DCT domain for multi-focus image fusion. This method enhances the fusion process by ensuring that the merged image retains the most relevant information from both input images. Using SVD in the DCT domain helps capture the frequency information effectively, while CV ensures that the resultant image is consistent and free from artifacts. The fused singular values from Eq. (5) are checked for consistency with the input images by verifying the coefficients:

$$CV(i,j) = \begin{cases} 1, & \text{if } |F_1(i,j) - F_2(i,j)| < T \\ 0, & \text{otherwise} \end{cases} \tag{7}$$

where $T$ is a threshold, and $F_1(i,j)$ and $F_2(i,j)$ are the DCT coefficients of the input images. If the difference is within the threshold, the value is kept; otherwise, it is discarded [13].
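
A minimal sketch of this verification step is shown below; the threshold value $T$ is an illustrative assumption, as the section does not fix one:

```python
# Minimal sketch of the consistency-verification rule of Eq. (7).
import numpy as np

def consistency_map(F1, F2, T=10.0):
    """Return a 0/1 map marking DCT coefficients that agree across inputs."""
    return (np.abs(F1 - F2) < T).astype(np.uint8)

rng = np.random.default_rng(1)
F1 = rng.normal(size=(8, 8))
F2 = rng.normal(size=(8, 8))
cv = consistency_map(F1, F2)
# Keep fused coefficients where the inputs are consistent, discard the rest.
fused = np.where(cv == 1, (F1 + F2) / 2.0, 0.0)
```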

2.4 Correlation Coefficient and Energy-Correlation in the DCT Domain

The Correlation Coefficient and Energy-Correlation Coefficient are computed in the DCT domain for the image fusion scenario. These metrics are essential in evaluating the quality of the resultant image, particularly in maintaining the correlation and energy consistency between the original and resultant images. Correlation coefficients measure how effectively the fused image retains the information from the source images, ensuring the quality of the resulting image. The Energy-Correlation Coefficient specifically measures how well the energy of the image (related to its contrast and brightness) is preserved during the fusion process.

The Correlation Coefficient between the original image $A$ and the fused image $F$ is computed as:

$$CC = \frac{\sum_{i=1}^{N} \sum_{j=1}^{M} \left(A(i,j) - \bar{A}\right)\left(F(i,j) - \bar{F}\right)}{\sqrt{\sum_{i=1}^{N} \sum_{j=1}^{M} \left(A(i,j) - \bar{A}\right)^2 \; \sum_{i=1}^{N} \sum_{j=1}^{M} \left(F(i,j) - \bar{F}\right)^2}}$$

where $\bar{A}$ and $\bar{F}$ are the mean values of the original and resultant images, respectively.

The energy of an image in the DCT domain can be represented as:

$$E_A = \sum_{u=0}^{N-1} \sum_{v=0}^{M-1} \left|F_A(u,v)\right|^2$$

where $E_A$ is the energy of image $A$, and $F_A(u,v)$ are its DCT coefficients.

The Energy-Correlation Coefficient between the original image $A$ and the resultant image $F$ is computed as:

$$ECC = \frac{\sum_{u=0}^{N-1} \sum_{v=0}^{M-1} \left|F_A(u,v)\right| \left|F_F(u,v)\right|}{\sqrt{\sum_{u=0}^{N-1} \sum_{v=0}^{M-1} \left|F_A(u,v)\right|^2 \; \sum_{u=0}^{N-1} \sum_{v=0}^{M-1} \left|F_F(u,v)\right|^2}}$$

where $F_A(u,v)$ and $F_F(u,v)$ are the DCT coefficients of the original and resultant images, respectively.
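
The two coefficients can be computed directly from the images and their DCT coefficients. The sketch below follows the formulas above; the random test images are placeholders:

```python
# Minimal sketch of CC (spatial domain) and ECC (DCT domain).
import numpy as np
from scipy.fftpack import dct

def dct2(x):
    return dct(dct(x, axis=0, norm='ortho'), axis=1, norm='ortho')

def correlation_coefficient(A, F):
    """CC between original image A and fused image F."""
    a, f = A - A.mean(), F - F.mean()
    return (a * f).sum() / np.sqrt((a ** 2).sum() * (f ** 2).sum())

def energy_correlation(FA, FF):
    """ECC between the DCT coefficient arrays FA and FF."""
    num = (np.abs(FA) * np.abs(FF)).sum()
    return num / np.sqrt((np.abs(FA) ** 2).sum() * (np.abs(FF) ** 2).sum())

rng = np.random.default_rng(2)
A = rng.random((64, 64))
F = A + 0.05 * rng.normal(size=(64, 64))    # a slightly perturbed "fusion"
print(correlation_coefficient(A, F))        # close to 1 for similar images
print(energy_correlation(dct2(A), dct2(F)))
```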

2.5 Correlation Coefficient and Energy-Correlation + Consistency Verification in the DCT Domain

For multi-focus image fusion in the DCT domain, the Correlation Coefficient, Energy-Correlation Coefficient, and Consistency Verification are employed together. These techniques ensure that the fused image maintains a high level of quality, retaining important information and energy consistency from the original images while producing reliable and accurate results. The DCT domain is particularly useful in this context because it allows the image's frequency components to be manipulated, making it easier to merge images effectively. The Correlation Coefficient calculates how well the fused image corresponds with the original images, ensuring that crucial information is retained. The Energy-Correlation Coefficient evaluates how well the image's energy (related to its contrast and brightness) is retained. Finally, Consistency Verification maintains the integrity of the fused image by preventing artifacts or inconsistencies from being introduced during the fusion process:

$$CV(i,j) = \begin{cases} 1, & \text{if } |F_1(i,j) - F_2(i,j)| < T \\ 0, & \text{otherwise} \end{cases} \tag{8}$$

where $T$ is a threshold, and $F_1(i,j)$ and $F_2(i,j)$ are the DCT coefficients of the input images. If the difference is within the threshold, the value is kept; otherwise, it is discarded [13].

3. Experiments

3.1 Performance Metrics

Entropy is one of the most commonly used metrics for evaluating the information content of the resultant image; higher values indicate better results:

$$E = -\sum_{k=0}^{G-1} S_k \log_2 S_k$$

where $G$ is the number of gray levels and $S_k$ is the probability of gray level $k$.

Correlation Coefficient (Corr) measures the correlation and the similarity of spectral features between the reference and resultant images. The best value is close to +1, which indicates that the reference and resultant images are similar; for dissimilar images the value is closer to zero [16].

$$Corr = \frac{2\,C_{zp}}{C_z + C_p}$$

$$C_{zp} = \sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a,b)\, I_p(a,b)$$

$$C_z = \sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a,b)^2$$

$$C_p = \sum_{a=1}^{M} \sum_{b=1}^{N} I_p(a,b)^2$$

where $I_z$ and $I_p$ denote the reference and resultant images, respectively.

Signal-to-Noise Ratio (SNR) measures the ratio between the information and the noise of the resultant image. Higher SNR values indicate that the reference and resultant images are similar [17].

$$SNR = 10 \log_{10}\!\left(\frac{\sum_{a=1}^{M} \sum_{b=1}^{N} I_z(a,b)^2}{\sum_{a=1}^{M} \sum_{b=1}^{N} \left(I_z(a,b) - I_p(a,b)\right)^2}\right)$$

Peak Signal-to-Noise Ratio (PSNR) is a widely used performance metric, computed from the number of gray levels in the image and the mean squared difference between corresponding pixels of the reference and resultant images. Higher values indicate that the resultant and reference images are similar [18, 19].

$$PSNR = 20 \log_{10}\!\left(\frac{G^2}{\frac{1}{M \times N} \sum_{a=1}^{M} \sum_{b=1}^{N} \left(I_z(a,b) - I_p(a,b)\right)^2}\right)$$
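
For reference, here is a minimal sketch of the four metrics as defined above, with $I_z$ the reference image, $I_p$ the resultant image, and $G = 256$ gray levels assumed; note that the PSNR expression deliberately follows the form given in the text:

```python
# Minimal sketch of the four evaluation metrics defined above.
import numpy as np

def entropy(img, G=256):
    hist, _ = np.histogram(img, bins=G, range=(0, G))
    p = hist / hist.sum()
    p = p[p > 0]                        # skip empty bins (0 * log 0 := 0)
    return -(p * np.log2(p)).sum()

def corr(Iz, Ip):
    return 2 * (Iz * Ip).sum() / ((Iz ** 2).sum() + (Ip ** 2).sum())

def snr(Iz, Ip):
    return 10 * np.log10((Iz ** 2).sum() / ((Iz - Ip) ** 2).sum())

def psnr(Iz, Ip, G=256):
    mse = ((Iz - Ip) ** 2).mean()
    return 20 * np.log10(G ** 2 / mse)  # as written in the text above

rng = np.random.default_rng(3)
Iz = rng.integers(0, 256, (64, 64)).astype(float)
Ip = np.clip(Iz + rng.normal(0, 2.0, (64, 64)), 0, 255)
print(entropy(Ip), corr(Iz, Ip), snr(Iz, Ip), psnr(Iz, Ip))
```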

Figure 1 The fused images and error images of different frequency domain methods on disk image set.

3.2 Results and Discussion

In this study, we conducted a comparative analysis of various image fusion techniques, including DWT, SWT, and DCT, along with several variations of DCT-based methods. The specific DCT variations explored are DCT + SVD, DCT + SVD + CV, DCT + Correlation and Energy (DCT + Corr_Eng), and DCT + Correlation and Energy with CV (DCT + Corr_Eng + CV). The effectiveness of these fusion techniques was rigorously assessed through three types of evaluation: qualitative error image (QEI) analysis, quantitative measures, and qualitative assessment. The experiments are performed on the test image set "Clocks"; the grayscale image sets are provided by the Lytro multi-focus dataset [15].

To evaluate the quantitative performance, we utilized four specific metrics: entropy, SNR (Signal-to-Noise Ratio), PSNR (Peak Signal-to-Noise Ratio), and correlation. These were selected for their effectiveness in capturing various dimensions of fusion quality. Experiments were conducted on widely recognized multi-focus image datasets, specifically the disk dataset, with each image in the set having a resolution of 520×520 pixels.

For the image fusion process, we focused on fusing two images at a time, although the algorithms used are flexible enough to handle more than two multi-focus images. Additionally, we used the qualitative error image (QEI) technique to evaluate the fusion results. The QEI is essentially a difference image obtained by subtracting the resultant fused image from a reference image [10]. The less visible the QEI, the closer the fused image is to the reference image, indicating better fusion quality. This method provides a clear visual indication of how well the fusion process has preserved the important features of the original images [4].
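
In code, a QEI is simply an absolute difference image; a minimal sketch with illustrative array names:

```python
# Minimal sketch of a qualitative error image (QEI).
import numpy as np

rng = np.random.default_rng(4)
reference = rng.random((64, 64))                      # placeholder reference
fused = reference + 0.01 * rng.normal(size=(64, 64))  # placeholder fusion
qei = np.abs(reference - fused)  # fainter QEI => fused image closer to reference
```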

Table 2 The quantitative results of the Disk image set.

Metrics     | DWT     | SWT     | DCT     | DCT+Corr_Eng | DCT+Corr_Eng+CV | DCT+SVD | DCT+SVD+CV
Entropy     | 0.1790  | 0.1795  | 0.1834  | 0.1838       | 0.1675          | 0.1656  | 0.1690
SNR         | 15.6230 | 15.9438 | 15.6593 | 15.6855      | 15.5727         | 15.5455 | 15.7767
PSNR        | 35.9750 | 36.1354 | 35.9931 | 36.0062      | 35.9498         | 35.9363 | 36.0519
Correlation | 0.9882  | 0.9891  | 0.9883  | 0.9884       | 0.9881          | 0.9880  | 0.9886

The qualitative results for all the fusion methods are illustrated in Figure 1, where we showcase both the final fused images and the corresponding error (or difference) images for each technique. While at first glance, the fused images across all methods may appear quite similar, with only minor differences, the true distinctions become evident when examining the qualitative error images. Notably, the fusion results and error images generated by the extended DCT + SVD approach are particularly impressive. It is clear that the difference image resulting from the DCT + SVD + CV method is more informative compared to DCT + SVD alone.

In our observations, when we closely examine and compare the error images in Figure 1 for all the fusion methods, it is evident that the qualitative performance of the DCT + SVD + CV method stands out. This approach consistently produces superior results, highlighting the effectiveness of incorporating CV into the fusion process.

The quantitative results, evaluated using four different metrics, reveal some intriguing insights. The statistical values across all the fusion methods are very close, indicating that each method performs well. However, as shown in Table 2, the DCT + SVD + CV method stands out as the best performer among the DCT-based approaches for the disk image set. When we combine both visual and statistical assessments, it becomes clear that the DCT + SVD + CV method is the most effective among all the fusion techniques considered, consistently producing higher-quality fused images.

4. Conclusion

This study offers an in-depth examination of various image fusion techniques, with a particular focus on methods based on the Discrete Cosine Transform (DCT) and its advanced variations. It emphasizes the effectiveness of DCT in the frequency domain, especially for multi-focus image fusion when combined with Singular Value Decomposition (SVD) and Consistency Verification (CV). The results demonstrate that the DCT + SVD + CV method consistently outperforms others in both qualitative and quantitative assessments, making it the most effective for achieving high-quality image fusion. Experimental findings suggest that while techniques such as Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT) have their advantages, DCT-based methods, particularly when enhanced with SVD and CV, offer a more balanced approach to preserving image details and structural integrity. The DCT + SVD + CV technique stands out for its ability to retain essential image information while reducing artifacts, leading to superior overall fusion quality.

Future research could explore several avenues for improving DCT-based fusion techniques. One potential area is the optimization of threshold values in Consistency Verification to adaptively enhance fusion accuracy for different image types. Additionally, incorporating machine learning models with DCT-based methods could automate the fusion process, potentially yielding even better results. Further research could also investigate extending these fusion techniques to handle more than two input images and applying them to different image modalities, such as infrared and visible light fusion, to test their robustness and versatility in various scenarios.


Data Availability Statement
Data will be made available on request.

Funding
This work received no external funding.

Conflicts of Interest
The authors declare no conflicts of interest. 

Ethical Approval and Consent to Participate
Not applicable.

References
  1. Farid, M.S., Mahmood, A., & Al-Maadeed, S.A. (2019). Multi-focus image fusion using content adaptive blurring. Information Fusion, 45, 96–112.
    [CrossRef]   [Google Scholar]
  2. Muller, A. C., & Narayanan, S. (2009). Cognitively-engineered multisensor image fusion for military applications. Information Fusion, 10(2), 137-149.
    [CrossRef]   [Google Scholar]
  3. Wang, J., Lu, T., Huang, X., Zhang, R., & Feng, X. (2024). Pan-sharpening via conditional invertible neural network. Information Fusion, 101, 101980.
    [CrossRef]   [Google Scholar]
  4. Bovith, T., Nielsen, A., Hansen, L., Overgaard, S., & Gill, R. (2006, July). Detecting weather radar clutter by information fusion with satellite images and numerical weather prediction model output. In 2006 IEEE International Symposium on Geoscience and Remote Sensing (pp. 511-514). IEEE.
    [CrossRef]   [Google Scholar]
  5. Zhang, S., Shen, X., Lin, Z., Měch, R., Costeira, J. P., & Moura, J. M. (2018). Learning to understand image blur. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6586-6595).
    [Google Scholar]
  6. Morris, C., & Rajesh, R. S. (2014, December). A novel and improved Spatial domain fusion method using Simple—PCA techniques. In 2014 International Conference on Communication and Network Technologies (pp. 90-94). IEEE.
    [CrossRef]   [Google Scholar]
  7. Singh, G., Khosla, A., & Anwar, M. I. (2016, February). Spatial domain color image enhancement based on local processing. In 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN) (pp. 265-269). IEEE.
    [CrossRef]   [Google Scholar]
  8. Ackar, H., Abd Almisreb, A., & Saleh, M.A. (2019). A review on image enhancement techniques. Southeast Europe Journal of Soft Computing, 8(1).
    [Google Scholar]
  9. Khan, S.S., Khan, M., Alharbi, Y., Haider, U., Ullah, K., & Haider, S. (2021). Hybrid Sharpening Transformation Approach for Multifocus Image Fusion Using Medical and Nonmedical Images. Journal of Healthcare Engineering, 2021.
    [CrossRef]   [Google Scholar]
  10. Amin-Naji, M., & Aghagolzadeh, A. (2018). Multi-Focus Image Fusion in DCT Domain using Variance and Energy of Laplacian and Correlation Coefficient for Visual Sensor Networks. Journal of AI and Data Mining, 6(2), 233–250.
    [CrossRef]   [Google Scholar]
  11. Gharbia, R., Hassanien, A.E., El-Baz, A.H., Elhoseny, M., & Gunasekaran, M. (2018). Multi-spectral and panchromatic image fusion approach using stationary wavelet transform and swarm flower pollination optimization for remote sensing applications. Future Generation Computer Systems, 88(11), 501–511.
    [CrossRef]   [Google Scholar]
  12. Tang, J., Peli, E., & Acton, S. (2003). Image enhancement using a contrast measure in the compressed domain. IEEE Signal Processing Letters, 10(10), 289–292.
    [CrossRef]   [Google Scholar]
  13. Amin-Naji, M., Ranjbar-Noiey, P., & Aghagolzadeh, A. (2017). Multi-focus image fusion using singular value decomposition in DCT domain. 2017 10th Iranian Conference on Machine Vision and Image Processing (MVIP), 45–51.
    [Google Scholar]
  14. Rajakumar, C., & Satheeskumaran, S. (2022). Singular value decomposition and saliency-map based image fusion for visible and infrared images. International Journal of Image and Data Fusion, 13(1), 21–43.
    [CrossRef]   [Google Scholar]
  15. Nejati, Mansour. (2016). Lytro Multi-focus Image Dataset.
    [CrossRef]   [Google Scholar]
  16. Karunasingha, D. S. K. (2022). Root mean square error or mean absolute error? Use their ratio as well. Information Sciences, 585, 609-629.
    [CrossRef]   [Google Scholar]
  17. Moushmi, S., Sowmya, V., & Soman, K. P. (2016). Empirical wavelet transform for multifocus image fusion. In Proceedings of the International Conference on Soft Computing Systems: ICSCS 2015, Volume 1 (pp. 257-263). Springer India.
    [CrossRef]   [Google Scholar]
  18. Shah, M., et al. (2023). Multi-Focus Image Fusion using Unsharp Masking with Discrete Cosine Transform, 1–5.
    [Google Scholar]
  19. Khan, S.S., Ran, Q., & Khan, M. (2020). Image pan-sharpening using enhancement based approaches in remote sensing. Multimedia Tools and Applications, 79(43), 32791-32805.
    [CrossRef]   [Google Scholar]
  20. Kumar, S., & B. K. (2013). Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal, Image and Video Processing, 7, 1125-1143.
    [CrossRef]   [Google Scholar]
  21. Lee, J., Vijaykrishnan, N., Irwin, M. J., & Radhakrishnan, R. (2004). Inverse discrete cosine transform architecture exploiting sparseness and symmetry properties. IEEE Workshop on Signal Processing Systems (SIPS), 361–366.
    [CrossRef]   [Google Scholar]

Cite This Article
APA Style
Osama, M., Khan, S.S., Khan, S., Ahmad, S., Mehmood, G., & Ali, I. (2025). High-Quality Multi-Focus Image Fusion: A Comparative Analysis of DCT-Based Approaches with Their Variants. IECE Journal of Image Analysis and Processing, 1(1), 27–35. https://doi.org/10.62762/JIAP.2024.764051


Publisher's Note
IECE stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
CC BY Copyright © 2025 by the Author(s). Published by Institute of Emerging and Computer Engineers. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.