Chinese Journal of Information Fusion, Volume 2, Issue 1, 2025
Academic Editor
Xiaoling Wang
East China Normal University, China
Chinese Journal of Information Fusion, Volume 2, Issue 1, 2025: 70-78

Open Access | Research Article | 27 March 2025
A Few-shot Learning Method Using Relation Graph
Zijing Liu 1,* and Chenggang Wang 1
1 No.10th Research Institute, China Electronics Technology Group Corporation, Chengdu 610036, China
* Corresponding Author: Zijing Liu, [email protected]
Received: 04 March 2025, Accepted: 23 March 2025, Published: 27 March 2025  
Abstract
Few-shot learning aims to recognize instances of novel classes from only a few labeled support samples. Many methods, however, suffer from the poor guidance of these limited novel-class samples, which are unreliable when taken as class centers. Recent works use word embeddings to enrich the novel-class distribution information, but apply only a simple mapping between visual and semantic features during training. To address these problems, we propose a method that constructs a class relation graph from semantic meaning and uses it to guide feature extraction and fusion, helping the model learn second-order relation information at a light training cost. In addition, we introduce two ways to generate pseudo prototypes that augment the scarce representations of novel classes: 1) a Generation Module (GM), which trains a small network to generate visual features from word embeddings; and 2) a Relation Module (RM) for the training-free scenario, which uses semantic class relations to generate visual features. Extensive experiments on benchmarks including miniImageNet, CIFAR-FS, and FC-100 show that our method achieves state-of-the-art results.
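As a rough illustration of the training-free Relation Module idea described in the abstract, the sketch below builds a class relation graph from class-name word embeddings and synthesizes a pseudo visual prototype for a novel class as a similarity-weighted average of base-class prototypes. The function names, the toy two-dimensional embeddings, and the plain cosine-similarity weighting are illustrative assumptions, not the paper's actual implementation.

```python
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def relation_graph(word_embeddings):
    """Class relation graph: pairwise semantic similarity between
    class-name embeddings, stored as a nested dict (self-loops kept)."""
    names = list(word_embeddings)
    return {a: {b: cosine(word_embeddings[a], word_embeddings[b])
                for b in names}
            for a in names}


def pseudo_prototype(novel_class, graph, base_prototypes):
    """Training-free pseudo prototype for a novel class: a
    similarity-weighted average of base-class visual prototypes,
    with weights read off the semantic relation graph."""
    weights = {b: graph[novel_class][b] for b in base_prototypes}
    total = sum(weights.values())
    dim = len(next(iter(base_prototypes.values())))
    proto = [0.0] * dim
    for b, w in weights.items():
        for i, x in enumerate(base_prototypes[b]):
            proto[i] += (w / total) * x
    return proto
```

In this toy setting, a novel class whose name embedding lies close to "dog" inherits a prototype dominated by the dog visual prototype, which is the intuition behind using semantic relations to compensate for missing visual samples.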

Graphical Abstract
A Few-shot Learning Method Using Relation Graph

Keywords
few-shot learning
relation graph

Data Availability Statement
Data will be made available on request.

Funding
This work was supported by the National Natural Science Foundation of China under Grant U20B2075.

Conflicts of Interest
Zijing Liu and Chenggang Wang are employees of the No.10th Research Institute, China Electronics Technology Group Corporation, Chengdu 610036, China. 

Ethical Approval and Consent to Participate
Not applicable.


Cite This Article
APA Style
Liu, Z., & Wang, C. (2025). A Few-shot Learning Method Using Relation Graph. Chinese Journal of Information Fusion, 2(1), 70–78. https://doi.org/10.62762/CJIF.2025.146072


Publisher's Note
IECE stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
CC BY Copyright © 2025 by the Author(s). Published by Institute of Emerging and Computer Engineers. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
Chinese Journal of Information Fusion

ISSN: 2998-3371 (Online) | ISSN: 2998-3363 (Print)

Email: [email protected]

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/iece/

Copyright © 2025 Institute of Emerging and Computer Engineers Inc.