IECE Transactions on Intelligent Systematics, 2024, Volume 1, Issue 1: 3-9

Free Access | Research Article | 15 May 2024 | Cited: 1
1 School of Computer Science and Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
* Corresponding author: Huijun Ma, email: mahuijun@th.btbu.edu.cn
Received: 09 January 2024, Accepted: 10 May 2024, Published: 15 May 2024  

Abstract
Vision-based frontend feature tracking is the process by which a robot in motion captures images of its surroundings with a camera, extracts feature points from each frame, and matches them between pairwise frames so that changes in the robot's pose can be estimated from the variations in these points. Descriptor-based feature matching performs well under significant lighting and texture variations, but computing descriptors adds computational cost and introduces instability. This paper therefore proposes a novel approach that combines sparse optical flow tracking with Shi-Tomasi corner detection, replacing the use of descriptors. The method offers improved stability under challenging lighting and texture variations while maintaining lower computational cost. Experimental results, obtained with the OpenCV library on the Ubuntu operating system, demonstrate the algorithm's effectiveness and efficiency.
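The following is a minimal sketch, not the authors' released code, of the descriptor-free pipeline the abstract describes: Shi-Tomasi corners are detected with cv2.goodFeaturesToTrack and tracked frame-to-frame with pyramidal Lucas-Kanade sparse optical flow (cv2.calcOpticalFlowPyrLK). The input file name, corner count, window size, and re-detection threshold are illustrative assumptions, not values taken from the paper.

import cv2

cap = cv2.VideoCapture("sequence.mp4")  # hypothetical input image sequence

ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Shi-Tomasi corner detection (used in place of descriptor extraction)
feature_params = dict(maxCorners=200, qualityLevel=0.01, minDistance=10, blockSize=7)
prev_pts = cv2.goodFeaturesToTrack(prev_gray, mask=None, **feature_params)

# Pyramidal Lucas-Kanade sparse optical flow parameters (assumed values)
lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok or prev_pts is None or len(prev_pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track the previous frame's corners into the current frame (no descriptors needed)
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None, **lk_params)
    good_prev = prev_pts[status.flatten() == 1]
    good_next = next_pts[status.flatten() == 1]

    # The matched pairs (good_prev, good_next) would feed pose estimation,
    # e.g. essential-matrix recovery with RANSAC:
    # E, inliers = cv2.findEssentialMat(good_prev, good_next, K, cv2.RANSAC)

    # Re-detect corners when too many tracks have been lost
    if len(good_next) < 100:
        redetected = cv2.goodFeaturesToTrack(gray, mask=None, **feature_params)
        if redetected is not None:
            good_next = redetected

    prev_gray, prev_pts = gray, good_next.reshape(-1, 1, 2)

cap.release()

Re-detecting corners whenever the number of surviving tracks drops below a threshold is a common way to keep sparse optical flow tracking stable over long sequences; the specific threshold of 100 points here is an assumption for illustration.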

Graphical Abstract
Visual Feature Extraction and Tracking Method Based on Corner Flow Detection

Keywords
Computer Vision
Feature Tracking
Optical Flow Method
Visual Features
Visual Tracking


Cite This Article
APA Style
Li, J., Wang, B., Ma, H., Gao, L., & Fu, H. (2024). Visual Feature Extraction and Tracking Method Based on Corner Flow Detection. IECE Transactions on Intelligent Systematics, 1(1), 3–9. https://doi.org/10.62762/TIS.2024.136895

Article Metrics
Citations: Crossref: 0 | Scopus: 1 | Web of Science: 1
Article Access Statistics: Views: 1182 | PDF Downloads: 664

Publisher's Note
IECE stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
IECE or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
IECE Transactions on Intelligent Systematics

ISSN: 2998-3355 (Online) | ISSN: 2998-3320 (Print)

Email: jinxuebo@btbu.edu.cn

Portico

All published articles are preserved here permanently:
https://www.portico.org/publishers/iece/

Copyright © 2024 Institute of Emerging and Computer Engineers Inc.