IECE Transactions on Sensing, Communication, and Control
ISSN: 3065-7431 (Online) | ISSN: 3065-7423 (Print)
Email: [email protected]
[1] Kong, J., Wang, H., Wang, X., Jin, X., Fang, X., & Lin, S. (2021). Multi-stream hybrid architecture based on cross-level fusion strategy for fine-grained crop species recognition in precision agriculture. Computers and Electronics in Agriculture, 185, 106134.
[2] Li, J., Wang, B., Ma, H., Gao, L., & Fu, H. (2024). Visual feature extraction and tracking method based on corner flow detection. IECE Transactions on Intelligent Systematics, 1(1), 3-9.
[3] Jin, X., Tong, A., Ge, X., Ma, H., Li, J., Fu, H., & Gao, L. (2024). YOLOv7-Bw: A dense small object efficient detector based on remote sensing image. IECE Transactions on Intelligent Systematics, 1(1), 30-39.
[4] Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., & Van Gool, L. (2016). Temporal segment networks: Towards good practices for deep action recognition. In European Conference on Computer Vision (pp. 20-36). Springer, Cham.
[5] Yang, Z., An, G., Zhang, R., Zheng, Z., & Ruan, Q. (2023). SRI3D: Two-stream inflated 3D ConvNet based on sparse regularization for action recognition. IET Image Processing, 17(5), 1438-1448.
[6] Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. (2015). Learning spatiotemporal features with 3D convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 4489-4497).
[7] Carreira, J., & Zisserman, A. (2017). Quo vadis, action recognition? A new model and the Kinetics dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 6299-6308).
[8] Feichtenhofer, C., Fan, H., Malik, J., & He, K. (2019). SlowFast networks for video recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6202-6211).
[9] Saha, A., Mazumdar, M., & Ghosh, A. (2019). Human motion recognition using CNN and SVM. Journal of Ambient Intelligence and Humanized Computing, 10(4), 1561-1574.
[10] Karpathy, A., & Fei-Fei, L. (2015). Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3128-3137).
[11] Funke, I., Bodenstedt, S., Oehme, F., von Bechtolsheim, F., Weitz, J., & Speidel, S. (2019, October). Using 3D convolutional neural networks to learn spatiotemporal features for automatic surgical gesture recognition in video. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 467-475). Cham: Springer International Publishing.
[12] Zha, Z., Wang, Y., & Wu, X. (2019). A comparative study of convolutional neural networks for video action recognition. Journal of Visual Communication and Image Representation, 58, 951-960.
[13] Donahue, J., Anne Hendricks, L., Guadarrama, S., Rohrbach, M., Venugopalan, S., Saenko, K., & Darrell, T. (2015). Long-term recurrent convolutional networks for visual recognition and description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2625-2634).
[14] Zou, J., Wang, D., & Li, X. (2020). Adaptive regularization for CNNs. Neural Networks, 134, 151-159.
[15] Bertasius, G., Wang, H., & Torresani, L. (2021, July). Is space-time attention all you need for video understanding? In ICML (Vol. 2, No. 3, p. 4).
[16] Fan, H., Xie, L., & Wang, Z. (2021). Multiscale vision transformer for video understanding. International Journal of Computer Vision, 29(2), 129-142.
[17] Fu, L., & Laterveer, R. (2023). Special Cubic Four-Folds, K3 Surfaces, and the Franchetta Property. International Mathematics Research Notices, 2023(10), 8872-8902.
[18] Yang, J., & Yu, H. (2022). Temporal shift attention for action recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 121-130).
Portico
All published articles are preserved here permanently:
https://www.portico.org/publishers/iece/