Shenglun Yi
Department of Information Engineering, University of Padua, Italy
Roles: Academic Editor, Author
Contributions by role: Author 4 | Reviewer 2 | Editor 2
IECE Contributions

Free Access | Research Article | 18 December 2024
Adaptive Tunable Predefined-Time Backstepping Control for Uncertain Robotic Manipulators
IECE Transactions on Sensing, Communication, and Control | Volume 1, Issue 2: 126-135, 2024 | DOI:10.62762/TSCC.2024.672831
Abstract
In engineering applications, high-precision tracking control is crucial for robotic manipulators to successfully complete complex operational tasks. To achieve this goal, this study proposes an adaptive tunable predefined-time backstepping control strategy for uncertain robotic manipulators subject to external disturbances and model uncertainties. By establishing a novel practical predefined-time stability criterion, a tunable predefined-time backstepping controller is systematically derived, allowing the upper bound of the tracking-error settling time to be precisely set by adjusting only one control parameter. To accurately compensate for the lumped uncertainty, two updating laws are designed: a fuzz…
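For context, practical predefined-time stability results of this kind are typically built on a Lyapunov inequality in which a single user-chosen constant bounds the settling time. The condition below is a common form from the predefined-time control literature, given purely for illustration; it is not necessarily the exact criterion established in this paper, and the symbols $T_c$, $p$, and $\delta$ are generic placeholders.

\dot{V}(x) \le -\frac{\pi}{p\,T_c}\left( V(x)^{1-\frac{p}{2}} + V(x)^{1+\frac{p}{2}} \right) + \delta, \qquad 0 < p < 2,\quad T_c > 0,\quad \delta \ge 0.

Here $T_c$ is the tunable settling-time bound: with $\delta = 0$ the Lyapunov function $V$ reaches zero within $T_c$ from any initial state, while $\delta > 0$ gives the practical variant, in which the tracking error converges to a residual neighborhood of the origin within $T_c$.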

Graphical Abstract

Free Access | Research Article | 30 October 2024
Enhanced Recognition for Finger Gesture-Based Control in Humanoid Robots Using Inertial Sensors
IECE Transactions on Sensing, Communication, and Control | Volume 1, Issue 2: 89-100, 2024 | DOI:10.62762/TSCC.2024.805710
Abstract
Humanoid robots play an important role in many fields. Efficient and intuitive control input is therefore critically important and, in many cases, must be provided remotely. In this paper, we investigate the potential advantages of inertial sensors as a key element of command-signal generation for humanoid robot control systems. The goal is to use inertial sensors to detect precisely when and how the user is moving, which enables precise control commands. Finger gestures are first captured as signals from the inertial sensor. Movement commands are extracted from these signals through filtering and recognition, and are subsequently translated into robot movements according to the attitud…
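As a rough illustration of the pipeline sketched in the abstract, the Python snippet below filters raw gyroscope data, detects when the finger is moving, and maps a toy gesture segment to a robot command. All function names, thresholds, and the command table are hypothetical placeholders for illustration, not the authors' implementation.

import numpy as np

def moving_average(signal, window=5):
    # Simple low-pass filter to suppress high-frequency sensor noise.
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def detect_motion(gyro_norm, threshold=0.8):
    # Boolean mask marking samples where the finger is actually moving.
    return moving_average(gyro_norm) > threshold

def classify_gesture(segment):
    # Toy recognizer: the dominant rotation axis and its sign select the command.
    axis = int(np.argmax(np.abs(segment).mean(axis=0)))
    direction = float(np.sign(segment[:, axis].mean()))
    commands = {(0, 1.0): "turn_right", (0, -1.0): "turn_left",
                (1, 1.0): "walk_forward", (1, -1.0): "walk_backward"}
    return commands.get((axis, direction), "stop")

# Usage with synthetic gyroscope data (200 samples x 3 axes, rad/s):
gyro = np.random.randn(200, 3) * 0.1
gyro[80:120, 1] += 2.0                          # injected finger "flick"
moving = detect_motion(np.linalg.norm(gyro, axis=1))
if moving.any():
    print(classify_gesture(gyro[moving]))       # -> "walk_forward" for this toy signal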

Graphical Abstract

Free Access | Research Article | 25 October 2024
Spatio-temporal Feature Soft Correlation Concatenation Aggregation Structure for Video Action Recognition Networks
IECE Transactions on Sensing, Communication, and Control | Volume 1, Issue 1: 60-71, 2024 | DOI:10.62762/TSCC.2024.212751
Abstract
The efficient extraction and fusion of video features to accurately identify complex and similar actions has long been a significant research challenge in the field of video action recognition. While adept at feature extraction, prevailing methodologies for video action recognition frequently exhibit suboptimal performance in complex scenes and for similar actions. This shortcoming arises primarily from their reliance on uni-dimensional feature extraction, which overlooks the interrelations among features and the significance of multi-dimensional fusion. To address this issue, this paper introduces an innovative framework predicated upon a soft correlation strategy…
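To make the fusion idea concrete, the sketch below shows one generic way to aggregate several feature streams with learned soft weights instead of a plain concatenation. The module name, dimensions, and scoring scheme are assumptions made for illustration; they do not reproduce the paper's exact soft correlation concatenation aggregation structure.

import torch
import torch.nn as nn

class SoftFusion(nn.Module):
    # Hypothetical module: projects each feature stream to a common width and
    # combines the streams with softmax-normalized relevance weights.
    def __init__(self, dims, out_dim):
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, out_dim) for d in dims)
        self.score = nn.Linear(out_dim, 1)    # per-stream relevance score

    def forward(self, feats):                 # feats: list of (B, d_i) tensors
        z = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)  # (B, S, out_dim)
        w = torch.softmax(self.score(z), dim=1)                           # (B, S, 1)
        return (w * z).sum(dim=1)                                         # (B, out_dim)

# Usage: fuse a 512-d spatial and a 256-d temporal descriptor into one 128-d vector.
fusion = SoftFusion([512, 256], 128)
print(fusion([torch.randn(4, 512), torch.randn(4, 256)]).shape)   # torch.Size([4, 128])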

Graphical Abstract

Code (Data) Available | Free Access | Research Article | Feature Paper | 09 August 2024
LI3D-BiLSTM: A Lightweight Inception-3D Networks with BiLSTM for Video Action Recognition
IECE Transactions on Emerging Topics in Artificial Intelligence | Volume 1, Issue 1: 58-70, 2024 | DOI:10.62762/TETAI.2024.628205
Abstract
This paper proposes an improved video action recognition method consisting of three key components. Firstly, in the data preprocessing stage, we develop multi-temporal-scale video frame extraction and multi-spatial-scale video cropping techniques to enhance content information and standardize input formats. Secondly, we propose a lightweight Inception-3D (LI3D) network structure for spatio-temporal feature extraction and design a soft-association feature aggregation module to improve the recognition accuracy of key actions in videos. Lastly, we employ a bidirectional LSTM network to contextualize the feature sequences extracted by LI3D, enhancing the representation capa…
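The following sketch mirrors the overall pipeline described above: a compact 3D-convolutional backbone extracts per-clip spatio-temporal features, and a bidirectional LSTM contextualizes the clip sequence before classification. The backbone here is a minimal stand-in with hypothetical layer sizes, not the published LI3D network.

import torch
import torch.nn as nn

class Conv3DBiLSTM(nn.Module):
    def __init__(self, num_classes, feat_dim=128, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(        # minimal stand-in for the lightweight 3D blocks
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, clips):                 # clips: (B, T, 3, frames, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)   # (B, T, feat_dim)
        out, _ = self.bilstm(feats)                                  # (B, T, 2*hidden)
        return self.head(out[:, -1])                                 # (B, num_classes) logits

# Usage: 2 videos, 4 clips each, 8 frames of 32x32 RGB per clip.
model = Conv3DBiLSTM(num_classes=10)
print(model(torch.randn(2, 4, 3, 8, 32, 32)).shape)   # torch.Size([2, 10])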

Graphical Abstract

Free Access | Research Article | 29 May 2024 | Cited: 6
Parameter Adaptive Non-Model-Based State Estimation Combining Attention Mechanism and LSTM
IECE Transactions on Intelligent Systematics | Volume 1, Issue 1: 40-48, 2024 | DOI:10.62762/TIS.2024.137329
Abstract
Nowadays, state estimation is widely used in fields such as autonomous driving and drone navigation. However, in practical applications it is difficult to obtain accurate target motion models and noise covariances, which degrades the estimation accuracy of traditional Kalman filters. To address this issue, this paper proposes an adaptive, model-free state estimation method based on an attention parameter-learning module. The method combines a Transformer encoder with a Long Short-Term Memory (LSTM) network and learns the system's operational characteristics offline from measurement data, without modeling the system dynamics or measurement characteristics. In addition,…
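As a schematic of the architecture the abstract outlines, the snippet below couples a Transformer encoder (attention over a window of measurements) with an LSTM that regresses the state sequence, so the estimator can be trained offline from measurement data without an explicit motion or measurement model. The class name, dimensions, and layer counts are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class AttnLSTMEstimator(nn.Module):
    def __init__(self, meas_dim, state_dim, d_model=32, hidden=32):
        super().__init__()
        self.embed = nn.Linear(meas_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lstm = nn.LSTM(d_model, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, z):                  # z: (B, T, meas_dim) noisy measurements
        h = self.encoder(self.embed(z))    # attention over the measurement window
        out, _ = self.lstm(h)              # temporal smoothing of the attended features
        return self.head(out)              # (B, T, state_dim) estimated states

# Usage: estimate a 4-D state (2-D position and velocity) from 2-D position measurements.
estimator = AttnLSTMEstimator(meas_dim=2, state_dim=4)
print(estimator(torch.randn(8, 50, 2)).shape)   # torch.Size([8, 50, 4])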

Graphical Abstract