IECE Transactions on Intelligent Systematics, Volume 2, Issue 2, 2025: 76-84

Research Article | 14 April 2025
Iterative Estimation Algorithm for Bilinear Stochastic Systems by Using the Newton Search
1 College of Engineering, Zhejiang Normal University, Jinhua 321004, China
2 School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China
* Corresponding Author: Feng Ding, [email protected]
Received: 02 November 2024, Accepted: 26 March 2025, Published: 14 April 2025  
Abstract
This study addresses the challenge of estimating parameters iteratively in bilinear state-space systems affected by stochastic noise. A Newton iterative (NI) algorithm is introduced by utilizing the Newton search and iterative identification theory for identifying the system parameters. Following the estimation of the unknown parameters, we create a bilinear state observer (BSO) using the Kalman filtering principle for state estimation. Subsequently, we propose the BSO-NI algorithm for simultaneous parameter and state estimation. An iterative algorithm based on gradients is given for comparisons to illustrate the effectiveness of the proposed algorithms.

Keywords
Newton search
bilinear system
parameter estimation
system identification
iterative method

1. Introduction

System identification involves the theory and techniques used to explore and develop mathematical models of static and dynamic systems based on observation data [1, 2]. These systems can be linear, bilinear, or nonlinear [3, 4], with nonlinear systems frequently appearing in industrial applications. As a result, the need for effective modeling techniques for these systems has grown significantly in fields such as signal processing, system analysis, and control. Bilinear models, in particular, are advantageous for industrial applications, as they more accurately capture the nonlinear characteristics of systems compared to linear models. This capability makes them essential in various industrial processes, including heat exchangers, nuclear reactors, and chemical operations.

Some classical identification methods have been applied in the parameter estimation of bilinear systems. For instance, gradient identification techniques have been developed by minimizing criterion functions through negative gradient search [5, 6]. An et al. [8] proposed a multi-innovation gradient-based iterative algorithm for parameter estimation. Additionally, a maximum likelihood multi-innovation stochastic gradient algorithm was proposed for bilinear systems affected by colored noise. Least squares methods are foundational for identifying linear-parameter systems and have been widely utilized across various fields. In prior research, we presented a least squares-based iterative (LSI) algorithm to estimate unknown system parameters [9].

Recently, several new ideas, theories, and principles have emerged in system identification for constructing mathematical models and determining model parameters. Notable among these are the multi-innovation identification theory, hierarchical identification principle, and filtering identification concept, all of which contribute to the advancement of system identification. The multi-innovation identification theory enhances the accuracy of estimating parameters. The hierarchical identification principle significantly improves the computational efficiency of identification algorithms, particularly for large-scale complex systems. Meanwhile, the filtering identification concept addresses parameter estimation challenges in systems with colored noise, thereby enhancing estimation accuracy. These approaches are applicable to bilinear systems. Recently, An et al. [10] explored various parameter estimation algorithms for bilinear systems utilizing the maximum likelihood gradient-based iterative and hierarchical principles. Additionally, Gu examined an identification algorithm based on multi-innovation stochastic gradient that incorporates filtering methods for bilinear systems utilizing data filtering theory [11].

The Newton method, a classical optimization tool, is a root-finding algorithm that leverages the initial terms of a function's Taylor series expansion near an estimated root [12]. This method has been extensively studied over the years and has diverse applications, including minimization and maximization problems, solving transcendental equations, and numerically verifying solutions to nonlinear equations. For instance, Xu introduced a separable Newton recursive algorithm based on dynamically discrete measurements, which utilizes system responses with progressively increasing data lengths [13].

Notably, bilinear systems pose a greater challenge than general linear ones due to the increased number of parameters that need to be estimated. This complexity makes standard parameter estimation methods inadequate for achieving high-precision estimates. Additionally, the characteristics of bilinear systems prevent the direct application of conventional state estimation techniques like Kalman filtering [14, 15, 16] and finite impulse response (FIR) filter [17]. Consequently, there is a need to develop a novel state estimator utilizing Kalman filtering, specifically tailored and modified to meet the unique requirements of bilinear systems [18].

In this paper, we present a novel identification algorithm aimed at identifying bilinear stochastic systems. Our approach introduces a Newton-based iterative algorithm for parameter estimation. Subsequently, we employ the bilinear state observer-based NI (BSO-NI) method to achieve simultaneous estimation of states and parameters. The availability of the scheme is validated by comparing estimation errors with those of gradient-based algorithms. Below, we summarize the main contributions of our work.

  • For the system with unknown states, a bilinear state observer is designed based on the Kalman filtering principle for state estimation.

  • To tackle the challenge of estimating unknown parameters, we propose a Newton iterative algorithm founded on the Newton search, which utilizes all sampling data to estimate the parameters of the bilinear system.

  • The performance of the proposed algorithm is evaluated against a gradient-based iterative algorithm through a numerical simulation.

2. System description

Consider a bilinear system in state-space representation with stochastic noise, described by an observability canonical model:

𝒙(t+1)=𝑨𝒙(t)+𝑩𝒙(t)u(t)+𝑪u(t)+w(t),
y(t)=𝑫𝒙(t)+v(t),

where 𝒙(t) ∈ ℝ^m is the state vector, y(t) ∈ ℝ and u(t) ∈ ℝ are the output and input variables, and 𝑨 ∈ ℝ^{m×m}, 𝑩 ∈ ℝ^{m×m}, 𝑪 ∈ ℝ^m, and 𝑫 ∈ ℝ^{1×m} are the matrices/vectors of the system parameters:

𝑨 := [𝟎  𝑰_{m−1}; −a_m  −𝒂] ∈ ℝ^{m×m},
𝒂 := [a_{m−1}, a_{m−2}, …, a_1] ∈ ℝ^{1×(m−1)},
𝑩 := [𝒃_1; 𝒃_2; …; 𝒃_m] ∈ ℝ^{m×m},
𝒃_l := [b_{l1}, b_{l2}, …, b_{lm}] ∈ ℝ^{1×m},  l = 1, 2, …, m,
𝑪 := [c_1, c_2, …, c_m]^T ∈ ℝ^m,
𝑫 := [1, 0, …, 0] ∈ ℝ^{1×m},

where 𝑰_{m−1} represents an identity matrix of size (m−1)×(m−1), w(t) is the process noise, v(t) is the measurement noise, the dimension m of the state vector is assumed known, and y(t) = 0 and v(t) = 0 for t ≤ 0. Bilinear systems can be interpreted as linear systems with time-varying characteristics, whose stability changes over time and is determined by the time-varying matrix 𝑨 + 𝑩u(t); stability can be guaranteed by designing an input sequence that keeps all eigenvalues of 𝑨 + 𝑩u(t) inside the unit circle.
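As an illustration of this model, the state recursion can be simulated directly. The sketch below is our own (not code from the paper); the numerical matrices follow the second-order example of Section 6, and the bounded input keeps the time-varying matrix 𝑨 + 𝑩u(t) stable:

```python
import numpy as np

def simulate_bilinear(A, B, C, D, u, w, v, x0):
    """Simulate x(t+1) = A x(t) + B x(t) u(t) + C u(t) + w(t), y(t) = D x(t) + v(t)."""
    h, m = len(u), A.shape[0]
    x = np.array(x0, dtype=float)
    y = np.empty(h)
    for t in range(h):
        y[t] = D @ x + v[t]                           # output equation
        x = A @ x + (B @ x) * u[t] + C * u[t] + w[t]  # bilinear state recursion
    return y

rng = np.random.default_rng(0)
m, h = 2, 200
A = np.array([[0.0, 1.0], [-0.30, -0.17]])  # observability canonical form
B = np.array([[0.10, 0.08], [0.35, 0.21]])
C = np.array([2.00, 1.20])
D = np.array([1.0, 0.0])
u = rng.uniform(-0.5, 0.5, h)               # bounded input keeps A + B u(t) stable
w = 0.1 * rng.standard_normal((h, m))       # process noise
v = 0.5 * rng.standard_normal(h)            # measurement noise
y = simulate_bilinear(A, B, C, D, u, w, v, np.zeros(m))
```

With this zero-mean bounded input, every instance of 𝑨 + 𝑩u(t) has spectral radius below one, so the output sequence stays bounded.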

Based on Equations (1)–(6), the subsequent relationships exist:

x_p(t+1) = x_{p+1}(t) + 𝒃_p 𝒙(t)u(t) + c_p u(t),
x_m(t+1) = −∑_{p=1}^{m} a_p x_{m−p+1}(t) + 𝒃_m 𝒙(t)u(t) + c_m u(t),

where p = 1, …, m−1. Multiplying both sides of Equation (7) by z^{−p} and summing from p = 1 to p = m−1, and then adding both sides of Equation (8) multiplied by z^{−m}, one can derive

x_1(t) = −∑_{p=1}^{m} a_p x_{m−p+1}(t−m) + ∑_{p=1}^{m} 𝒃_p 𝒙(t−p)u(t−p)
+ ∑_{p=1}^{m} c_p u(t−p).

Inserting (9) into (2) gives:

y(t) = −∑_{p=1}^{m} a_p x_{m−p+1}(t−m) + ∑_{p=1}^{m} 𝒃_p 𝒙(t−p)u(t−p)
+ ∑_{p=1}^{m} c_p u(t−p) + v(t).

Next, define the parameter vector 𝜽 and the information vector 𝝋(t) as:

𝜽 := [a_1, a_2, …, a_m, 𝒃_1, 𝒃_2, …, 𝒃_m, c_1, c_2, …, c_m]^T ∈ ℝ^{m²+2m},
𝝋(t) := [−x_m(t−m), −x_{m−1}(t−m), …, −x_1(t−m), 𝒙^T(t−1)u(t−1), …, 𝒙^T(t−m)u(t−m), u(t−1), u(t−2), …, u(t−m)]^T ∈ ℝ^{m²+2m}.

Then the output y(t) in Equation (11) can be written compactly as:

y(t) = 𝝋^T(t)𝜽 + v(t).

The model for identification is established in Equation (18). The information vector 𝝋(t) includes 𝒙(t−l) (l = 1, 2, …, m) and u(t−l), and the system parameters a_l, 𝒃_l and c_l together form the parameter vector 𝜽. The goal of this study is to explore novel approaches for estimating these states and parameters using the input-output data {u(t), y(t)}, where the output is affected by the white noise v(t). Equation (18) serves as the basis for deriving the iterative estimation approach.
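To make the regression form concrete, the following sketch (our own sanity check with illustrative second-order values, not the authors' code) simulates a noise-free system and verifies that the output satisfies y(t) = 𝝋^T(t)𝜽 with the information vector defined above:

```python
import numpy as np

m, h = 2, 50
A = np.array([[0.0, 1.0], [-0.30, -0.17]])   # bottom row is [-a2, -a1]
B = np.array([[0.10, 0.08], [0.35, 0.21]])   # rows b1, b2
C = np.array([2.00, 1.20])
theta = np.concatenate(([0.17, 0.30], B[0], B[1], C))  # [a1, a2, b1, b2, c1, c2]

rng = np.random.default_rng(1)
u = rng.uniform(-0.5, 0.5, h)
xs = np.zeros((h + 1, m))                    # noise-free state trajectory
for t in range(h):
    xs[t + 1] = A @ xs[t] + (B @ xs[t]) * u[t] + C * u[t]

def phi(t):
    """Information vector phi(t) for m = 2."""
    return np.concatenate((
        [-xs[t - 2][1], -xs[t - 2][0]],      # -x_m(t-m), ..., -x_1(t-m)
        xs[t - 1] * u[t - 1],                # x^T(t-1) u(t-1)
        xs[t - 2] * u(t - 2) if False else xs[t - 2] * u[t - 2],  # x^T(t-2) u(t-2)
        [u[t - 1], u[t - 2]],
    ))

for t in range(m, h):
    assert np.isclose(xs[t][0], phi(t) @ theta)  # y(t) = x_1(t) = phi(t)^T theta (v = 0)
```

The assertion passing for every t confirms that the signs and orderings in 𝝋(t) match the derivation of Equation (9).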

3. The NI algorithm for parameter estimation

Parameter estimation for different types of systems has been tackled using stochastic gradient methods, gradient-based iterative methods, and various other gradient-based approaches [19]. To further enhance the precision of parameter estimation, the Newton search is employed here. Based on the iterative identification idea, a Newton iterative (NI) algorithm is developed specifically for bilinear systems.

Considering the data length h, we define the criterion function

J_1(𝜽) := (1/2) ∑_{r=1}^{h} [y(r) − 𝝋^T(r)𝜽]²
= (1/2) ‖𝒀(h) − 𝚽(h)𝜽‖²,

where the stacked observed data 𝒀(h) and the stacked information matrix 𝚽(h) are defined as:

𝒀(h) := [y(1), y(2), …, y(h)]^T ∈ ℝ^h,
𝚽(h) := [𝝋(1), 𝝋(2), …, 𝝋(h)]^T ∈ ℝ^{h×(m²+2m)}.
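As a quick numerical check (our own, with synthetic data), the summation form and the stacked-norm form of J_1(𝜽) agree:

```python
import numpy as np

rng = np.random.default_rng(2)
h, n = 30, 8                                 # n = m^2 + 2m parameters for m = 2
Phi = rng.standard_normal((h, n))            # stacked information matrix Phi(h)
Y = rng.standard_normal(h)                   # stacked observations Y(h)
theta = rng.standard_normal(n)

J_sum = 0.5 * sum((Y[r] - Phi[r] @ theta) ** 2 for r in range(h))
J_norm = 0.5 * np.linalg.norm(Y - Phi @ theta) ** 2
assert np.isclose(J_sum, J_norm)             # the two forms are identical
```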

Taking the partial derivative of J_1(𝜽) with respect to 𝜽 gives

grad[J_1(𝜽)] := ∂J_1(𝜽)/∂𝜽
= [∂J_1(𝜽)/∂a_l, ∂J_1(𝜽)/∂𝒃_l, ∂J_1(𝜽)/∂c_l]^T ∈ ℝ^{m²+2m}.

Define 𝜽̂_k as the estimate of 𝜽 at iteration k, where k = 1, 2, 3, … is the iteration index. The estimate of the stacked information matrix 𝚽(h) at iteration k is defined as:

𝚽̂_k(h) = [𝝋̂_k(1), 𝝋̂_k(2), …, 𝝋̂_k(h)]^T,

where

𝝋̂_k(t) = [−x̂_{m,k−1}(t−m), …, −x̂_{1,k−1}(t−m), 𝒙̂^T_{k−1}(t−1)u(t−1), …, 𝒙̂^T_{k−1}(t−m)u(t−m), u(t−1), u(t−2), …, u(t−m)]^T.

From Equation (1), we can calculate the gradient of J_1(𝜽) at 𝜽 = 𝜽̂_{k−1}:

grad[J_1(𝜽̂_{k−1})] := −𝚽̂_k^T(h)[𝒀(h) − 𝚽̂_k(h)𝜽̂_{k−1}].

The gradient method relies only on first-derivative information, whereas the Newton method also exploits second derivatives and exhibits quadratic convergence. To enhance the accuracy of parameter estimation, we compute the second partial derivative of the criterion function J_1(𝜽) with respect to 𝜽, which yields the Hessian matrix

𝑯(𝜽) = ∂²J_1(𝜽)/(∂𝜽∂𝜽^T) = ∂grad[J_1(𝜽)]/∂𝜽^T
= 𝚽^T(h)𝚽(h).

Using the Newton search and the iterative identification idea to minimize J_1(𝜽), we obtain the following iterative relation for computing 𝜽:

𝜽̂_k = 𝜽̂_{k−1} − 𝑯^{−1}[J_1(𝜽̂_{k−1})] grad[J_1(𝜽̂_{k−1})].

Substituting (11) into the above equation, we obtain:

𝜽̂_k = 𝜽̂_{k−1} + 𝑯^{−1}[J_1(𝜽̂_{k−1})]
× 𝚽̂_k^T(h)[𝒀(h) − 𝚽̂_k(h)𝜽̂_{k−1}].

From Equations (3)–(15), we can summarize the Newton iterative (NI) algorithm for the bilinear stochastic systems as follows:

𝜽̂_k = 𝜽̂_{k−1} + 𝑯^{−1}[J_1(𝜽̂_{k−1})] 𝚽̂_k^T(h)
× [𝒀(h) − 𝚽̂_k(h)𝜽̂_{k−1}],
𝑯[J_1(𝜽̂_{k−1})] = 𝚽̂_k^T(h)𝚽̂_k(h),
𝚽̂_k(h) = [𝝋̂_k(1), 𝝋̂_k(2), …, 𝝋̂_k(h)]^T,
𝝋̂_k(t) = [−x̂_{m,k−1}(t−m), …, −x̂_{1,k−1}(t−m), 𝒙̂^T_{k−1}(t−1)u(t−1), …, 𝒙̂^T_{k−1}(t−m)u(t−m), u(t−1), u(t−2), …, u(t−m)]^T,
𝒀(h) = [y(1), y(2), …, y(h)]^T ∈ ℝ^h,
𝜽̂_k = [â_{1,k}, â_{2,k}, …, â_{m,k}, 𝒃̂_{1,k}, …, 𝒃̂_{m,k}, ĉ_{1,k}, ĉ_{2,k}, …, ĉ_{m,k}]^T ∈ ℝ^{m²+2m}.

The NI algorithm above assumes that the system states in the information vector are available; in practice they are unknown, so we design a state estimator to estimate them.
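Because J_1(𝜽) is quadratic in 𝜽 for a fixed information matrix, a single Newton step from any initial guess already solves the normal equations; iteration is needed only because 𝚽̂_k(h) is rebuilt from the latest state estimates at each k. A minimal sketch of the update (our own illustration on synthetic data, not the paper's code):

```python
import numpy as np

def ni_step(theta_prev, Phi, Y):
    """One Newton iteration: theta + H^{-1} Phi^T (Y - Phi theta), with H = Phi^T Phi."""
    H = Phi.T @ Phi                        # Hessian of J1
    g = Phi.T @ (Y - Phi @ theta_prev)     # negative gradient of J1
    return theta_prev + np.linalg.solve(H, g)

rng = np.random.default_rng(3)
Phi = rng.standard_normal((100, 8))        # synthetic stacked information matrix
theta_true = rng.standard_normal(8)
Y = Phi @ theta_true                       # noise-free observations
theta_1 = ni_step(np.zeros(8), Phi, Y)     # one step recovers theta exactly
assert np.allclose(theta_1, theta_true)
```

In the BSO-NI setting, `Phi` would be replaced by 𝚽̂_k(h), recomputed from the state estimates at every iteration, so repeated steps are required.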

4. Bilinear state observer based NI algorithm

Let 𝝃(t) := 𝑩𝒙(t) + 𝑪. With this substitution, the bilinear state-space model can be represented as a linear time-varying model:

𝒙(t+1) = 𝑨𝒙(t) + 𝝃(t)u(t) + w(t),
y(t) = 𝑫𝒙(t) + v(t).

According to the NI algorithm, the parameter estimates â_{l,k}, 𝒃̂_{l,k} and ĉ_{l,k} read out from 𝜽̂_k can be used to construct the estimates 𝑨̂_k, 𝑩̂_k and 𝑪̂_k of the system matrices/vector 𝑨, 𝑩 and 𝑪, respectively. Following the Kalman filtering principle and referring to [7], we design the bilinear state observer as follows:

𝒙̂_k(t+1) = 𝑨̂_k 𝒙̂_k(t) + 𝝃̂_k(t)u(t)
+ 𝑳_k(t)[y(t) − 𝑫𝒙̂_k(t)],
𝑳_k(t) = 𝑨̂_k 𝑷_k(t) 𝑫^T [1 + 𝑫𝑷_k(t)𝑫^T]^{−1},
𝑷_k(t+1) = 𝑨̂_k 𝑷_k(t) 𝑨̂_k^T − 𝑳_k(t) 𝑫 𝑷_k(t) 𝑨̂_k^T,
𝑨̂_k = [𝟎  𝑰_{m−1}; −â_{m,k}  −𝒂̂_k],
𝒂̂_k = [â_{m−1,k}, â_{m−2,k}, …, â_{1,k}],
𝝃̂_k(t) = 𝑩̂_k 𝒙̂_k(t) + 𝑪̂_k,
𝑩̂_k = [𝒃̂_{1,k}; 𝒃̂_{2,k}; …; 𝒃̂_{m,k}],
𝑪̂_k = [ĉ_{1,k}, ĉ_{2,k}, …, ĉ_{m,k}]^T.

Equations (3)–(11) form the state estimator for bilinear systems, enabling the computation of the state estimation vector denoted by 𝒙^k(t). The initial values 𝒙^k(1) and 𝑷k(1) can be chosen arbitrarily. For instance, we can set 𝒙^k(1)=𝟏m and 𝑷k(1)=𝑰m.
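A sketch of these recursions follows (our own implementation, not the authors' code); to keep the example self-contained and noise-free, the true matrices of the Section 6 system stand in for the iteration-k estimates:

```python
import numpy as np

def bilinear_observer(A, B, C, D, u, y, x0, P0):
    """Run the bilinear state observer; returns the estimated state trajectory."""
    h, m = len(u), A.shape[0]
    xhat, P = np.array(x0, dtype=float), np.array(P0, dtype=float)
    traj = np.empty((h + 1, m))
    traj[0] = xhat
    for t in range(h):
        L = A @ P @ D / (1.0 + D @ P @ D)            # gain L(t) = A P D^T [1 + D P D^T]^{-1}
        P = A @ P @ A.T - np.outer(L, D @ P @ A.T)   # P(t+1) = A P A^T - L D P A^T
        xi = B @ xhat + C                            # xi(t) = B xhat(t) + C
        xhat = A @ xhat + xi * u[t] + L * (y[t] - D @ xhat)
        traj[t + 1] = xhat
    return traj

# Noise-free truth generated from the Section 6 example matrices
A = np.array([[0.0, 1.0], [-0.30, -0.17]])
B = np.array([[0.10, 0.08], [0.35, 0.21]])
C = np.array([2.00, 1.20])
D = np.array([1.0, 0.0])
rng = np.random.default_rng(4)
h = 300
u = rng.uniform(-0.5, 0.5, h)
x = np.zeros((h + 1, 2))
for t in range(h):
    x[t + 1] = A @ x[t] + (B @ x[t]) * u[t] + C * u[t]
y = x[:h, 0]                                         # y(t) = D x(t), no noise

est = bilinear_observer(A, B, C, D, u, y, np.ones(2), np.eye(2))
assert np.linalg.norm(est[-1] - x[h]) < 1e-2         # estimate converges to the true state
```

Since this example is noise-free, the estimation error decays essentially to zero; with process and measurement noise present, the error instead settles at a level set by the noise variances.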

Algorithm 1 The BSO-NI algorithm for bilinear stochastic systems
  • Data: u(t) and y(t), t = 1, 2, …, h

  • Result: The BSO-NI estimates 𝜽̂_k and 𝒙̂_k(t)

  • Initialization: Data length h, parameter estimation accuracy ε, maximum iteration k_max, 𝒂̂_0 = 𝒄̂_0 = 𝟏_m/p_0, 𝒃̂_0 = 𝟏_{m²}/p_0, p_0 = 10⁶.

  • for k = 1 : k_max do

  •    Form 𝝋̂_k(t) using Equation (21);

  •    Construct 𝚽̂_k(h) and 𝒀(h) using Equation (20) and Equation (25);

  •    Compute 𝑯[J_1(𝜽̂_{k−1})] using Equation (19);

  •    Update 𝜽̂_k using Equation (17);

  •    Read out â_{l,k}, 𝒃̂_{l,k} and ĉ_{l,k} from 𝜽̂_k using Equation (26);

  •    Construct 𝑨̂_k, 𝑩̂_k and 𝑪̂_k using Equations (7)–(11);

  •    for t = 1 : h do

  •      Calculate 𝑳_k(t) and 𝑷_k(t+1) using Equation (5) and Equation (6);

  •      Calculate 𝒙̂_k(t+1) using Equation (3);

  •    end for

  •    if ‖𝜽̂_k − 𝜽̂_{k−1}‖ > ε then

  •      k = k + 1;

  •    else

  •      Obtain 𝜽̂_k and 𝒙̂_k(t), break;

  •    end if

  • end for

Based on the above preparations, Equations (17)–(26) and (3)–(11) form the bilinear state observer-based NI (BSO-NI) algorithm, realizing joint state and parameter estimation. The BSO-NI algorithm is summarized in Algorithm 1.

5. Bilinear state observer based GI algorithm

To highlight the advantages of the proposed BSO-NI algorithm, this section presents the bilinear state observer based GI (BSO-GI) algorithm for comparison.

When the measurement data are collected from the system outputs, it is important to use them effectively to update the parameter estimates [20]. Consider 𝒀(h) and 𝚽(h) as defined in Section 3. Applying the negative gradient search [21] to minimize J_1(𝜽) gives

𝜽̂_k = 𝜽̂_{k−1} − μ_k grad[J_1(𝜽̂_{k−1})]
= 𝜽̂_{k−1} + μ_k 𝚽^T(h)[𝒀(h) − 𝚽(h)𝜽̂_{k−1}],

where the iterative step-size μk satisfies

μ_k ≤ 2λ_max^{−1}[𝚽^T(h)𝚽(h)].
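On synthetic data with a fixed information matrix (our own illustration, not the paper's code), any step size within this bound makes the gradient iteration converge; here μ = 1/λ_max is used, safely inside the bound:

```python
import numpy as np

rng = np.random.default_rng(5)
Phi = rng.standard_normal((200, 8))              # synthetic stacked information matrix
theta_true = rng.standard_normal(8)
Y = Phi @ theta_true                             # noise-free observations

lam_max = np.linalg.eigvalsh(Phi.T @ Phi).max()  # largest eigenvalue of Phi^T Phi
mu = 1.0 / lam_max                               # within the bound mu <= 2 / lam_max

theta = np.zeros(8)
for _ in range(100):
    theta = theta + mu * Phi.T @ (Y - Phi @ theta)   # GI update

assert np.linalg.norm(theta - theta_true) < 1e-4 * np.linalg.norm(theta_true)
```

Each mode of the error contracts by a factor 1 − μλ_i per iteration, which is why the step-size bound matters: at μ = 2/λ_max the slowest mode stops converging.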

Nevertheless, a similar issue arises: the information matrix 𝚽(h) contains the unknown terms x_i(t−m) and 𝒙(t−i). Following the same approach as in the BSO-NI algorithm, we replace these unknowns with their estimates x̂_{i,k}(t−m) and 𝒙̂_k(t−i) to obtain the estimated information matrix 𝚽̂_k(h). By substituting 𝚽̂_k(h) for the unknown 𝚽(h) in (1), and referring to the bilinear state observer in (3)–(11), the BSO-GI algorithm for bilinear systems can be summarized as follows:

𝜽̂_k = 𝜽̂_{k−1} + μ_k 𝚽̂_k^T(h) 𝑬_k(h),
𝑬_k(h) = 𝒀(h) − 𝚽̂_k(h)𝜽̂_{k−1},
𝒀(h) = [y(1), y(2), …, y(h)]^T,
𝚽̂_k(h) = [𝝋̂_k(1), 𝝋̂_k(2), …, 𝝋̂_k(h)]^T,
𝝋̂_k(t) = [−x̂_{m,k−1}(t−m), …, −x̂_{1,k−1}(t−m), 𝒙̂^T_{k−1}(t−1)u(t−1), …, 𝒙̂^T_{k−1}(t−m)u(t−m), u(t−1), u(t−2), …, u(t−m)]^T,
0 < μ_k ≤ 2λ_max^{−1}[𝚽̂_k^T(h)𝚽̂_k(h)],
𝒙̂_k(t+1) = 𝑨̂_k 𝒙̂_k(t) + 𝝃̂_k(t)u(t)
+ 𝑳_k(t)[y(t) − 𝑫𝒙̂_k(t)],
𝑳_k(t) = 𝑨̂_k 𝑷_k(t) 𝑫^T [1 + 𝑫𝑷_k(t)𝑫^T]^{−1},
𝑷_k(t+1) = 𝑨̂_k 𝑷_k(t) 𝑨̂_k^T − 𝑳_k(t) 𝑫 𝑷_k(t) 𝑨̂_k^T,
𝑨̂_k = [0 1 0 … 0; 0 0 1 … 0; ⋯; 0 0 0 … 1; −â_{m,k} −â_{m−1,k} −â_{m−2,k} … −â_{1,k}],
𝝃̂_k(t) = 𝑩̂_k 𝒙̂_k(t) + 𝑪̂_k,
𝑩̂_k = [𝒃̂_{1,k}; 𝒃̂_{2,k}; …; 𝒃̂_{m,k}],
𝑪̂_k = [ĉ_{1,k}, ĉ_{2,k}, …, ĉ_{m,k}]^T.

Equations (3)–(24) form the BSO-GI algorithm. The algorithmic procedure for bilinear systems is outlined below:

  1. Initialization: Set the data length h, the parameter estimation accuracy ε and the maximum iteration k_max. Let x̂_{i,0}(t−m) = 1/p_0, 𝒙̂_0(t−i) = 𝟏_m/p_0, i = 1, 2, …, m, 𝜽̂_0 = 𝟏_{m²+2m}/p_0, p_0 = 10⁶.

  2. Set k = 1, gather the input-output data u(t) and y(t), where t = 1, 2, …, h. Construct the stacked output vector 𝒀(h) using Equation (5).

  3. Construct 𝝋̂_k(t) using Equation (7), and form 𝚽̂_k(h) using Equation (6). Compute 𝑬_k(h) using Equation (4).

  4. Update 𝜽̂_k by Equation (3).

  5. Extract â_{i,k}, 𝒃̂_{i,k} and ĉ_{i,k} from 𝜽̂_k, and construct 𝑨̂_k, 𝑩̂_k and 𝑪̂_k using Equations (21)–(24).

  6. Set t = 1 and initialize 𝒙̂_k(1) = 𝟏_m, 𝑷_k(1) = 𝑰_m.

  7. Calculate 𝑳_k(t) and 𝑷_k(t+1) through Equations (14)–(15).

  8. Compute 𝒙̂_k(t+1) according to Equation (12).

  9. If t ≤ h−1, increment t by 1 and return to Step 7.

  10. If ‖𝜽̂_k − 𝜽̂_{k−1}‖ > ε, increment k by 1 and return to Step 3; otherwise, obtain 𝜽̂_k and 𝒙̂_k(t) and terminate.

6. Simulations

Consider the following second-order bilinear system:

𝒙(t+1) = [0, 1; −0.30, −0.17] 𝒙(t) + [2.00, 1.20]^T u(t)
+ [0.10, 0.08; 0.35, 0.21] 𝒙(t)u(t) + w(t),
y(t) = [1, 0] 𝒙(t) + v(t).

The system parameters to be estimated are

𝜽 = [a_1, a_2, b_{11}, b_{12}, b_{21}, b_{22}, c_1, c_2]^T
= [0.17, 0.30, 0.10, 0.08, 0.35, 0.21, 2.00, 1.20]^T.

For the simulation, the input sequence {u(t)} is taken as an independent persistently exciting signal with zero mean and unit variance, while {v(t)} is modeled as a white noise sequence with zero mean and variances σ² = 0.50², σ² = 1.00² and σ² = 1.50², respectively. The data length is set to h = 500 and the maximum number of iterations to k_max = 30.

Figure 1 The estimation errors δ versus k for the BSO-NI and BSO-GI algorithms.

Figure 2 The 𝑨 matrix parameter estimates versus k for the BSO-NI and BSO-GI algorithms.

Figure 3 The 𝑩 matrix parameter estimates versus k for the BSO-NI and BSO-GI algorithms.

Figure 4 The 𝑪 vector parameter estimates versus k for the BSO-NI and BSO-GI algorithms.

For the state estimation of this bilinear system, the BSO-NI algorithm is used to estimate each state. The true states and their estimates are shown in Figure 5. As the data length increases, the state estimates approach the true values.

Figure 5 The true states and their estimates for the BSO-NI algorithm.

We apply the BSO-NI algorithm, Equations (17)–(26) and (3)–(11), to estimate the parameters 𝜽 and the states 𝒙. For comparison, we also apply the BSO-GI algorithm, Equations (3)–(24), to the bilinear system. The parameter estimation errors δ := ‖𝜽̂_k − 𝜽‖/‖𝜽‖ for both algorithms are presented in Figure 1. It is evident that the estimation error δ gradually decreases as k increases. Additionally, as the noise level decreases, both algorithms yield more accurate parameter estimates. Under the same noise conditions, the BSO-NI algorithm achieves higher parameter estimation accuracy than the BSO-GI algorithm.
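The error measure δ is straightforward to compute; in the sketch below (our own), the sequence of estimates is a hypothetical stand-in for the iterates 𝜽̂_k:

```python
import numpy as np

theta = np.array([0.17, 0.30, 0.10, 0.08, 0.35, 0.21, 2.00, 1.20])  # true parameters

def delta(theta_hat, theta):
    """Relative parameter estimation error delta = ||theta_hat - theta|| / ||theta||."""
    return np.linalg.norm(theta_hat - theta) / np.linalg.norm(theta)

# Hypothetical estimates shrinking toward theta (stand-ins for the iterates theta_hat_k)
estimates = [theta + 0.5 ** k * np.ones_like(theta) for k in range(1, 6)]
errors = [delta(th, theta) for th in estimates]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))  # delta decreases with k
```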

To further illustrate the estimation of each parameter of the bilinear system, we apply both algorithms to estimate each parameter matrix or vector, and show the comparison results in Figures 2–4. Figure 2 shows the true parameters and the estimates of the parameters in the 𝑨 matrix. Figure 3 displays the parameter estimates of both algorithms for the 𝑩 matrix, and the parameter estimates in the 𝑪 vector are presented in Figure 4. These figures clearly illustrate that the BSO-NI algorithm outperforms the BSO-GI algorithm in parameter estimation precision. Furthermore, the BSO-NI algorithm converges to the parameter estimates more rapidly.

7. Conclusion

This paper studies the joint estimation of states and parameters for bilinear systems. By means of the Newton search and batch data, we derive a Newton iterative (NI) algorithm to estimate the unknown parameters. Drawing inspiration from Kalman filtering, a bilinear state observer (BSO) is constructed to estimate the unknown states, and a BSO-based NI (BSO-NI) algorithm is proposed for bilinear systems. Compared with the gradient-based algorithm, the NI algorithm achieves higher estimation accuracy. Simulation results demonstrate the validity of the proposed method.


Data Availability Statement
Data will be made available on request.

Funding
This work was supported by the Young Scientists Fund of the National Natural Science Foundation of Zhejiang Province, China under Grant LQN25F030017.

Conflicts of Interest
The authors declare no conflicts of interest. 

Ethical Approval and Consent to Participate
Not applicable.

References
  1. Ljung, L. (1999). System identification: Theory for the user. (2nd ed.). Prentice Hall, Englewood Cliffs, New Jersey. http://dx.doi.org/10.1109/MRA.2012.2192817
    [Google Scholar]
  2. Miao, G., Ding, F., Liu, Q., & Yang, E. (2023). Iterative parameter identification algorithms for transformed dynamic rational fraction input–output systems. Journal of Computational and Applied Mathematics, 434, 115297.
    [CrossRef]   [Google Scholar]
  3. Benesty, J., Paleologu, C., Dogariu, L. M., & Ciochină, S. (2021). Identification of linear and bilinear systems: A unified study. Electronics, 10(15), 1790.
    [CrossRef]   [Google Scholar]
  4. Li, F., Zhang, M., Yu, Y., & Li, S. (2024). Deep belief network-based hammerstein nonlinear system for wind power prediction. IEEE Transactions on Instrumentation and Measurement, 73, 1-12. 2024. 73:6505912.
    [CrossRef]   [Google Scholar]
  5. Chen, J., Mao, Y., Gan, M., Wang, D., & Zhu, Q. (2023). Greedy search method for separable nonlinear models using stage Aitken gradient descent and least squares algorithms. IEEE Transactions on Automatic Control, 68(8), 5044-5051.
    [CrossRef]   [Google Scholar]
  6. Chen, J., Ma, J., Gan, M., & Zhu, Q. (2022). Multidirection gradient iterative algorithm: A unified framework for gradient iterative and least squares algorithms. IEEE Transactions on Automatic Control, 67(12), 6770-6777.
    [CrossRef]   [Google Scholar]
  7. Khodarahmi, M., & Maihami, V. (2023). A review on Kalman filter models. Archives of Computational Methods in Engineering, 30(1), 727-747.
    [CrossRef]   [Google Scholar]
  8. An, S., He, Y., & Wang, L. (2023). Maximum likelihood based multi‐innovation stochastic gradient identification algorithms for bilinear stochastic systems with ARMA noise. International Journal of Adaptive Control and Signal Processing, 37(10), 2690-2705.
    [CrossRef]   [Google Scholar]
  9. Zhang, Q., Wang, H., & Liu, X. (2025). Auxiliary model maximum likelihood least squares-based iterative algorithm for multivariable autoregressive output-error autoregressive moving average systems. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 09596518241280323.
    [CrossRef]   [Google Scholar]
  10. An, S., Wang, L., & He, Y. (2023). Filtering-based maximum likelihood hierarchical recursive identification algorithms for bilinear stochastic systems. Nonlinear Dynamics, 111(13), 12405-12420.
    [CrossRef]   [Google Scholar]
  11. Gu, Y., Dai, W., Zhu, Q., & Nouri, H. (2023). Hierarchical multi-innovation stochastic gradient identification algorithm for estimating a bilinear state-space model with moving average noise. Journal of Computational and Applied Mathematics, 420, 114794.
    [CrossRef]   [Google Scholar]
  12. Xu, L. (2021). Separable multi-innovation Newton iterative modeling algorithm for multi-frequency signals based on the sliding measurement window. Circuits, Systems, and Signal Processing, 41(2), 805-830.
    [CrossRef]   [Google Scholar]
  13. Xu, L. (2022). Separable Newton recursive estimation method through system responses based on dynamically discrete measurements with increasing data length. International Journal of Control, Automation and Systems, 20(2), 432-443.
    [CrossRef]   [Google Scholar]
  14. Pan, Z., Luan, X., & Liu, F. (2022). Confidence set-membership state estimation for LPV systems with inexact scheduling variables. ISA Transactions, 122, 38-48.
    [CrossRef]   [Google Scholar]
  15. Pan, Z., Luan, X., & Liu, F. (2022). Set-membership state and parameter estimation for discrete time-varying systems based on the constrained zonotope. International Journal of Control, 96(12), 3226-3238.
    [CrossRef]   [Google Scholar]
  16. Chen, J., Zhu, Q., & Liu, Y. (2020). Modified Kalman filtering based multi-step-length gradient iterative algorithm for ARX models with random missing outputs. Automatica, 118, 109034.
    [CrossRef]   [Google Scholar]
  17. Pan, Z., Zhao, S., Huang, B., & Liu, F. (2023). Confidence set-membership FIR filter for discrete time-variant systems. Automatica, 157, 111231.
    [CrossRef]   [Google Scholar]
  18. Pan, Z., & Liu, F. (2022). Nonlinear set‐membership state estimation based on the Koopman operator. International Journal of Robust and Nonlinear Control, 33(4), 2703-2721.
    [CrossRef]   [Google Scholar]
  19. Yang, D., Liu, Y., Ding, F., & Yang, E. (2023). Hierarchical gradient-based iterative parameter estimation algorithms for a nonlinear feedback system based on the hierarchical identification principle. Circuits, Systems, and Signal Processing, 43(1), 124-151.
    [CrossRef]   [Google Scholar]
  20. Xu, F., Chen, J., Cheng, L., & Zhu, Q. (2025). Variable fractional order Nesterov accelerated gradient algorithm for rational models with missing inputs using interpolated model. Mechanical Systems and Signal Processing, 224, 111944.
    [CrossRef]   [Google Scholar]
  21. Xu, F., Cheng, L., Chen, J., & Zhu, Q. (2024). The Nesterov accelerated gradient algorithm for auto-regressive exogenous models with random lost measurements: Interpolation method and auxiliary model method. Information Sciences, 659, 120055.
    [CrossRef]   [Google Scholar]

Cite This Article
APA Style
Liu, S., & Ding, F. (2025). Iterative Estimation Algorithm for Bilinear Stochastic Systems by Using the Newton Search. IECE Transactions on Intelligent Systematics, 2(2), 76–84. https://doi.org/10.62762/TIS.2024.155941

