To solve GLME (1), we first define a scalar-valued error function $E(t) = \frac{1}{2}\bigl\|\sum_{l=1}^{p} A_l X(t) B_l - C\bigr\|_F^2 \in [0, +\infty)$ associated with (1), where $\|\cdot\|_F$ denotes the Frobenius norm.
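As a concrete illustration, the error function can be evaluated numerically. This is a minimal NumPy sketch, assuming GLME (1) has the form $\sum_{l=1}^{p} A_l X B_l = C$ (as Example (24) suggests); the name `glme_error` and its argument names are ours, not the paper's.

```python
import numpy as np

def glme_error(X, A_list, B_list, C):
    """Scalar-valued error E = ||sum_l A_l X B_l - C||_F^2 / 2 of GLME (1)."""
    # Residual of the (assumed) GLME form sum_l A_l X B_l = C
    R = sum(A @ X @ B for A, B in zip(A_list, B_list)) - C
    return 0.5 * np.linalg.norm(R, 'fro') ** 2
```

By construction $E(t) \ge 0$, with $E(t) = 0$ exactly when $X(t)$ solves the equation, which is what makes it usable as a Lyapunov-style error measure.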
If the linear neural network model (10) is employed to solve GLME (1) and the unique-solution condition of GLME (1) holds, then, starting from any initial condition $X(0) \in \mathbb{R}^{m \times n}$, the state matrix $X(t) \in \mathbb{R}^{m \times n}$ of (10) globally exponentially converges to the unique theoretical solution $X^* \in \mathbb{R}^{m \times n}$.
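The excerpts do not reproduce model (10) itself. Assuming it is the standard gradient-based (linear-activation) recurrent network $\dot{X}(t) = -\gamma \sum_{l=1}^{p} A_l^{T}\bigl(\sum_{k=1}^{p} A_k X(t) B_k - C\bigr) B_l^{T}$, a forward-Euler simulation step can be sketched as follows (the function name and default parameters are ours):

```python
import numpy as np

def linear_gnn_step(X, A_list, B_list, C, gamma=1.0, dt=1e-3):
    """One Euler step of the assumed linear-activation gradient model."""
    # Residual of GLME (1): R = sum_l A_l X B_l - C
    R = sum(A @ X @ B for A, B in zip(A_list, B_list)) - C
    # Gradient dynamics dX/dt = -gamma * grad E(X)
    dX = -gamma * sum(A.T @ R @ B.T for A, B in zip(A_list, B_list))
    return X + dt * dX
```

On instances satisfying the unique-solution condition, iterating this step drives $X$ toward $X^*$; the step size `dt` must be small relative to $\gamma$ and the spectra of the coefficient matrices for the discretization to remain stable.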
By Lyapunov theory, (14) and (18) indicate that the state matrix $X(t) \in \mathbb{R}^{m \times n}$ of (10) globally exponentially converges to the unique theoretical solution $X^* \in \mathbb{R}^{m \times n}$ of GLME (1).
It is worth noting that if GLME (1) has multiple theoretical solutions $X^* \in \mathbb{R}^{m \times n}$, the scalar $a$ equals zero.
In this section, three examples are presented to illustrate the efficiency of the general nonlinear recurrent neural network (2), with its specific models under different types of activation functions (linear, power-sum, and hyperbolic sine), for solving GLME (1) online.
Let us consider the following GLME with $l = 2$: $A_1 X B_1 + A_2 X B_2 = C$, (24)
These results demonstrate the effectiveness of the general recurrent neural network model (2) for solving GLME (24).
Let us consider the following GLME with multiple theoretical solutions $X^* \in \mathbb{R}^{2 \times 2}$:
We use the linear model (10) with design parameter $\gamma = 1$ to solve GLME (27).
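Example (27) itself is not reproduced in these excerpts. A hypothetical rank-deficient GLME (our construction, not the paper's) illustrates the multiple-solution case: the linear gradient model still drives the residual to zero, but it converges to the particular solution picked out by the initial condition $X(0)$.

```python
import numpy as np

# A singular coefficient matrix makes the GLME A X = C (B = I) admit
# infinitely many solutions: any X = [[1, 2], [r, s]] satisfies it.
A = np.array([[1.0, 0.0], [0.0, 0.0]])
C = np.array([[1.0, 2.0], [0.0, 0.0]])

gamma, dt = 1.0, 1e-3
X = np.zeros((2, 2))           # initial condition X(0)
for _ in range(20000):
    R = A @ X - C              # residual of A X = C
    X = X - dt * gamma * (A.T @ R)   # Euler step of the linear gradient model

# X converges to [[1, 2], [0, 0]]; a different X(0) would select a
# different member of the solution family.
```

This matches the remark above that the exponential-rate scalar degenerates to zero in the multiple-solution case: convergence of the residual is still obtained, but no single $X^*$ attracts all trajectories.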
Let us consider the following GLME of larger dimension with $l = 10$:
We exploit the nonlinear neural network models (2) activated by power-sum and hyperbolic sine functions, as well as the linear model (10), to solve GLME (29) with design parameter $\gamma = 1$.
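The power-sum and hyperbolic sine activation functions are not defined in these excerpts. A common parameterization in this literature (an assumption on our part) applies an odd elementwise map to the residual: the odd-power sum $\sum_{k=1}^{N} e^{2k-1}$ or $\sinh(e)$. A sketch of model (2) under that assumption:

```python
import numpy as np

def power_sum_af(e, N=3):
    """Power-sum activation (assumed form): e + e^3 + ... + e^(2N-1), elementwise."""
    return sum(e ** (2 * k - 1) for k in range(1, N + 1))

def sinh_af(e):
    """Hyperbolic sine activation, elementwise."""
    return np.sinh(e)

def nonlinear_gnn_step(X, A_list, B_list, C, af, gamma=1.0, dt=1e-3):
    """One Euler step of the assumed nonlinear model (2): the activation
    af is applied elementwise to the GLME residual."""
    R = sum(A @ X @ B for A, B in zip(A_list, B_list)) - C
    dX = -gamma * sum(A.T @ af(R) @ B.T for A, B in zip(A_list, B_list))
    return X + dt * dX
```

Because both activations grow faster than the identity for large residuals while matching it near zero, they can accelerate the transient phase relative to the linear model without changing the equilibrium set.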