### Abstract

In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. Conversely, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model possesses a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem.
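As a rough illustration of the transiently chaotic dynamics described above, the Python sketch below simulates a single neuron whose self-feedback acts through either a constant delay or a randomly varying one, with the self-feedback weight annealed toward zero so the dynamics pass from a chaotic phase to a convergent phase. This is not the paper's model: the parameter values (`eps`, `k`, `z`, `beta`, `I0`) and the exact placement of the delayed term are assumptions chosen only to make the chaotic-then-convergent behavior visible.

```python
import numpy as np

def tcnn_neuron(steps=500, tau=4, variable_delay=True, seed=1):
    """Single transiently chaotic neuron with a delayed self-feedback term.

    Illustrative sketch only, not the paper's exact model: all parameter
    values and the way the delay enters the update rule are assumptions.
    """
    rng = np.random.default_rng(seed)
    eps = 1.0 / 250.0   # sigmoid gain (assumed)
    k = 0.9             # damping of the internal state (assumed)
    I0 = 0.65           # bias input (assumed)
    z = 0.08            # self-feedback strength, annealed toward 0
    beta = 0.001        # annealing rate for z (assumed)
    y = 0.283           # internal state
    x_hist = [0.5] * (tau + 1)   # buffer of past outputs for the delay
    outputs = []
    for _ in range(steps):
        # constant delay tau, or a delay drawn uniformly from {1, ..., tau}
        d = int(rng.integers(1, tau + 1)) if variable_delay else tau
        x_delayed = x_hist[-d]                       # delayed output x(t - d)
        x = 1.0 / (1.0 + np.exp(-np.clip(y / eps, -50.0, 50.0)))
        y = k * y - z * (x_delayed - I0)             # delayed self-feedback
        z *= (1.0 - beta)                            # anneal the chaos away
        x_hist.append(x)
        x_hist.pop(0)
        outputs.append(x)
    return np.array(outputs)
```

Plotting the output sequence against a slowly decreasing `z` would give a bifurcation-style picture of the chaotic and convergent phases; with `variable_delay=True` the random delay adds the stochastic wandering discussed in the abstract.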

| Original language | English |
|---|---|
| Article number | 5979157 |
| Pages (from-to) | 1557-1565 |
| Number of pages | 9 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 22 |
| Issue number | 10 |
| DOIs | https://doi.org/10.1109/TNN.2011.2163080 |
| Publication status | Published - 2011 Oct 1 |


### Keywords

- Constant delay
- Lyapunov function
- neural network
- optimization
- variable delay

### ASJC Scopus subject areas

- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence

### Cite this

**Chaotic simulated annealing by a neural network with a variable delay: Design and application.** / Chen, Shyan Shiou.

Research output: Contribution to journal › Article

*IEEE Transactions on Neural Networks*, vol. 22, no. 10, 5979157, pp. 1557-1565. https://doi.org/10.1109/TNN.2011.2163080


TY - JOUR

T1 - Chaotic simulated annealing by a neural network with a variable delay

T2 - Design and application

AU - Chen, Shyan Shiou

PY - 2011/10/1

Y1 - 2011/10/1

N2 - In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem.

AB - In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem.

KW - Constant delay

KW - Lyapunov function

KW - neural network

KW - optimization

KW - variable delay

UR - http://www.scopus.com/inward/record.url?scp=80053619905&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=80053619905&partnerID=8YFLogxK

U2 - 10.1109/TNN.2011.2163080

DO - 10.1109/TNN.2011.2163080

M3 - Article

C2 - 21843986

AN - SCOPUS:80053619905

VL - 22

SP - 1557

EP - 1565

JO - IEEE Transactions on Neural Networks

JF - IEEE Transactions on Neural Networks

SN - 1045-9227

IS - 10

M1 - 5979157

ER -