TY - JOUR

T1 - Chaotic simulated annealing by a neural network with a variable delay

T2 - Design and application

AU - Chen, Shyan Shiou

N1 - Funding Information:
Manuscript received January 12, 2011; accepted July 14, 2011. Date of publication August 12, 2011; date of current version October 5, 2011. This work was supported in part by the National Science Council of Taiwan, the National Taiwan Normal University, and the National Center for Theoretical Sciences of Taiwan.

PY - 2011/10

Y1 - 2011/10

N2 - In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem.

AB - In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem.

KW - Constant delay

KW - Lyapunov function

KW - neural network

KW - optimization

KW - variable delay

UR - http://www.scopus.com/inward/record.url?scp=80053619905&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=80053619905&partnerID=8YFLogxK

U2 - 10.1109/TNN.2011.2163080

DO - 10.1109/TNN.2011.2163080

M3 - Article

C2 - 21843986

AN - SCOPUS:80053619905

SN - 2162-237X

VL - 22

SP - 1557

EP - 1565

JO - IEEE Transactions on Neural Networks and Learning Systems

JF - IEEE Transactions on Neural Networks and Learning Systems

IS - 10

M1 - 5979157

ER -