Generalized perceptron learning rule and its implications for photorefractive neural networks

Chau Jern Cheng, Pochi Yeh, Ken Yuh Hsu

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

We consider the properties of a generalized perceptron learning network, taking into account the decay or the gain of the weight vector during the training stages. A mathematical proof is given that shows the conditional convergence of the learning algorithm. The analytical result indicates that the upper bound of the training steps is dependent on the gain (or decay) factor. A sufficient condition of exposure time for convergence of a photorefractive perceptron network is derived. We also describe a modified learning algorithm that provides a solution to the problem of weight vector decay in an optical perceptron caused by hologram erasure. Both analytical and simulation results are presented and discussed.
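As a minimal sketch of the kind of rule the abstract describes, the standard perceptron update can be generalized by scaling the weight vector with a gain or decay factor at each training step (decay models effects such as hologram erasure in a photorefractive implementation). The factor name `alpha`, the learning rate `eta`, and the exact update form below are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def generalized_perceptron(X, y, alpha=0.95, eta=1.0, max_epochs=100):
    """Perceptron training with a gain/decay factor applied to the
    weight vector at each update.

    alpha < 1 models decay of the stored weights (e.g. hologram
    erasure); alpha > 1 models gain. This is an illustrative sketch,
    not the paper's exact algorithm.
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for epoch in range(max_epochs):
        errors = 0
        for xi, target in zip(X, y):
            if target * np.dot(w, xi) <= 0:        # misclassified sample
                w = alpha * w + eta * target * xi  # decay/gain plus correction
                errors += 1
        if errors == 0:                            # all samples classified correctly
            return w, epoch
    return w, max_epochs

# Example on a small linearly separable set
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w, epochs = generalized_perceptron(X, y, alpha=0.95)
```

Consistent with the abstract's conditional-convergence result, too strong a decay (small `alpha`) can prevent the accumulated corrections from reaching a separating weight vector, so the number of training steps needed, and whether training converges at all, depends on the decay/gain factor.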

Original language: English
Pages (from-to): 1619-1624
Number of pages: 6
Journal: Journal of the Optical Society of America B: Optical Physics
Volume: 11
Issue number: 9
DOIs
Publication status: Published - Sept 1994
Externally published: Yes

ASJC Scopus subject areas

  • Statistical and Nonlinear Physics
  • Atomic and Molecular Physics, and Optics

