Abstract
We consider the properties of a generalized perceptron learning network, taking into account the decay or the gain of the weight vector during the training stages. A mathematical proof is given that shows the conditional convergence of the learning algorithm. The analytical result indicates that the upper bound of the training steps is dependent on the gain (or decay) factor. A sufficient condition of exposure time for convergence of a photorefractive perceptron network is derived. We also describe a modified learning algorithm that provides a solution to the problem of weight vector decay in an optical perceptron caused by hologram erasure. Both analytical and simulation results are presented and discussed.
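The paper itself supplies the generalized learning algorithm and its convergence proof; purely as an illustrative aid, below is a minimal sketch of a perceptron update rule with a multiplicative gain/decay factor applied to the weight vector. The function name `train_decaying_perceptron`, the symbols `gamma` (gain/decay factor) and `eta` (learning rate), and the exact placement of the factor within the update are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def train_decaying_perceptron(X, y, gamma=0.98, eta=1.0, max_epochs=100):
    """Illustrative sketch (not the paper's exact algorithm): a perceptron
    whose weight vector is scaled by a gain/decay factor gamma on each
    update. gamma < 1 models decay (e.g. hologram erasure in an optical
    perceptron); gamma > 1 models gain. Labels y must be +1 / -1."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # classical perceptron test: update only on a misclassification
            if yi * np.dot(w, xi) <= 0:
                # decay/gain applied to the stored weights with the update
                w = gamma * w + eta * yi * xi
                errors += 1
        if errors == 0:  # converged: all training samples correctly classified
            break
    return w

# toy usage on a linearly separable 2-D problem
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = train_decaying_perceptron(X, y, gamma=0.95)
print(w, np.sign(X @ w))
```

In this sketch the decay only shrinks the stored weights at each update; the paper's analysis concerns the conditions (and the bound on the number of training steps) under which such a decayed or amplified weight vector still converges.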
Original language | English |
---|---|
Pages (from-to) | 1619-1624 |
Number of pages | 6 |
Journal | Journal of the Optical Society of America B: Optical Physics |
Volume | 11 |
Issue number | 9 |
DOIs | |
Publication status | Published - Sep 1994 |
Externally published | Yes |
ASJC Scopus subject areas
- Statistical and Nonlinear Physics
- Atomic and Molecular Physics, and Optics