Hardware implementation of κ-winner-take-all neural network with on-chip learning

Hui Ya Li, Chien Min Ou, Yi Tsan Hung, Wen-Jyi Hwang, Chia Lung Hung

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

3 Citations (Scopus)

Abstract

This paper presents a novel pipelined architecture for the competitive learning (CL) algorithm with κ-winners-take-all activation. The architecture employs a codeword swapping scheme so that neurons failing the competition for a training vector are immediately available to compete for subsequent training vectors. An efficient pipeline is then designed on top of the codeword swapping scheme to enhance throughput. The CPU time of a NIOS processor executing CL training with the proposed architecture as an accelerator is measured. Experimental results show that this CPU time is lower than that of other hardware or software implementations running the CL training program, with or without the support of custom hardware.
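For readers unfamiliar with the algorithm the paper accelerates, the software baseline can be sketched as follows. This is an illustrative reconstruction of competitive learning with a k-winners-take-all activation, not the paper's pipelined hardware architecture or its codeword-swapping pipeline; the function name, learning rate, and codebook sizes are assumptions for the sketch.

```python
import numpy as np

def kwta_cl_step(codebook, x, k=2, lr=0.05):
    """One competitive-learning update: the k codewords (neurons)
    closest to training vector x win and move toward it."""
    # Squared Euclidean distance from x to every codeword.
    d = np.sum((codebook - x) ** 2, axis=1)
    winners = np.argsort(d)[:k]            # indices of the k winners
    # Winners move toward the training vector; losing neurons are
    # untouched, so they remain available to compete for later vectors.
    codebook[winners] += lr * (x - codebook[winners])
    return winners

rng = np.random.default_rng(0)
codebook = rng.random((8, 4))              # 8 neurons, 4-dim codewords
for x in rng.random((100, 4)):             # stream of training vectors
    kwta_cl_step(codebook, x, k=2)
```

The paper's contribution is mapping this per-vector competition onto a hardware pipeline in which the swapping scheme keeps losing neurons immediately reusable, rather than stalling until the current update completes.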

Original language: English
Title of host publication: Proceedings - 2010 13th IEEE International Conference on Computational Science and Engineering, CSE 2010
Pages: 340-345
Number of pages: 6
DOI: 10.1109/CSE.2010.51
Publication status: Published - 2010 Dec 1
Event: 2010 13th IEEE International Conference on Computational Science and Engineering, CSE 2010 - Hong Kong, China
Duration: 2010 Dec 11 - 2010 Dec 13

Other

Other: 2010 13th IEEE International Conference on Computational Science and Engineering, CSE 2010
Country: China
City: Hong Kong
Period: 10/12/11 - 10/12/13


Keywords

  • Competitive learning
  • FPGA
  • On-chip learning
  • Reconfigurable computing
  • κ-winners-take-all

ASJC Scopus subject areas

  • Computer Science (miscellaneous)

Cite this

Li, H. Y., Ou, C. M., Hung, Y. T., Hwang, W-J., & Hung, C. L. (2010). Hardware implementation of κ-winner-take-all neural network with on-chip learning. In Proceedings - 2010 13th IEEE International Conference on Computational Science and Engineering, CSE 2010 (pp. 340-345). [5692497] https://doi.org/10.1109/CSE.2010.51

@inproceedings{ae955c280bae4228b73a683ada1ffcf8,
title = "Hardware implementation of κ-winner-take-all neural network with on-chip learning",
abstract = "This paper presents a novel pipelined architecture for the competitive learning (CL) algorithm with κ-winners-take-all activation. The architecture employs a codeword swapping scheme so that neurons failing the competition for a training vector are immediately available to compete for subsequent training vectors. An efficient pipeline is then designed on top of the codeword swapping scheme to enhance throughput. The CPU time of a NIOS processor executing CL training with the proposed architecture as an accelerator is measured. Experimental results show that this CPU time is lower than that of other hardware or software implementations running the CL training program, with or without the support of custom hardware.",
keywords = "Competitive learning, FPGA, On-chip learning, Reconfigurable computing, κ-winners-take-all",
author = "Li, {Hui Ya} and Ou, {Chien Min} and Hung, {Yi Tsan} and Wen-Jyi Hwang and Hung, {Chia Lung}",
year = "2010",
month = "12",
day = "1",
doi = "10.1109/CSE.2010.51",
language = "English",
isbn = "9780769543239",
pages = "340--345",
booktitle = "Proceedings - 2010 13th IEEE International Conference on Computational Science and Engineering, CSE 2010",

}
