
Abstract:

This study investigated how a chaotic neural network can learn more patterns using Incremental Learning. Incremental Learning is a learning algorithm proposed for auto-associative memory.

This study focused on the amount of weight change and the refractoriness (hereinafter, the amount of weight change is denoted $\Delta w$ and the refractoriness $\alpha$). The study consisted of three experiments, which showed the following results.
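The abstract does not specify the update rule itself, so the following is only a minimal illustrative sketch of an incremental-learning step, assuming a Hopfield-style auto-associative network in which the weights toward a neuron are nudged by $\Delta w$ whenever its recalled output disagrees with the target pattern. The function names and the exact rule are assumptions for illustration, not the thesis's method.

```python
import numpy as np

def recall(W, pattern):
    """Network response to a pattern; ties (zero input) default to +1."""
    out = np.sign(W @ pattern)
    out[out == 0] = 1
    return out

def incremental_step(W, pattern, delta_w):
    """One hypothetical incremental-learning sweep (illustrative only).

    W        -- weight matrix, shape (n, n)
    pattern  -- target pattern of +1/-1 values, shape (n,)
    delta_w  -- amount of weight change per disagreement
    """
    n = len(pattern)
    recalled = recall(W, pattern)
    for i in range(n):
        if recalled[i] != pattern[i]:        # neuron i disagrees with the target
            for j in range(n):
                if i != j:
                    # nudge weights in the Hebbian direction by delta_w
                    W[i, j] += delta_w * pattern[i] * pattern[j]
    return W

# toy usage: 4-neuron network repeatedly presented one pattern
W = np.zeros((4, 4))
p = np.array([1, -1, 1, -1])
for _ in range(5):
    W = incremental_step(W, p, delta_w=0.1)
print(np.array_equal(recall(W, p), p))  # the pattern has become a fixed point
```

In this toy run the network stops changing once every neuron's recalled output matches the target, which is the sense in which learning is "incremental": weights move in small steps of $\Delta w$ only where there is disagreement.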

The bias of the learning patterns influenced Incremental Learning in small networks of 50 or 100 neurons. This influence disappeared in larger networks of 200 or more neurons.

Changing the value of $\Delta w$ over the course of learning was not an effective method. However, the only schedule tested was one in which $\Delta w$ was inversely proportional to the learning time.
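The inverse-proportional schedule mentioned above can be written as a one-line function. The initial value $\Delta w_0$ below is an assumed placeholder; the abstract gives no concrete constants.

```python
def delta_w_schedule(t, delta_w0=0.5):
    """Hypothetical schedule: delta_w decays in inverse proportion
    to the learning time t (t = 1, 2, 3, ...).
    delta_w0 is an assumed initial value, not from the thesis."""
    return delta_w0 / t

print(delta_w_schedule(1))   # 0.5
print(delta_w_schedule(5))   # 0.1
```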

There are optimum combinations of $\Delta w$ and $\alpha$. Using these optimum combinations, a network of 200 neurons can learn at least 250 patterns. The optimum combinations depend on the number of input patterns, and the number of combinations that allow learning 250 patterns is far smaller than the number that allow learning 180 patterns.




Deguchi Lab. March 5, 2010