Abstract:

One learning method for chaotic neural networks is Incremental Learning. In Incremental Learning, whether a pattern is learned is decided by the state of the network. The chaotic neuron has time attenuation constants as parameters. We conjectured that the time attenuation constants are unnecessary in Incremental Learning when the input patterns contain no noise.
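The abstract does not reproduce the neuron model itself; for concreteness, a common form is Aihara's chaotic neuron model, in which the time attenuation constants appear as the decay factors $k_f$ and $k_r$ below. The exact form and symbols here are assumptions for illustration, not taken from this abstract:

```latex
% Aihara-style chaotic neuron (assumed form; k_f and k_r are the
% time attenuation constants discussed in the text):
\begin{align}
  \xi_i(t+1)   &= k_f\, \xi_i(t) + \sum_j w_{ij}\, x_j(t) \\
  \zeta_i(t+1) &= k_r\, \zeta_i(t) - \alpha\, x_i(t) + a_i \\
  x_i(t+1)     &= f\bigl(\xi_i(t+1) + \zeta_i(t+1)\bigr)
\end{align}
```

Under this form, setting the attenuation constants to 0 removes the exponentially decaying memory of past inputs and past firing, which is the manipulation examined in the experiments below.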

We therefore set the time attenuation constants to 0 and ran the experiment, leaving all other parameters unchanged. The result was worse than that obtained without changing the time attenuation constants.

Next, we tried to recover the learning result. The experiments showed that it can be improved by adjusting $\alpha $ and $\mathit {\Delta } w$, the parameters other than the time attenuation constants. The maximum number of patterns learned completely could be enlarged, reaching the same value as when the time attenuation constants were left unchanged.

In conclusion, the experiments show that when the time attenuation constants are 0, the learning result does not degrade, provided that appropriate values of $\alpha $ and $\mathit {\Delta } w$ are used.
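As a minimal sketch of how $\mathit {\Delta } w$ could enter one incremental-learning step (the update rule, the function name, and the use of bipolar $\pm 1$ patterns are assumptions for illustration, not taken from this abstract): each neuron whose internal state disagrees with the input pattern shifts its weights toward the input by a Hebbian term of size $\mathit {\Delta } w$.

```python
import numpy as np

def incremental_update(w, x, internal, dw):
    """Hedged sketch of one incremental-learning step.

    w        : (n, n) weight matrix
    x        : bipolar (+1/-1) input pattern
    internal : current internal states of the neurons
    dw       : the learning step (Delta w in the text)

    A neuron learns only when the sign of its internal state
    disagrees with the input; weights then move by a Hebbian
    increment of size dw.
    """
    n = len(x)
    for i in range(n):
        if np.sign(internal[i]) != x[i]:  # disagreement -> learn
            for j in range(n):
                if i != j:
                    w[i, j] += dw * x[i] * x[j]
    return w
```

In this sketch, a larger $\mathit {\Delta } w$ makes each disagreeing neuron's weights move faster toward the input pattern, which is one way the parameter could compensate for the missing attenuation terms.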




Deguchi Lab. March 4, 2011