Departmental Bulletin Paper: Improving the Generalization Ability of Convolutional Neural Networks with Parallel Average Convolution and Dropout

山森 一人, 長野 泰久, 相川 勝

Vol. 46, pp. 309-312, 2017-07-31, Faculty of Engineering, University of Miyazaki
ISSN: 0540-4924
Description
Convolutional neural networks can express models very well, but this expressive power leads to overfitting. To avoid overfitting, the dropout technique has been proposed. However, dropout alone is sometimes not enough because convolutional neural networks contain a huge number of weights and biases. In this paper, we propose a combination of dropout and parallel convolution. In our method, the neural network has two parallel convolution layers, and the average of their outputs is used as the input to the pooling layer. Our proposed neural network improved generalization ability by about 4% on unknown test patterns.
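The following is a minimal sketch of the architecture described in the abstract: two convolution layers applied in parallel to the same input, their outputs averaged before the pooling layer, combined with dropout. It assumes PyTorch; the layer sizes, dropout rate, and placement of the activation are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch only: parallel average convolution + dropout (assumed PyTorch, assumed 28x28 grayscale input).
import torch
import torch.nn as nn


class ParallelAvgConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Two convolution layers applied in parallel to the same input.
        self.conv_a = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.conv_b = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.dropout = nn.Dropout(p=0.5)  # dropout rate is an assumption
        self.fc = nn.Linear(16 * 14 * 14, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Average of the two parallel convolution outputs is fed to the pooling layer,
        # as described in the abstract (activation placement is assumed here).
        avg = 0.5 * (torch.relu(self.conv_a(x)) + torch.relu(self.conv_b(x)))
        h = self.pool(avg)
        h = self.dropout(torch.flatten(h, 1))
        return self.fc(h)


# Usage example on a dummy batch of 28x28 grayscale images.
if __name__ == "__main__":
    model = ParallelAvgConvNet()
    out = model(torch.randn(8, 1, 28, 28))
    print(out.shape)  # torch.Size([8, 10])
```

Averaging the two parallel convolution outputs acts as a simple ensemble within a single layer, which is the mechanism the paper combines with dropout to reduce overfitting.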
Full-Text

http://opac2.lib.miyazaki-u.ac.jp/webopac/bdyview.do?bodyid=TC10156637&elmid=Body&fname=p309_cover.pdf

