0. Reference
Deep Networks with Stochastic Depth (Huang et al., 2016): https://arxiv.org/abs/1603.09382

1. Introduction
- This paper focuses on improving the performance of ResNet.
- N..
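The core idea of the referenced paper, stochastic depth, can be illustrated with a short sketch. The following is a minimal PyTorch example I wrote for illustration, not code from the paper or from this post; names such as `StochasticDepthBlock` and `p_survive` are assumptions. During training, each residual branch is kept with survival probability `p_survive` and skipped entirely otherwise; at test time the branch is always active but scaled by `p_survive`, analogous to Dropout's test-time rescaling.

```python
import torch
import torch.nn as nn


class StochasticDepthBlock(nn.Module):
    """Residual block with stochastic depth (Huang et al., 2016).

    Training: the residual branch is dropped with probability 1 - p_survive,
    leaving only the identity shortcut.
    Testing: the branch is always used, scaled by p_survive.
    """

    def __init__(self, channels: int, p_survive: float = 0.8):
        super().__init__()
        self.p_survive = p_survive
        # A simple conv-BN-ReLU-conv-BN residual branch (illustrative only).
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Bernoulli gate: skip the whole residual branch with
            # probability 1 - p_survive, so only the shortcut remains.
            if torch.rand(1).item() < self.p_survive:
                out = x + self.branch(x)
            else:
                out = x
        else:
            # At test time every block is active, rescaled by its
            # survival probability (as with Dropout's rescaling).
            out = x + self.p_survive * self.branch(x)
        return self.relu(out)


if __name__ == "__main__":
    block = StochasticDepthBlock(channels=16, p_survive=0.8)
    dummy = torch.randn(2, 16, 32, 32)
    print(block(dummy).shape)  # torch.Size([2, 16, 32, 32])
```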
0. Reference
Dropout: A Simple Way to Prevent Neural Networks from Overfitting (Srivastava, Hinton, Krizhevsky, Sutskever & Salakhutdinov, JMLR 15(56):1929-1958, 2014): https://jmlr.org/papers/v15/srivastava14a.html

1. Introduction