0. Reference
https://arxiv.org/abs/1906.02629 (When Does Label Smoothing Help?)
"The generalization and learning speed of a multi-class neural network can often be significantly improved by using soft targets that are a weighted average of the hard targets and the uniform distribution over labels."

1. Introduction
- Classification, speech recognition, machi..
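The soft-target construction quoted above (a weighted average of the one-hot hard target and the uniform distribution over labels) can be sketched in plain Python. This is a minimal illustration only; the function name and the choice of eps = 0.1 are assumptions, not taken from the paper:

```python
def smooth_labels(num_classes, true_class, eps=0.1):
    """Soft target: (1 - eps) * one_hot + eps * uniform (illustrative helper)."""
    uniform = eps / num_classes           # eps of probability mass spread evenly
    target = [uniform] * num_classes
    target[true_class] += 1.0 - eps       # remaining mass stays on the true class
    return target

# e.g. 4 classes, true class index 2: true class gets 0.925, others 0.025 each,
# and the entries still sum to 1, so it remains a valid distribution.
soft = smooth_labels(4, 2)
```

With eps = 0 this reduces to the usual one-hot hard target, and larger eps moves the target closer to the uniform distribution.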
0. Reference
https://arxiv.org/abs/1512.00567 (Rethinking the Inception Architecture for Computer Vision)
"Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks."

1. Introduction
- The complexity of Inception..
0. Reference
https://arxiv.org/abs/1409.4842 (Going Deeper with Convolutions)
https://phil-baek.tistory.com/entry/3-GoogLeNet-Going-deeper-with-convolutions-%EB%85%BC%EB%AC%B8-%EB%A6%AC%EB%B7%B0
"We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visua.."
0. Reference
https://arxiv.org/abs/1409.1556 (Very Deep Convolutional Networks for Large-Scale Image Recognition)
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x.."

1. Introduction
- In this paper..