Modern CNN (Week 3)
NAVER AI TECH, 2023. 3. 21. 13:44
1. AlexNet
ImageNet Large-Scale Visual Recognition Challenge 2012 1st
- 11 x 11 x 3 filters (first layer), 5 convolutional layers, 3 dense layers
- ReLU activation (helps overcome the vanishing gradient problem)
- GPU implementation (model split across 2 GPUs)
- Local response normalization, Overlapping Pooling
- Data augmentation
- Dropout
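The vanishing-gradient point above can be illustrated with a minimal sketch (not from the lecture; the 10-layer chain is an assumption for illustration): by the chain rule, backpropagated gradients multiply each layer's activation derivative, and the sigmoid derivative is at most 0.25, while ReLU's is 1 on the active region.

```python
import math

def sigmoid_grad(x):
    # Derivative of sigmoid: s(x) * (1 - s(x)), at most 0.25 (attained at x = 0)
    s = 1 / (1 + math.exp(-x))
    return s * (1 - s)

def relu_grad(x):
    # Derivative of ReLU: 1 for positive inputs, 0 otherwise
    return 1.0 if x > 0 else 0.0

# Chain rule across 10 layers: the gradient picks up one derivative factor per layer
layers = 10
sigmoid_chain = sigmoid_grad(0.0) ** layers  # 0.25**10, about 9.5e-7 (vanishes)
relu_chain = relu_grad(1.0) ** layers        # 1.0 (gradient survives)
```

Even at the sigmoid's best-case input (x = 0), ten layers shrink the gradient by roughly a factor of a million, which is the problem ReLU sidesteps.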
2. VGGNet
ImageNet Large-Scale Visual Recognition Challenge 2014 2nd
- 3 x 3 filters
- Two stacked 3 x 3 convolutions cover the same receptive field as one 5 x 5 while using fewer parameters
- 1 x 1 convolution for fully connected layers
- Dropout
- VGG16, VGG19
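The 3 x 3 vs. 5 x 5 comparison above can be checked with a quick parameter count (a sketch; the 128-channel width is an assumed example, not a specific VGG layer):

```python
def conv_params(k, c_in, c_out):
    # Weight count of a k x k convolution (bias terms omitted for simplicity)
    return k * k * c_in * c_out

C = 128  # assumed example channel count
two_3x3 = 2 * conv_params(3, C, C)  # two stacked 3 x 3 layers: 2 * 9 * C * C
one_5x5 = conv_params(5, C, C)      # one 5 x 5 layer: 25 * C * C
# Both choices give a 5 x 5 receptive field, but 18*C*C < 25*C*C
```

With C = 128 that is 294,912 vs. 409,600 weights, a ~28% saving, plus an extra non-linearity between the two 3 x 3 layers.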
3. GoogLeNet
ImageNet Large-Scale Visual Recognition Challenge 2014 1st
- 22 layers
- Inception blocks: parallel convolution paths with 1 x 1 convolutions for dimension reduction
- Using 1 x 1 convolutional layers reduces the number of parameters
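The parameter saving from a 1 x 1 reduction can be sketched numerically (channel counts here are assumed for illustration, not taken from GoogLeNet's actual configuration):

```python
def conv_params(k, c_in, c_out):
    # Weight count of a k x k convolution (bias terms omitted)
    return k * k * c_in * c_out

# Direct 3 x 3 convolution on 128 channels:
direct = conv_params(3, 128, 128)  # 9 * 128 * 128 = 147,456

# Same input/output shape, but squeeze to 32 channels with a 1 x 1 conv first:
bottleneck = conv_params(1, 128, 32) + conv_params(3, 32, 128)
# 4,096 + 36,864 = 40,960
```

The spatial size and the input/output channel counts are unchanged, yet the 1 x 1 squeeze cuts the weights by roughly 3.6x, which is how GoogLeNet stays at ~7M parameters despite 22 layers.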
4. ResNet
Makes deeper neural networks trainable
- Add an identity map (skip connection): $f(x) \rightarrow x + f(x)$
- Bottleneck architecture (utilizes 1 x 1 convolutional layers)
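The identity map $x + f(x)$ can be sketched in a few lines (a toy vector version, not a full convolutional block; the residual function `f` here is a placeholder):

```python
def residual_block(x, f):
    # y = x + f(x): the skip connection adds the input back onto the
    # residual branch, so f only needs to learn the *difference* from identity
    return [xi + fi for xi, fi in zip(x, f(x))]

# If the residual branch f learns the zero function, the block reduces to
# an identity map, so stacking more blocks cannot hurt representational power:
identity_out = residual_block([1.0, 2.0], lambda x: [0.0] * len(x))
```

The gradient of the output with respect to the input always contains the identity term from the skip path, which is why very deep ResNets remain trainable.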
5. DenseNet
- Use concatenation instead of addition
- Add transition block (1 x 1 convolutional layer) after dense block
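Because concatenation grows the channel count additively, the transition block is needed to shrink it back down. A minimal channel-count sketch (growth rate, layer count, and the 0.5 compression factor are assumed example values):

```python
def dense_block_channels(c_in, growth_rate, num_layers):
    # Each layer in a dense block appends growth_rate new channels
    # via concatenation, so channels grow linearly with depth
    return c_in + growth_rate * num_layers

def transition_channels(c, theta=0.5):
    # The transition block's 1 x 1 convolution compresses the channel
    # count by a factor theta before the next dense block
    return int(c * theta)

c = dense_block_channels(64, growth_rate=32, num_layers=6)  # 64 + 32*6 = 256
c = transition_channels(c)                                  # halved to 128
```

Without the transition blocks, channel counts (and hence parameters) would keep compounding from block to block.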