AlexNet [1]
Basic Architecture
Eight layers in total:
- 5 convolutional layers
- 3 fully connected layers (the final fully connected layer feeds a 1000-way softmax)
Response-normalization layers (norm1 and norm2) directly follow the conv1 and conv2 layers. Every convolutional and fully connected layer is followed by a ReLU. Max pooling follows norm1, norm2, and the fifth convolutional layer (conv5). Dropout is applied after the first two fully connected layers (fc6 and fc7).
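Given the paper's filter sizes and strides (11×11 stride 4 for conv1, 3×3 stride 2 overlapping pooling, 5×5 pad 2 for conv2, 3×3 pad 1 for conv3–conv5), the spatial size at each stage follows from simple arithmetic. A minimal sketch, assuming the commonly used 227×227 effective input size (the paper states 224×224, which does not divide evenly):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Output spatial size of a conv/pool layer: floor((size - k + 2p) / s) + 1."""
    return (size - kernel + 2 * pad) // stride + 1

s = 227                   # effective input size (assumption; paper says 224)
s = conv_out(s, 11, 4)    # conv1: 11x11, stride 4 -> 55
s = conv_out(s, 3, 2)     # pool1: 3x3, stride 2  -> 27
s = conv_out(s, 5, 1, 2)  # conv2: 5x5, pad 2     -> 27
s = conv_out(s, 3, 2)     # pool2                 -> 13
s = conv_out(s, 3, 1, 1)  # conv3: 3x3, pad 1     -> 13
s = conv_out(s, 3, 1, 1)  # conv4                 -> 13
s = conv_out(s, 3, 1, 1)  # conv5                 -> 13
s = conv_out(s, 3, 2)     # pool5                 -> 6
print(s, 256 * s * s)     # 6x6x256 = 9216 inputs into fc6
```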
Layers
- input
- conv1
- filter unit: 96
- nonlinearity: rectify(ReLU)
- norm1
- pool1
- conv2
- filter unit: 256
- nonlinearity: rectify(ReLU)
- norm2
- pool2
- conv3
- filter unit: 384
- nonlinearity: rectify(ReLU)
- conv4
- filter unit: 384
- nonlinearity: rectify(ReLU)
- conv5
    - filter unit: 256
- nonlinearity: rectify(ReLU)
- fc6
- filter unit: 4096
- nonlinearity: rectify(ReLU)
- dropout6
- fc7
- filter unit: 4096
- nonlinearity: rectify(ReLU)
- dropout7
- fc8
    - filter unit: 1000
- nonlinearity: softmax
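The layer list above implies the parameter counts below. A rough tally (a sketch that ignores the paper's two-GPU grouping in conv2, conv4, and conv5, so it slightly overcounts the reported ~60M total) shows that the fully connected layers hold the vast majority of the weights:

```python
def conv_params(k, c_in, c_out):
    # weights (k*k*c_in per filter) plus one bias per filter
    return k * k * c_in * c_out + c_out

def fc_params(n_in, n_out):
    # weight matrix plus biases
    return n_in * n_out + n_out

conv_total = (conv_params(11, 3, 96)       # conv1
              + conv_params(5, 96, 256)    # conv2
              + conv_params(3, 256, 384)   # conv3
              + conv_params(3, 384, 384)   # conv4
              + conv_params(3, 384, 256))  # conv5
fc_total = (fc_params(256 * 6 * 6, 4096)   # fc6 (6x6x256 after pool5)
            + fc_params(4096, 4096)        # fc7
            + fc_params(4096, 1000))       # fc8 (1000-way softmax)
print(conv_total, fc_total, conv_total + fc_total)
```

The fully connected layers account for roughly 94% of the parameters, which is why dropout is applied there.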
[1] ImageNet Classification with Deep Convolutional Neural Networks (A. Krizhevsky, I. Sutskever, G. E. Hinton, 2012)
Deep Learning – Convolutional Neural Networks and Feature Extraction with Python