AlexNet [1]

http://blog.sina.com.cn/s/blog_eb3aea990102v47i.html

Basic Architecture

  1. Eight learned layers in total:

    • 5 convolutional layers
    • 3 fully-connected layers (the final fully-connected layer feeds a 1000-way softmax)
  2. The conv1 and conv2 layers are each immediately followed by a response-normalization layer, i.e. norm1 and norm2 (the normalization formula is given after this list).

  3. Every convolutional and fully-connected layer is immediately followed by a ReLU operation.

  4. Max pooling layers follow norm1, norm2, and the fifth convolutional layer (conv5).

  5. Dropout is applied in the first two fully-connected layers (fc6 and fc7).
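
The response-normalization used after conv1 and conv2 is defined in the paper [1] as

$$
b^{i}_{x,y} = a^{i}_{x,y} \Big/ \left( k + \alpha \sum_{j=\max(0,\; i-n/2)}^{\min(N-1,\; i+n/2)} \left( a^{j}_{x,y} \right)^{2} \right)^{\beta}
$$

where a^i_{x,y} is the output of kernel i at position (x, y), N is the number of kernels in the layer, and the constants are set to k = 2, n = 5, α = 10^-4, β = 0.75.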

Layers

  1. input
  2. conv1
    • filter unit: 96
    • nonlinearity: rectify(ReLU)
    • norm1
    • pool1
  3. conv2
    • filter unit: 256
    • nonlinearity: rectify(ReLU)
    • norm2
    • pool2
  4. conv3
    • filter unit: 384
    • nonlinearity: rectify(ReLU)
  5. conv4
    • filter unit: 384
    • nonlinearity: rectify(ReLU)
  6. conv5
    • filter unit: 256
    • nonlinearity: rectify(ReLU)
  7. fc6
    • filter unit: 4096
    • nonlinearity: rectify(ReLU)
    • dropout6
  8. fc7
    • filter unit: 4096
    • nonlinearity: rectify(ReLU)
    • dropout7
  9. fc8
    • filter unit: 1000
    • nonlinearity: softmax
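
To make the layer list above concrete, here is a minimal sketch of the architecture as a single-branch PyTorch module. This is an illustrative assumption, not the original implementation (the paper's model was split across two GPUs and trained with a custom CUDA framework); the 227×227 input size and the LRN/dropout hyperparameters are taken from the paper, and the softmax of fc8 is left to the loss function, as is usual in PyTorch.

```python
import torch
import torch.nn as nn

class AlexNet(nn.Module):
    """Single-branch sketch of AlexNet (not the original two-GPU model)."""

    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            # conv1 -> ReLU -> norm1 -> pool1
            nn.Conv2d(3, 96, kernel_size=11, stride=4),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # conv2 -> ReLU -> norm2 -> pool2
            nn.Conv2d(96, 256, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5, alpha=1e-4, beta=0.75, k=2.0),
            nn.MaxPool2d(kernel_size=3, stride=2),
            # conv3 -> ReLU
            nn.Conv2d(256, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # conv4 -> ReLU
            nn.Conv2d(384, 384, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # conv5 -> ReLU -> pool5
            nn.Conv2d(384, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            # fc6 -> ReLU -> dropout6
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            # fc7 -> ReLU -> dropout7
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            # fc8 (softmax is folded into the loss, e.g. nn.CrossEntropyLoss)
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)      # expects a 3x227x227 input
        x = torch.flatten(x, 1)   # -> (N, 256*6*6)
        return self.classifier(x)

# Quick shape check:
# model = AlexNet()
# model(torch.randn(1, 3, 227, 227)).shape  # torch.Size([1, 1000])
```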

[1]
ImageNet Classification with Deep Convolutional Neural Networks (A. Krizhevsky, I. Sutskever, G. E. Hinton, 2012)

caffe study(5) - AlexNet 之结构篇 (AlexNet architecture, in Chinese)

Deep learning – Convolutional neural networks and feature extraction with Python
