Quick cheatsheet
How to run:
Install TFLearn:
pip install --user tflearn
Clone the repository and run the AlexNet example in the background:
git clone https://github.com/tflearn/tflearn.git
cd tflearn/examples/images
nohup python alexnet.py &>alexnet_log &
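nohup keeps the job running after the shell exits; all training output is redirected to alexnet_log, so progress can be followed with tail -f alexnet_log.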
alexnet.py (the full set of examples can be found here[1]):
from __future__ import division, print_function, absolute_import
import tflearn
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import local_response_normalization
from tflearn.layers.estimator import regression
import tflearn.datasets.oxflower17 as oxflower17
X, Y = oxflower17.load_data(one_hot=True, resize_pics=(227, 227))
# Building 'AlexNet'
network = input_data(shape=[None, 227, 227, 3])
network = conv_2d(network, 96, 11, strides=4, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 256, 5, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 384, 3, activation='relu')
network = conv_2d(network, 384, 3, activation='relu')
network = conv_2d(network, 256, 3, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = fully_connected(network, 4096, activation='tanh')
network = dropout(network, 0.5)
network = fully_connected(network, 4096, activation='tanh')
network = dropout(network, 0.5)
network = fully_connected(network, 17, activation='softmax')
network = regression(network, optimizer='momentum',
                     loss='categorical_crossentropy',
                     learning_rate=0.001)
# Training
model = tflearn.DNN(network, checkpoint_path='model_alexnet',
                    max_checkpoints=1, tensorboard_verbose=2)
model.fit(X, Y, n_epoch=1000, validation_set=0.1, shuffle=True,
          show_metric=True, batch_size=64, snapshot_step=200,
          snapshot_epoch=False, run_id='alexnet_oxflowers17')
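Once training finishes, the model can be saved and reloaded for inference. A minimal sketch, assuming the file name alexnet.tfl and reusing the first training image purely for illustration (save, load, and predict are standard tflearn.DNN methods):

# Persist the trained weights (the file name is an illustrative assumption)
model.save('alexnet.tfl')
# After rebuilding the same `network` graph, restore the weights
model.load('alexnet.tfl')
# Predict 17-class softmax probabilities for a batch of 227x227x3 images;
# X[:1] (the first training image) is reused here only to show the call
print(model.predict(X[:1]))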
Using TensorBoard[2]:
Pass the tensorboard_verbose and tensorboard_dir arguments when constructing the tflearn.DNN model.
TFLearn supports several verbosity levels that automatically manage which summaries are collected:
0: Loss & Metric (Best speed).
1: Loss, Metric & Gradients.
2: Loss, Metric, Gradients & Weights.
3: Loss, Metric, Gradients, Weights, Activations & Sparsity (Best Visualization).
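For example, the call below enables the most detailed summaries (level 3) and clips gradients at 5.0: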
model = tflearn.DNN(network, clip_gradients=5.0, tensorboard_verbose=3, tensorboard_dir='/tmp/tflearn_logs/')
Then TensorBoard can be launched to visualize the network and its training performance:
$ tensorboard --logdir='/tmp/tflearn_logs'
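By default, the TensorBoard dashboard is served at http://localhost:6006.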