Getting started with TFLearn
Here is a basic guide that introduces TFLearn and its functionalities. It first highlights TFLearn's high-level API for fast neural network building and training, and then shows how TFLearn layers, built-in ops, and helpers can directly benefit any model implementation with TensorFlow.
High-Level API usage
TFLearn introduces a high-level API that makes neural network building and training fast and easy. This API is intuitive and fully compatible with TensorFlow.
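As a quick illustration of that workflow, here is a minimal sketch of defining and training a small fully connected network with the high-level API. The input shape, layer sizes, and the training arrays X and Y are placeholder assumptions for the example, not part of the original guide:

import tflearn

# Build a small fully connected network (assumed 784-dim inputs, 10 classes)
net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64, activation='relu')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')

# Wrap the graph in a DNN model and train it (X, Y are assumed numpy arrays)
model = tflearn.DNN(net)
model.fit(X, Y, n_epoch=10, batch_size=64, show_metric=True)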
Layers
Layers are a core feature of TFLearn. While completely defining a model using TensorFlow ops can be time consuming and repetitive, TFLearn brings "layers" that represent an abstract set of operations to make building neural networks more convenient. For example, a convolutional layer will:
- Create and initialize weights and biases variables
- Apply convolution over the incoming tensor
- Add an activation function after the convolution
- Etc...
In TensorFlow, writing these kinds of operations can be quite tedious:

with tf.name_scope('conv1'):
    W = tf.Variable(tf.random_normal([5, 5, 1, 32]), dtype=tf.float32, name='Weights')
    b = tf.Variable(tf.random_normal([32]), dtype=tf.float32, name='biases')
    x = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    x = tf.nn.relu(x)
While in TFLearn, it only takes a line:

tflearn.conv_2d(x, 32, 5, activation='relu', name='conv1')
Here is a list of all currently available layers, grouped by source file:

- core: input_data, fully_connected, dropout, custom_layer, reshape, flatten, activation, single_unit, highway, one_hot_encoding, time_distributed
- conv: conv_2d, conv_2d_transpose, max_pool_2d, avg_pool_2d, upsample_2d, conv_1d, max_pool_1d, avg_pool_1d, residual_block, residual_bottleneck, conv_3d, max_pool_3d, avg_pool_3d, highway_conv_1d, highway_conv_2d, global_avg_pool, global_max_pool
- recurrent: simple_rnn, lstm, gru, bidirectional_rnn, dynamic_rnn
- embedding: embedding
- normalization: batch_normalization, local_response_normalization, l2_normalize
- merge: merge, merge_outputs
- estimator: regression
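To give a sense of how these layers compose, below is a rough sketch of a small convolutional network built only from layers listed above. The input shape, filter counts, and layer sizes are illustrative assumptions:

import tflearn

# Assumed 28x28 grayscale images, 10 output classes
net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 32, 5, activation='relu')
net = tflearn.max_pool_2d(net, 2)
net = tflearn.fully_connected(net, 128, activation='tanh')
net = tflearn.dropout(net, 0.8)  # keep probability of 0.8
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')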
Built-in Operations
Besides the layers concept, TFLearn also provides many different ops to be used when building a neural network. These ops are primarily meant to be passed as arguments to the above 'layers' (such as activation='relu' or regularizer='L2' for conv_2d), but they can also be used independently in any other TensorFlow graph for convenience. In practice, just providing the op name as an argument is enough, though a function can also be provided for further customization.
Ops, grouped by source file:

- activations: linear, tanh, sigmoid, softmax, softplus, softsign, relu, relu6, leaky_relu, prelu, elu
- objectives: softmax_categorical_crossentropy, categorical_crossentropy, binary_crossentropy, mean_square, hinge_loss, roc_auc_score, weak_cross_entropy_2d
- optimizers: SGD, RMSProp, Adam, Momentum, AdaGrad, Ftrl, AdaDelta
- metrics: Accuracy, Top_k, R2
- initializations: zeros, uniform, uniform_scaling, normal, truncated_normal, xavier, variance_scaling
- losses: l1, l2
Below are some quick examples:

# Activation and Regularization inside a layer:
fc2 = tflearn.fully_connected(fc1, 32, activation='tanh', regularizer='L2')

# Equivalent to:
fc2 = tflearn.fully_connected(fc1, 32)
tflearn.add_weights_regularization(fc2, loss='L2')
fc2 = tflearn.tanh(fc2)
# Optimizer, Objective and Metric:
reg = tflearn.regression(fc4, optimizer='rmsprop', metric='accuracy', loss='categorical_crossentropy')
# Ops can also be defined outside, for deeper customization:
momentum = tflearn.optimizers.Momentum(learning_rate=0.1, lr_decay=0.96, decay_step=200)
top5 = tflearn.metrics.Top_k(k=5)
reg = tflearn.regression(fc4, optimizer=momentum, metric=top5, loss='categorical_crossentropy')
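The reg tensor returned by tflearn.regression above can then be handed to a trainer. As a rough sketch, assuming X and Y are training arrays prepared elsewhere:

# Wrap the regression layer in a DNN model and train it
model = tflearn.DNN(reg)
model.fit(X, Y, n_epoch=5, batch_size=128, show_metric=True)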