This tutorial is part of a series explaining how to structure a deep learning project:
- first post: installation, get started with the code for the projects
- this post: (TensorFlow) explain the global structure of the code
- third post: (TensorFlow) how to build the data pipeline
- fourth post: (TensorFlow) how to build the model and train it
Goals of this tutorial
- learn more about TensorFlow
- learn an example of how to correctly structure a deep learning project in TensorFlow
- fully understand the code to be able to use it for your own projects
Table of Contents
- Structure of the code
- Graph, Session and nodes
- A word about variable scopes
- How we deal with different Training / Evaluation Graphs
For an official introduction to the TensorFlow concepts of Graph() and Session(), check out the official introduction on tensorflow.org.
For a simple example on MNIST, read the official tutorial, but keep in mind that some of the techniques are not recommended for big projects (for instance, they use placeholders instead of the new tf.data pipeline).
For a more detailed tour of TensorFlow, reading the programmer’s guide is definitely worth the time. You’ll learn more about Tensors, Variables, Graphs and Sessions, as well as the saving mechanism and how to import data.
For a more advanced use with concrete examples and code, we recommend reading the relevant tutorials for your project. You’ll find good code and explanations, going from sequence-to-sequence in TensorFlow to an introduction to TF layers for convolutional neural nets.
You might also be interested in Stanford’s CS20 class: TensorFlow for Deep Learning Research and its GitHub repo containing some cool examples.
Structure of the code
The code for each TensorFlow example shares a common structure:
data/
experiments/
model/
    input_fn.py
    model_fn.py
    utils.py
    training.py
    evaluation.py
train.py
search_hyperparams.py
synthesize_results.py
evaluate.py
Here is the purpose of each file in model/:
- model/input_fn.py: where you define the input data pipeline
- model/model_fn.py: creates the deep learning model
- model/utils.py: utility functions for handling hyperparams / logging
- model/training.py: utility functions to train a model
- model/evaluation.py: utility functions to evaluate a model
We recommend reading through
train.py to get a high-level overview.
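To give a feel for that high-level flow before you dive in, here is a much-simplified sketch of how the pieces fit together. The helper names (Params, train_and_evaluate), the paths and the filename variables are illustrative placeholders, and every signature here is approximate; the actual train.py may differ in its details.

# Simplified sketch of the training script: load hyperparameters, build the
# data pipelines and the train/eval graphs, then run the training loop.
params = Params('experiments/base_model/params.json')       # hyperparameters (model/utils.py)

train_inputs = input_fn(True, train_filenames, train_labels, params)  # data pipeline (model/input_fn.py)
eval_inputs = input_fn(False, eval_filenames, eval_labels, params)

train_model_spec = model_fn('train', train_inputs, params)            # training graph (model/model_fn.py)
eval_model_spec = model_fn('eval', eval_inputs, params, reuse=True)   # eval graph, reusing the same weights

train_and_evaluate(train_model_spec, eval_model_spec, 'experiments/base_model', params)  # model/training.py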
Once you get the high-level idea, depending on your task and dataset, you might want to modify:
- model/model_fn.py to change the model’s architecture, i.e. how you transform your input into your prediction as well as your loss, etc.
- model/input_fn.py to change the process of feeding data to the model.
- evaluate.py to change the story-line (maybe you need to change the filenames, load a vocabulary, etc.)
Once you get something working for your dataset, feel free to edit any part of the code to suit your own needs.
Graph, Session and nodes
When designing a model in TensorFlow, there are basically two steps:
- building the computational graph, the nodes and operations and how they are connected to each other
- evaluating / running this graph on some data
As an example of step 1, if we define a TF constant (= a graph node), when we print it, we get a Tensor object (= a node) and not its value:
x = tf.constant(1., dtype=tf.float32, name="my-node-x")
print(x)
> Tensor("my-node-x:0", shape=(), dtype=float32)
Now, let’s get to step 2, and evaluate this node. We’ll need to create a
tf.Session that will take care of actually evaluating the graph:
with tf.Session() as sess:
    print(sess.run(x))
> 1.0
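The same two-step pattern applies as the graph grows: new operations only add nodes, and nothing is computed until a Session runs them. As a small illustrative sketch (the extra nodes y and z are not part of the code examples), you can also fetch several nodes in a single sess.run call:

y = tf.constant(2., dtype=tf.float32, name="my-node-y")
z = tf.add(x, y, name="my-node-z")  # just another node: no addition happens yet

with tf.Session() as sess:
    # fetch several nodes at once; the graph is run once for all of them
    x_val, z_val = sess.run([x, z])
    print(x_val, z_val)
> 1.0 3.0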
In the code examples,
A word about variable scopes
When creating a node, TensorFlow will give it a name. You can add a prefix to the node names. This is done with tf.variable_scope:
with tf.variable_scope('model'):
    x1 = tf.get_variable('x', [], dtype=tf.float32)  # get or create variable with name 'model/x:0'
print(x1)
> <tf.Variable 'model/x:0' shape=() dtype=float32_ref>
What happens if we instantiate x again?
with tf.variable_scope('model'):
    x2 = tf.get_variable('x', [], dtype=tf.float32)
> ValueError: Variable model/x already exists, disallowed.
When trying to create a new variable named model/x, we run into an exception because a variable with the same name already exists. Thanks to this naming mechanism, you can actually control which value you give to the different nodes and, at different points of your code, decide to have two Python objects correspond to the same node! You just need to set reuse=True on the variable scope:
with tf.variable_scope('model', reuse=True):
    x2 = tf.get_variable('x', [], dtype=tf.float32)
print(x2)
> <tf.Variable 'model/x:0' shape=() dtype=float32_ref>
We can check that they indeed have the same value:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # Initialize the Variables
    sess.run(tf.assign(x1, tf.constant(1.)))     # Change the value of x1
    sess.run(tf.assign(x2, tf.constant(2.)))     # Change the value of x2
    print("x1 = ", sess.run(x1), " x2 = ", sess.run(x2))
> x1 = 2.0 x2 = 2.0
How we deal with different Training / Evaluation Graphs
Code examples design choice: theoretically, the graphs you define for training and inference can be different, but they still need to share their weights. To remedy this issue, there are two possibilities:
- re-build the graph, create a new session and reload the weights from some file when we switch between training and inference.
- create all the nodes for both training and inference in the graph and make sure that the Python code does not create the nodes twice, by using the reuse=True trick explained above.
We decided to go for the second option. As you’ll notice in train.py, we give an extra argument when we build our graphs:
train_model_spec = model_fn('train', train_inputs, params)
eval_model_spec = model_fn('eval', eval_inputs, params, reuse=True)
When we create the graph for the evaluation (eval_model_spec), model_fn will encapsulate all the nodes in a tf.variable_scope("model", reuse=True) so that the nodes that have the same names as in the training graph share their weights!
For those interested in the problem of making training and eval graphs coexist, you can read this discussion which advocates for the other option.
As a side note, option 1 is also the one used in
Now, let’s see how we can input data to our model.