A typical graph looks like the one below:

![image](../assets/graph_vis_animation.gif)

### Save Model

Saving the model means saving all the values of the parameters and the graph.

```python
saver = tf.train.Saver()
saver.save(sess, './tensorflowModel.ckpt')
```

After saving the model, there will be four files:
36+
37+ * tensorflowModel.ckpt.meta:
38+ * tensorflowModel.ckpt.data-00000-of-00001:
39+ * tensorflowModel.ckpt.index
40+ * checkpoint

We also created a protocol buffer file, `tensorflowModel.pbtxt`. It is human readable; if you want to write it in binary form instead, pass `as_text=False`.

* tensorflowModel.pbtxt:

This holds a network of nodes, each representing one operation, connected to each other as inputs and outputs.
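
As a rough sketch, such a file can be written with `tf.train.write_graph`, assuming a live session `sess` as in the save step above (the output directory is an assumption chosen to match the paths used later):

```python
import tensorflow as tf

# Write the graph definition in human-readable text form;
# passing as_text=False would write a binary .pb file instead.
tf.train.write_graph(sess.graph_def, 'logistic_regression',
                     'tensorflowModel.pbtxt', as_text=True)
```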

### Freezing the Graph

##### *Why do we need it?*

When we need to keep all the values of the variables and the graph structure in a single file, we have to freeze the graph.

```python
from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(input_graph='logistic_regression/tensorflowModel.pbtxt',
                          input_saver="",
                          input_binary=False,
                          input_checkpoint='logistic_regression/tensorflowModel.ckpt',
                          output_node_names="Softmax",
                          restore_op_name="save/restore_all",
                          filename_tensor_name="save/Const:0",
                          output_graph='frozentensorflowModel.pb',
                          clear_devices=True,
                          initializer_nodes="")
```

### Optimizing for Inference

To reduce the amount of computation needed when the network is used only for inference, we can remove the parts of the graph that are only needed for training.
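
A rough sketch of this step using TensorFlow's `optimize_for_inference_lib`, assuming the frozen graph produced above and that the input placeholder is named `x` (that node name is an assumption, not taken from the code above):

```python
import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

# Load the frozen graph produced in the previous step.
with tf.gfile.GFile('frozentensorflowModel.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Remove training-only nodes, keeping only what is needed to get
# from the input placeholder to the output node.
optimized_graph_def = optimize_for_inference_lib.optimize_for_inference(
    graph_def,
    ['x'],          # assumed name of the input placeholder
    ['Softmax'],    # output node used when freezing
    tf.float32.as_datatype_enum)

# Write the optimized graph next to the frozen one.
with tf.gfile.GFile('optimizedtensorflowModel.pb', 'wb') as f:
    f.write(optimized_graph_def.SerializeToString())
```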

### Restoring the Model
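
One way to restore the frozen model is to load the graph definition back into a session. The sketch below assumes the node names used above (`x` is an assumed input placeholder name), and the input shape is only illustrative:

```python
import numpy as np
import tensorflow as tf

# Read the frozen (or optimized) graph definition from disk.
with tf.gfile.GFile('frozentensorflowModel.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and look up the tensors we need.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')
    x = graph.get_tensor_by_name('x:0')              # assumed input placeholder name
    softmax = graph.get_tensor_by_name('Softmax:0')  # output node used when freezing

    with tf.Session(graph=graph) as sess:
        # The shape below is only a placeholder; feed your real input here.
        batch = np.zeros((1, 784), dtype=np.float32)
        predictions = sess.run(softmax, feed_dict={x: batch})
        print(predictions)
```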