# textRNNTrainLog20200423

Training log for the `text_rnn_attention` model (a bidirectional LSTM text classifier with attention) from NLP_Learning, run 2020-04-23 on CPU under TensorFlow 1.x. The deprecation warnings below all come from TF 1.x APIs that TensorFlow 2.0 renames or replaces.
Configuring RNN model...
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:38: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
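
This and the other compat warnings below are the same class of fix: the TF 1.x name still works through `tf.compat.v1`. A minimal sketch of the suggested placeholder form; the shapes and names (`seq_length`, `num_classes`) are hypothetical, not taken from text_model.py:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # only needed when running under TF 2.x

seq_length, num_classes = 600, 10  # hypothetical values
input_x = tf.compat.v1.placeholder(tf.int32, [None, seq_length], name='input_x')
input_y = tf.compat.v1.placeholder(tf.float32, [None, num_classes], name='input_y')
keep_prob = tf.compat.v1.placeholder(tf.float32, name='keep_prob')
```
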
WARNING:tensorflow: The TensorFlow contrib module will not be included in TensorFlow 2.0. For more information, please see:
- https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
- https://github.com/tensorflow/addons
- https://github.com/tensorflow/io (for I/O related ops)

If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:50: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version. Instructions for updating: This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:54: MultiRNNCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version. Instructions for updating: This class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:65: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.
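
The two cell warnings point at their Keras equivalents. A sketch of both forms, assuming a two-layer LSTM of hidden size 128 (the actual hyperparameters are not shown in this log):

```python
import tensorflow as tf

hidden_dim, num_layers = 128, 2  # hypothetical hyperparameters

# TF 1.x style, as in text_model.py:
cells = [tf.compat.v1.nn.rnn_cell.LSTMCell(hidden_dim) for _ in range(num_layers)]
stacked = tf.compat.v1.nn.rnn_cell.MultiRNNCell(cells)

# The Keras replacements the warnings name:
k_cells = [tf.keras.layers.LSTMCell(hidden_dim) for _ in range(num_layers)]
k_stacked = tf.keras.layers.StackedRNNCells(k_cells)
```
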
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:73: bidirectional_dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version. Instructions for updating: Please use keras.layers.Bidirectional(keras.layers.RNN(cell)), which is equivalent to this API.
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/venv/lib/python3.7/site-packages/tensorflow/python/ops/rnn.py:464: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version. Instructions for updating: Please use keras.layers.RNN(cell), which is equivalent to this API.
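
These two RNN warnings share one migration path. A sketch of the Keras form of the bidirectional layer the model builds; the batch, time, and embedding sizes below are made up:

```python
import tensorflow as tf

hidden_dim = 128  # hypothetical

# keras.layers.RNN(cell) is the stated replacement for dynamic_rnn;
# wrapping it in Bidirectional replaces bidirectional_dynamic_rnn.
bi_rnn = tf.keras.layers.Bidirectional(
    tf.keras.layers.RNN(tf.keras.layers.LSTMCell(hidden_dim),
                        return_sequences=True))

x = tf.random.normal([32, 600, 100])  # (batch, time, embedding), made up
outputs = bi_rnn(x)                   # shape (32, 600, 2 * hidden_dim)
```
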
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/venv/lib/python3.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor.
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/venv/lib/python3.7/site-packages/tensorflow/python/ops/rnn_cell_impl.py:961: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version. Instructions for updating: Call initializer instance with the dtype argument instead of passing it to the constructor.
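
Both initializer warnings mean the same thing: the dtype moves from the constructor to the call site. A sketch (the `[100, 128]` shape is arbitrary):

```python
import tensorflow as tf

init = tf.compat.v1.glorot_uniform_initializer()  # a VarianceScaling initializer

# Deprecated: tf.compat.v1.glorot_uniform_initializer(dtype=tf.float32)
# Preferred: supply dtype when the initializer instance is called.
w0 = init(shape=[100, 128], dtype=tf.float32)
```
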
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/venv/lib/python3.7/site-packages/tensorflow/python/ops/rnn.py:244: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version. Instructions for updating: Use tf.where in 2.0, which has the same broadcast rule as np.where.
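
This one is triggered inside TensorFlow's own rnn.py rather than the project code, so there is nothing to change locally; it just notes that tf.where broadcasts like np.where in 2.x. For reference:

```python
import tensorflow as tf

cond = tf.constant([True, False, True])
x = tf.constant([1, 2, 3])

# Elementwise select; in TF 2.x the scalar 0 broadcasts against x,
# matching np.where's behaviour.
result = tf.where(cond, x, 0)   # [1, 0, 3]
```
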
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:82: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.
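
This is a pure rename. text_model.py:82 presumably initializes a weight tensor with a truncated normal (in an attention model, likely the attention parameters); sketch with a made-up size:

```python
import tensorflow as tf

attention_size = 128  # hypothetical
w = tf.random.truncated_normal([attention_size], stddev=0.1)  # was tf.truncated_normal
```
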
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:99: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version. Instructions for updating: Please use rate instead of keep_prob. Rate should be set to rate = 1 - keep_prob.
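
The dropout migration flips the parameter's meaning, which is an easy source of bugs: `rate` is the fraction dropped, not kept. A sketch:

```python
import tensorflow as tf

h = tf.random.normal([32, 256])   # stand-in activations
keep_prob = 0.8

# Deprecated: tf.nn.dropout(h, keep_prob=keep_prob)
# Current:    rate is the DROP probability, so rate = 1 - keep_prob
h_drop = tf.nn.dropout(h, rate=1 - keep_prob)
```
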
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:110: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version. Instructions for updating: Future major versions of TensorFlow will allow gradients to flow into the labels input on backprop by default. See tf.nn.softmax_cross_entropy_with_logits_v2.
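
The _v2 op differs only in that gradients may flow into `labels`; for fixed one-hot targets, wrapping the labels in `tf.stop_gradient` reproduces the old behaviour exactly. A sketch with stand-in tensors (in TF 2.x this op is plain `tf.nn.softmax_cross_entropy_with_logits`):

```python
import tensorflow as tf

logits = tf.random.normal([32, 10])                            # stand-in model output
labels = tf.one_hot(tf.zeros([32], dtype=tf.int32), depth=10)  # stand-in targets

xent = tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf.stop_gradient(labels), logits=logits)
loss = tf.reduce_mean(xent)
```
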
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_model.py:115: The name tf.train.AdamOptimizer is deprecated. Please use tf.compat.v1.train.AdamOptimizer instead.
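
The optimizer rename is again mechanical. A self-contained sketch; the learning rate and the stand-in loss are hypothetical, since the log does not show the training config:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # graph mode, only needed on TF 2.x

w = tf.Variable(tf.zeros([10, 1]))       # stand-in trainable variable
loss = tf.reduce_mean(tf.square(w))      # stand-in loss

train_op = tf.compat.v1.train.AdamOptimizer(learning_rate=1e-3).minimize(loss)
```
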
Configuring TensorBoard and Saver...
Loading training and validation data...
Building prefix dict from the default dictionary ...
Loading model from cache /var/folders/dk/s223rnbd16lb49n8dv8n5kb80000gn/T/jieba.cache
Loading model cost 0.633 seconds.
Prefix dict has been built successfully.
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_train.py:39: The name tf.summary.scalar is deprecated. Please use tf.compat.v1.summary.scalar instead.
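
The prefix-dict messages above come from jieba, which the data loader uses to segment the Chinese corpus; the dictionary cache under /var/folders is built once and then reused. A minimal usage sketch:

```python
import jieba

# The first call loads (or builds) the prefix dictionary reported in the log.
tokens = jieba.lcut('自然语言处理很有趣')  # returns a list of word strings
print(tokens)
```
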
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_train.py:41: The name tf.summary.merge_all is deprecated. Please use tf.compat.v1.summary.merge_all instead.
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_train.py:42: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_train.py:43: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
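
text_train.py:39-43 set up TensorBoard logging and checkpointing; the compat forms are one-to-one renames. A sketch with stand-in tensors (the summary tag names and log directory are made up):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

global_step = tf.Variable(0, trainable=False)  # gives Saver something to save
loss = tf.constant(0.0)                        # stand-in graph tensors
accuracy = tf.constant(1.0)

tf.compat.v1.summary.scalar('loss', loss)
tf.compat.v1.summary.scalar('accuracy', accuracy)
merged_summary = tf.compat.v1.summary.merge_all()
writer = tf.compat.v1.summary.FileWriter('tensorboard/textrnn')  # made-up dir
saver = tf.compat.v1.train.Saver()
```
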
Time cost: 844.324 seconds...
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_train.py:45: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
2020-04-23 15:25:48.673212: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
WARNING:tensorflow:From /Users/yunfan/PycharmProjects/NLP_Learning/text_rnn_attention/text_train.py:46: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.
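
The AVX2/FMA line is informational, not an error: the stock pip wheel was not compiled with those CPU instruction sets, so CPU training is somewhat slower than it could be. The session setup the two warnings refer to, sketched with a stand-in variable:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

w = tf.Variable(tf.zeros([2, 2]))  # stand-in variable

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    print(sess.run(w))
```
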
Training and evaluating...

A `*` in the last column marks a step where validation accuracy reached a new best.

| Epoch | Step | Train loss | Train acc | Val loss | Val acc | Speed (sec/batch) | New best |
|-------|------|------------|-----------|----------|---------|-------------------|----------|
| 1 | 100 | 0.392 | 0.891 | 0.306 | 0.908 | 0.902 | * |
| 1 | 200 | 0.406 | 0.906 | 0.249 | 0.918 | 0.730 | * |
| 1 | 300 | 0.165 | 0.938 | 0.215 | 0.937 | 0.713 | * |
| 1 | 400 | 0.365 | 0.875 | 0.182 | 0.945 | 0.912 | * |
| 1 | 500 | 0.193 | 0.953 | 0.174 | 0.949 | 0.691 | * |
| 1 | 600 | 0.364 | 0.922 | 0.226 | 0.920 | 0.842 |  |
| 1 | 700 | 0.352 | 0.906 | 0.170 | 0.947 | 0.789 |  |
| 2 | 800 | 0.211 | 0.938 | 0.145 | 0.956 | 0.129 | * |
| 2 | 900 | 0.091 | 0.969 | 0.146 | 0.957 | 0.858 | * |
| 2 | 1000 | 0.232 | 0.969 | 0.135 | 0.962 | 0.797 | * |
| 2 | 1100 | 0.235 | 0.938 | 0.119 | 0.963 | 0.792 | * |
| 2 | 1200 | 0.173 | 0.938 | 0.127 | 0.965 | 0.744 | * |
| 2 | 1300 | 0.202 | 0.938 | 0.130 | 0.966 | 0.790 | * |
| 2 | 1400 | 0.070 | 0.984 | 0.130 | 0.965 | 0.783 |  |
| 2 | 1500 | 0.058 | 0.984 | 0.124 | 0.963 | 0.788 |  |
| 3 | 1600 | 0.305 | 0.906 | 0.126 | 0.962 | 0.283 |  |
| 3 | 1700 | 0.013 | 1.000 | 0.146 | 0.957 | 0.746 |  |
| 3 | 1800 | 0.237 | 0.906 | 0.111 | 0.969 | 0.762 | * |
| 3 | 1900 | 0.024 | 1.000 | 0.120 | 0.962 | 0.673 |  |
| 3 | 2000 | 0.123 | 0.969 | 0.103 | 0.970 | 0.689 | * |
| 3 | 2100 | 0.238 | 0.938 | 0.112 | 0.969 | 0.664 |  |
| 3 | 2200 | 0.057 | 0.969 | 0.110 | 0.968 | 0.678 |  |
| 3 | 2300 | 0.099 | 0.969 | 0.118 | 0.967 | 0.731 |  |
| 4 | 2400 | 0.053 | 0.984 | 0.099 | 0.972 | 0.374 | * |
| 4 | 2500 | 0.068 | 0.969 | 0.101 | 0.971 | 0.691 |  |
| 4 | 2600 | 0.010 | 1.000 | 0.121 | 0.966 | 0.698 |  |
| 4 | 2700 | 0.053 | 0.984 | 0.119 | 0.969 | 0.681 |  |
| 4 | 2800 | 0.097 | 0.984 | 0.114 | 0.968 | 0.664 |  |
| 4 | 2900 | 0.109 | 0.984 | 0.117 | 0.967 | 0.730 |  |
| 4 | 3000 | 0.090 | 0.984 | 0.107 | 0.971 | 0.700 |  |
| 4 | 3100 | 0.057 | 0.984 | 0.120 | 0.967 | 0.725 |  |
| 5 | 3200 | 0.043 | 0.969 | 0.106 | 0.971 | 0.498 |  |
| 5 | 3300 | 0.048 | 0.969 | 0.099 | 0.974 | 0.696 | * |
| 5 | 3400 | 0.090 | 0.984 | 0.107 | 0.972 | 0.686 |  |
| 5 | 3500 | 0.058 | 0.969 | 0.147 | 0.960 | 0.705 |  |
| 5 | 3600 | 0.028 | 0.984 | 0.136 | 0.965 | 0.663 |  |
| 5 | 3700 | 0.040 | 0.984 | 0.114 | 0.970 | 0.666 |  |
| 5 | 3800 | 0.006 | 1.000 | 0.138 | 0.962 | 0.662 |  |
| 5 | 3900 | 0.069 | 0.984 | 0.180 | 0.951 | 0.664 |  |
| 6 | 4000 | 0.036 | 0.984 | 0.108 | 0.973 | 0.598 |  |
| 6 | 4100 | 0.011 | 1.000 | 0.134 | 0.967 | 0.688 |  |
| 6 | 4200 | 0.013 | 1.000 | 0.125 | 0.970 | 0.663 |  |
| 6 | 4300 | 0.015 | 1.000 | 0.157 | 0.960 | 0.672 |  |
No improvement for 1000 steps; stopping training early.
Process finished with exit code 0
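
The run stops at step 4300 because the last validation improvement was at step 3300 (val accuracy 0.974), exactly a 1000-step patience window. A sketch of the bookkeeping this implies; the variable names are hypothetical, not taken from text_train.py:

```python
require_improvement = 1000  # patience window implied by the stop message

best_val_acc = 0.0
last_improved = 0

def check_early_stop(step, val_acc):
    """Return True when training should stop; '*' rows are new bests."""
    global best_val_acc, last_improved
    if val_acc > best_val_acc:   # new best validation accuracy
        best_val_acc = val_acc
        last_improved = step     # a checkpoint save would go here
        return False
    return step - last_improved >= require_improvement
```
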