
TensorFlow custom estimator stuck when calling evaluate after training

I made a custom estimator (see this colab) in TensorFlow (v1.10) based on their guide.

I trained the toy model with:

tf.estimator.train_and_evaluate(est, train_spec, eval_spec)

and then, with some test-set data, tried to evaluate the model with:

test_fn = lambda: input_fn(DATASET['test'], run_params)
test_res = est.evaluate(input_fn=test_fn)

(where train_fn and valid_fn are functionally identical to test_fn, i.e. sufficient for tf.estimator.train_and_evaluate to work).

I would expect something to happen, but this is all I get:

INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Starting evaluation at 2018-11-09-13:38:44
INFO:tensorflow:Graph was finalized.
INFO:tensorflow:Restoring parameters from ./test/model.ckpt-100
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.

and then it just runs forever.

How come?

This is because you repeat the dataset indefinitely:

# In input_fn
dataset = dataset.repeat().batch(batch_size)

By default, estimator.evaluate() runs until the input_fn raises an end-of-input exception (tf.errors.OutOfRangeError). Because the test dataset repeats indefinitely, that exception is never raised and the evaluation keeps running.
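As a minimal, self-contained illustration of that behaviour (toy data and the TF 1.x session API, not your actual model), the finite dataset below raises the end-of-input exception after two batches, while the repeated one would keep the same loop running forever:

import tensorflow as tf

# Toy illustration: evaluate() loops over the iterator until the dataset
# raises tf.errors.OutOfRangeError, so a .repeat()'ed dataset never lets
# that loop end.
finite = tf.data.Dataset.range(4).batch(2)            # ends after 2 batches
endless = tf.data.Dataset.range(4).repeat().batch(2)  # never ends

next_batch = finite.make_one_shot_iterator().get_next()
with tf.Session() as sess:
    try:
        while True:
            sess.run(next_batch)
    except tf.errors.OutOfRangeError:
        print("finite dataset signalled end-of-input")
# The same while-True loop over 'endless' would never terminate, which is
# exactly what happens inside estimator.evaluate() on the repeated test set.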

You can either drop the repeat() when evaluating, or run the evaluation for a fixed number of steps by passing the steps argument, as is already done in your eval_spec. Sketches of both options are below.
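Rough sketches of both fixes (the input_fn signature, the training flag, the in-memory dataset construction and the step count are assumptions for illustration, not your actual code):

# Option 1 sketch: repeat only while training, so the evaluation pass ends on
# its own once the test set has been consumed.
def input_fn(features, labels, batch_size=32, training=False):
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    if training:
        dataset = dataset.shuffle(1000).repeat()  # repeat only for training
    return dataset.batch(batch_size)

# Option 2 sketch: keep the repeating input_fn and cap the evaluation instead;
# 100 is just a placeholder value.
test_res = est.evaluate(input_fn=test_fn, steps=100)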
