I get an InternalError from tf.nn.lrn() when running an implementation of AlexNet taken from here (with small modifications), while debugging with Eager Execution enabled. I call tf.enable_eager_execution() at the start of the code block and run the net on a two-frame video to look for bugs. All inputs are cast to np.float64 right from the start of the computation. I get this error:
InternalError Traceback (most recent call last)
<ipython-input-17-cf3fc8cede61> in pseudo_alexnet(feats)
146 conv_1 = tf.nn.bias_add(conv_1, biases["bc1"])
147 conv_1 = tf.nn.relu(conv_1)
--> 148 conv_1 = tf.nn.local_response_normalization(tf.cast(conv_1, dtype = np.float64), depth_radius=5.0, bias=2.0, alpha=1e-4, beta=0.75)
149 pool1 = max_pool_with_argmax(conv_1, filter_h = 3, filter_w = 3, stride_h = 2, stride_w = 2, name = 'pool1')
150
~\Anaconda3\envs\my_tensorflow_env\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py in lrn(input, depth_radius, bias, alpha, beta, name)
4399 return lrn_eager_fallback(
4400 input, depth_radius=depth_radius, bias=bias, alpha=alpha, beta=beta,
-> 4401 name=name, ctx=_ctx)
4402 except _core._NotOkStatusException as e:
4403 if name is not None:
~\Anaconda3\envs\my_tensorflow_env\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py in lrn_eager_fallback(input, depth_radius, bias, alpha, beta, name, ctx)
4430 "beta", beta, "T", _attr_T)
4431 _result = _execute.execute(b"LRN", 1, inputs=_inputs_flat, attrs=_attrs,
-> 4432 ctx=_ctx, name=name)
4433 _execute.record_gradient(
4434 "LRN", _inputs_flat, _attrs, _result, name)
~\Anaconda3\envs\my_tensorflow_env\lib\site-packages\tensorflow\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
64 else:
65 message = e.message
---> 66 six.raise_from(core._status_to_exception(e.code, message), None)
67 # pylint: enable=protected-access
68 return tensors
~\Anaconda3\envs\my_tensorflow_env\lib\site-packages\six.py in raise_from(value, from_value)
InternalError: Could not find valid device for node.
Node: {{node LRN}} = LRN[T=DT_DOUBLE, alpha=0.0001, beta=0.75, bias=2, depth_radius=5](dummy_input)
All kernels registered for op LRN :
device='CPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_HALF]
[Op:LRN]
The error occurs because you are passing an input of dtype float64 to tf.nn.local_response_normalization. If you check the documentation for tf.nn.local_response_normalization, the supported input types are half, bfloat16, and float32 (and the error message itself shows that the LRN kernel is only registered for DT_FLOAT and DT_HALF). To fix your problem, use tf.cast to convert the input to float32.
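A minimal sketch of the fix, using a random tensor as a stand-in for the conv_1 activation from the question (the shape here is an assumption for illustration). Note that depth_radius is passed as an integer, which is what the op's attribute expects:

```python
import numpy as np
import tensorflow as tf

# Stand-in for conv_1 from the question: a float64 activation map
# (batch, height, width, channels); the shape is hypothetical.
conv_1 = tf.constant(np.random.rand(1, 56, 56, 96), dtype=tf.float64)

# Cast to float32 before LRN: the LRN kernel is only registered for
# float32/half on CPU, so a float64 input raises InternalError.
conv_1 = tf.nn.local_response_normalization(
    tf.cast(conv_1, tf.float32),
    depth_radius=5, bias=2.0, alpha=1e-4, beta=0.75)
```

After the cast, the op dispatches to the registered float32 kernel and the rest of the network can continue from the float32 result.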