
Surprising error in the Google TensorFlow 2.0 text_classification_rnn tutorial

While working through the Google TensorFlow 2.0 tutorials, I hit a surprising error when I tried to run the text_classification_rnn example in my Jupyter notebook. It is strange, because the same code runs fluently in Google Colab! The tutorial is this one. My computer's GPU is a GTX 1060 6 GB and it has 16 GB of RAM, so I think it should be able to run this tutorial.

I tried to run it in Jupyter and it fails with an error, but it runs fluently on Google Colab!

You can see the code that produces the error below:


from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

import tensorflow_datasets as tfds
# tfds.disable_progress_bar()

# Load the IMDB reviews dataset with a pre-built 8k subword encoder.
(train_data, test_data), info = tfds.load(
    'imdb_reviews/subwords8k',
    split=(tfds.Split.TRAIN, tfds.Split.TEST),
    with_info=True, as_supervised=True)
encoder = info.features['text'].encoder

# Pad each batch to the length of its longest review.
padded_shapes = ([None], ())
train_batches = train_data.shuffle(1000).padded_batch(10, padded_shapes=padded_shapes)
test_batches = test_data.shuffle(1000).padded_batch(10, padded_shapes=padded_shapes)

embedding_dim = 16

# Embedding -> bidirectional LSTM -> single sigmoid unit for binary sentiment.
model = keras.Sequential([
    layers.Embedding(encoder.vocab_size, embedding_dim, mask_zero=True),
    layers.Bidirectional(tf.keras.layers.LSTM(32)),
    layers.Dense(1, activation='sigmoid')
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit(
    train_batches,
    epochs=10,
    validation_data=test_batches, validation_steps=20, verbose=2)


This is my first time meeting this error and I don't know how to fix it, but it runs fluently on Google Colab and I don't know why. The error follows:


Epoch 1/10
---------------------------------------------------------------------------
CancelledError                            Traceback (most recent call last)
<ipython-input-2-8f27353fef79> in <module>
     31     train_batches,
     32     epochs=10,
---> 33     validation_data=test_batches, validation_steps=20,verbose=2)

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    726         max_queue_size=max_queue_size,
    727         workers=workers,
--> 728         use_multiprocessing=use_multiprocessing)
    729 
    730   def evaluate(self,

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in fit(self, model, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, **kwargs)
    322                 mode=ModeKeys.TRAIN,
    323                 training_context=training_context,
--> 324                 total_epochs=epochs)
    325             cbks.make_logs(model, epoch_logs, training_result, ModeKeys.TRAIN)
    326 

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py in run_one_epoch(model, iterator, execution_function, dataset_size, batch_size, strategy, steps_per_epoch, num_samples, mode, training_context, total_epochs)
    121         step=step, mode=mode, size=current_batch_size) as batch_logs:
    122       try:
--> 123         batch_outs = execution_function(iterator)
    124       except (StopIteration, errors.OutOfRangeError):
    125         # TODO(kaftan): File bug about tf function and errors.OutOfRangeError?

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py in execution_function(input_fn)
     84     # `numpy` translates Tensors to values in Eager mode.
     85     return nest.map_structure(_non_none_constant_value,
---> 86                               distributed_function(input_fn))
     87 
     88   return execution_function

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\eager\def_function.py in __call__(self, *args, **kwds)
    455 
    456     tracing_count = self._get_tracing_count()
--> 457     result = self._call(*args, **kwds)
    458     if tracing_count == self._get_tracing_count():
    459       self._call_counter.called_without_tracing()

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\eager\def_function.py in _call(self, *args, **kwds)
    485       # In this case we have created variables on the first call, so we run the
    486       # defunned version which is guaranteed to never create variables.
--> 487       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
    488     elif self._stateful_fn is not None:
    489       # Release the lock early so that multiple threads can perform the call

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\eager\function.py in __call__(self, *args, **kwargs)
   1821     """Calls a graph function specialized to the inputs."""
   1822     graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
-> 1823     return graph_function._filtered_call(args, kwargs)  # pylint: disable=protected-access
   1824 
   1825   @property

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\eager\function.py in _filtered_call(self, args, kwargs)
   1139          if isinstance(t, (ops.Tensor,
   1140                            resource_variable_ops.BaseResourceVariable))),
-> 1141         self.captured_inputs)
   1142 
   1143   def _call_flat(self, args, captured_inputs, cancellation_manager=None):

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\eager\function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1222     if executing_eagerly:
   1223       flat_outputs = forward_function.call(
-> 1224           ctx, args, cancellation_manager=cancellation_manager)
   1225     else:
   1226       gradient_name = self._delayed_rewrite_functions.register()

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\eager\function.py in call(self, ctx, args, cancellation_manager)
    509               inputs=args,
    510               attrs=("executor_type", executor_type, "config_proto", config),
--> 511               ctx=ctx)
    512         else:
    513           outputs = execute.execute_with_cancellation(

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\tensorflow_core\python\eager\execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     65     else:
     66       message = e.message
---> 67     six.raise_from(core._status_to_exception(e.code, message), None)
     68   except TypeError as e:
     69     keras_symbolic_tensors = [

c:\users\sha\anaconda3\envs\tensorflow2\lib\site-packages\six.py in raise_from(value, from_value)

CancelledError:  [_Derived_]RecvAsync is cancelled.
     [[{{node Reshape_11/_38}}]] [Op:__inference_distributed_function_16087]

Function call stack:
distributed_function

Thanks to anyone who can help me!

Try reducing the batch_size; it should work.
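This kind of CancelledError during RecvAsync is often a symptom of the GPU running out of memory mid-step, so a smaller batch lowers the per-step memory footprint. A minimal sketch of the change (the value 5 is only an illustration, not something from the tutorial; tune it for your 6 GB card):

# Smaller batches reduce per-step GPU memory usage.
# batch_size = 5 is just an illustrative value; adjust it for your hardware.
batch_size = 5
padded_shapes = ([None], ())

train_batches = train_data.shuffle(1000).padded_batch(
    batch_size, padded_shapes=padded_shapes)
test_batches = test_data.shuffle(1000).padded_batch(
    batch_size, padded_shapes=padded_shapes)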
