
CNTK Run Time Error

I am trying a simple LSTM network in CNTK and I get the following error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-58-d0a0e4f580aa> in <module>()
      6         trainer.train_minibatch({x: x1, l: y1})
      7     if epoch % (EPOCHS / 10) == 0:
----> 8         training_loss = trainer.previous_minibatch_loss_average
      9         loss_summary.append(training_loss)
     10         print("epoch: {}, loss: {:.5f}".format(epoch, training_loss))

C:\Program Files\Anaconda3\envs\python2\lib\site-packages\cntk\train\trainer.pyc in previous_minibatch_loss_average(self)
    285         The average training loss per sample for the last minibatch trained
    286         '''
--> 287         return super(Trainer, self).previous_minibatch_loss_average()
    288 
    289     @property

C:\Program Files\Anaconda3\envs\python2\lib\site-packages\cntk\cntk_py.pyc in previous_minibatch_loss_average(self)
   2516 
   2517     def previous_minibatch_loss_average(self):
-> 2518         return _cntk_py.Trainer_previous_minibatch_loss_average(self)
   2519 
   2520     def previous_minibatch_evaluation_average(self):

RuntimeError: There was no preceeding call to TrainMinibatch or the minibatch was empty.

[CALL STACK]
    > CNTK::Trainer::  PreviousMinibatchLossAverage
    - 00007FFFA932A5F6 (SymFromAddr() error: Attempt to access invalid address.)
    - PyCFunction_Call
    - PyEval_GetGlobals
    - PyEval_EvalFrameEx
    - PyEval_GetFuncDesc
    - PyEval_GetGlobals
    - PyEval_EvalFrameEx
    - PyEval_EvalCodeEx
    - PyFunction_SetClosure
    - PyObject_Call (x2)
    - PyObject_CallFunction
    - PyObject_GenericGetAttrWithDict
    - PyType_Lookup
    - PyEval_EvalFrameEx

The relevant code is:

# train
loss_summary = []
start = time.time()
for epoch in range(0, EPOCHS):
    for x1, y1 in next_batch(x_train, y_train):
        trainer.train_minibatch({x: x1, l: y1})
    if epoch % (EPOCHS / 10) == 0:
        training_loss = trainer.previous_minibatch_loss_average
        loss_summary.append(training_loss)
        print("epoch: {}, loss: {:.5f}".format(epoch, training_loss))

I have been stuck on this for hours now and can't understand what is happening. I am following the tutorial at https://notebooks.azure.com/cntk/libraries/tutorials/html/CNTK_106A_LSTM_Timeseries_with_Simulated_Data.ipynb and searching Google has not helped either.

Thanks for your help.

Just an idea: could it be that your for (next minibatch) loop is never executed? If next_batch(x_train, y_train) yields nothing, train_minibatch is never called, and reading previous_minibatch_loss_average afterwards raises exactly this "no preceeding call to TrainMinibatch" error.
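To check that hypothesis, here is a minimal diagnostic sketch. It reuses next_batch, x_train and y_train from your snippet (not defined here), and only counts what the generator yields:

# Hypothetical sanity check, run once before training:
# if this prints 0, train_minibatch is never reached.
batch_count = 0
for x1, y1 in next_batch(x_train, y_train):
    batch_count += 1
print("minibatches yielded:", batch_count)

Note that if next_batch is a stateful generator, run this as a separate one-off check rather than right before the real training loop.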

I would try to debug it using pdb. Just import pdb at the top of your Jupyter cell, add pdb.set_trace() before the for x1, y1 .. loop, and run the cell. You can use step (s) to go into methods or next (n) to move forward. That should help you analyse the trace, and you can print variables from within pdb to inspect them.
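For example, a minimal sketch of that setup, assuming EPOCHS, next_batch, trainer, x and l are defined as in your code:

import pdb

loss_summary = []
for epoch in range(0, EPOCHS):
    pdb.set_trace()  # execution pauses here at the start of every epoch
    for x1, y1 in next_batch(x_train, y_train):
        trainer.train_minibatch({x: x1, l: y1})

At the (Pdb) prompt, n executes the next line, s steps into a call, p x1 prints a variable, and c continues. If the debugger never stops on the trainer.train_minibatch line, the inner loop is indeed empty.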
