Training an RNN with changing input size on TensorFlow

I want to train an RNN on sentences X of different input sizes, without padding. The logic I use is this: the weights are global variables, and for every step I take one example, write the forward propagation (i.e. build the graph), run the optimizer, and then repeat the step with another example. The program is extremely slow compared to a numpy implementation of the same thing, in which I implemented forward and backward propagation by hand using the same logic as above. The numpy implementation takes a few seconds, while TensorFlow is extremely slow. Would running the same thing on a GPU be useful, or am I making a logical mistake somewhere?
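For reference, the per-example setup described above might look like the following sketch (not the asker's actual code; TF 1.x graph mode and all shapes, sizes, and names are assumptions for illustration). Its slowness has a well-known cause: ops created inside the loop are added to the default graph on every iteration, so the graph keeps growing and each `sess.run` call gets slower.

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x graph mode

W = tf.Variable(tf.random_normal([8, 8]))  # shared weights, the "global variables"
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Variable-length sentences, no padding (shapes are illustrative)
sentences = [np.random.randn(np.random.randint(3, 10), 8).astype(np.float32)
             for _ in range(100)]

for x in sentences:
    # Anti-pattern: every iteration adds NEW nodes to the default graph,
    # so the graph keeps growing and each sess.run gets slower.
    h = tf.zeros([1, 8])
    for t in range(x.shape[0]):                    # unroll this one sentence
        h = tf.tanh(tf.matmul(h, W) + x[t:t + 1])  # one RNN step
    loss = tf.reduce_mean(tf.square(h))            # dummy loss for illustration
    grad = tf.gradients(loss, [W])[0]
    sess.run(W.assign_sub(0.01 * grad))            # manual SGD update
```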

As a general guideline, a GPU boosts performance only if you have computation-intensive code and little data transfer. In other words, if you train your model one instance at a time (or with small batch sizes), the overhead of transferring data to/from the GPU can even make your code run slower! But if you feed in a good chunk of samples at once, then the GPU will definitely speed up your code.
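To illustrate "feeding a good chunk of samples": below is a minimal sketch, assuming TensorFlow 1.x, that builds the graph once and trains on mini-batches. It pads only within each mini-batch and passes the true lengths to `tf.nn.dynamic_rnn` via `sequence_length`, so the padded steps are skipped in the recurrence. All sizes, batch shapes, and helper names are illustrative, not from the original post.

```python
import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

# Build the graph ONCE; None dimensions accept any batch size / sentence length.
inputs = tf.placeholder(tf.float32, [None, None, 8])  # [batch, time, features]
lengths = tf.placeholder(tf.int32, [None])            # true length of each sentence
cell = tf.nn.rnn_cell.BasicRNNCell(16)
outputs, _ = tf.nn.dynamic_rnn(cell, inputs, sequence_length=lengths,
                               dtype=tf.float32)
# Note: outputs past each true length are zero; a real loss should mask them.
loss = tf.reduce_mean(tf.square(outputs))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

def make_batch(sentences):
    """Pad only within one mini-batch so a whole chunk is fed per sess.run."""
    lens = [len(s) for s in sentences]
    batch = np.zeros((len(sentences), max(lens), 8), dtype=np.float32)
    for i, s in enumerate(sentences):
        batch[i, :len(s)] = s
    return batch, np.array(lens, dtype=np.int32)

data = [np.random.randn(np.random.randint(3, 10), 8).astype(np.float32)
        for _ in range(256)]
for i in range(0, len(data), 32):                     # 32 samples per step
    batch, lens = make_batch(data[i:i + 32])
    sess.run(train_op, {inputs: batch, lengths: lens})
```

Batching this way amortizes both the host-to-GPU transfer and the per-step session overhead across many samples, which is exactly the regime in which a GPU pays off.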
