

Is there any contiguous function in tensorflow2.0?

In PyTorch, people usually call tensor.permute(2, 0, 1, 3).contiguous(). If I want the same effect in TensorFlow 2.0, is it enough to just call tf.transpose(tensor, perm=[2, 0, 1, 3])?

Or, what is the equivalent of a contiguous function in TensorFlow 2.0?

From the official docs of tf.transpose:

In NumPy, transposes are memory-efficient constant-time operations, as they simply return a new view of the same data with adjusted strides. TensorFlow does not support strides, so transpose returns a new tensor with the items permuted.

Also, TensorFlow doesn't seem to support Fortran (column-major) ordering. Hence, I think we automatically get a contiguous (row-major) ordered tensor.
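To illustrate, here is a minimal sketch comparing the two APIs. The shapes and the permutation (2, 0, 1, 3) are chosen arbitrarily for the example; the point is that tf.transpose materializes a new row-major tensor, so there is no separate contiguous() step:

```python
import numpy as np
import tensorflow as tf

# An example 4-D tensor; the concrete shape is just for illustration.
x = tf.constant(np.arange(24).reshape(2, 3, 4, 1), dtype=tf.float32)

# tf.transpose (not tf.reshape) takes the `perm` argument. Unlike
# PyTorch's permute(), which returns a strided view, tf.transpose
# copies the data into a new row-major tensor, so no .contiguous()
# equivalent is needed afterwards.
y = tf.transpose(x, perm=[2, 0, 1, 3])
print(y.shape)  # (4, 2, 3, 1)
```

Note that tf.reshape only changes the shape metadata without reordering elements, so it is not interchangeable with tf.transpose when the axis order matters.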

