
Converting an array of size (n,n,m) to (None,n,n,m)

I am trying to reshape an array of size (14,14,3) to (None, 14, 14, 3). I have seen that the output of each layer in a convolutional neural network has a shape in the format (None, n, n, m).

Consider that the name of my array is arr.

I tried arr[None, :, :], but it converts it to a shape of (1, 14, 14, 3).

How should I do it?

https://www.tensorflow.org/api_docs/python/tf/TensorShape

A TensorShape represents a possibly-partial shape specification for a Tensor. It may be one of the following:

Partially-known shape: has a known number of dimensions, and an unknown size for one or more dimensions, e.g. TensorShape([None, 256]).
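For illustration, a minimal sketch (assuming TensorFlow 2.x is available as tf) of how such a partially-known shape behaves:

    import tensorflow as tf

    partial = tf.TensorShape([None, 256])   # first dimension unknown
    full = tf.TensorShape([32, 256])        # every dimension known

    print(partial)                          # (None, 256)
    print(partial.is_fully_defined())       # False
    print(full.is_fully_defined())          # True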

That is not possible in numpy. All dimensions of an ndarray are known.

The arr[None, :, :] notation adds a new size-1 dimension, giving (1, 14, 14, 3). Under broadcasting rules, such a dimension may be stretched to match a dimension of another array. In that sense we often treat None as a flexible dimension.
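A minimal sketch of that behaviour in numpy (the zero-filled arr here is just a stand-in for the question's (14,14,3) data):

    import numpy as np

    arr = np.zeros((14, 14, 3))        # stand-in for the (14,14,3) array
    batched = arr[None, :, :]          # same as arr[np.newaxis, ...]
    print(batched.shape)               # (1, 14, 14, 3) -- the new axis has size 1, not None

    # Under broadcasting, that size-1 axis stretches to match another array:
    four = np.zeros((4, 14, 14, 3))
    print((batched + four).shape)      # (4, 14, 14, 3)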


I haven't worked with tensorflow, though I see a lot of questions with both tags. tensorflow should have mechanisms for transferring values to and from tensors. It knows about numpy, but numpy does not 'know' anything about tensorflow.

An ndarray is an object with known values, and its shape is used to access those values in a multidimensional way. In contrast, a tensor does not have values:

https://www.tensorflow.org/api_docs/python/tf/Tensor

It does not hold the values of that operation's output, but instead provides a means of computing those values

Looks like you can create a TensorProto from an array (and return an array from one as well):

https://www.tensorflow.org/api_docs/python/tf/make_tensor_proto

and to make a Tensor from an array:

https://www.tensorflow.org/api_docs/python/tf/convert_to_tensor
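A minimal sketch of both round trips, assuming TensorFlow 2.x and a zero-filled numpy array standing in for the question's (14, 14, 3) data:

    import numpy as np
    import tensorflow as tf

    arr = np.zeros((14, 14, 3), dtype=np.float32)

    # numpy array -> Tensor (an eager tensor with a fully known shape)
    t = tf.convert_to_tensor(arr)
    print(t.shape)                     # (14, 14, 3)

    # numpy array -> TensorProto and back to an ndarray
    proto = tf.make_tensor_proto(arr)
    back = tf.make_ndarray(proto)
    print(back.shape)                  # (14, 14, 3)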

The shape (None, 14, 14, 3) represents (batch_size, imgH, imgW, imgChannel); imgH and imgW can be used interchangeably, depending on the network and the problem. The batch size is given as "None" in the neural network because we don't want to restrict it to some specific value, since the batch size depends on a lot of factors, like the memory available for the model to run, etc.
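A minimal sketch with the Keras functional API (the layer choice here is only illustrative): the None batch dimension appears automatically once you declare the per-sample shape, so you never reshape your own array to (None, 14, 14, 3):

    import tensorflow as tf

    inputs = tf.keras.Input(shape=(14, 14, 3))           # per-sample shape only
    outputs = tf.keras.layers.Conv2D(8, 3, padding="same")(inputs)
    model = tf.keras.Model(inputs, outputs)
    model.summary()                                       # output shapes print as (None, 14, 14, 8), etc.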

So let's say you have 4 images of size 14x14x3; you can append each image into an array, say L1, and L1 will have the shape 4x14x14x3, i.e. you made a batch of 4 images that you can now feed to your neural network.

NOTE: here None will be replaced by 4, and for the whole training process it will be 4. Similarly, when you feed your network only one image, it assumes a batch size of 1 and sets None equal to 1, giving you the shape (1x14x14x3).
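A minimal sketch of building that batch with numpy (the random images are just stand-ins):

    import numpy as np

    images = [np.random.rand(14, 14, 3) for _ in range(4)]
    L1 = np.stack(images)              # shape (4, 14, 14, 3): None becomes 4
    print(L1.shape)

    single = images[0][None, ...]      # shape (1, 14, 14, 3): None becomes 1
    print(single.shape)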
