Stacking arrays of multi-dimensional arrays in Python
I can't really wrap my head around this... and I'm not sure if stacking is the right term to use here.
A.shape = (28,28,1)
B.shape = (28,28,1)
If I want to merge/add/stack these arrays into this format:
C.shape = (2,28,28,1)
How do I do this? And is there a `+=`-style version of this, so that I can add new arrays of shape (28,28,1) into the existing stack to get (3,28,28,1)?
EDIT
I have an array of 100 grayscale images: (100, 784), which I guess I can reshape to (100,28,28,1) with tf.reshape.
I want to standardize all pixel values of the 100 images with tf.image.per_image_standardization ( doc ), but this function accepts only an input of shape (h,w,ch), i.e. (28,28,1).
Any suggestions on how to optimize this?
CODE
for i in range(epochs):
    for j in range(samples // batch_size):  # integer division so range() gets an int
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)  # (100, 784)
        batch_xsr = tf.reshape(batch_xs, [-1, 28, 28, 1])  # (100, 28, 28, 1)
        ...
        # somehow use tf.image.per_image_standardization (input shape =
        # (28,28,1)) on each of the 100 images, and end up with
        # shape (100,28,28,1) again.
        ...
        _, loss = sess.run([train, loss_op], feed_dict={x: batch_xs, y: batch_ys})
Note to self: TensorFlow needs np.array in the feed dict.
You could go like this...
import numpy as np
A = np.zeros(shape=(28, 28, 1))
B = np.zeros(shape=(28, 28, 1))
A.shape # (28, 28, 1)
B.shape # (28, 28, 1)
C = np.array([A, B])
C.shape # (2, 28, 28, 1)
Then use this to add more, assuming `new` here has the same shape as A or B.
def add_another(C, new):
    return np.array(list(C) + [new])
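For instance, appending a third array this way (a usage sketch, not part of the original answer):

```python
import numpy as np

def add_another(C, new):
    return np.array(list(C) + [new])

A = np.zeros((28, 28, 1))
B = np.zeros((28, 28, 1))
C = np.array([A, B])                    # (2, 28, 28, 1)
C = add_another(C, np.ones((28, 28, 1)))
print(C.shape)  # (3, 28, 28, 1)
```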
You can use numpy's functions stack and concatenate:
import numpy as np
A = np.zeros((28, 28, 1))
B = np.zeros((28, 28, 1))
C = np.stack((A, B), axis=0)
print (C.shape)
>>> (2L, 28L, 28L, 1L)
Append further arrays of shape (28, 28, 1) to an array of shape (x, 28, 28, 1) by concatenating along axis=0:
D = np.ones((28,28,1))
C = np.concatenate([C, [D]], axis=0)
#C = np.append(C, [D], axis=0)  # equivalent, using np.append, which is a wrapper around np.concatenate
print (C.shape)
>>> (3L, 28L, 28L, 1L)
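Note that each concatenate copies the whole array, so if you add many arrays one at a time it is usually faster to collect them in a Python list and stack once at the end (a sketch, not part of the original answer):

```python
import numpy as np

# Collect individual (28, 28, 1) arrays in a plain list...
frames = [np.zeros((28, 28, 1)) for _ in range(100)]

# ...then build the (100, 28, 28, 1) array with a single copy.
stacked = np.stack(frames, axis=0)
print(stacked.shape)  # (100, 28, 28, 1)
```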
EDIT
I'm not familiar with TensorFlow, but try this to standardize your images:
for i in range(epochs):
    for j in range(samples // batch_size):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)  # (100, 784)
        batch_xsr = tf.reshape(batch_xs, [-1, 28, 28, 1])  # (100, 28, 28, 1)
        # Tensors don't support item assignment, so standardize each image
        # separately and stack the results instead of writing back into batch_xsr:
        standardized = [tf.image.per_image_standardization(batch_xsr[k])
                        for k in range(batch_size)]
        batch_xsr = tf.stack(standardized, axis=0)  # (100, 28, 28, 1)
        _, loss = sess.run([train, loss_op], feed_dict={x: batch_xs, y: batch_ys})
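Since the feed dict ultimately needs NumPy arrays (see the note above), an alternative is to standardize in NumPy before feeding. This sketch (my addition, not from the original answer) mimics what tf.image.per_image_standardization does: subtract each image's mean and divide by its adjusted stddev, max(stddev, 1/sqrt(num_pixels)):

```python
import numpy as np

def per_image_standardization_np(images):
    """Standardize each image in an (N, H, W, C) batch to zero mean and
    (adjusted) unit stddev, mimicking tf.image.per_image_standardization."""
    images = images.astype(np.float64)
    num_pixels = np.prod(images.shape[1:])
    mean = images.mean(axis=(1, 2, 3), keepdims=True)
    # Floor the stddev at 1/sqrt(num_pixels) to avoid dividing by ~0
    # for near-constant images.
    adjusted_std = np.maximum(images.std(axis=(1, 2, 3), keepdims=True),
                              1.0 / np.sqrt(num_pixels))
    return (images - mean) / adjusted_std

batch = np.random.rand(100, 28, 28, 1)   # stand-in for batch_xsr
standardized = per_image_standardization_np(batch)
print(standardized.shape)  # (100, 28, 28, 1)
```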