How to convert a tensor to a numpy array
I am a beginner with TensorFlow. I built a simple autoencoder with some help, and now I want to convert the final `decoded` tensor to a numpy array. I tried `.eval()` but could not get it to work. How do I convert the tensor to numpy?
My input image is 512 * 512 * 1, stored in a raw image format.
import numpy as np
import tensorflow as tf

# input
image_size = 512
hidden = 256
input_image = np.fromfile('PATH', np.float32)

# Variables
x_placeholder = tf.placeholder("float", (image_size * image_size))
x = tf.reshape(x_placeholder, [image_size * image_size, 1])
w_enc = tf.Variable(tf.random_normal([hidden, image_size * image_size], mean=0.0, stddev=0.05))
w_dec = tf.Variable(tf.random_normal([image_size * image_size, hidden], mean=0.0, stddev=0.05))
b_enc = tf.Variable(tf.zeros([hidden, 1]))
b_dec = tf.Variable(tf.zeros([image_size * image_size, 1]))

# model
encoded = tf.sigmoid(tf.matmul(w_enc, x) + b_enc)
decoded = tf.sigmoid(tf.matmul(w_dec, encoded) + b_dec)

# Cost function
cross_entropy = -1. * x * tf.log(decoded) - (1. - x) * tf.log(1. - decoded)
loss = tf.reduce_mean(cross_entropy)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss)

# Train
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('Training...')
    for _ in range(10):
        loss_val, _ = sess.run([loss, train_step], feed_dict={x_placeholder: input_image})
        print(loss_val)
You can add `decoded` to the list of tensors fetched by `sess.run()`, as shown below. `decoded_val` will then be a numpy array, which you can reshape to recover the original image shape.
Alternatively, you can call `sess.run()` once outside the training loop to fetch only the final decoded image.
import tensorflow as tf
import numpy as np

tf.reset_default_graph()

# load_image
image_size = 16
k = 64
temp = np.zeros((image_size, image_size))

# Variables
x_placeholder = tf.placeholder("float", (image_size, image_size))
x = tf.reshape(x_placeholder, [image_size * image_size, 1])
w_enc = tf.Variable(tf.random_normal([k, image_size * image_size], mean=0.0, stddev=0.05))
w_dec = tf.Variable(tf.random_normal([image_size * image_size, k], mean=0.0, stddev=0.05))
b_enc = tf.Variable(tf.zeros([k, 1]))
b_dec = tf.Variable(tf.zeros([image_size * image_size, 1]))

# model
encoded = tf.sigmoid(tf.matmul(w_enc, x) + b_enc)
decoded = tf.sigmoid(tf.matmul(w_dec, encoded) + b_dec)

# Cost function
cross_entropy = -1. * x * tf.log(decoded) - (1. - x) * tf.log(1. - decoded)
loss = tf.reduce_mean(cross_entropy)
train_step = tf.train.AdagradOptimizer(0.1).minimize(loss)

# Train
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print('Training...')
    for _ in range(10):
        # decoded is fetched alongside loss; decoded_val is a numpy array
        loss_val, decoded_val, _ = sess.run([loss, decoded, train_step],
                                            feed_dict={x_placeholder: temp})
        print(loss_val)
    print('Done!')
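As a minimal sketch of the second approach (fetching outside the loop), the graph below is a simplified stand-in for the decoder, not the full autoencoder above; it is written against the TensorFlow 1.x API via `tf.compat.v1` so it also runs under TensorFlow 2. The key point is that `sess.run()` on any tensor returns a plain `numpy.ndarray`, and `.eval()` only works with an active default session and the same `feed_dict`:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF 1.x-style API; plain `tensorflow` on TF 1.x

tf.disable_eager_execution()
tf.reset_default_graph()

image_size = 16
x_placeholder = tf.placeholder("float", (image_size, image_size))
x = tf.reshape(x_placeholder, [image_size * image_size, 1])
decoded = tf.sigmoid(x)  # hypothetical stand-in for the decoder output above

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop would run here ...
    input_image = np.zeros((image_size, image_size), np.float32)

    # A single sess.run after training returns a numpy array
    decoded_val = sess.run(decoded, feed_dict={x_placeholder: input_image})
    image = decoded_val.reshape(image_size, image_size)  # back to image shape

    # .eval() is equivalent, but needs the default session and the feed_dict
    decoded_val2 = decoded.eval(feed_dict={x_placeholder: input_image})

    print(type(decoded_val).__name__, image.shape)
```

If `.eval()` was failing for you, the usual cause is calling it without an enclosing `tf.Session()` block or without supplying the `feed_dict` for the placeholder.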