
Autoencoder not learning while training

Python 3.5.2, TensorFlow 1.0.0

I am somewhat new to programming autoencoders. I am trying to implement a simple network to familiarize myself, starting from here. I have used the same input data, which a CNN is able to classify with 98% accuracy. My data has 2000 rows, and each row is a signal. I am trying a stacked autoencoder with 3 layers of 512, 256, and 64 nodes.

import math
import numpy as np
import tensorflow as tf

class dimensions:
    input_width, input_height = 1, 1024
    BATCH_SIZE = 50
    layer = [input_width * input_height, 512, 256, 64]
    learningrate = 0.001

def myencoder(x, corrupt_prob, dimensions):
    # Corrupt the input with probability corrupt_prob (denoising autoencoder)
    current_input = corrupt(x) * corrupt_prob + x * (1 - corrupt_prob)
    encoder = []
    for layer_i, n_output in enumerate(dimensions.layer[1:]):
        n_input = int(current_input.get_shape()[1])
        W = tf.Variable(
            tf.random_uniform([n_input, n_output],
                              -1.0 / math.sqrt(n_input),
                              1.0 / math.sqrt(n_input)))
        b = tf.Variable(tf.zeros([n_output]))
        encoder.append(W)
        output = tf.nn.tanh(tf.matmul(current_input, W) + b)
        current_input = output

    # latent representation
    z = current_input
    encoder.reverse()
    # Build the decoder using the same (transposed) weights
    for layer_i, n_output in enumerate(dimensions.layer[:-1][::-1]):
        W = tf.transpose(encoder[layer_i])
        b = tf.Variable(tf.zeros([n_output]))
        output = tf.nn.tanh(tf.matmul(current_input, W) + b)
        current_input = output

    # now we have the reconstruction through the network
    y = current_input
    # the cost function measures the pixel-wise difference
    cost = tf.sqrt(tf.reduce_mean(tf.square(y - x)))

    return z, y, cost
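The helper corrupt is not shown in the question. In the tutorial this code follows, corruption is done by multiplying the input with a random 0/1 mask; a minimal sketch along those lines (the asker's actual implementation is an assumption):

def corrupt(x):
    # Zero out roughly half of the input elements by multiplying x
    # with a random 0/1 mask (simple masking noise; an assumed
    # stand-in for the asker's corrupt helper).
    mask = tf.cast(tf.random_uniform(shape=tf.shape(x),
                                     minval=0, maxval=2,
                                     dtype=tf.int32), tf.float32)
    return tf.multiply(x, mask)

Note that because corrupt_prob is fed as [1.0] below, the network always sees the fully corrupted input while the cost still compares against the clean x.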

sess = tf.Session()
model = dimensions()
data_train, data_test, label_train, label_test = load_data(Datainfo, folder)

x = tf.placeholder(tf.float32, [model.BATCH_SIZE, model.input_height * model.input_width])
corrupt_prob = tf.placeholder(tf.float32, [1])
z, y, cost = myencoder(x, corrupt_prob, dimensions)
train_step = tf.train.AdamOptimizer(model.learningrate).minimize(cost)
lossfun = np.zeros(STEPS)
sess.run(tf.global_variables_initializer())

for i in range(STEPS):
    train_data = batchdata(data_train, model.BATCH_SIZE)
    epoch_loss = 0
    for j in range(model.BATCH_SIZE):
        sess.run(train_step, feed_dict={x: train_data, corrupt_prob: [1.0]})
        c = sess.run(cost, feed_dict={x: train_data, corrupt_prob: [1.0]})
        epoch_loss += c
    lossfun[i] = epoch_loss
    print('Epoch', i, 'completed out of', STEPS, 'loss:', epoch_loss)

My loss function looks like this:

[figure: training loss curve — x axis: number of iterations, y axis: loss]

The loss doesn't decrease and the network doesn't learn anything. Any help appreciated!

In the function myencoder, the weight variables W and b are initialized in every training step.
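The snippet above does call myencoder only once, but if the full script rebuilds the graph or re-runs the initializer inside the training loop, every step would start from freshly initialized weights and the loss would never decrease. A minimal sketch of the structure the answer implies, reusing the question's names (batchdata, STEPS, data_train, and model are assumed to be defined as above):

# Build the graph exactly once: the variables W and b are created here.
x = tf.placeholder(tf.float32, [model.BATCH_SIZE, model.input_height * model.input_width])
corrupt_prob = tf.placeholder(tf.float32, [1])
z, y, cost = myencoder(x, corrupt_prob, dimensions)
train_step = tf.train.AdamOptimizer(model.learningrate).minimize(cost)

sess = tf.Session()
sess.run(tf.global_variables_initializer())  # initialize once, before the loop

for i in range(STEPS):
    train_data = batchdata(data_train, model.BATCH_SIZE)
    # Only run ops inside the loop; no graph construction and no
    # re-initialization here, so the weights persist across steps.
    _, c = sess.run([train_step, cost],
                    feed_dict={x: train_data, corrupt_prob: [1.0]})
    print('Epoch', i, 'loss:', c)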
