
Training a CNTK model in Python with np arrays as inputs

I have been trying to re-implement a simple classifier with CNTK. However, all the examples I have come across use a built-in reader with an input map, and my data would need heavy modification before it could be read that way, so I cannot use the data-loading approach most examples demonstrate. I came across code here that seemed to demonstrate training directly with np arrays, but it does not actually do any training.

A minimal working example showing the problem:

import cntk as C
import numpy as np
from cntk.ops import relu
from cntk.layers import Dense, Convolution2D

outputs = 10

input_var = C.input_variable((7, 19, 19), name='features')
label_var = C.input_variable((outputs))

epochs = 20
minibatchSize = 100

cc = C.layers.Convolution2D((3,3), 64, activation=relu)(input_var)
net = C.layers.Dense(outputs)(cc)

loss = C.cross_entropy_with_softmax(net, label_var)

learner = C.adam(net.parameters, 0.0018, 0.9, minibatch_size=minibatchSize)

progressPrinter = C.logging.ProgressPrinter(tag='Training', num_epochs=epochs)

for i in range(epochs):
    X = np.zeros((minibatchSize, 7, 19, 19), dtype=np.float32)
    Y = np.ones((minibatchSize, outputs), dtype=np.float32)

    train_summary = loss.train((X, Y), parameter_learners=[learner], callbacks=[progressPrinter])

Sample output:

Learning rate per 100 samples: 0.0018
Finished Epoch[1 of 20]: [Training] loss = 2.302410 * 100, metric = 0.00% * 100 0.835s (119.8 samples/s);
Finished Epoch[2 of 20]: [Training] loss = 0.000000 * 0, metric = 0.00% * 0 0.003s (  0.0 samples/s);
Finished Epoch[3 of 20]: [Training] loss = 0.000000 * 0, metric = 0.00% * 0 0.001s (  0.0 samples/s);

The reason this happens may be obvious, but I can't figure it out. Any ideas on how to fix this would be greatly appreciated!

It turns out the solution is actually quite simple: you can easily build the input feed dictionary yourself, without a reader. Here is the complete code that fixes the training problem:
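With real data (rather than the dummy arrays below), feeding without a reader usually also means slicing the full numpy arrays into minibatch-sized chunks yourself before wrapping each chunk in the feed dictionary. A minimal pure-numpy sketch; the `minibatches` helper name is my own, not part of CNTK:

```python
import numpy as np

def minibatches(X, Y, size):
    # Yield (features, labels) chunks of at most `size` samples each,
    # ready to be wrapped into {input_var: xb, label_var: yb} for
    # trainer.train_minibatch. The last chunk may be smaller.
    for start in range(0, len(X), size):
        yield X[start:start + size], Y[start:start + size]

# 250 samples split with minibatch size 100 -> chunks of 100, 100, 50
X = np.zeros((250, 7, 19, 19), dtype=np.float32)
Y = np.zeros((250, 10), dtype=np.float32)
chunks = list(minibatches(X, Y, 100))
```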

import cntk as C
import numpy as np
from cntk.ops import relu
from cntk.layers import Dense, Convolution2D

outputs = 10

input_var = C.input_variable((7, 19, 19), name='features')
label_var = C.input_variable((outputs))

epochs = 20
minibatchSize = 100

cc = C.layers.Convolution2D((3,3), 64, activation=relu)(input_var)
net = C.layers.Dense(outputs)(cc)

loss = C.cross_entropy_with_softmax(net, label_var)
pe = C.classification_error(net, label_var)    

learner = C.adam(net.parameters, 0.0018, 0.9, minibatch_size=minibatchSize)

progressPrinter = C.logging.ProgressPrinter(tag='Training', num_epochs=epochs)
trainer = C.Trainer(net, (loss, pe), learner, progressPrinter)    

for i in range(epochs):
    X = np.zeros((minibatchSize, 7, 19, 19), dtype=np.float32)
    Y = np.ones((minibatchSize, outputs), dtype=np.float32)

    trainer.train_minibatch({input_var : X, label_var : Y})

    trainer.summarize_training_progress()
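One detail worth noting: the dummy labels above are all-ones, which is not a valid one-hot encoding for `cross_entropy_with_softmax`. With real class ids you would build proper one-hot label arrays first; a minimal numpy sketch, where the `to_one_hot` helper name is my own:

```python
import numpy as np

def to_one_hot(class_ids, num_classes):
    # Build a (batch, num_classes) float32 one-hot matrix, the
    # label shape expected by cross_entropy_with_softmax above.
    onehot = np.zeros((len(class_ids), num_classes), dtype=np.float32)
    onehot[np.arange(len(class_ids)), class_ids] = 1.0
    return onehot

# three samples with class ids 3, 0 and 9 out of 10 classes
labels = to_one_hot([3, 0, 9], 10)
```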
