
CNTK Sequence model error: Different minibatch layouts detected

I am trying to train a model with CNTK that takes two input sequences and outputs a two-dimensional scalar label. I have defined the model like this:

def create_seq_model(num_tokens):
    with C.default_options(init=C.glorot_uniform()):
        i1 = sequence.input(shape=num_tokens, is_sparse=True, name='i1')
        i2 = sequence.input(shape=num_tokens, is_sparse=True, name='i2')
        s1 = Sequential([Embedding(300), Fold(GRU(64))])(i1)
        s2 = Sequential([Embedding(300), Fold(GRU(64))])(i2)
        combined = splice(s1, s2)
        model = Sequential([Dense(64, activation=sigmoid),
                            Dropout(0.1, seed=42),
                            Dense(2, activation=softmax)])
        return model(combined)

I have converted my data to CTF format. When I try to train it with the following snippet (very slightly modified from the example here), I get an error:

def train(reader, model, max_epochs=16):
    criterion = create_criterion_function(model)

    criterion.replace_placeholders({criterion.placeholders[0]: C.input(2, name='labels')})

    epoch_size = 500000
    minibatch_size=128

    lr_per_sample = [0.003]*4+[0.0015]*24+[0.0003]
    lr_per_minibatch= [x*minibatch_size for x in lr_per_sample]
    lr_schedule = learning_rate_schedule(lr_per_minibatch, UnitType.minibatch, epoch_size)

    momentum_as_time_constant = momentum_as_time_constant_schedule(700)

    learner = fsadagrad(criterion.parameters,
                        lr=lr_schedule, momentum=momentum_as_time_constant,
                        gradient_clipping_threshold_per_sample=15,
                        gradient_clipping_with_truncation=True)

    progress_printer = ProgressPrinter(freq=1000, first=10, tag='Training', num_epochs=max_epochs)

    trainer = Trainer(model, criterion, learner, progress_printer)

    log_number_of_parameters(model)

    t = 0
    for epoch in range(max_epochs):
        epoch_end = (epoch+1) * epoch_size
        while(t < epoch_end):
            data = reader.next_minibatch(minibatch_size, input_map={
                criterion.arguments[0]: reader.streams.i1,
                criterion.arguments[1]: reader.streams.i2,
                criterion.arguments[2]: reader.streams.labels
            })
            trainer.train_minibatch(data)
            t += data[criterion.arguments[1]].num_samples
        trainer.summarize_training_progress()

The error is:

Different minibatch layouts detected (difference in sequence lengths or count or start flags) in data specified for the Function's arguments 'Input('i2', [#, *], [132033])' vs. 'Input('i1', [#, *], [132033])', though these arguments have the same dynamic axes '[*, #]'

I have noticed that if I pick examples where the two input sequences have the same length, the training function works. Unfortunately, such examples account for only a very small fraction of the data. What is the correct mechanism for handling sequences of different lengths? Do I need to pad the inputs (similar to Keras's pad_sequences())?
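For context, padding in the style of Keras's pad_sequences() would mean right-padding every sequence in a batch to the longest length, so that all inputs share one layout. A minimal pure-Python sketch (pad_to_length is a hypothetical helper, not a CNTK or Keras API) might look like this:

```python
def pad_to_length(sequences, pad_value=0):
    """Right-pad each sequence with pad_value so all share the longest length."""
    max_len = max(len(s) for s in sequences)
    return [s + [pad_value] * (max_len - len(s)) for s in sequences]

# Example: three token-id sequences of lengths 3, 2, and 4.
batch = [[3, 1, 4], [1, 5], [9, 2, 6, 5]]
padded = pad_to_length(batch)
# All sequences in `padded` now have length 4.
```

Padding works but wastes computation on the pad positions; as the answer explains, CNTK can instead handle variable-length sequences natively through dynamic axes.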

The two sequences i1 and i2 are unexpectedly being treated as having the same length. This is because the sequence_axis parameter of sequence.input(...) defaults to default_dynamic_axis(), so both inputs share one dynamic axis. One way to fix this is to tell CNTK that the two sequences can have different lengths by giving each input its own unique dynamic axis, like this:

i1_axis = C.Axis.new_unique_dynamic_axis('1')
i2_axis = C.Axis.new_unique_dynamic_axis('2')
i1 = sequence.input(shape=num_tokens, is_sparse=True, sequence_axis=i1_axis, name='i1')
i2 = sequence.input(shape=num_tokens, is_sparse=True, sequence_axis=i2_axis, name='i2')

