
CTC loss goes down and stops

I am trying to train a captcha recognition model. The model is a ResNet-pretrained CNN backbone + bidirectional LSTM + a fully connected layer. It reached 90% sequence accuracy on captchas generated by the Python library captcha. The problem is that these generated captchas seem to place each character at a similar position. When I randomly added spaces between the characters, the model stopped working, so I wondered whether the LSTM learns segmentation while it learns recognition. Then I tried CTC loss. At first, the loss went down quickly, but afterwards it stayed at around 16 with no significant decrease. I tried different numbers of LSTM layers and different numbers of units. Two LSTM layers reached a lower loss but still did not converge; three layers behaved like two. The loss curve:

[image: training loss curve]

#encoding:utf8
import os
import sys
import torch
import warpctc_pytorch
import traceback

import torchvision
from torch import nn, autograd, FloatTensor, optim
from torch.nn import functional as F
from torch.utils.data import DataLoader
from torch.optim.lr_scheduler import MultiStepLR
from tensorboard import SummaryWriter
from pprint import pprint

from net.utils import decoder

from logging import getLogger, StreamHandler
logger = getLogger(__name__)
handler = StreamHandler(sys.stdout)
logger.addHandler(handler)

from dataset_util.utils import id_to_character
from dataset_util.transform import rescale, normalizer
from config.config import MAX_CAPTCHA_LENGTH, TENSORBOARD_LOG_PATH, MODEL_PATH


class CNN_RNN(nn.Module):
    def __init__(self, lstm_bidirectional=True, use_ctc=True, *args, **kwargs):
        super(CNN_RNN, self).__init__(*args, **kwargs)
        model_conv = torchvision.models.resnet18(pretrained=True)
        for param in model_conv.parameters():
            param.requires_grad = False

        modules = list(model_conv.children())[:-1]  # delete the last fc layer.
        for param in modules[8].parameters():
            param.requires_grad = True

        self.resnet = nn.Sequential(*modules)            # CNN with fixed parameters from resnet as feature extractor
        self.lstm_input_size = 512 * 2 * 2
        self.lstm_hidden_state_size = 512
        self.lstm_num_layers = 2
        self.character_space_length = 64
        self._lstm_bidirectional = lstm_bidirectional
        self._use_ctc = use_ctc
        if use_ctc:
            self._max_captcha_length = int(MAX_CAPTCHA_LENGTH * 2)
        else:
            self._max_captcha_length = MAX_CAPTCHA_LENGTH

        if lstm_bidirectional:
            self.lstm_hidden_state_size = self.lstm_hidden_state_size * 2           # double so that the per-direction hidden size of the bidirectional lstm matches the vanilla lstm
            self.lstm = nn.LSTM(self.lstm_input_size, self.lstm_hidden_state_size // 2, dropout=0.5, bidirectional=True, num_layers=self.lstm_num_layers)
        else:
            self.lstm = nn.LSTM(self.lstm_input_size, self.lstm_hidden_state_size, dropout=0.5, bidirectional=False, num_layers=self.lstm_num_layers)  # dropout doesn't apply to a single-layer lstm

        self.output_to_tag = nn.Linear(self.lstm_hidden_state_size, self.character_space_length)
        self.tensorboard_writer = SummaryWriter(TENSORBOARD_LOG_PATH)
        # self.dropout_lstm = nn.Dropout()


    def init_hidden_status(self, batch_size):
        if self._lstm_bidirectional:
            self.hidden = (autograd.Variable(torch.zeros((self.lstm_num_layers * 2, batch_size, self.lstm_hidden_state_size // 2))),
                           autograd.Variable(torch.zeros((self.lstm_num_layers * 2, batch_size, self.lstm_hidden_state_size // 2)))) # number of layers, batch size, hidden dimension
        else:
            self.hidden = (autograd.Variable(torch.zeros((self.lstm_num_layers, batch_size, self.lstm_hidden_state_size))),
                           autograd.Variable(torch.zeros((self.lstm_num_layers, batch_size, self.lstm_hidden_state_size)))) # number of layers, batch size, hidden dimension


    def forward(self, image):
        '''
        :param image:  # batch_size, CHANNEL, HEIGHT, WIDTH
        :return:
        '''
        features = self.resnet(image)                 # [batch_size, 512, 2, 2]
        batch_size = image.shape[0]
        features = [features.view(batch_size, -1) for i in range(self._max_captcha_length)]   # repeat the same flattened CNN feature at every time step
        features = torch.stack(features)
        self.init_hidden_status(batch_size)
        output, hidden = self.lstm(features, self.hidden)
        # output = self.dropout_lstm(output)
        tag_space = self.output_to_tag(output.view(-1, output.size(2)))      # [MAX_CAPTCHA_LENGTH * BATCH_SIZE, CHARACTER_SPACE_LENGTH]
        tag_space = tag_space.view(self._max_captcha_length, batch_size, -1)

        if not self._use_ctc:
            tag_score = F.log_softmax(tag_space, dim=2)             # [MAX_CAPTCHA_LENGTH, BATCH_SIZE, CHARACTER_SPACE_LENGTH]
        else:
            tag_score = tag_space

        return tag_score


    def train_net(self, data_loader, eval_data_loader=None, learning_rate=0.008, epoch_num=400):
        try:
            if self._use_ctc:
                loss_function = warpctc_pytorch.CTCLoss()
            else:
                loss_function = nn.NLLLoss()

            # optimizer = optim.SGD(filter(lambda p: p.requires_grad, self.parameters()), momentum=0.9, lr=learning_rate)
            # optimizer = MultiStepLR(optimizer, milestones=[10,15], gamma=0.5)

            # optimizer = optim.Adadelta(filter(lambda p: p.requires_grad, self.parameters()))
            optimizer = optim.Adam(filter(lambda p: p.requires_grad, self.parameters()))
            self.tensorboard_writer.add_scalar("learning_rate", learning_rate)

            tensorboard_global_step = 0
            if os.path.exists(os.path.join(TENSORBOARD_LOG_PATH, "resume_step")):
                with open(os.path.join(TENSORBOARD_LOG_PATH, "resume_step"), "r") as file_handler:
                    tensorboard_global_step = int(file_handler.read()) + 1

            for epoch_index, epoch in enumerate(range(epoch_num)):
                for index, sample in enumerate(data_loader):
                    optimizer.zero_grad()
                    input_image = autograd.Variable(sample["image"])        # batch_size, 3, 255, 255
                    tag_score = self.forward(input_image)

                    if self._use_ctc:
                        tag_score, target, tag_score_sizes, target_sizes = self._loss_preprocess_ctc(tag_score, sample)
                        loss = loss_function(tag_score, target, tag_score_sizes, target_sizes)
                        loss = loss / tag_score.size(1)

                    else:
                        target = sample["padded_label_idx"]
                        tag_score, target = self._loss_preprocess(tag_score, target)
                        loss = loss_function(tag_score, target)

                    print("Training loss: {}".format(float(loss)))
                    self.tensorboard_writer.add_scalar("training_loss", float(loss), tensorbard_global_step)
                    loss.backward()
                    optimizer.step()

                    if index % 250 == 0:
                        print(u"Processing batch: {} of {}, epoch: {}".format(index, len(data_loader), epoch_index))
                        self.evaluate(eval_data_loader, loss_function, tensorboard_global_step)

                    tensorboard_global_step += 1

                self.save_model(MODEL_PATH + "_epoch_{}".format(epoch_index))

        except KeyboardInterrupt:
            print("Exit for KeyboardInterrupt, save model")
            self.save_model(MODEL_PATH)

            with open(os.path.join(TENSORBOARD_LOG_PATH, "resume_step"), "w") as file_handler:
                file_handler.write(str(tensorboard_global_step))

        except Exception as excp:
            logger.error(str(excp))
            logger.error(traceback.format_exc())


    def predict(self, image):
        # TODO ctc version
        '''
        :param image: [batch_size, channel, height, width]
        :return:
        '''
        tag_score = self.forward(image)
        # TODO ctc
        # if self._use_ctc:
        #     tag_score = F.softmax(tag_score, dim=-1)
        #     decoder.decode(tag_score)

        confidence_log_probability, indexes = tag_score.max(2)

        predicted_labels = []
        for batch_index in range(indexes.size(1)):
            label = ""
            for character_index in range(self._max_captcha_length):
                if int(indexes[character_index, batch_index]) != 1:
                    label += id_to_character[int(indexes[character_index, batch_index])]
            predicted_labels.append(label)

        return predicted_labels, tag_score


    def predict_pil_image(self, pil_image):
        try:
            self.eval()
            processed_image = normalizer(rescale({"image": pil_image}))["image"].view(1, 3, 255, 255)
            result, tag_score = self.predict(processed_image)
            self.train()

        except Exception as excp:
            logger.error(str(excp))
            logger.error(traceback.format_exc())
            return [""], None

        return result, tag_score


    def evaluate(self, eval_dataloader, loss_function, step=0):
        total = 0
        sequence_correct = 0
        character_correct = 0
        character_total = 0
        loss_total = 0
        batch_size = eval_dataloader.batch_size     # use the parameter name, not the global from __main__
        true_predicted = {}
        self.eval()
        for sample in eval_dataloader:
            total += batch_size
            input_images = sample["image"]
            predicted_labels, tag_score = self.predict(input_images)

            for predicted, true_label in zip(predicted_labels, sample["label"]):
                if predicted == true_label:                  # dataloader is making label a list, use batch_size=1
                    sequence_correct += 1

                for index, true_character in enumerate(true_label):
                    character_total += 1
                    if index < len(predicted) and predicted[index] == true_character:
                        character_correct += 1

                true_predicted[true_label] = predicted

            if self._use_ctc:
                tag_score, target, tag_score_sizes, target_sizes = self._loss_preprocess_ctc(tag_score, sample)
                loss_total += float(loss_function(tag_score, target, tag_score_sizes, target_sizes) / batch_size)

            else:
                tag_score, target = self._loss_preprocess(tag_score, sample["padded_label_idx"])
                loss_total += float(loss_function(tag_score, target))  # averaged over batch index

        print("True captcha to predicted captcha: ")
        pprint(true_predicted)
        self.tensorboard_writer.add_text("eval_ture_to_predicted", str(true_predicted), global_step=step)

        accuracy = float(sequence_correct) / total
        avg_loss = float(loss_total) / (total / batch_size)
        character_accuracy = float(character_correct) / character_total
        self.tensorboard_writer.add_scalar("eval_sequence_accuracy", accuracy, global_step=step)
        self.tensorboard_writer.add_scalar("eval_character_accuracy", character_accuracy, global_step=step)
        self.tensorboard_writer.add_scalar("eval_loss", avg_loss, global_step=step)
        self.zero_grad()
        self.train()


    def _loss_preprocess(self, tag_score, target):
        '''
        :param tag_score:  value return by self.forward
        :param target:     sample["padded_label_idx"]
        :return:           (processed_tag_score, processed_target)  ready for the NLLLoss function
        '''
        target = target.transpose(0, 1)
        target = target.contiguous()
        target = target.view(target.size(0) * target.size(1))
        tag_score = tag_score.view(-1, self.character_space_length)

        return tag_score, target


    def _loss_preprocess_ctc(self, tag_score, sample):
        target_2d = [
            [int(ele) for ele in sample["padded_label_idx"][row, :] if int(ele) != 0 and int(ele) != 1]
            for row in range(sample["padded_label_idx"].size(0))]
        target = []
        for ele in target_2d:
            target.extend(ele)
        target = autograd.Variable(torch.IntTensor(target))

        # tag_score = F.softmax(F.sigmoid(tag_score), dim=-1)
        tag_score_sizes = autograd.Variable(torch.IntTensor([self._max_captcha_length] * tag_score.size(1)))
        target_sizes = autograd.Variable(sample["captcha_length"].int())

        return tag_score, target, tag_score_sizes, target_sizes


    # def visualize_graph(self, dataset):
    #     '''Since pytorch use dynamic graph, an input is required to visualize graph in tensorboard'''
    #     # warning: Do not run this, the graph is too large to visualize...
    #     sample = dataset[0]
    #     input_image = autograd.Variable(sample["image"].view(1, 3, 255, 255))
    #     tag_score = self.forward(input_image)
    #     self.tensorboard_writer.add_graph(self, tag_score)


    def save_model(self, model_path):
        self.tensorboard_writer.close()
        self.tensorboard_writer = None          # can't be pickled
        torch.save(self, model_path)
        self.tensorboard_writer = SummaryWriter(TENSORBOARD_LOG_PATH)


    @classmethod
    def load_model(cls, model_path=MODEL_PATH, *args, **kwargs):
        net = cls(*args, **kwargs)
        if os.path.exists(model_path):
            model = torch.load(model_path)
            if model:
                model.tensorboard_writer = SummaryWriter(TENSORBOARD_LOG_PATH)
                net = model

        return net


    def __del__(self):
        if self.tensorboard_writer:
            self.tensorboard_writer.close()


if __name__ == "__main__":
    from dataset_util.dataset import dataset, eval_dataset
    data_loader = DataLoader(dataset, batch_size=2, shuffle=True)
    eval_data_loader = DataLoader(eval_dataset, batch_size=2, shuffle=True)

    net = CNN_RNN.load_model()

    net.train_net(data_loader, eval_data_loader=eval_data_loader)
    # net.predict(dataset[0]["image"].view(1, 3, 255, 255))

    # predict_pil_image test code
    # from config.config import IMAGE_PATHS
    # import glob
    # from PIL import Image
    #
    # image_paths = glob.glob(os.path.join(IMAGE_PATHS.get("EVAL"), "*.png"))
    # for image_path in image_paths:
    #     pil_image = Image.open(image_path)
    #     predicted, score = net.predict_pil_image(pil_image)
    #     print("True value: {}, predicted: {}".format(os.path.split(image_path)[1], predicted))

    print("Done")

The code above is the main part. If you need other components to make it run, please leave a comment. I have been stuck here for a long time; any advice on training a CRNN + CTC is appreciated.
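As an aside for readers on newer PyTorch: since version 1.0, CTC is built in as torch.nn.CTCLoss, so warpctc_pytorch is no longer required. Below is a minimal sketch with illustrative shapes (T time steps, N batch, C character-space size); note that unlike warp-ctc, which takes raw activations (hence the question's forward returning un-normalized tag_space when use_ctc is set), nn.CTCLoss expects log-probabilities, so the log_softmax call is mandatory.

import torch
from torch import nn

T, N, C = 12, 2, 64                                      # time steps, batch size, character-space size
logits = torch.randn(T, N, C, requires_grad=True)        # stand-in for the model output [T, N, C]
log_probs = logits.log_softmax(2)                        # nn.CTCLoss expects log-probabilities

targets = torch.randint(1, C, (N, 6), dtype=torch.long)  # label indices; index 0 is reserved for blank
input_lengths = torch.full((N,), T, dtype=torch.long)    # every output column is valid
target_lengths = torch.full((N,), 6, dtype=torch.long)   # true label lengths

ctc_loss = nn.CTCLoss(blank=0, reduction="mean")
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()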

You have several questions, so I will try to answer them one by one.

First, why does adding spaces to the captchas break the model?

A neural network learns to handle the data it is trained on. If you change the distribution of the data (for example by adding spaces between characters), there is no guarantee that the network will generalize, as you hint at in your question. The captchas you train on probably always have the characters in the same positions, or at the same distance from one another, so your model learns that and exploits it by looking only at those positions. If you want your network to generalize to a specific scenario, you should explicitly train on that scenario. So in your case, you should add random spaces during training as well.
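To make that concrete, here is a minimal sketch of one way to generate training captchas with randomized gaps between characters, by drawing each character at a jittered x-offset with Pillow (textlength needs Pillow >= 8.0). The font path and size ranges are illustrative assumptions, not from the original post.

import random
from PIL import Image, ImageDraw, ImageFont

def random_gap_captcha(text, width=255, height=255,
                       font_path="DejaVuSans.ttf", font_size=48):
    """Render `text` with a random gap after each character, so that
    character positions vary from sample to sample."""
    image = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(image)
    font = ImageFont.truetype(font_path, font_size)
    x = random.randint(5, 20)                     # random left margin
    y = (height - font_size) // 2
    for character in text:
        draw.text((x, y), character, fill="black", font=font)
        x += int(draw.textlength(character, font=font)) + random.randint(0, 30)
    return image

# mix such samples into the training set alongside the library-generated ones
augmented = random_gap_captcha("3QxF")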

Second, why does the loss not go below 16?

Clearly, since your training loss is also stuck at 16 (like your validation loss), the problem is that your model simply does not have the capacity to deal with the complexity of the problem. In other words, your model is underfitting. You had the right reflex in trying to increase the capacity of your network. You tried increasing the capacity of the LSTM and it didn't help. Thus, the next logical conclusion is that the convolutional part of your network is not powerful enough. So here are a few things you might want to try, from what I think is most likely to succeed to least likely:

  1. Make the convnet trainable: I notice you are using a pretrained convnet without fine-tuning its weights. That could be a problem: whatever your convnet was originally trained on, it may not develop the features needed to deal with captchas. You should try learning the convnet's weights as well, so that it develops features useful for captchas (see the sketch after this list).

  2. Use a deeper convnet: this is the naive approach. Your convnet does not produce good enough features, so try a more powerful, deeper one. (But you should definitely try this only after making the convnet trainable.)
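A minimal sketch of the first suggestion, against the CNN_RNN class from the question: unfreeze the whole ResNet backbone and fine-tune it with a smaller learning rate than the freshly initialized layers (both learning rates here are illustrative assumptions).

from torch import optim

net = CNN_RNN()

# unfreeze the full ResNet backbone instead of only its last block
for param in net.resnet.parameters():
    param.requires_grad = True

# fine-tune the pretrained backbone more gently than the new layers
optimizer = optim.Adam([
    {"params": net.resnet.parameters(), "lr": 1e-4},   # pretrained backbone
    {"params": net.lstm.parameters()},                 # new layers use the default lr below
    {"params": net.output_to_tag.parameters()},
], lr=1e-3)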

In my experience, training an RNN model with CTC loss is not an easy task; the model may not converge at all if the training setup is not done carefully. Here are my suggestions:

  1. Watch the CTC loss output during training. For a model that will converge, the per-batch CTC loss fluctuates noticeably. If you observe the CTC loss shrinking almost monotonically to a stable value, the model is most likely stuck in a local minimum.
  2. Pretrain your model on short samples. Even though we have advanced RNN structures like LSTM and GRU, it is still hard to backpropagate through an RNN over many time steps (a curriculum sketch follows this list).
  3. Enlarge the variety of your samples. You can even add artificial samples to help your model escape the local minimum.
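A minimal sketch of the pretraining idea from point 2, assuming (as in the question's code) that dataset samples carry a "label" string and that train_net can be called in stages; the length threshold and epoch counts are illustrative.

from torch.utils.data import DataLoader, Subset

def subset_by_max_label_length(dataset, max_len):
    """Keep only samples whose label is at most max_len characters long."""
    indices = [i for i in range(len(dataset))
               if len(dataset[i]["label"]) <= max_len]
    return Subset(dataset, indices)

# stage 1: pretrain on short sequences only
short_loader = DataLoader(subset_by_max_label_length(dataset, 3),
                          batch_size=2, shuffle=True)
net.train_net(short_loader, eval_data_loader=eval_data_loader, epoch_num=50)

# stage 2: continue on the full-length data
full_loader = DataLoader(dataset, batch_size=2, shuffle=True)
net.train_net(full_loader, eval_data_loader=eval_data_loader)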

FYI, we have just open-sourced a new deep learning framework, Dandelion, which has a built-in CTC objective and an interface very similar to pytorch. You can try your model with Dandelion and compare it with your current implementation.

I have been training with CTC loss and ran into the same problem. I know this is a rather late answer, but hopefully it will help someone else researching this issue. After trial and error and a lot of research, there are a few things worth knowing when training with CTC (assuming your model is set up correctly):

  1. The quickest way for the model to lower the cost is to predict only blanks. This is noted in a few papers and blog posts: see http://www.tbluche.com/ctc_and_blank.html (the decoder sketch after this list makes the effect visible).
  2. The model first learns to predict only blanks, and only then starts picking up the error signal for the correct underlying labels. This is also explained in the link above. In practice, I noticed that my model started learning the real underlying labels/targets after a couple of hundred epochs, and the loss started dropping dramatically again, similar to the toy example shown here: https://thomasmesnard.github.io/files/CTC_Poster_Mesnard_Auvolat.pdf
  3. These parameters have a big impact on whether your model converges: learning rate, batch size, and number of epochs.
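You can watch the blank-only phase from point 1 directly by greedily decoding the network output: while the model predicts only blanks, every decoded string comes out empty. Below is a minimal sketch of a greedy (best-path) CTC decoder, assuming blank index 0 and a [T, N, C] score tensor like the one the question's forward returns; id_to_char stands in for a mapping such as the question's id_to_character.

import torch

def greedy_ctc_decode(tag_score, id_to_char, blank=0):
    """Best-path decoding: argmax at each time step, collapse
    consecutive repeats, then drop blanks."""
    indexes = tag_score.argmax(dim=2)              # [T, N]
    decoded = []
    for n in range(indexes.size(1)):
        previous = blank
        characters = []
        for t in range(indexes.size(0)):
            idx = int(indexes[t, n])
            if idx != blank and idx != previous:   # collapse repeats, skip blanks
                characters.append(id_to_char[idx])
            previous = idx
        decoded.append("".join(characters))
    return decoded

# during the blank-only phase this returns ["", "", ...] for the whole batch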

