
Keras mixed model gives same result in every epoch

I have created a mixed model with text and image. When I train my model, I get the same results in every epoch. Below is my code.

import tensorflow as tf
import pandas as pd
import numpy as np

base_dir = "D:/Dataset/xxxx/datasets/xxx/xx/xxxxx/"

import os

train_dir = os.path.join(base_dir,"trin.jsonl")
test_dir = os.path.join(base_dir,"tst.jsonl")
dev_dir = os.path.join(base_dir,"dv.jsonl")

df_train = pd.read_json(train_dir,lines=True)
df_test = pd.read_json(test_dir,lines=True)
df_dev = pd.read_json(dev_dir,lines=True)

df_train=df_train.set_index('id')
df_dev=df_dev.set_index('id')
df_test=df_test.set_index('id')

from tensorflow.keras import optimizers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import re
import spacy
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

nlp = spacy.load('en_core_web_md')

train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

label_map = {1:"Hate",0:"No_Hate"}
df_dev['label']=df_dev['label'].map(label_map)
df_train['label']=df_train['label'].map(label_map)

train_generator = train_datagen.flow_from_dataframe(
    dataframe=df_train, directory=img_path, x_col="img", y_col="label",
    target_size=(224,224), batch_size=8500, class_mode="binary", shuffle=False)

def spacy_tokenizer(sentence):
    sentence = re.sub(r"[^a-zA-Z0-9]+"," ",sentence)
    sentence_list = [word.lemma_ for word in nlp(sentence) if not (word.is_space or word.is_stop or len(word)==1)]
    return ' '.join(sentence_list)
    
image_files = pd.Series(train_generator.filenames)
image_files = image_files.str.split('/', expand=True)[1].str[:-4]
image_files = list(map(int, image_files))

df_sorted = df_train.reindex(image_files)
df_sorted.head(1)

images,labels = next(train_generator)

tokenizer = Tokenizer(num_words=10000)

tokenizer.fit_on_texts(df_sorted['new_text'].values)
sequences = tokenizer.texts_to_sequences(df_sorted['new_text'].values)
train_padd = pad_sequences(sequences,maxlen=maxlen,padding='post',truncating='post')

from tensorflow.keras.models import Model
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras.layers import Embedding, Flatten, Dense
from tensorflow.keras.layers import Dense, LSTM, Embedding,Dropout,SpatialDropout1D,Conv1D,MaxPooling1D,GRU,BatchNormalization
from tensorflow.keras.layers import Input,Bidirectional,GlobalAveragePooling1D,GlobalMaxPooling1D,concatenate,LeakyReLU

def create_nlp():
    sequence_input=Input(shape=(maxlen,))
    embedding_layer=Embedding(input_dim=text_embedding.shape[0],
                              output_dim=text_embedding.shape[1],
                              weights=[text_embedding],
                              input_length=maxlen,
                              trainable=False)
    embedded_sequence = embedding_layer(sequence_input)
    l_conv_1=Conv1D(128,5,activation='relu')(embedded_sequence)
    l_pool_1=MaxPooling1D(5)(l_conv_1)
    l_conv_2=Conv1D(128,5,activation='relu')(l_pool_1)
    l_pool_2=MaxPooling1D(5)(l_conv_2)
    l_flat = Flatten()(l_pool_2)
    model=Model(sequence_input,l_flat)
    return model
    
    
from tensorflow.keras.applications import VGG16
from tensorflow.keras import optimizers

def create_img():
    img_input=Input(shape=(224,224,3))
    conv_base = VGG16(weights='imagenet',include_top=False,input_shape=(224, 224, 3))
    conv_base.trainable = False
    conv_l_1=conv_base(img_input)
    flat_l = Flatten()(conv_l_1)
    dense_l = Dense(256,activation='relu')(flat_l)
    model = Model(img_input,dense_l)
    return model

nlp_1=create_nlp()
img_cnn=create_img()
combinedInput = concatenate([nlp_1.output, img_cnn.output])

x = Dense(4, activation="relu")(combinedInput)
x = Dense(1, activation="sigmoid")(x)
model1 = Model(inputs=[nlp_1.input, img_cnn.input], outputs=x)
opt = optimizers.Adam(lr=1e-3, decay=1e-3 / 200)
model1.compile(loss="binary_crossentropy", metrics=['acc'], optimizer=opt)

model1_history = model1.fit([train_padd, images], train_y, epochs=15, batch_size=16)

Below are my training results:

Epoch 1/15
532/532 [==============================] - 104s 196ms/step - loss: 0.6528 - acc: 0.6412
Epoch 2/15
532/532 [==============================] - 103s 193ms/step - loss: 0.6528 - acc: 0.6412
Epoch 3/15
532/532 [==============================] - 103s 195ms/step - loss: 0.6528 - acc: 0.6412
Epoch 4/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 5/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 6/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 7/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 8/15
532/532 [==============================] - 104s 195ms/step - loss: 0.6528 - acc: 0.6412
Epoch 9/15
532/532 [==============================] - 106s 200ms/step - loss: 0.6528 - acc: 0.6412
Epoch 10/15
532/532 [==============================] - 109s 204ms/step - loss: 0.6528 - acc: 0.6412
Epoch 11/15
532/532 [==============================] - 104s 196ms/step - loss: 0.6528 - acc: 0.6412
Epoch 12/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 13/15
532/532 [==============================] - 103s 194ms/step - loss: 0.6528 - acc: 0.6412
Epoch 14/15
532/532 [==============================] - 104s 195ms/step - loss: 0.6528 - acc: 0.6412
Epoch 15/15
532/532 [==============================] - 103s 193ms/step - loss: 0.6528 - acc: 0.6412

I am also getting the following logs in my terminal:

Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.36GiB with freed_by_count=0. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.

Take a look here, you might simply be using an improper optimizer. If that doesn't help, I would try using 1 as the batch size, to see if there is at least some change within the first runs. The learning rate might also be a problem; try playing with its value and see if the accuracy changes.
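
A minimal sketch of what that experiment could look like, reusing model1, train_padd, images, and train_y from the question. The choice of RMSprop and the specific learning rate and batch size are only illustrative starting points, not a confirmed fix:

from tensorflow.keras import optimizers

# Recompile with a different optimizer and a smaller learning rate.
opt = optimizers.RMSprop(learning_rate=1e-4)
model1.compile(loss="binary_crossentropy", metrics=['acc'], optimizer=opt)

# Retrain with a much smaller batch size and check whether the loss starts to move
# within the first few epochs before committing to a longer run.
model1_history = model1.fit([train_padd, images], train_y, epochs=15, batch_size=1)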
