
How can I use transfer learning for a Keras regression problem?

I am trying to build a CNN using transfer learning and fine-tuning. The task is to build a CNN with Keras that takes a dataset of images (photos of houses) and a CSV file (photo names and prices) and trains on these inputs. But I have a problem that I cannot fix.

This is my code:

import pandas as pd
from google.colab import drive
from sklearn.model_selection import train_test_split
from keras import applications
from keras import optimizers
from keras import backend
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model, load_model
from keras.layers import GlobalAveragePooling2D, Dense, Flatten
from matplotlib import pyplot

drive.mount('/content/gdrive')
!unzip -n '/content/gdrive/My Drive/HOUSEPRICES.zip' >> /dev/null

data_path = 'HOUSEPRICES/'
imgs_path = data_path + "images/"
labels_path = data_path + "prices.csv"

labels = pd.read_csv(labels_path, dtype={"prices": "float64"})

seed = 0
train_data, test_data = train_test_split(labels, test_size=0.25, random_state=seed) 
dev_data, test_data = train_test_split(test_data, test_size=0.5, random_state=seed)  

train_data = train_data.reset_index(drop=True)
dev_data = dev_data.reset_index(drop=True)
test_data = test_data.reset_index(drop=True)

datagen = ImageDataGenerator(rescale=1./255)

img_width = 320
img_height = 240  
x_col = 'image_name'          
y_col = 'prices'


batch_size = 64              
train_dataset = datagen.flow_from_dataframe(dataframe=train_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                            class_mode="input", target_size=(img_width,img_height), batch_size=batch_size)
dev_dataset = datagen.flow_from_dataframe(dataframe=dev_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                          class_mode="input",target_size=(img_width,img_height), batch_size=batch_size)
test_dataset = datagen.flow_from_dataframe(dataframe=test_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                           class_mode="input", target_size=(img_width,img_height), batch_size=batch_size)


base_model = applications.InceptionV3(weights='imagenet', include_top=False, input_shape=(img_width,img_height,3))


for layer in base_model.layers:
    layer.trainable = False   

x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)

predictions = Dense(1, activation='linear')(x) 

model = Model(inputs=[base_model.input], outputs=[predictions])
model.summary()   

model.compile(loss='mse',     
              optimizer=optimizers.adam(lr=1e-5),  
              metrics=['mse'])


model.fit_generator(train_dataset,
                    epochs=20,  
                    verbose=2,  
                    steps_per_epoch=len(train_data)/batch_size,
                    validation_data=dev_dataset,
                    validation_steps=len(dev_data)/batch_size)

test_loss, test_mse = model.evaluate_generator(test_dataset, steps=len(test_data)/batch_size, verbose=1)

And I get this error:

ValueError: Input 0 is incompatible with layer flatten_9: expected min_ndim=3, found ndim=2

What is the problem with my code? Am I not building the dataset (images + numerical prices) properly? Or is there a problem with the model architecture? How can I fix the code?

Flatten() converts a higher-dimensional tensor into a 2-dimensional one. If you already have a 2-dimensional tensor, you don't need Flatten().

GlobalAveragePooling2D pools over the spatial dimensions, so its output shape is (batch_size, channels). This can be fed directly into a Dense layer without a Flatten. To fix the code, remove this line:

x = Flatten()(x) 
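As a quick sanity check, here is a minimal sketch (using weights=None so nothing needs to be downloaded) showing that the pooled output is already 2-dimensional:

from keras import applications
from keras.layers import GlobalAveragePooling2D
from keras.models import Model

# Probe model: InceptionV3 without its top, followed by global average pooling.
base = applications.InceptionV3(weights=None, include_top=False, input_shape=(320, 240, 3))
pooled = GlobalAveragePooling2D()(base.output)
probe = Model(inputs=base.input, outputs=pooled)

print(probe.output_shape)  # (None, 2048) -- already (batch_size, channels), nothing left to flatten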

Refer to this link for more examples of how to fine-tune your network:

https://keras.io/applications/

class_mode="input" is for auto encoders; that is why there was an error about the target not having the same shape as input.

class_mode="other" works because y_col is defined and contains the raw numeric target values (the prices).

https://keras.io/preprocessing/image/#flow_from_dataframe
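With class_mode="other", each batch is returned as (images, raw y_col values), so the targets are the float prices themselves rather than class labels. A rough sanity check (relying on the train_dataset built in the corrected script below) would be:

images, prices = next(train_dataset)   # assumes train_dataset was created with class_mode="other"
print(images.shape)   # (batch_size, 320, 240, 3)
print(prices[:5])     # raw float house prices, suitable as targets for an MSE loss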

import pandas as pd
from google.colab import drive
from sklearn.model_selection import train_test_split
from keras import applications
from keras import optimizers
from keras import backend
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Model, load_model
from keras.layers import GlobalAveragePooling2D, Dense, Dropout
from matplotlib import pyplot

drive.mount('/content/gdrive')
!unzip -n '/content/gdrive/My Drive/HOUSEPRICES.zip' >> /dev/null

data_path = 'HOUSEPRICES/'
imgs_path = data_path + "images/"
labels_path = data_path + "prices.csv"

labels = pd.read_csv(labels_path, dtype={"prices": "float64"})

seed = 0
train_data, test_data = train_test_split(labels, test_size=0.25, random_state=seed) 
dev_data, test_data = train_test_split(test_data, test_size=0.5, random_state=seed)  

train_data = train_data.reset_index(drop=True)
dev_data = dev_data.reset_index(drop=True)
test_data = test_data.reset_index(drop=True)

datagen = ImageDataGenerator(rescale=1./255)

img_width = 320
img_height = 240  
x_col = 'image_name'          
y_col = 'prices'


batch_size = 64              
train_dataset = datagen.flow_from_dataframe(dataframe=train_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                            class_mode="other", target_size=(img_width,img_height), batch_size=batch_size)
dev_dataset = datagen.flow_from_dataframe(dataframe=dev_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                          class_mode="other",target_size=(img_width,img_height), batch_size=batch_size)
test_dataset = datagen.flow_from_dataframe(dataframe=test_data, directory=imgs_path, x_col=x_col, y_col=y_col, has_ext=True,
                                           class_mode="other", target_size=(img_width,img_height), batch_size=batch_size)


base_model = applications.InceptionV3(weights='imagenet', include_top=False, input_shape=(img_width,img_height,3))


for layer in base_model.layers:
    layer.trainable = False   

x = base_model.output
x = GlobalAveragePooling2D()(x)    
x = Dense(256, activation='relu')(x)
x = Dropout(0.4)(x)
x = Dense(256, activation='relu')(x)

predictions = Dense(1, activation='linear')(x) 

model = Model(inputs=[base_model.input], outputs=[predictions])
model.summary()   

model.compile(loss='mse',     
              optimizer=optimizers.adam(lr=1e-5),  
              metrics=['mse'])


model.fit_generator(train_dataset,
                    epochs=20,  
                    verbose=2,  
                    steps_per_epoch=len(train_data)/batch_size,
                    validation_data=dev_dataset,
                    validation_steps=len(dev_data)/batch_size)

test_loss, test_mse = model.evaluate_generator(test_dataset, steps=len(test_data)/batch_size, verbose=1)
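After the new regression head has been trained with the base frozen, the usual next fine-tuning step (following the pattern at https://keras.io/applications/) is to unfreeze the top of the base network and continue training with a lower learning rate. A rough sketch, where the cut-off index 249 (the top two Inception blocks) is taken from that guide and the epoch count is arbitrary:

# Fine-tuning sketch: freeze the first 249 layers (per the keras.io/applications
# InceptionV3 example) and make the remaining layers trainable again.
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True

# Recompile so the trainability changes take effect; use an even smaller
# learning rate so the pre-trained weights are only adjusted slightly.
model.compile(loss='mse',
              optimizer=optimizers.adam(lr=1e-6),
              metrics=['mse'])

model.fit_generator(train_dataset,
                    epochs=10,
                    verbose=2,
                    steps_per_epoch=len(train_data)//batch_size,
                    validation_data=dev_dataset,
                    validation_steps=len(dev_data)//batch_size)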
