
Transfer Learning with TensorFlow Problem

I am trying to solve a problem for a deep learning class, and the block of code I have to modify looks like this:

def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    """ Define a tf.keras model for binary classification out of the MobileNetV2 model
    Arguments:
        image_shape -- Image width and height
        data_augmentation -- data augmentation function
    Returns:
        tf.keras.model
    """
    
    
    input_shape = image_shape + (3,)
    
    # START CODE HERE


    base_model=tf.keras.applications.MobileNetV2(input_shape=input_shape, include_top=False, weights="imagenet")

    # Freeze the base model by making it non trainable
    base_model.trainable = None 

    # create the input layer (Same as the imageNetv2 input size)
    inputs = tf.keras.Input(shape=None) 
    
    # apply data augmentation to the inputs
    x = None
    
    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(None) 
    
    # set training to False to avoid keeping track of statistics in the batch norm layer
    x = base_model(None, training=None) 
    
    # Add the new Binary classification layers
    # use global avg pooling to summarize the info in each channel
    x = None()(x) 
    #include dropout with probability of 0.2 to avoid overfitting
    x = None(None)(x)
        
    # create a prediction layer with one neuron (as a classifier only needs one)
    prediction_layer = None
    
    # END CODE HERE
    
    outputs = prediction_layer(x) 
    model = tf.keras.Model(inputs, outputs)
    
    return model

IMG_SIZE = (160, 160)
def data_augmentation():
    data = tl.keras.Sequential()
    data.add(RandomFlip("horizontal")
    data.add(RandomRotation(0.2)
    return data

I tried 3 times starting from that template, following the directions, with a lot of trial and error. I don't know what I am missing. I have gotten it to the point where it trains a model and I can get its summary, but the summary is not correct.

Please help, I am going crazy trying to figure this out. I know it is super simple, but it's the simple problems that trip me up.

You might have to use the code below to run your algorithm.

input_shape = image_shape + (3,)

### START CODE HERE

base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                               include_top=False, # <== Important!!!!
                                               weights='imagenet') # From imageNet

# Freeze the base model by making it non trainable
base_model.trainable = False 

# create the input layer (Same as the imageNetv2 input size)
inputs = tf.keras.Input(shape=input_shape) 

# apply data augmentation to the inputs
x = data_augmentation(inputs)

# data preprocessing using the same weights the model was trained on
x = preprocess_input(x) 

# set training to False to avoid keeping track of statistics in the batch norm layer
x = base_model(x, training=False) 

# Add the new Binary classification layers
# use global avg pooling to summarize the info in each channel
x = tf.keras.layers.GlobalAveragePooling2D()(x)
#include dropout with probability of 0.2 to avoid overfitting
x = tf.keras.layers.Dropout(0.2)(x)
    
# create a prediction layer with one neuron (as a classifier only needs one)
prediction_layer = tf.keras.layers.Dense(1, activation='linear')(x)

### END CODE HERE

outputs = prediction_layer
model = tf.keras.Model(inputs, outputs)
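
One detail worth noting about the code above: the final `Dense` layer uses a linear activation, so the model outputs logits, and the loss should be built with `from_logits=True` when compiling. Here is a minimal self-contained sketch of that pairing; the small model below is only a stand-in for `alpaca_model()` (it skips MobileNetV2 so it runs without downloading weights), and the learning rate is illustrative:

```python
import tensorflow as tf

# Stand-in for alpaca_model(): same head (pooling -> dropout -> linear Dense),
# but without the MobileNetV2 base, so no pretrained weights are needed.
inputs = tf.keras.Input(shape=(160, 160, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(inputs)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1, activation='linear')(x)
model = tf.keras.Model(inputs, outputs)

# Linear activation means the output is a logit, so the loss must be
# told to apply the sigmoid itself via from_logits=True.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
```

If you instead set `activation='sigmoid'` on the last layer, you would use `from_logits=False` (the default).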

I had the same issue, but my mistake was applying (x) to the dense layer before the end. Here is the code that worked for me:

def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    ''' Define a tf.keras model for binary classification out of the MobileNetV2 model
    Arguments:
        image_shape -- Image width and height
        data_augmentation -- data augmentation function
    Returns:
        tf.keras.model
    '''

    input_shape = image_shape + (3,)

    ### START CODE HERE

    base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                                   include_top=False, # <== Important!!!!
                                                   weights='imagenet') # From imageNet

    # Freeze the base model by making it non trainable
    base_model.trainable = False

    # create the input layer (Same as the imageNetv2 input size)
    inputs = tf.keras.Input(shape=input_shape)

    # apply data augmentation to the inputs
    x = data_augmentation(inputs)

    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(x)

    # set training to False to avoid keeping track of statistics in the batch norm layer
    x = base_model(x, training=False)

    # Add the new Binary classification layers
    # use global avg pooling to summarize the info in each channel
    x = tfl.GlobalAveragePooling2D()(x)
    # include dropout with probability of 0.2 to avoid overfitting
    x = tfl.Dropout(0.2)(x)

    # create a prediction layer with one neuron (as a classifier only needs one)
    prediction_layer = tfl.Dense(1, activation='linear')

    ### END CODE HERE

    outputs = prediction_layer(x)
    model = tf.keras.Model(inputs, outputs)

    return model
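
For what it's worth, the two patterns (building the `Dense` layer first and applying it to `x` later, versus building and applying in one expression) produce the same graph; the grader just expects a particular one. A small illustrative sketch with toy shapes, not the assignment's model:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(8,))
x = tf.keras.layers.Dense(16, activation='relu')(inputs)

# Pattern used above: create the layer object first, apply it to x afterwards
prediction_layer = tf.keras.layers.Dense(1, activation='linear')
outputs_a = prediction_layer(x)

# Equivalent one-liner: create and apply in the same expression
outputs_b = tf.keras.layers.Dense(1, activation='linear')(x)

model_a = tf.keras.Model(inputs, outputs_a)
model_b = tf.keras.Model(inputs, outputs_b)
```

Both models map `(None, 8)` inputs to a single logit; only the place where the layer is called differs.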

In your data_augmentation function, the brackets are not well closed: both data.add(...) calls are missing a closing parenthesis, and tl.keras should be tf.keras.
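
A corrected version might look like the sketch below. Note that the template's default argument calls `data_augmenter()`, so the function name matters too; I'm assuming the `RandomFlip`/`RandomRotation` layers available under `tf.keras.layers` in recent TensorFlow versions (in older releases they live under `tf.keras.layers.experimental.preprocessing`):

```python
import tensorflow as tf
from tensorflow.keras.layers import RandomFlip, RandomRotation

def data_augmenter():
    # tf, not tl, and each data.add(...) call needs its closing parenthesis
    data = tf.keras.Sequential()
    data.add(RandomFlip("horizontal"))
    data.add(RandomRotation(0.2))
    return data
```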
