I have a data set with 6 possible labels of the type:
Class 1: Near - Large
Class 2: Far - Large
Class 3: Near - Medium
Class 4: Far - Medium
Class 5: Near - Small
Class 6: Far - Small
I would like to modify the problem so that each sample is classified independently as far/near and small/medium/large, with different input features for each classification.
My first idea was to train two separate models, one per sublabel, and then write a custom function to join their predictions, but I wonder whether there is a quicker way of doing it within the Keras framework.
I know I can use the functional API to create two models with independent inputs and two independent outputs. This would give me two predictions for the two sublabels. If I one-hot encode the sublabels, the outputs of those models would look like this:
Model1.output = [0, 1] or [1, 0]                     (far/near)
Model2.output = [1, 0, 0], [0, 1, 0] or [0, 0, 1]    (small/medium/large)
But then how can I merge these two outputs into a 6-dimensional vector for the complete labels?
Model_merged.output = [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], ..., [0, 0, 0, 0, 0, 1]    (Class 1, ..., Class 6)
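As an aside, deriving the two sublabel targets from the original 6-class indices can be sketched in NumPy. This is a hypothetical helper assuming classes are numbered 0-5 in the order listed above (distance alternates fastest, so index = size * 2 + distance):

```python
import numpy as np

def split_labels(class_idx):
    """Map combined class indices (0-5) to (distance, size) one-hots.

    Assumed layout from the class list above: index = size * 2 + distance,
    with distance in {0: near, 1: far} and size in {0: large, 1: medium, 2: small}.
    """
    distance = class_idx % 2            # 0 = near, 1 = far
    size = class_idx // 2               # 0 = large, 1 = medium, 2 = small
    return np.eye(2, dtype=int)[distance], np.eye(3, dtype=int)[size]

dist_onehot, size_onehot = split_labels(np.array([0, 5]))  # Class 1 and Class 6
# dist_onehot -> [[1, 0], [0, 1]]        (near, far)
# size_onehot -> [[1, 0, 0], [0, 0, 1]]  (large, small)
```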
You can reshape model1's output to add an axis, multiply it with model2's output (an outer product via broadcasting), and flatten the result.
from keras.models import Model
from keras.layers import Reshape, Multiply

reshaped1 = Reshape((2, 1))(model1.output)     # (batch, 2, 1)
reshaped2 = Reshape((1, 3))(model2.output)     # (batch, 1, 3)
combined = Multiply()([reshaped1, reshaped2])  # broadcasts to (batch, 2, 3)
flattened = Reshape((6,))(combined)            # (batch, 6)
combined_model = Model([model1.input, model2.input], flattened)
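Putting it all together, here is a minimal end-to-end sketch using tf.keras. The input sizes (4 and 5) and the single Dense layer per branch are hypothetical stand-ins for your actual sub-models:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, Reshape, Multiply
from tensorflow.keras.models import Model

# Hypothetical feature dimensions for the two sub-problems
in1 = Input(shape=(4,))                      # features for far/near
in2 = Input(shape=(5,))                      # features for size
out1 = Dense(2, activation="softmax")(in1)   # P(near/far)
out2 = Dense(3, activation="softmax")(in2)   # P(large/medium/small)

r1 = Reshape((2, 1))(out1)                   # (batch, 2, 1)
r2 = Reshape((1, 3))(out2)                   # (batch, 1, 3)
joint = Multiply()([r1, r2])                 # broadcasts to (batch, 2, 3)
flat = Reshape((6,))(joint)                  # (batch, 6), rows flattened

combined_model = Model([in1, in2], flat)
pred = combined_model.predict([np.random.rand(8, 4), np.random.rand(8, 5)])
# pred.shape == (8, 6); each row sums to 1, since the product of two
# softmax distributions is a valid joint distribution over the 6 classes
```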
A simple numpy example of the above would be:
import numpy as np

model1_output = np.array([0,1])[:,None]  # reshaped to a column
#array([[0],
#       [1]])
model2_output = np.array([1,0,0])
# array([1, 0, 0])
combined = model1_output*model2_output   # outer product via broadcasting
#array([[0, 0, 0],
#       [1, 0, 0]])
combined.ravel()
#array([0, 0, 0, 1, 0, 0])
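The same combination for a whole batch of predictions can be written as a batched outer product, e.g. with np.einsum. The example inputs below are hypothetical one-hot vectors, but the same code works for soft probabilities from two softmax outputs:

```python
import numpy as np

p_dist = np.array([[0, 1],
                   [1, 0]])      # (batch, 2): far/near predictions
p_size = np.array([[1, 0, 0],
                   [0, 0, 1]])   # (batch, 3): size predictions

# Per-sample outer product, then flatten each (2, 3) table to 6 dims
joint = np.einsum("bi,bj->bij", p_dist, p_size).reshape(-1, 6)
# row 0: [0, 0, 0, 1, 0, 0]
# row 1: [0, 0, 1, 0, 0, 0]
```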