
Why does an L2 normalization layer in Keras expand the dims?

I want to take the last layer of the FaceNet architecture, whose 3 final layers are:

Dropout (Dropout) (None, 1792)
Bottleneck (Dense) (None, 128)
Bottleneck_BatchNorm (BatchNorm (None, 128)

and I want to add an additional layer of L2-normalization like this:

norm = FRmodel.outputs
norm = Lambda(lambda x: K.l2_normalize(x, axis=1), name="Normalization")(norm)

And now the last layers look like that:

Dropout (Dropout) (None, 1792)
Bottleneck (Dense) (None, 128)
Bottleneck_BatchNorm (BatchNorm (None, 128)
Normalization (Lambda) (1, None, 128)

My question is: why do the dimensions of the L2-normalization layer change from (None, 128) to (1, None, 128)? Because of that, I can't train my model, since the output shapes don't match. If I train the model without the added normalization layer, everything works fine.

That happens because the outputs attribute of a Keras model returns a list of output tensors (even if the model has only one output layer). Therefore, the Lambda layer you created is applied to that list instead of to the single output tensor inside it. To resolve this, extract the first element of the list and apply the Lambda layer to it:

norm = FRmodel.outputs[0]
norm = Lambda(...)(norm)
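For completeness, here is a minimal sketch of the corrected code, assuming the Keras functional API and that FRmodel is already loaded as in the question (the model-loading code itself is not shown there):

```python
from keras.models import Model
from keras.layers import Lambda
from keras import backend as K

# FRmodel.outputs is a list of output tensors; take the single tensor out of it.
norm = FRmodel.outputs[0]  # shape: (None, 128)

# L2-normalize along the feature axis.
norm = Lambda(lambda x: K.l2_normalize(x, axis=1), name="Normalization")(norm)

# Wrap everything in a new model whose output is the normalized embedding.
normalized_model = Model(inputs=FRmodel.inputs, outputs=norm)
normalized_model.summary()  # Normalization (Lambda) should now show (None, 128)
```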
