Why does L2 normalization in a Keras layer expand the dims?

I want to take the last layer of the FaceNet architecture, which contains these 3 final layers:

Dropout (Dropout) (None, 1792)
Bottleneck (Dense) (None, 128)
Bottleneck_BatchNorm (BatchNorm (None, 128)

and I want to add an additional layer of L2-normalization like this:

norm = FRmodel.outputs
norm = Lambda(lambda x: K.l2_normalize(x, axis=1), name="Normalization")(norm)

And now the last layers look like this:

Dropout (Dropout) (None, 1792)
Bottleneck (Dense) (None, 128)
Bottleneck_BatchNorm (BatchNorm (None, 128)
Normalization (Lambda) (1, None, 128)

My question is: why do the dimensions of the L2-normalization layer change from (None, 128) to (1, None, 128)? Because of that, I can't train my model because the outputs don't fit. If I try to train the model without adding the normalization, everything works fine.

That happens because the outputs attribute of a Keras model returns a list of output tensors (even if your model has only one output layer). Therefore, the Lambda layer you have created is applied to that list, instead of to the single output tensor in it. To resolve that, extract the first element of that list and then apply the Lambda layer on it:

norm = FRmodel.outputs[0]
norm = Lambda(...)(norm)
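For reference, here is a minimal sketch of how the corrected layer might be wired back into a trainable model, assuming FRmodel is the loaded FaceNet model (the Model import and the FRmodel_norm name are only illustrative):

from keras.models import Model
from keras.layers import Lambda
from keras import backend as K

# outputs is a list, so take its first (and only) output tensor
norm = FRmodel.outputs[0]

# L2-normalize each 128-dimensional embedding along the feature axis
norm = Lambda(lambda x: K.l2_normalize(x, axis=1), name="Normalization")(norm)

# Wrap everything in a new model; the inputs are reused from the original model
FRmodel_norm = Model(inputs=FRmodel.inputs, outputs=norm)

With the list unwrapped, the Lambda layer receives a single (None, 128) tensor rather than a length-1 list, so its output shape stays (None, 128) and training works as before.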
