
How to get the output of a dense layer as a numpy array using Keras with the TensorFlow backend?

I am new to Keras and TensorFlow. I am working on a face recognition project using deep learning. With the code below I get the class label of a subject as the output (the output of the softmax layer), and the accuracy is 97.5% on my custom dataset of 100 classes.

But now I am interested in the feature vector representation, so I want to pass the test images through the network and extract the output of the activated dense layer just before the softmax (last) layer. I referred to the Keras documentation, but nothing seemed to work for me. Can anyone help me extract the output of the dense layer activation and save it as a numpy array? Thanks in advance.

import argparse
import os

import cv2
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense
from keras.optimizers import SGD
from keras.utils import np_utils


class Faces:
    @staticmethod
    def build(width, height, depth, classes, weightsPath=None):
        # initialize the model
        model = Sequential()
        model.add(Conv2D(100, (5, 5), padding="same",input_shape=(depth, height, width), data_format="channels_first"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2),data_format="channels_first"))

        model.add(Conv2D(100, (5, 5), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), data_format="channels_first"))

        # 3 set of CONV => RELU => POOL
        model.add(Conv2D(100, (5, 5), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2),data_format="channels_first"))

        # 4 set of CONV => RELU => POOL
        model.add(Conv2D(50, (5, 5), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2),data_format="channels_first"))

        # 5 set of CONV => RELU => POOL
        model.add(Conv2D(50, (5, 5), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), data_format="channels_first"))

        # 6 set of CONV => RELU => POOL
        model.add(Conv2D(50, (5, 5), padding="same"))
        model.add(Activation("relu"))
        model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), data_format="channels_first"))

        # set of FC => RELU layers
        model.add(Flatten())
        #model.add(Dense(classes))
        #model.add(Activation("relu"))

        # softmax classifier
        model.add(Dense(classes))
        model.add(Activation("softmax"))

        # load pre-trained weights if a path was supplied
        if weightsPath is not None:
            model.load_weights(weightsPath)

        return model

ap = argparse.ArgumentParser()
ap.add_argument("-l", "--load-model", type=int, default=-1,
    help="(optional) whether or not a pre-trained model should be loaded")
ap.add_argument("-s", "--save-model", type=int, default=-1,
    help="(optional) whether or not the trained weights should be saved")
ap.add_argument("-w", "--weights", type=str,
    help="(optional) path to the weights file")
args = vars(ap.parse_args())


path = 'C:\\Users\\Project\\FaceGallery'
image_paths = [os.path.join(path, f) for f in os.listdir(path)]
images = []
labels = []
name_map = {}
demo = {}
nbr = 0
j = 0
for image_path in image_paths:
    image_pil = Image.open(image_path).convert('L')
    image = np.array(image_pil, 'uint8')
    cv2.imshow("Image",image)
    cv2.waitKey(5)
    name = image_path.split("\\")[4][0:5]
    print(name)
    # Get the label of the image
    if name not in demo:
        demo[name] = j
        j = j + 1
    nbr = demo[name]

    name_map[nbr] = name
    images.append(image)
    labels.append(nbr)
print(name_map)
# Training and testing data split ratio = 60:40
(trainData, testData, trainLabels, testLabels) = train_test_split(images, labels, test_size=0.4)

trainLabels = np_utils.to_categorical(trainLabels, 100)
testLabels = np_utils.to_categorical(testLabels, 100)

trainData = np.asarray(trainData)
testData = np.asarray(testData)

trainData = trainData[:, np.newaxis, :, :] / 255.0
testData = testData[:, np.newaxis, :, :] / 255.0

opt = SGD(lr=0.01)
model = Faces.build(width=200, height=200, depth=1, classes=100,
                    weightsPath=args["weights"] if args["load_model"] > 0 else None)

model.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
if args["load_model"] < 0:
    model.fit(trainData, trainLabels, batch_size=10, epochs=300)
(loss, accuracy) = model.evaluate(testData, testLabels, batch_size=100, verbose=1)
print("Accuracy: {:.2f}%".format(accuracy * 100))
if args["save_model"] > 0:
    model.save_weights(args["weights"], overwrite=True)

for i in np.arange(0, len(testLabels)):
    probs = model.predict(testData[np.newaxis, i])
    prediction = probs.argmax(axis=1)
    image = (testData[i][0] * 255).astype("uint8")
    name = "Subject " + str(prediction[0])
    if prediction[0] in name_map:
        name = name_map[prediction[0]]
    cv2.putText(image, name, (5, 20), cv2.FONT_HERSHEY_PLAIN, 1.3, (255, 255, 255), 2)
    print("Predicted: {}, Actual: {}".format(prediction[0], np.argmax(testLabels[i])))
    cv2.imshow("Testing Face", image)
    cv2.waitKey(1000)

See the Keras FAQ entry "How can I obtain the output of an intermediate layer?" at https://keras.io/getting-started/faq/

You'll need to name the layer whose output you want by adding a name argument to its definition, e.g. model.add(Dense(xx, name='my_dense')).
You can then define an intermediate model and run it by doing something like...

m2 = Model(inputs=model.input, outputs=model.get_layer('my_dense').output)
Y = m2.predict(X)
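
Applied to the model in the question, a minimal sketch might look like this (it assumes the commented-out Dense/Activation pair in Faces.build() is re-enabled and the activation layer is given a name, so that the activated output just before the softmax can be extracted; the layer names and the file name features.npy are illustrative, not part of the original code):

from keras.models import Model
import numpy as np

# Inside Faces.build(), re-enable and name the layers (illustrative names):
#     model.add(Dense(classes, name="fc_features"))
#     model.add(Activation("relu", name="fc_features_act"))

# `model` and `testData` are the objects built earlier in the question's script
feature_model = Model(inputs=model.input,
                      outputs=model.get_layer("fc_features_act").output)

features = feature_model.predict(testData)   # numpy array, shape (num_images, classes)
np.save("features.npy", features)            # save the feature vectors to disk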

You can use .numpy() if you use TensorFlow 2 as the backend of your model to get a NumPy array as the output. You can read this link to know more about it.
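
For example, with TensorFlow 2 running eagerly (a minimal, self-contained sketch; the toy model, layer name and shapes are illustrative and not the model from the question):

import numpy as np
import tensorflow as tf

# A tiny model with a named dense layer, for illustration only
inputs = tf.keras.Input(shape=(16,))
x = tf.keras.layers.Dense(8, activation="relu", name="my_dense")(inputs)
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

# Intermediate model that stops at the named dense layer
feature_model = tf.keras.Model(inputs, model.get_layer("my_dense").output)

batch = np.random.rand(2, 16).astype("float32")
features_tensor = feature_model(batch)   # a tf.Tensor in eager mode
features = features_tensor.numpy()       # convert it to a numpy array
print(type(features), features.shape)    # <class 'numpy.ndarray'> (2, 8)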
