
Adjusted matrix, facial recognition

I have a huge matrix (77760x165) where each column represents an image, and another matrix called avg_face (77760x1) that is the average of all the faces. Now I need to subtract avg_face from every column, so that each column holds the difference between that face and the average face. This is my code right now, but I'm working in Jupyter and it takes way too long and the kernel gets killed. Is there a better way to do this? Here is my code:

import numpy as np

adjusted_matrix = []
print("Database matrix:\n", database_matrix, "\n", "Shape:\n", database_matrix.shape, "\n")
print("Average face:\n", avg_face, "\n", "Shape:\n", avg_face.shape, "\n")

# Subtract the i-th average pixel value from the i-th row
# (one pixel position across all 165 images), row by row.
for i, row in enumerate(database_matrix):
    row = np.subtract(row, np.array(avg_face[i]))
    adjusted_matrix.append(np.array(row))

print("Adjusted matrix")
print(adjusted_matrix)

Current output: [screenshot of the printed output]

As you can see, the adjusted matrix isn't printed.

In the end, all I had to do was this:

adjusted_matrix = np.array(database_matrix - avg_face[:,np.newaxis])
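For anyone wondering why this works: below is a minimal self-contained sketch (using small made-up shapes instead of the real 77760x165 data, and assuming avg_face is a 1-D array) that checks the broadcast subtraction against an explicit per-column loop.

import numpy as np

# Small stand-in data; the real matrices are 77760x165 and 77760x1.
rng = np.random.default_rng(0)
database_matrix = rng.random((6, 4))      # rows = pixels, columns = images
avg_face = database_matrix.mean(axis=1)   # 1-D average face, shape (6,)

# Broadcasting: avg_face[:, np.newaxis] has shape (6, 1), so NumPy
# subtracts it from every column of database_matrix in one vectorized step.
adjusted_matrix = database_matrix - avg_face[:, np.newaxis]

# Check against an explicit per-column subtraction.
expected = np.column_stack([database_matrix[:, j] - avg_face
                            for j in range(database_matrix.shape[1])])
print(np.allclose(adjusted_matrix, expected))  # True

Note that if avg_face is already stored as a (77760, 1) column vector, database_matrix - avg_face broadcasts on its own and the [:, np.newaxis] is not needed.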
