Remove blur or unmask an image
I am using MTCNN to detect faces in an image, FaceNet to extract and save the features of each detected face, and an OpenCV Gaussian blur filter to mask the detected faces.
import datetime
import cv2
import numpy as np
import torch
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN()
facenet = InceptionResnetV1(pretrained='vggface2').eval()

def face_and_features(img):
    boxes, _ = mtcnn.detect(img)
    print(boxes)
    for i, box in enumerate(boxes):
        x1, y1, x2, y2 = (int(v) for v in box)
        face = img[y1:y2, x1:x2]
        cv2.imwrite(f"face{i}.jpg", face)
        # Preprocess for FaceNet: 160x160, channels-first, batch dimension.
        face = cv2.resize(face, (160, 160))
        face = face.transpose((2, 0, 1))
        face = torch.from_numpy(face).float().unsqueeze(0)
        features = facenet(face)
        filename = "face_{}.npy".format(i)
        np.save(filename, features.detach().numpy())
        # frame_number is assumed to be set by the surrounding video loop.
        with open("bounding_boxes.txt", "a") as f:
            f.write("{}, {}, {}, {}, {}, {}, {}\n".format(
                x1, y1, x2, y2, filename, datetime.datetime.now(), frame_number))
    return features
def masking(img):
    filename = "bounding_boxes.txt"
    masked_img = img.copy()
    with open(filename, 'r') as file:
        for line in file:
            # Each line stores: x1, y1, x2, y2, feature file, timestamp, frame number.
            # Note that w and h here are actually x2 and y2 (box corners, not sizes).
            x, y, w, h, f_name, time, f_no = line.split(",")
            x, y, w, h = int(x), int(y), int(w), int(h)
            roi_color = masked_img[y:h, x:w]
            masked_img[y:h, x:w] = cv2.GaussianBlur(roi_color, (51, 51), 0)
    return masked_img, f_name, time, f_no
My end goal is to find a target face in the masked image by comparing the saved facial features (or by any other method) and to unmask only that target face. Any ideas or suggestions?
Your code essentially detects faces with MTCNN, extracts and saves features from each face with FaceNet, and then blurs the detected faces in the image with an OpenCV Gaussian blur filter.
To find the target face in the masked image, you can compare the saved features with features extracted from the target face. You can use cosine similarity to measure how close each saved feature vector is to the target's feature vector; the face with the highest similarity is the target face.
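As a minimal sketch of the comparison step (assuming the saved features are NumPy arrays loaded from the `.npy` files; the vectors below are made-up illustrations, not real FaceNet embeddings), cosine similarity can be computed directly with NumPy:

```python
import numpy as np

def cosine_sim(a, b):
    # Flatten in case features were saved with a leading batch dimension, e.g. (1, 512).
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative vectors: same-direction vectors score 1.0, orthogonal vectors 0.0.
v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([2.0, 0.0, 4.0])   # same direction as v1
v3 = np.array([0.0, 3.0, 0.0])   # orthogonal to v1
```

Scores near 1.0 indicate the same identity; in practice you would pick the saved feature with the highest score against the target's embedding.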
In the masking function, you can add a step that unmasks the target face by replacing the blurred region with the original face region.
Here is sample code implementing the steps mentioned above:
from sklearn.metrics.pairwise import cosine_similarity

def compare_features(target_face, saved_features):
    # Same preprocessing as in face_and_features.
    target_face = cv2.resize(target_face, (160, 160))
    target_face = target_face.transpose((2, 0, 1))
    target_face = torch.from_numpy(target_face).float().unsqueeze(0)
    target_features = facenet(target_face).detach().numpy()
    similarity_scores = []
    for i, feature in enumerate(saved_features):
        score = cosine_similarity(target_features, feature)
        similarity_scores.append((i, score))
    # Return the index and score of the most similar saved face.
    return max(similarity_scores, key=lambda x: x[1])
def unmask_target_face(masked_img, original_img, x, y, w, h):
    # The blurred image no longer contains the face pixels, so the region
    # must be copied back from the original (unblurred) image.
    unmasked = masked_img.copy()
    unmasked[y:h, x:w] = original_img[y:h, x:w]
    return unmasked

def find_target_face(masked_img, original_img, saved_features):
    # detect_target_face is assumed to return the target face crop and its
    # (x1, y1, x2, y2) coordinates; it is not defined in the original post.
    target_face, target_coords = detect_target_face(original_img)
    target_index, score = compare_features(target_face, saved_features)
    x, y, w, h = target_coords
    unmasked_img = unmask_target_face(masked_img, original_img, x, y, w, h)
    return unmasked_img

saved_features = [np.load(f"face_{i}.npy") for i in range(len(boxes))]
masked_img, _, _, _ = masking(img)
unmasked_img = find_target_face(masked_img, img, saved_features)
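Note that Gaussian blur is not invertible in practice, so "removing" the blur really means copying the original pixels for the target's bounding box back into the blurred image, as `unmask_target_face` does. A tiny self-contained sketch of that copy-back step (using a synthetic 2-D array in place of a real image; names are illustrative):

```python
import numpy as np

def unmask_region(masked_img, original_img, box):
    """Copy the original pixels for one (x1, y1, x2, y2) box back into the masked image."""
    x1, y1, x2, y2 = box
    out = masked_img.copy()
    out[y1:y2, x1:x2] = original_img[y1:y2, x1:x2]
    return out

# Synthetic check: a 4x4 "image" whose masked copy was zeroed out.
original = np.arange(16, dtype=np.uint8).reshape(4, 4)
masked = np.zeros_like(original)
restored = unmask_region(masked, original, (1, 1, 3, 3))
```

The same slicing works unchanged on 3-channel BGR images, since the color axis is left untouched.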