
How can I dynamically adjust the frame size and the display frame region in OpenCV camera capture?

I was trying to build a model that can dynamically adjust the display region of the camera capture in OpenCV according to the detections. I found frame and resolution resizing methods, but what if I want to focus on a particular region of the entire capture? How can I do that?

I tried the cv2.resize() method and the cap.set() method, which changed the frame size and the resolution respectively, but I could not make the feed focus on a particular region of the captured frame.

If I've understood your idea correctly, you want to crop part of the image using coordinates from your detection. OpenCV represents images as NumPy arrays, so with an example image:

import cv2
import matplotlib.pyplot as plt


# Load the image and convert from OpenCV's BGR order to RGB for matplotlib
img = cv2.imread('/content/drive/MyDrive/1.png')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)

print(img.shape)  # (height, width, channels)


Then you simply index into the array to get the cropped region (rows first, then columns):

plt.imshow(img[250: 450, 250: 450])

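For a live feed, the same indexing applies per frame: clamp the detection box to the frame bounds, then slice. Here is a minimal sketch; the `crop_to_box` helper and the hard-coded box coordinates are my own illustration (in practice the box would come from your detector each frame):

import numpy as np

def crop_to_box(frame, x, y, w, h):
    """Clamp an (x, y, w, h) box to the frame bounds and return the cropped view."""
    H, W = frame.shape[:2]
    x0 = max(0, min(x, W - 1))
    y0 = max(0, min(y, H - 1))
    x1 = max(x0 + 1, min(x + w, W))
    y1 = max(y0 + 1, min(y + h, H))
    return frame[y0:y1, x0:x1]

# In a capture loop this would look like:
#   cap = cv2.VideoCapture(0)
#   ok, frame = cap.read()
#   roi = crop_to_box(frame, *detected_box)   # detected_box from your model
#   cv2.imshow('roi', roi)

# Stand-in for a captured frame (480x640, 3 channels)
frame = np.zeros((480, 640, 3), dtype=np.uint8)
roi = crop_to_box(frame, 250, 250, 200, 200)
print(roi.shape)  # (200, 200, 3)

Because slicing returns a view, not a copy, this per-frame crop is cheap; call `.copy()` on the result only if you need to modify it independently of the original frame.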
