
'NoneType' object has no attribute 'clip' error in cv2_imshow()


I was trying to create a program which could detect my facial expression (from a webcam).

However, while displaying my face, I get the following error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-47-e0549b59dd89> in <module>()
     47         print("\n\n")
     48 
---> 49     cv2_imshow(frame)
     50     if cv2.waitKey(1) & 0xFF == ord('q'):
     51         break

/usr/local/lib/python3.6/dist-packages/google/colab/patches/__init__.py in cv2_imshow(a)
     20       image.
     21   """
---> 22   a = a.clip(0, 255).astype('uint8')
     23   # cv2 stores colors as BGR; convert to RGB
     24   if a.ndim == 3:

AttributeError: 'NoneType' object has no attribute 'clip'

I am using Python 3.6 on Google Colab.

I am using cv2_imshow() from google.colab.patches, since Colab does not support cv2.imshow().

Here is my code:

from google.colab.patches import cv2_imshow
from keras.models import load_model
from time import sleep
from keras.preprocessing.image import img_to_array
from keras.preprocessing import image
import cv2
import numpy as np

face_classifier = cv2.CascadeClassifier('/content/drive/My Drive/Colab Notebooks/haarcascade_frontalface_default.xml')
classifier = load_model('/content/drive/My Drive/Colab Notebooks/fer_68acc.h5')

class_labels = ['Angry','Happy','Neutral','Sad','Surprise']

cap = cv2.VideoCapture(0)



while True:
    # Grab a single frame of video
    ret, frame = cap.read()
    labels = []
    gray = cv2.imread(frame, cv2.IMREAD_GRAYSCALE)
    faces = face_classifier.detectMultiScale(gray,1.3,5)

    for (x,y,w,h) in faces:
        cv2.rectangle(frame,(x,y),(x+w,y+h),(255,0,0),2)
        roi_gray = gray[y:y+h,x:x+w]
        roi_gray = cv2.resize(roi_gray,(48,48),interpolation=cv2.INTER_AREA)


        if np.sum([roi_gray])!=0:
            roi = roi_gray.astype('float')/255.0
            roi = img_to_array(roi)
            roi = np.expand_dims(roi,axis=0)

        # make a prediction on the ROI, then lookup the class

            preds = classifier.predict(roi)[0]
            print("\nprediction = ",preds)
            label=class_labels[preds.argmax()]
            print("\nprediction max = ",preds.argmax())
            print("\nlabel = ",label)
            label_position = (x,y)
            cv2.putText(frame,label,label_position,cv2.FONT_HERSHEY_SIMPLEX,2,(0,255,0),3)
        else:
            cv2.putText(frame,'No Face Found',(20,60),cv2.FONT_HERSHEY_SIMPLEX,2,(0,255,0),3)
        print("\n\n")
    
    cv2_imshow(frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Could someone please help? Unfortunately, I cannot run this on my local machine, so it would be helpful if someone gave a solution that can be run on Google Colab.

Thanks

Does the following give you a non-zero size:

print(frame.shape)

If not, then the image is not loading properly: NoneType means there is nothing stored in the variable called frame. On Colab this is expected, because the notebook runs on a remote server, so cv2.VideoCapture(0) has no local webcam to open and cap.read() gives you back None.
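A minimal guard in the capture loop (a sketch reusing the variable names from the question) makes the failure explicit instead of crashing later in cv2_imshow():

ret, frame = cap.read()
# cap.read() returns (False, None) when no frame could be grabbed,
# which is what happens when VideoCapture(0) has no webcam to open on the Colab server
if not ret or frame is None:
    print("Could not read a frame from the webcam")
    break
print(frame.shape)  # e.g. (480, 640, 3) for a valid BGR frame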

I also faced the same problem. I wanted to detect some objects with my webcam using YOLOv4, and then I found The AI Guy.

He uses a Camera Capture code snippet that runs JavaScript to access the computer's webcam. The snippet takes a webcam photo, which is then passed into the YOLOv4 model for object detection.

Below is a helper function that takes a webcam picture using JavaScript and then runs YOLOv4.

Note: the three single apostrophes (''') after js = Javascript( may render like a comment on Stack Overflow, but in a Colab code cell they simply delimit a Python multi-line string.

Taking a photo with the webcam:

# IPython / Colab utilities for running JavaScript from the notebook
from IPython.display import display, Javascript, Image
from google.colab.output import eval_js

def take_photo(filename='photo.jpg', quality=0.8):
  js = Javascript('''
    async function takePhoto(quality) {
      const div = document.createElement('div');
      const capture = document.createElement('button');
      capture.textContent = 'Capture';
      div.appendChild(capture);

      const video = document.createElement('video');
      video.style.display = 'block';
      const stream = await navigator.mediaDevices.getUserMedia({video: true});

      document.body.appendChild(div);
      div.appendChild(video);
      video.srcObject = stream;
      await video.play();

      // Resize the output to fit the video element.
      google.colab.output.setIframeHeight(document.documentElement.scrollHeight, true);

      // Wait for Capture to be clicked.
      await new Promise((resolve) => capture.onclick = resolve);

      const canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext('2d').drawImage(video, 0, 0);
      stream.getVideoTracks()[0].stop();
      div.remove();
      return canvas.toDataURL('image/jpeg', quality);
    }
    ''')
  display(js)

  # get photo data
  data = eval_js('takePhoto({})'.format(quality))
  # get OpenCV format image
  img = js_to_image(data) 
  
  # call the darknet helper on the webcam image; darknet_helper, width,
  # height, class_colors and bbox2points all come from the linked YOLOv4 notebook
  detections, width_ratio, height_ratio = darknet_helper(img, width, height)

  # loop through detections and draw them on webcam image
  for label, confidence, bbox in detections:
    left, top, right, bottom = bbox2points(bbox)
    left, top, right, bottom = int(left * width_ratio), int(top * height_ratio), int(right * width_ratio), int(bottom * height_ratio)
    cv2.rectangle(img, (left, top), (right, bottom), class_colors[label], 2)
    cv2.putText(img, "{} [{:.2f}]".format(label, float(confidence)),
                      (left, top - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
                      class_colors[label], 2)
  # save image
  cv2.imwrite(filename, img)

  return filename

try:
  filename = take_photo('photo.jpg')
  print('Saved to {}'.format(filename))
  
  # Show the image which was just taken.
  display(Image(filename))
except Exception as err:
  # Errors will be thrown if the user does not have a webcam or if they do not
  # grant the page permission to access it.
  print(str(err))
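Note that take_photo relies on js_to_image, a helper from the same notebook that decodes the data URL returned by canvas.toDataURL into an OpenCV image. A minimal sketch of js_to_image, assuming the standard 'data:image/jpeg;base64,...' format, could look like this:

import base64
import numpy as np
import cv2

def js_to_image(js_reply):
  # strip the 'data:image/jpeg;base64,' prefix and decode the raw JPEG bytes
  image_bytes = base64.b64decode(js_reply.split(',')[1])
  # view the decoded bytes as a flat uint8 array
  jpg_as_np = np.frombuffer(image_bytes, dtype=np.uint8)
  # decode the JPEG into the BGR image format OpenCV expects
  return cv2.imdecode(jpg_as_np, flags=cv2.IMREAD_COLOR)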

Below is another helper function that starts the video stream, using JavaScript similar to that used for still images. The video stream frames are fed as input to YOLOv4.

# JavaScript to properly create our live video stream using our webcam as input
def video_stream():
  js = Javascript('''
    var video;
    var div = null;
    var stream;
    var captureCanvas;
    var imgElement;
    var labelElement;
    
    var pendingResolve = null;
    var shutdown = false;
    
    function removeDom() {
       stream.getVideoTracks()[0].stop();
       video.remove();
       div.remove();
       video = null;
       div = null;
       stream = null;
       imgElement = null;
       captureCanvas = null;
       labelElement = null;
    }
    
    function onAnimationFrame() {
      if (!shutdown) {
        window.requestAnimationFrame(onAnimationFrame);
      }
      if (pendingResolve) {
        var result = "";
        if (!shutdown) {
          captureCanvas.getContext('2d').drawImage(video, 0, 0, 640, 480);
          result = captureCanvas.toDataURL('image/jpeg', 0.8)
        }
        var lp = pendingResolve;
        pendingResolve = null;
        lp(result);
      }
    }
    
    async function createDom() {
      if (div !== null) {
        return stream;
      }

      div = document.createElement('div');
      div.style.border = '2px solid black';
      div.style.padding = '3px';
      div.style.width = '100%';
      div.style.maxWidth = '600px';
      document.body.appendChild(div);
      
      const modelOut = document.createElement('div');
      modelOut.innerHTML = "<span>Status:</span>";
      labelElement = document.createElement('span');
      labelElement.innerText = 'No data';
      labelElement.style.fontWeight = 'bold';
      modelOut.appendChild(labelElement);
      div.appendChild(modelOut);
           
      video = document.createElement('video');
      video.style.display = 'block';
      video.width = div.clientWidth - 6;
      video.setAttribute('playsinline', '');
      video.onclick = () => { shutdown = true; };
      stream = await navigator.mediaDevices.getUserMedia(
          {video: { facingMode: "environment"}});
      div.appendChild(video);

      imgElement = document.createElement('img');
      imgElement.style.position = 'absolute';
      imgElement.style.zIndex = 1;
      imgElement.onclick = () => { shutdown = true; };
      div.appendChild(imgElement);
      
      const instruction = document.createElement('div');
      instruction.innerHTML = 
          '<span style="color: red; font-weight: bold;">' +
          'When finished, click here or on the video to stop this demo</span>';
      div.appendChild(instruction);
      instruction.onclick = () => { shutdown = true; };
      
      video.srcObject = stream;
      await video.play();

      captureCanvas = document.createElement('canvas');
      captureCanvas.width = 640; //video.videoWidth;
      captureCanvas.height = 480; //video.videoHeight;
      window.requestAnimationFrame(onAnimationFrame);
      
      return stream;
    }
    async function stream_frame(label, imgData) {
      if (shutdown) {
        removeDom();
        shutdown = false;
        return '';
      }

      var preCreate = Date.now();
      stream = await createDom();
      
      var preShow = Date.now();
      if (label != "") {
        labelElement.innerHTML = label;
      }
            
      if (imgData != "") {
        var videoRect = video.getClientRects()[0];
        imgElement.style.top = videoRect.top + "px";
        imgElement.style.left = videoRect.left + "px";
        imgElement.style.width = videoRect.width + "px";
        imgElement.style.height = videoRect.height + "px";
        imgElement.src = imgData;
      }
      
      var preCapture = Date.now();
      var result = await new Promise(function(resolve, reject) {
        pendingResolve = resolve;
      });
      shutdown = false;
      
      return {'create': preShow - preCreate, 
              'show': preCapture - preShow, 
              'capture': Date.now() - preCapture,
              'img': result};
    }
    ''')

  display(js)
  
def video_frame(label, bbox):
  data = eval_js('stream_frame("{}", "{}")'.format(label, bbox))
  return data

# start streaming video from webcam
video_stream()
# label for video
label_html = 'Capturing...'
# initialize bounding box to empty
bbox = ''
count = 0 
while True:
    js_reply = video_frame(label_html, bbox)
    if not js_reply:
        break

    # convert JS response to OpenCV Image
    frame = js_to_image(js_reply["img"])

    # create transparent overlay for bounding box
    bbox_array = np.zeros([480,640,4], dtype=np.uint8)

    # call our darknet helper on video frame
    detections, width_ratio, height_ratio = darknet_helper(frame, width, height)

    # loop through detections and draw them on transparent overlay image
    for label, confidence, bbox in detections:
      left, top, right, bottom = bbox2points(bbox)
      left, top, right, bottom = int(left * width_ratio), int(top * height_ratio), int(right * width_ratio), int(bottom * height_ratio)
      bbox_array = cv2.rectangle(bbox_array, (left, top), (right, bottom), class_colors[label], 2)
      bbox_array = cv2.putText(bbox_array, "{} [{:.2f}]".format(label, float(confidence)),
                        (left, top - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
                        class_colors[label], 2)

    bbox_array[:,:,3] = (bbox_array.max(axis = 2) > 0 ).astype(int) * 255
    # convert overlay of bbox into bytes
    bbox_bytes = bbox_to_bytes(bbox_array)
    # update bbox so next frame gets new overlay
    bbox = bbox_bytes
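Here bbox_to_bytes is the inverse helper from the notebook: it encodes the transparent RGBA overlay as a base64 PNG data URL so the JavaScript side can draw it over the live video. A rough sketch, assuming PIL (preinstalled on Colab):

import io
import base64
from PIL import Image as PILImage

def bbox_to_bytes(bbox_array):
  # wrap the uint8 RGBA overlay in a PIL image, preserving transparency
  bbox_pil = PILImage.fromarray(bbox_array, 'RGBA')
  iobuf = io.BytesIO()
  # write the overlay as PNG into an in-memory buffer
  bbox_pil.save(iobuf, format='png')
  # prepend the data-URL header the <img> element expects
  return 'data:image/png;base64,{}'.format(
      base64.b64encode(iobuf.getvalue()).decode('utf-8'))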

If the code above doesn't help, please check my Colab notebook, where I execute this code successfully.

Colab link

Use the two functions below from the opencv-python package in Google Colab:

  1. Import cv2 and cv2_imshow from google.colab.patches:

    import cv2
    from google.colab.patches import cv2_imshow

  2. Read the image using cv2 and display it with cv2_imshow:

    img = cv2.imread('Folder_Name/Img.jpg')
    cv2_imshow(img)
