
Getting two consecutive frames from camera in Processing

I've been trying a few ways to achieve this in Processing, but each time I can't get truly consecutive frames. Does anyone know the "right way" to do this?

Thanks in advance!

I had a look at the KetaiCamera source and issue reports, especially this report. Unfortunately, the code isn't built to provide live camera frames.

You could try the AndroidCapture project as a starting point, then modify the native Android code to achieve your goal.

In theory, you should be able to listen to captureEvent() to fetch a new frame, keep track of whether the first frame has already been recorded, and if it has, record the second one.

Here's a basic commented sketch to illustrate the point (press any key to grab another pair of frames):

import processing.video.*;

Capture camera;

PImage firstFrame;
PImage secondFrame;

void setup(){
  size(1920,480);

  camera = new Capture(this,640,480);
  camera.start();
}
void draw(){
  image(camera,0,0);
  if(firstFrame != null){
    image(firstFrame,640,0);
  }
  if(secondFrame != null){
    image(secondFrame,1280,0);
  }
}
//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record (copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
  }
  //otherwise, if only the first frame was recorded so far, record the second one
  //on this new camera frame, so the two copies are truly consecutive frames
  else if(secondFrame == null){
    secondFrame = c.get();
  }
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}

In theory (as you can see in the Processing Video Library's source code), captureEvent is only fired when a new camera sample is ready. In practice you'll find that two consecutive frames can look identical (even though they may be a split second apart in time), right down to the noise you pointed out in the comments.
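
If you want to check how far apart those callbacks actually are, here is a minimal standalone sketch (not part of the original answer) that simply logs the elapsed time between camera frames using millis():

import processing.video.*;

Capture camera;
int lastFrameTime;

void setup(){
  size(640,480);
  camera = new Capture(this,640,480);
  camera.start();
}
void draw(){
  image(camera,0,0);
}
//log how many milliseconds pass between consecutive camera frames
void captureEvent(Capture c){
  c.read();
  int now = millis();
  println("new frame, " + (now - lastFrameTime) + " ms since the previous one");
  lastFrameTime = now;
}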

It sounds like what you're after is a frame that is consecutive but differs noticeably from the previous one. If that's the case, you can play with the FrameDifferencing example (Processing > Examples > Libraries > Video > Capture > FrameDifferencing).

Here's a modified version of the sketch above, using Golan Levin's FrameDifferencing code to grab the second frame only when it differs slightly from the first:

import processing.video.*;

Capture camera;

PImage firstFrame;
PImage secondFrame;
PImage diff;

void setup(){
  size(1920,960);

  camera = new Capture(this,640,480);
  camera.start();

  diff = createImage(640,480,RGB);
}
void draw(){
  image(camera,0,0);
  if(firstFrame != null){
    image(firstFrame,640,0);
  }
  if(secondFrame != null){
    image(secondFrame,1280,0);
  }
  image(diff,0,480);
}
//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record (copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
    println("recorded first frame at",new java.util.Date());
  }
  //same for the second frame, but check if the first frame has been recorded first
  if(firstFrame != null && secondFrame == null){
    //if the current frame differs even ever so slightly from the first frame, record it as the second frame
    if(difference(firstFrame,camera) > 100){
      secondFrame = c.get();
    }
  }

}

int difference(PImage first,PImage second){
  final int numPixels = 640*480;
  //make sure both images' pixels[] arrays are up to date
  first.loadPixels();
  second.loadPixels();
  int movementSum = 0; // Amount of movement in the frame
  for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
    color currColor = first.pixels[i];
    color prevColor = second.pixels[i];
    // Extract the red, green, and blue components from current pixel
    int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
    int currG = (currColor >> 8) & 0xFF;
    int currB = currColor & 0xFF;
    // Extract red, green, and blue components from previous pixel
    int prevR = (prevColor >> 16) & 0xFF;
    int prevG = (prevColor >> 8) & 0xFF;
    int prevB = prevColor & 0xFF;
    // Compute the difference of the red, green, and blue values
    int diffR = abs(currR - prevR);
    int diffG = abs(currG - prevG);
    int diffB = abs(currB - prevB);
    // Render the difference image to the screen
    diff.pixels[i] = color(diffR, diffG, diffB);
    // Add these differences to the running tally
    movementSum += diffR + diffG + diffB;
  }
  diff.updatePixels();
  return movementSum;
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}

In the example above, 100 is an arbitrary value. The maximum possible value is 255*3*640*480 (0-255 per channel * number of channels * width * height).
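
If hard-coding 100 feels too magic, here's a small sketch of how you could derive the threshold from that maximum instead; the 0.5% sensitivity is just an illustrative assumption, not a value from the original answer:

//largest value difference() can return: every pixel differs by 255 in all 3 channels
final int MAX_DIFF = 255 * 3 * 640 * 480; // = 235,008,000

//hypothetical sensitivity: treat frames as different once they differ by more
//than 0.5% of the maximum possible difference (tune for your camera/lighting)
final int THRESHOLD = MAX_DIFF / 200; // = 1,175,040

//inside captureEvent() you would then compare against it, e.g.:
//if(difference(firstFrame,camera) > THRESHOLD){ secondFrame = c.get(); }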
