
Getting two consecutive frames from camera in Processing

I've been trying several approaches to do this in Processing, but each time I can't get frames that are truly consecutive. Does anyone know the "right way" to do this?

Thanks in advance!

I had a look at the KetaiCamera source and its issue reports, especially this one. Unfortunately, that code isn't built to provide real-time camera frames.

You could try using the AndroidCapture project as a starting point and modifying the native Android code to achieve your goal.

In theory, you'd listen for captureEvent() to get a new frame, keep track of whether the first frame has already been recorded, and if so, record the second one.

Here's a basic commented sketch to illustrate the point (press any key to grab another pair of frames):

import processing.video.*;

Capture camera;

PImage firstFrame;
PImage secondFrame;

void setup(){
  size(1920,480);

  camera = new Capture(this,640,480);
  camera.start();
}
void draw(){
  image(camera,0,0);
  if(firstFrame != null){
    image(firstFrame,640,0);
  }
  if(secondFrame != null){
    image(secondFrame,1280,0);
  }
}
//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record(copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
  }
  //same for the second frame, but only once the first frame has been recorded
  //(the else makes sure the second frame is copied on a later captureEvent, not this one)
  else if(secondFrame == null){
    secondFrame = c.get();
  }
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}

In theory (as you can see in the Processing Video Library's source code), captureEvent is fired only when a new camera sample is ready. In practice, you'll find that two consecutive frames can look identical (even though they may be a second or so apart in time), right down to the noise you pointed out in the comments.
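If you want to check how far apart the camera samples actually arrive on your setup, one quick way is to log the time between captureEvent() calls with millis(). Here's a minimal sketch of that idea (the 640x480 resolution is simply picked to match the other sketches in this answer):

import processing.video.*;

Capture camera;
//millis() timestamp of the previous camera frame
int lastFrameTime;

void setup(){
  size(640,480);

  camera = new Capture(this,640,480);
  camera.start();

  lastFrameTime = millis();
}
void draw(){
  image(camera,0,0);
}
//log how many milliseconds passed since the previous camera frame arrived
void captureEvent(Capture c){
  c.read();
  int now = millis();
  println("new camera frame after",(now - lastFrameTime),"ms");
  lastFrameTime = now;
}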

It sounds like what you're after is a consecutive frame that is different enough from the previous one. If that's the case, you can play with the FrameDifferencing example (Processing > Examples > Libraries > Video > Capture > FrameDifferencing).

Here's a modified version of the sketch above, using Golan Levin's FrameDifferencing code to grab the second frame only when it's even slightly different from the first:

import processing.video.*;

Capture camera;

PImage firstFrame;
PImage secondFrame;
PImage diff;

void setup(){
  size(1920,960);

  camera = new Capture(this,640,480);
  camera.start();

  diff = createImage(640,480,RGB);
}
void draw(){
  image(camera,0,0);
  if(firstFrame != null){
    image(firstFrame,640,0);
  }
  if(secondFrame != null){
    image(secondFrame,1280,0);
  }
  image(diff,0,480);
}
//this is the callback from the video library when a new camera frame is available
void captureEvent(Capture c){
  //read a new frame
  c.read();
  //if the first frame wasn't recorded yet, record(copy) its pixels
  if(firstFrame == null){
    firstFrame = c.get();
    println("recorded first frame at",new java.util.Date());
  }
  //same for the second frame, but check if the first frame has been recorded first
  if(firstFrame != null && secondFrame == null){
    //if the difference between the first frame and the current frame is even ever so slightly off, record the second frame
    if(difference(firstFrame,camera) > 100){
      secondFrame = c.get();
    }
  }

}

int difference(PImage first,PImage second){
  final int numPixels = 640*480;
  //make sure both images' pixels[] arrays are up to date
  first.loadPixels();
  second.loadPixels();
  int movementSum = 0; // Amount of movement in the frame
  for (int i = 0; i < numPixels; i++) { // For each pixel in the video frame...
    color currColor = first.pixels[i];
    color prevColor = second.pixels[i];
    // Extract the red, green, and blue components from current pixel
    int currR = (currColor >> 16) & 0xFF; // Like red(), but faster
    int currG = (currColor >> 8) & 0xFF;
    int currB = currColor & 0xFF;
    // Extract red, green, and blue components from previous pixel
    int prevR = (prevColor >> 16) & 0xFF;
    int prevG = (prevColor >> 8) & 0xFF;
    int prevB = prevColor & 0xFF;
    // Compute the difference of the red, green, and blue values
    int diffR = abs(currR - prevR);
    int diffG = abs(currG - prevG);
    int diffB = abs(currB - prevB);
    // Render the difference image to the screen
    diff.pixels[i] = color(diffR, diffG, diffB);
    // Add these differences to the running tally
    movementSum += diffR + diffG + diffB;
  }
  diff.updatePixels();
  return movementSum;
}

void keyPressed(){
  //reset consecutive frames on keypress
  firstFrame = secondFrame = null;
}

In the example above, 100 is an arbitrary value. The maximum would be 255*3*640*480 (0-255 per channel * number of channels * width * height).
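If it's easier to reason about that threshold as a percentage rather than a raw sum, you could normalise the result of difference() against that maximum. Here's a small sketch of that idea, meant to be dropped into the sketch above (differsEnough() and the 1% value are just placeholder names/numbers, not part of the original example):

//maximum possible difference for a 640x480 RGB frame: 255 per channel * 3 channels * width * height
final float MAX_DIFF = 255.0f * 3 * 640 * 480;

//returns true if the two frames differ by more than the given fraction (0.0 - 1.0) of that maximum
boolean differsEnough(PImage first, PImage second, float fraction){
  return difference(first, second) / MAX_DIFF > fraction;
}

//then, inside captureEvent(), instead of the hard-coded 100:
//if(differsEnough(firstFrame, camera, 0.01)){ //trigger at roughly 1% of the maximum
//  secondFrame = c.get();
//}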
