
How would I implement a particle filter for vision tracking?

So I just took Sebastian Thrun's AI course. In it, he shows how to build a particle filter for tracking a moving robot in the x-y plane, based on its heading theta and forward motion.

The code is here: https://gist.github.com/soulslicer/b4765ee8e01958374d3b

In his implementation, he does the following:

1. Get the range from the sensor for all bearings after moving R=1, Theta=0.5
2. Move all the particles by R=1, Theta=0.5
3. Compute the weight of each particle by comparing its ranges against the measured ranges from the sensor
4. Resample and draw new particles
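The four steps above can be sketched roughly as follows. This is a simplified 1-D stand-in for the course's 2-D robot (same move/weight/resample structure, but without heading); the world size, noise levels, and step size are all illustrative values, not the course's actual parameters.

```python
import math
import random

N = 500            # number of particles (illustrative)
WORLD = 100.0      # 1-D cyclic world size (illustrative)
SENSE_NOISE = 2.0  # assumed sensor noise std-dev

def gaussian(mu, sigma, x):
    # Probability of x under a Gaussian centered at mu.
    return math.exp(-((mu - x) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

random.seed(0)
true_pos = 20.0
particles = [random.uniform(0.0, WORLD) for _ in range(N)]

for _ in range(10):
    true_pos = (true_pos + 1.0) % WORLD                  # robot moves R=1
    z = true_pos + random.gauss(0.0, SENSE_NOISE)        # step 1: noisy range reading
    particles = [(p + 1.0 + random.gauss(0.0, 0.5)) % WORLD
                 for p in particles]                     # step 2: move particles by R=1 (+ noise)
    weights = [gaussian(p, SENSE_NOISE, z)
               for p in particles]                       # step 3: weight against measurement
    particles = random.choices(particles, weights=weights, k=N)  # step 4: resample

estimate = sum(particles) / N  # posterior mean should be near true_pos
```

`random.choices` does the roulette-wheel resampling that the course implements by hand with the "resampling wheel".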

This works great for a motion model. But how on earth would this work for computer vision tracking? For example, say I want to track a yellow circular blob. How would I "move" the particles? What might my cost function be? The moving step especially is where I'm not sure how to proceed for computer vision tracking.


Here is how I think it might work, but I'm probably wrong:

1. Get features from the image, and compute the optical flow velocity of each feature
2. Place a lot of particles in the scene with varying x, y, xvel, yvel
3. For the weight computation, compare each particle's velocity and position against all features.
    If we can threshold out the object based on color/shape, we could match image features to shapes and fold that into the cost function
4. Resample and draw new particles

To use particle filtering, you need:

  • a transition model (e.g., a motion model used for moving the robot), and
  • an observation model (i.e., a model used to compute weights given sensor readings).

It is also helpful to clearly define the spaces for

  • observations (e.g., the range of possible sensor readings)
  • tracking states (e.g., the range of possible robot positions)

Now, based on the description in your question, I'm assuming the goal is to track the position of the yellow blob using the computed optical flow features. Then I would model

  • the transition as a function that samples a new position given the previous position using noise only, e.g., imagine keeping only the `+ random.gauss(0.0, self.turn_noise)` and `+ random.gauss(0.0, self.forward_noise)` parts of `def move(self, turn, forward):`
  • the observation as a function that returns high scores for likely observation/state input pairs
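The noise-only transition described above might look like this in isolation, i.e., a pure random walk on the blob position. `POS_NOISE` is an assumed frame-to-frame jitter in pixels, not a value from the course code.

```python
import random

POS_NOISE = 3.0  # assumed pixels of frame-to-frame position jitter

def transition(x, y):
    # Sample a new position given the previous one: noise only, no control input.
    return (x + random.gauss(0.0, POS_NOISE),
            y + random.gauss(0.0, POS_NOISE))

# The walk is unbiased: averaged over many samples, the new position
# stays centered on the old one.
random.seed(42)
samples = [transition(100.0, 50.0) for _ in range(10000)]
mean_x = sum(s[0] for s in samples) / len(samples)
mean_y = sum(s[1] for s in samples) / len(samples)
```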

The problem I see is that defining the observation model, i.e., a likelihood function between blob positions and optical flow outputs, is not trivial or intuitive. E.g., is the yellow blob likely to be at the center of an area of high optical flow output? If so, how can I express that relationship as a likelihood function? For this reason, I would look into using different observations, e.g., the outputs of a noisy yellow blob detector.
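If the observation is the output of a noisy yellow-blob detector, as suggested, one simple likelihood is an isotropic Gaussian on the pixel distance between a particle's position and the detected center. `DETECTOR_SIGMA` here is an assumed detector accuracy, and the function names are hypothetical:

```python
import math

DETECTOR_SIGMA = 10.0  # assumed detector accuracy, in pixels

def likelihood(particle_xy, detection_xy, sigma=DETECTOR_SIGMA):
    # 2-D isotropic Gaussian on the distance between particle and detection.
    # (The normalizing constant cancels after weight normalization, but is
    # kept here for clarity.)
    dx = particle_xy[0] - detection_xy[0]
    dy = particle_xy[1] - detection_xy[1]
    d2 = dx * dx + dy * dy
    return math.exp(-d2 / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)

# A particle near the detection scores higher than one far away.
near = likelihood((100, 100), (102, 101))
far = likelihood((100, 100), (160, 140))
```

The appeal of this choice is that the detector's error is easy to characterize (roughly how many pixels its reported center wanders), whereas the relationship between blob position and raw optical flow output is hard to write down as a density.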

My answer is based on pg. 16 of the particle-filters.ppt file available at http://www.probabilistic-robotics.org/
