
Object tracking without openCV

I am trying to build an algorithm to detect some objects and track them over time. My input data is a multi-stack tif file, which I read as a np array. I apply a U-Net model to create a binary mask and then identify the coordinates of the individual objects using scipy.
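For reference, a minimal sketch of that scipy step on a single frame, assuming a 2-D binary mask (the toy mask below is just an illustration, not my real data):

```python
import numpy as np
from scipy import ndimage

# Toy stand-in for one frame's U-Net output: a binary mask with two blobs.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:15, 10:15] = 1
mask[40:46, 30:36] = 1

labels, n_objects = ndimage.label(mask)  # connected-component labelling
centroids = ndimage.center_of_mass(mask, labels, range(1, n_objects + 1))
print(centroids)  # [(12.0, 12.0), (42.5, 32.5)] -- one (row, col) tuple per object
```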

Up to here everything kind of works, but I just cannot get my head around the tracking. I have a dictionary where keys are the frame numbers and values are lists of tuples. Each tuple contains the coordinates of one object.
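For concreteness, the structure looks something like this (the coordinates are made-up values in (row, col) order, continuing the toy example above):

```python
# Hypothetical example of the detections dictionary:
# frame number -> list of (row, col) centroids for that frame.
detections = {
    0: [(12.0, 12.0), (42.5, 32.5)],
    1: [(12.8, 12.4), (41.9, 33.1)],
    2: [(13.5, 13.0), (41.2, 33.8)],
}
```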

Now I have to link the objects together, which on paper seems pretty simple. I was hoping there was a function or a package to do so (ideally something similar to trackMate or M2track in ImageJ), but I cannot find anything like that. I am considering writing my own nearest-neighbour tool, but I'd like to know whether there is a less painful way (and I would also like to be able to use more advanced metrics).
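A minimal sketch of what such a nearest-neighbour linker could look like, using scipy.spatial.cKDTree on the detections dictionary described above (the greedy matching strategy and the 5-pixel gating distance are just illustrative choices):

```python
import numpy as np
from scipy.spatial import cKDTree

def link_nearest_neighbour(detections, max_dist=5.0):
    """Greedy frame-to-frame nearest-neighbour linking.

    detections: {frame: [(row, col), ...]} as described above.
    Returns {frame: [track_id, ...]}, aligned with each frame's list.
    max_dist is an assumed gating radius in pixels.
    """
    frames = sorted(detections)
    track_ids = {frames[0]: list(range(len(detections[frames[0]])))}
    next_id = len(detections[frames[0]])

    for prev, cur in zip(frames[:-1], frames[1:]):
        prev_pts = np.asarray(detections[prev], dtype=float)
        cur_pts = np.asarray(detections[cur], dtype=float)
        ids = [-1] * len(cur_pts)
        if len(prev_pts) and len(cur_pts):
            dists, idx = cKDTree(prev_pts).query(cur_pts)  # nearest previous detection
            taken = set()
            # Resolve matches in order of increasing distance so that each
            # previous object is claimed at most once.
            for i in np.argsort(dists):
                j = idx[i]
                if dists[i] <= max_dist and j not in taken:
                    ids[i] = track_ids[prev][j]
                    taken.add(j)
        # Any detection left unmatched starts a new track.
        for i, tid in enumerate(ids):
            if tid == -1:
                ids[i] = next_id
                next_id += 1
        track_ids[cur] = ids
    return track_ids
```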

The other option I considered is using cv2, but this would require converting the data into a format cv2 likes, which would significantly slow down the code. In addition, I would like to keep the data as close as possible to the original input, so no cv2 for me.

I solved it using trackpy: http://soft-matter.github.io/trackpy/v0.5.0/ trackpy properly reads multi-stack tiff files (OpenCV can't).
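A minimal sketch of the linking step with trackpy, assuming the per-frame coordinates are first flattened into a pandas DataFrame with x, y and frame columns (the search_range and memory values below are illustrative, not tuned):

```python
import pandas as pd
import trackpy as tp

# Flatten the {frame: [(row, col), ...]} dictionary into the layout trackpy
# expects: one row per detection with x, y and frame columns.
rows = [
    {"frame": frame, "y": r, "x": c}
    for frame, coords in detections.items()
    for (r, c) in coords
]
features = pd.DataFrame(rows)

# Link detections across frames; search_range is the maximum displacement
# (in pixels) allowed between consecutive frames, and memory lets an object
# disappear for a couple of frames without breaking its track.
linked = tp.link(features, search_range=5, memory=2)
print(linked)  # same rows plus a 'particle' column identifying each track
```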
