
Converting images to time series

I'm experimenting with using reservoir computing techniques to classify images, but I'm not sure how to convert an arbitrary image into a time series.

I found this approach but it doesn't seem to be general enough.

Thanks!

As defined in that article, a time series is just a single-valued function of one variable. An image, however, is in general a multi-valued function of two variables. So converting an image to a 'time series' means projecting from a higher-dimensional space down to a lower-dimensional one (for example, the radial scanning technique described there collapses the whole image into an outline, reducing the dimension to one). The key point is that all such projections lose data. Since they are all lossy, there is no 'general' solution that works for every use of every image; choosing which data you can afford to lose, based on your intended application, is a key part of using this technique. So I guess my answer is that there is no single general way to convert an image to a 'time series' that works well for all applications.
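To make the lossiness concrete, here is a minimal NumPy sketch of two common projections (the array shapes and the choice of projections are illustrative assumptions, not the method from the linked article): a raster scan, which keeps every pixel value but destroys 2-D neighborhood structure, and a column-mean projection, which discards all vertical detail.

```python
import numpy as np

# Toy grayscale "image": a 2-D array of pixel intensities (assumed 32x32).
rng = np.random.default_rng(0)
img = rng.random((32, 32))

# Projection 1: raster scan -- read pixels row by row into one sequence.
# Keeps every value, but adjacent pixels across row boundaries are no
# longer adjacent in the series, so 2-D structure is lost.
raster_series = img.flatten()      # shape (1024,)

# Projection 2: column means -- collapse each column to a single value.
# Much shorter series, but all vertical detail within a column is gone.
column_series = img.mean(axis=0)   # shape (32,)

print(raster_series.shape, column_series.shape)
```

Which of these (or any other projection, such as a radial scan) is appropriate depends entirely on what structure your classifier needs to see.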

I would think of it along these lines:

An image is a static two-dimensional array of pixels recorded at a single point in time.

A time series is non-static, just as a video is a series of images advancing from one frame to the next in time.
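Following that video analogy, one simple way to get a time series is to summarize each frame with a single number, so that the frame index plays the role of time. A minimal sketch (the frame count, frame size, and mean-intensity summary are all illustrative assumptions):

```python
import numpy as np

# Hypothetical "video": a stack of 10 grayscale frames, each 16x16 pixels.
rng = np.random.default_rng(1)
video = rng.random((10, 16, 16))

# Summarize each frame by its mean pixel intensity; the result is a
# 1-D series indexed by frame number, i.e. by time.
series = video.mean(axis=(1, 2))   # shape (10,)

print(series.shape)
```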

Not sure I answered the question, but I hope this helps.
