Time-series data analysis using scientific python: continuous analysis over multiple files

Question

I am doing time-series analysis. The measurement data comes from sampling the voltage output of a sensor at 50 kHz and then dumping that data to disk as separate hourly files. The data is saved to HDF5 files using pytables as CArrays. This format was chosen to maintain interoperability with MATLAB.
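For reference, here is a minimal sketch of how one such hourly file might be written with pytables (the node path /data/voltage/raw and the NODE##-YY-MM-DD-HH.h5 filename convention match the code further down; the data values and compression settings are placeholders):

import numpy as np
import tables

fs = 50000                   # sample rate [Hz]
nsamples_hour = fs * 3600    # samples in one hourly file

# One hour of raw voltage samples (placeholder data).
voltage = np.zeros(nsamples_hour, dtype=np.float64)

with tables.open_file('NODE01-13-05-30-00.h5', mode='w') as f:
    f.create_group('/', 'data')
    f.create_group('/data', 'voltage')
    f.create_carray('/data/voltage', 'raw', obj=voltage,
                    filters=tables.Filters(complevel=1, complib='zlib'))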

The full dataset is now multiple TB, far too large to load into memory.

Some of my analysis requires me to iterate over the entire dataset. For analyses that need to grab chunks of data, I can see a path forward by creating a generator method. I'm a bit less certain how to proceed with analyses that require a continuous time series.

For example, suppose I'm looking to find and classify transients using some moving-window process (e.g., wavelet analysis) or by applying a FIR filter. How do I handle the boundaries at the end or beginning of a file, or at chunk boundaries? I would like the data to appear as one continuous dataset.

Request

I would like to:

  • Load data as needed, to keep the memory footprint low.
  • Keep a map of the entire dataset in memory so that I can address the dataset as I would a regular pandas Series object, e.g. data[time1:time2].

I'm using scientific python (the Enthought distribution) with all the usual stuff: numpy, scipy, pandas, matplotlib, etc. I only recently started incorporating pandas into my workflow, and I'm still unfamiliar with all of its capabilities.

I've looked over related stackexchange threads and didn't see anything that exactly addresses my problem.

EDIT: Final solution.

Based on the helpful hints, I built an iterator that steps across files and returns chunks of arbitrary size: a moving window that hopefully handles file boundaries gracefully. I added the option of padding the front and back of each window with data (overlapping windows). I can then apply a succession of filters to the overlapping windows and remove the overlaps at the end. This, I hope, gives me continuity.

I haven't implemented __getitem__ yet, but it's on my list of things to do.

Here's the final code. Some details are omitted for brevity (a usage sketch follows the listing).

class FolderContainer(readdata.DataContainer):

    def __init__(self,startdir):
        readdata.DataContainer.__init__(self,startdir)

        self.filelist = None
        self.fs = None
        self.nsamples_hour = None
        # Build the file list
        self._build_filelist(startdir)


    def _build_filelist(self,startdir):
        """
        Populate the filelist dictionary with active files and their associated
        file date (YYYY,MM,DD) and hour.

        Each entry in 'filelist' has the form (abs. path : datetime) where the
        datetime object contains the complete date and hour information.
        """
        print('Building file list....',end='')
        # Use the full file path instead of a relative path so that we don't
        # run into problems if we change the current working directory.
        filelist = { os.path.abspath(f):self._datetime_from_fname(f)
                for f in os.listdir(startdir)
                if fnmatch.fnmatch(f,'NODE*.h5')}

        # If we haven't found any files, raise an error
        if not filelist:
            msg = "Input directory does not contain Illionix h5 files."
            raise IOError(msg)
        # 'filelist' is an ordered dictionary. Sort by file path before saving.
        self.filelist = OrderedDict(sorted(filelist.items(),
                key=lambda t: t[0]))
        print('done')

    def _datetime_from_fname(self,fname):
        """
        Return the year, month, day, and hour from a filename as a datetime
        object

        """
        # Filename has the prototype: NODE##-YY-MM-DD-HH.h5. Split this up and
        # take only the date parts. Convert the year from YY to YYYY.
        (year,month,day,hour) = [int(d) for d in re.split(r'-|\.',fname)[1:-1]]
        year+=2000
        return datetime.datetime(year,month,day,hour)


    def chunk(self,tstart,dt,**kwargs):
        """
        Generator yielding consecutive chunks of data, with optional overlap,
        from the entire set of Illionix data files.

        Parameters
        ----------
        Arguments:
            tstart: UTC start time [provided as a datetime or date string]
            dt: Chunk size [integer number of samples]

        Keyword arguments:
            tend: UTC end time [provided as a datetime or date string].
            frontpad: Padding in front of sample [integer number of samples].
            backpad: Padding in back of sample [integer number of samples]

        Yields:
            chunk: array of samples [dt plus any padding, in samples]

        """
        # PARSE INPUT ARGUMENTS

        # Ensure 'tstart' is a datetime object.
        tstart = self._to_datetime(tstart)
        # Find the offset, in samples, of the starting position of the window
        # in the first data file
        tstart_samples = self._to_samples(tstart)

        # Convert dt to samples. Because dt may be a timedelta object, we
        # can't use '_to_samples' for conversion.
        if isinstance(dt,int):
            dt_samples = dt
        elif isinstance(dt,datetime.timedelta):
            dt_samples = np.int64((dt.days*24*3600 + dt.seconds +
                    dt.microseconds/1e6) * self.fs)
        else:
            # FIXME: Pandas 0.13 includes a 'to_timedelta' function. Change
            # below when EPD pushes the update.
            t = self._parse_date_str(dt)
            dt_samples = np.int64((t.minute*60 + t.second) * self.fs)

        # Read keyword arguments. 'tend' defaults to the end of the last file
        # if a time is not provided.
        default_tend = list(self.filelist.values())[-1] + datetime.timedelta(hours=1)
        tend = self._to_datetime(kwargs.get('tend',default_tend))
        tend_samples = self._to_samples(tend)

        frontpad = kwargs.get('frontpad',0)
        backpad = kwargs.get('backpad',0)


        # CREATE FILE LIST

        # Build the list of data files we will iterate over based upon the
        # start and stop times.
        print('Pruning file list...',end='')
        tstart_floor = datetime.datetime(tstart.year,tstart.month,tstart.day,
                tstart.hour)
        filelist_pruned = OrderedDict([(k,v) for k,v in self.filelist.items()
                if v >= tstart_floor and v <= tend])
        print('done.')
        # Check to ensure that we're not missing files by enforcing that there
        # is exactly an hour offset between all files.
        if not all(delta == datetime.timedelta(hours=1)
                for delta in np.diff(np.array(list(filelist_pruned.values())))):
            raise readdata.DataIntegrityError("Hour gap(s) detected in data")


        # MOVING WINDOW GENERATOR ALGORITHM

        # Keep two files open, the current file and the next in line (queued
        # file).
        fname_generator = self._file_iterator(filelist_pruned)
        fname_current = next(fname_generator)
        fname_next = next(fname_generator)

        # Iterate over all the files. 'lastfile' indicates when we're
        # processing the last file in the queue.
        lastfile = False
        i = tstart_samples
        while True:
            with tables.open_file(fname_current) as fcurrent, \
                    tables.open_file(fname_next) as fnext:
                # Point to the data
                data_current = fcurrent.get_node('/data/voltage/raw')
                data_next = fnext.get_node('/data/voltage/raw')
                # Process all data windows associated with the current pair of
                # files. Avoid unnecessary file access operations as we move
                # the sliding window.
                while True:
                    # Conditionals that depend on if our slice is:
                    #   (1) completely into the next hour
                    #   (2) partially spills into the next hour
                    #   (3) completely in the current hour.
                    if i - backpad >= self.nsamples_hour:
                        # If we're already on the last file in the processing
                        # queue, we can't continue to the next. Exit; the
                        # generator is finished.
                        if lastfile:
                            return
                        # Advance the active and queued file names.
                        fname_current = fname_next
                        try:
                            fname_next = next(fname_generator)
                        except StopIteration:
                            # We've reached the end of our file processing
                            # queue. Indicate this is the last file so that if
                            # we try to pull data across the next file
                            # boundary, we'll exit.
                            lastfile = True
                        # Our data slice has completely moved into the next
                        # hour.
                        i-=self.nsamples_hour
                        # Return the data
                        yield data_next[i-backpad:i+dt_samples+frontpad]
                        # Move window by amount dt
                        i+=dt_samples
                        # We've completely moved on to the next pair of files.
                        # Move to the outer scope to grab the next set of
                        # files.
                        break
                    elif i + dt_samples + frontpad >= self.nsamples_hour:
                        if lastfile:
                            return
                        # Slice spills over into the next hour
                        yield np.r_[data_current[i-backpad:],
                                data_next[:i+dt_samples+frontpad-self.nsamples_hour]]
                        i+=dt_samples
                    else:
                        if lastfile:
                            # Exit once our slice crosses the boundary of the
                            # last file.
                            if i + dt_samples + frontpad > tend_samples:
                                return
                        # Slice is completely within the current hour
                        yield data_current[i-backpad:i+dt_samples+frontpad]
                        i+=dt_samples


    def _to_samples(self,input_time):
        """Convert input time, if not in samples, to samples"""
        if isinstance(input_time,int):
            # Input time is already in samples
            return input_time
        elif isinstance(input_time,datetime.datetime):
            # Input time is a datetime object
            return self.fs * (input_time.minute * 60 + input_time.second)
        else:
            raise ValueError("Invalid input 'tstart' parameter")


    def _to_datetime(self,input_time):
        """Return the passed time as a datetime object"""
        if isinstance(input_time,datetime.datetime):
            converted_time = input_time
        elif isinstance(input_time,str):
            converted_time = self._parse_date_str(input_time)
        else:
            raise TypeError("A datetime object or a date/time string was "
                    "expected")
        return converted_time


    def _file_iterator(self,filelist):
        """Generator for iterating over file names."""
        for fname in filelist:
            yield fname
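
Usage might look something like the following sketch. The path, start date, and filter parameters are illustrative, and it assumes 'fs' has been populated; the pads absorb the FIR filter's edge effects and are trimmed afterwards so that consecutive outputs line up without discontinuities:

import scipy.signal as signal

fc = FolderContainer('/path/to/hourly/files')         # illustrative path
taps = signal.firwin(513, cutoff=5000.0, fs=fc.fs)    # example FIR filter

pad = 1000   # samples of overlap on each side of every window
for window in fc.chunk('2013-05-30 00:00:00', fc.fs,  # 1-second chunks
                       frontpad=pad, backpad=pad):
    filtered = signal.lfilter(taps, 1.0, window)
    core = filtered[pad:-pad]   # drop the overlapped edges
    # ... detect/classify transients in 'core' ...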

@Sean Here's my 2c.

Take a look at this issue that I created here. It's essentially what you are trying to do, and it's somewhat non-trivial.

Without knowing more details, I'd offer a couple of suggestions:

  • HDFStore CAN read a standard CArray-type format, see here.

  • You can easily create a Series-like object that: a) knows where each file is located along with its extents, and b) uses __getitem__ to 'select' those files, e.g. s[time1:time2]. From a top-level view this might be a very nice abstraction, and you can then dispatch operations.

For example:

class OutOfCoreSeries(object):

    def __init__(self, dir):
        # ... load a list of the files in the dir where you have them ...
        pass

    def __getitem__(self, key):
        # ... map the selection key (say it's a slice, which 'time1:time2'
        # ... resolves to) to the files that make it up, then return a new
        # ... Series backed by only those file pointers ...
        pass

    def apply(self, func, **kwargs):
        """ apply a function to the files """
        results = []
        for f in self.files:
            results.append(func(self.read_file(f)))
        return Results(results)

This can easily get quite complicated. For instance, if you apply a reduction operation whose result fits in memory, the result can simply be a pandas.Series (or Frame). On the other hand, you may be doing a transformation, which necessitates writing out a new set of transformed data files; if so, then you'll have to handle that.
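As a concrete instance of the reduction case, a function that collapses each hourly file to a single scalar fits comfortably in memory; hourly_rms below is a hypothetical function one could hand to the apply sketch above:

import numpy as np

def hourly_rms(data):
    """Collapse one hour of samples to a single RMS value."""
    x = np.asarray(data, dtype=np.float64)
    return np.sqrt(np.mean(x ** 2))

# s = OutOfCoreSeries('/path/to/hourly/files')
# results = s.apply(hourly_rms)   # one scalar per file; fits in memory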

A few more suggestions:

  • You may want to keep the data in multiple, potentially useful formats. For instance, you say you are saving multiple values in 1-hour slices: it may be that you could split those 1-hour files into one file per variable you're saving, but save a much longer slice that then becomes memory-readable.

  • You might want to resample the data to a lower frequency and work on that, loading the data in a particular slice as needed for more detailed work (see the sketch after this list).

  • You might want to create a dataset that is queryable across time, e.g., high-low peaks at varying frequencies, perhaps using a Table format, see here.
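As a sketch of the resampling suggestion above: decimate each hourly file to a much lower rate and store the result as one long, memory-sized array for coarse browsing (read_hour is a hypothetical helper that returns one hour of raw samples; a factor of 50 takes 50 kHz down to 1 kHz):

import numpy as np
import scipy.signal as signal

def downsample_hour(raw, q=50):
    # decimate applies an anti-aliasing filter before downsampling
    return signal.decimate(raw, q, ftype='fir', zero_phase=True)

# One long, memory-sized overview built from every hourly file:
# overview = np.concatenate([downsample_hour(read_hour(f)) for f in files])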

Thus you may end up with multiple variations of the same data. Disk space is usually much cheaper/easier to manage than main memory, so it makes a lot of sense to take advantage of that.
