Grouping floating point numbers

I have an application where I need to block-average a list of data (currently in a pandas.DataFrame) according to a timestamp, which may be a floating point value. For example, I may need to average the following df into groups of 0.3 secs:

+------+------+         +------+------+
| secs |  A   |         | secs |  A   |
+------+------+         +------+------+
| 0.1  |  ..  |         | 0.3  |  ..  | <-- avg of 0.1, 0.2, 0.3
| 0.2  |  ..  |   -->   | 0.6  |  ..  | <-- avg of 0.4, 0.5, 0.6
| 0.3  |  ..  |         | ...  | ...  | <-- etc
| 0.4  |  ..  |         +------+------+
| 0.5  |  ..  |
| 0.6  |  ..  |
| ...  | ...  |
+------+------+
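
(For reference, the console examples later in this post behave as if df were built roughly like this; the random values in A are my assumption, not from the original:)

import numpy as np
import pandas as pd

# Toy frame matching the sketch above; A holds arbitrary values.
df = pd.DataFrame({"secs": 0.1 * np.arange(1, 7),
                   "A": np.random.rand(6)})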

Currently I am using the following (minimal) solution:

import pandas as pd
import numpy as np

def block_avg(df: pd.DataFrame, duration: float) -> pd.DataFrame:
    # Assign each row to a block by integer division of the elapsed time.
    grouping = (df['secs'] - df['secs'][0]) // duration
    df = df.groupby(grouping, as_index=False).mean()
    # Label each block with its right edge.
    df['secs'] = duration * np.arange(1, 1 + len(df))
    return df
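
On the toy frame above this behaves as intended:

block_avg(df, 0.3)   # two rows, secs = 0.3 and 0.6, each averaging three inputs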

This works just fine for integer durations, but floating point values at the edges of blocks can fall on the wrong side. A simple test that the blocks are being created properly is to average by the same duration that the data is already in (0.1 in this example). This should return the input, but often doesn't (e.g. x = .1*np.arange(1, 20); (x - x[0]) // .1).

I found that the error with this method is usually that the result is one LSB too low, so a tentative fix is to add np.spacing(df['secs']) to the numerator in grouping. (That is, x = .1*np.arange(1, 20); all((x - x[0] + np.spacing(x)) // .1 == np.arange(19)) returns True.)
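
Spelled out, the failing check and the one-ULP nudge look like this:

import numpy as np

x = 0.1 * np.arange(1, 20)

# Naive grouping: boundary values can land one block too low.
naive = (x - x[0]) // 0.1
print(np.array_equal(naive, np.arange(19)))    # False

# Adding one ULP per element passes this particular test.
nudged = (x - x[0] + np.spacing(x)) // 0.1
print(np.array_equal(nudged, np.arange(19)))   # True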

However, I am concerned that this is not a robust solution. Is there a better or preferred way to group floats which passes the above test?

I have had similar issues with a (perhaps more straightforward) algorithm which groups using x[(duration*i < x) & (x <= duration*(i+1))], looping i over an appropriate range.
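
For reference, a minimal sketch of that loop-based variant (the function name is mine); it suffers from the same boundary problem, since duration*(i+1) is itself an inexact product:

import numpy as np

def block_avg_loop(x, a, duration):
    # Average a over right-closed blocks (i*duration, (i+1)*duration].
    n = int(np.ceil(x.max() / duration))
    means = []
    for i in range(n):
        mask = (duration * i < x) & (x <= duration * (i + 1))
        if mask.any():          # skip blocks emptied by edge errors
            means.append(a[mask].mean())
    return np.array(means)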

To be extra careful about float inaccuracy, I'd round to integer milliseconds early, before doing the groupby. Adding step - 1 (here 299) before the integer division by 300 turns it into a ceiling division, so a timestamp sitting exactly on a block edge (0.3, 0.6, ...) stays with the block it closes:

In [11]: np.round(299 + df.secs * 1000).astype(int) // 300
Out[11]:
0    1
1    1
2    1
3    2
4    2
5    2
Name: secs, dtype: int64

In [12]: (np.round(299 + df.secs * 1000).astype(int) // 300) * 0.3
Out[12]:
0    0.3
1    0.3
2    0.3
3    0.6
4    0.6
5    0.6
Name: secs, dtype: float64

In [13]: df.groupby(by=(np.round(299 + df.secs * 1000).astype(int) // 300) * 0.3)["A"].sum()
Out[13]:
secs
0.3    1.753843
0.6    2.687098
Name: A, dtype: float64
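
Folding the millisecond trick back into the questioner's block_avg gives a version that passes the duration == 0.1 round-trip test. This is a sketch under the assumption that timestamps are meaningful to millisecond precision; the name block_avg_ms is mine:

import numpy as np
import pandas as pd

def block_avg_ms(df: pd.DataFrame, duration: float) -> pd.DataFrame:
    # Exact integer timestamps, so block edges are no longer fuzzy.
    ms = np.round(df["secs"].to_numpy() * 1000).astype(int)
    step = int(round(duration * 1000))
    # Ceiling division: an edge value (0.3, 0.6, ...) stays in the
    # block it closes, matching the table in the question.
    grouping = (ms + step - 1) // step
    out = df.groupby(grouping, as_index=False).mean()
    out["secs"] = np.unique(grouping) * duration
    return out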

I would prefer to use a timedelta:

In [21]: s = pd.to_timedelta(np.round(df["secs"], 1), unit="s")

In [22]: df["secs"] = s

In [23]: df.groupby(pd.Grouper(key="secs", freq="0.3S")).sum()
Out[23]:
                        A
secs
00:00:00         1.753843
00:00:00.300000  2.687098

or with a resample:

In [24]: res = df.set_index("secs").resample("300ms").sum()

In [25]: res
Out[25]:
                        A
secs
00:00:00         1.753843
00:00:00.300000  2.687098

You can shift the index to correct the labelling*:

In [26]: res.index += np.timedelta64(300, "ms")

In [27]: res
Out[27]:
                        A
secs
00:00:00.300000  1.753843
00:00:00.600000  2.687098

* There ought to be a way to set this through a resample argument, but none of them seem to work...
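
(In newer pandas versions, resample's closed and label arguments may give the right-edge labels directly; an untested sketch:)

res = df.set_index("secs").resample("300ms", closed="right", label="right").sum()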
