I am trying to benchmark the performance of dask vs pandas.
import numpy as np
import pandas as pd
import dask.array as da
import perfplot

def make_pandas(n):
    return pd.DataFrame(np.random.randint(10, size=(n, 3)))

def make_dask(n):
    return da.from_array(np.random.randint(10, size=(n, 3)), chunks=10)

def make_numpy(n):
    return np.random.randint(10, size=(n, 3))

def sum_pandas(x): return x[0].sum()
def sum_dask(x): return x[1].sum()
def sum_numpy(x): return x[2].sum()

perfplot.show(
    setup=lambda n: [make_pandas(n), make_dask(n), make_numpy(n)],
    kernels=[sum_pandas, sum_dask, sum_numpy],
    n_range=[2**k for k in range(2, 15)],
    equality_check=False,
    xlabel='len(df)')
Can someone explain these results? Increasing chunks to 1000, 8000, and 10000 gives these results, respectively. Isn't dask supposed to parallelize and perform better as the size increases?
The chunks keyword is short for chunk size; it is not the number of chunks.
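To illustrate the point: a sketch comparing how `chunks` controls the size of each block, not how many blocks there are. With `chunks=10` on an 8000-row array, dask creates 800 tiny blocks, so the scheduling overhead dominates the actual summation work (the array shape and sizes here are just example values):

```python
import numpy as np
import dask.array as da

arr = np.random.randint(10, size=(8000, 3))

# chunks=10 means each block is at most 10x10 elements; along the first
# axis that splits 8000 rows into 800 blocks, along the second axis the
# 3 columns fit in a single block.
many_small = da.from_array(arr, chunks=10)
print(many_small.numblocks)  # (800, 1)

# A larger chunk size yields far fewer blocks, and far less overhead
# per element of useful work.
few_large = da.from_array(arr, chunks=2000)
print(few_large.numblocks)  # (4, 1)
```

For an in-memory array this small, the per-block task overhead is why dask loses to plain numpy or pandas; dask pays off when the data is larger than memory or the per-chunk work is substantial.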