How can I more efficiently extract data from netCDF files in Python?
I have written the following code to extract data from several netCDF files. I have 192 files, one per half-hourly time step (i.e. 4 days of data), and I consider 10 latitude and 10 longitude values (i.e. 100 data points).

In the output I get the 192 time steps as rows and the data for the different points as columns. The output is what I wanted, but I think the code is not efficient.
import glob
from netCDF4 import Dataset
import pandas as pd
import numpy as np

# Record all the netCDF file names into a Python list
all_hh = []
date_range = pd.date_range(start=str(2000)+'-12-30',
                           end=str(2001)+'-01-03',
                           freq='30 min')
d_range_mod = date_range.drop(pd.Timestamp("2001-01-03T00:00:00.000000000"))
lng = range(0, 10, 1)
ltd = range(0, 10, 1)
intn = []

for file in glob.glob('*.nc4'):
    # print(file)
    data = Dataset(file, 'r')
    all_hh.append(file)

for i in all_hh:
    data = Dataset(i, 'r')
    temp = data.variables['precipitationCal']
    for x in lng:
        for y in ltd:
            inten = temp[0, x, y]
            intn.append(inten)

df1 = pd.DataFrame(intn, columns=['Intensity (mm/hr)'])
df2 = np.array(df1)
df3 = np.reshape(df2, (192, 100))
df4 = pd.DataFrame(df3, index=d_range_mod)
df4.to_excel('intensity_tS.xlsx')
This can be done more easily with xarray:
import glob
import xarray as xr

(
    xr.open_mfdataset(glob.glob('*.nc4'))
    .to_dataframe()
    .to_excel('intensity_tS.xlsx')
)
With some modifications, obviously, depending on precisely what is in your data files.
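If you prefer to stay with netCDF4, the main cost in the original code is the two nested Python loops over `lng` and `ltd`, which perform 100 scalar reads per file. Each of those can be replaced by a single array slice, which is evaluated in one vectorised step. A minimal sketch, using random NumPy arrays as hypothetical stand-ins for the `temp = data.variables['precipitationCal']` array read from each file:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for temp = data.variables['precipitationCal'],
# one (1, n_lat, n_lon) array per half-hourly file.
rng = np.random.default_rng(0)
files = [rng.random((1, 20, 20)) for _ in range(192)]

times = pd.date_range('2000-12-30', periods=192, freq='30min')

# One slice per file instead of 100 scalar look-ups: take the 10x10
# window and flatten it into a row of 100 values.
rows = np.stack([temp[0, :10, :10].ravel() for temp in files])

df = pd.DataFrame(rows, index=times)  # 192 rows x 100 columns
```

The same slicing syntax works on a netCDF4 variable object, so `temp[0, :10, :10]` reads the whole window in one call instead of one value at a time.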