
Read a compressed GRIB file from a URL into an xarray Dataset entirely in memory in Python

I am trying to read the gzipped GRIB2 files at this URL: https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/

I want to read the GRIB file into an xarray Dataset. I know I could write a script to download the file to disk, decompress it, and read it in, but ideally I want to be able to do this entirely in memory.

I feel like I should be able to do this with some combination of the urllib and gzip packages, but I can't quite figure it out.

I have the following code so far:

import urllib.request  # 'import urllib' alone does not expose urllib.request
import io
import gzip

URL = 'https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/SeamlessHSR_00.00_20221224-000000.grib2.gz'

# Download the gzipped file and wrap the bytes in a file-like object
response = urllib.request.urlopen(URL)
compressed_file = io.BytesIO(response.read())
decompressed_file = gzip.GzipFile(fileobj=compressed_file)

But I can't figure out how to read decompressed_file into xarray.
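For what it's worth, the in-memory decompression half of the problem does work on its own; the sticking point is only the xarray/GRIB side. A minimal round-trip sketch (the payload bytes here are a stand-in, not real GRIB data):

```python
import gzip
import io

# Stand-in for the downloaded bytes; a real GRIB2 payload would go here
payload = b"GRIB2 bytes would go here"
compressed = gzip.compress(payload)

# Decompress entirely in memory, no temporary file needed
buf = io.BytesIO(compressed)
with gzip.GzipFile(fileobj=buf) as gz:
    restored = gz.read()

assert restored == payload
```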

Bonus points if you can figure out how to open_mfdataset on all of the URLs there at once.

One way that works for me is writing the decompressed data into a temporary file, which can then be opened with xarray.

import gzip
import tempfile
import urllib.request  # 'import urllib' alone does not expose urllib.request

import xarray as xr

URL = 'https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/SeamlessHSR_00.00_20221224-000000.grib2.gz'

# Download the gzipped GRIB2 file
response = urllib.request.urlopen(URL)
compressed_file = response.read()

# Decompress into a temporary file that xarray (via the cfgrib backend) can open by path
with tempfile.NamedTemporaryFile(suffix=".grib2") as f:
    f.write(gzip.decompress(compressed_file))
    f.flush()  # make sure the bytes hit disk before xarray reads the file
    xx = xr.load_dataset(f.name, engine="cfgrib")

print(xx)
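For the bonus question, here is a hedged sketch of the same idea extended to `open_mfdataset`: scrape the directory listing for `.grib2.gz` links, decompress each into a temporary file, and hand the whole list to xarray. The `list_grib_urls` helper and the `concat_dim="time"` choice are my assumptions (I have not verified how these particular files concatenate), and cfgrib must be installed:

```python
import gzip
import re
import tempfile
import urllib.request

BASE = "https://mtarchive.geol.iastate.edu/2022/12/24/mrms/ncep/SeamlessHSR/"


def list_grib_urls(html: str, base: str = BASE) -> list:
    """Extract .grib2.gz links from the archive's directory-listing HTML.

    Assumes the listing uses plain href="..." anchors, which is how this
    archive's autoindex pages are typically rendered.
    """
    names = re.findall(r'href="([^"]+\.grib2\.gz)"', html)
    return [base + name for name in names]


def open_all(urls):
    """Download and decompress every file, then open them together.

    cfgrib needs a real file path, so each dataset gets a temp file on
    disk (delete=False keeps them alive for open_mfdataset; clean them
    up yourself afterwards).
    """
    import xarray as xr  # deferred so the helpers above work without xarray

    paths = []
    for url in urls:
        raw = urllib.request.urlopen(url).read()
        tmp = tempfile.NamedTemporaryFile(suffix=".grib2", delete=False)
        tmp.write(gzip.decompress(raw))
        tmp.flush()
        paths.append(tmp.name)
    # concat_dim="time" is an assumption about how these files stack
    return xr.open_mfdataset(paths, engine="cfgrib",
                             combine="nested", concat_dim="time")
```

Usage would be something like `ds = open_all(list_grib_urls(urllib.request.urlopen(BASE).read().decode()))`, though with hundreds of files you may want to download them in parallel.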

