Python - seek in http response stream
Using urllib (or urllib2), there seems to be no hope of getting what I want. Is there any solution?
I am not sure how the C# implementation works, but, as internet streams are generally not seekable, my guess is that it downloads all the data to a local file or an in-memory object and seeks within that. The Python equivalent of this would be to do as Abafei suggests in his answer below and write the data to a file or to a StringIO and seek there.
However, if, as your comment on Abafei's answer suggests, you only want to retrieve a particular part of the file (rather than seeking back and forth through the returned data), there is another possibility. urllib2 can be used to retrieve a certain section (or "range", in HTTP parlance) of a web page, provided the server supports this behaviour.
The Range header

When you send a request to a server, the parameters of the request are given in various headers. One of these is the Range header, defined in section 14.35 of RFC 2616 (the specification that defines HTTP/1.1). This header allows you to do things such as retrieve all data starting from the 10,000th byte, or the data between bytes 1,000 and 1,500.
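For illustration, the raw header values for those two examples would look roughly like this (byte offsets are zero-based and both ends of a range are inclusive):

Range: bytes=10000-
Range: bytes=1000-1500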
There is no requirement for a server to support range retrieval. Some servers will return the Accept-Ranges header (section 14.5 of RFC 2616) along with a response to report whether or not they support ranges. This can be checked with a HEAD request, as in the sketch below. However, there is no particular need to do so: if the server does not support ranges, it will simply return the entire page, and we can then extract the desired portion of the data in Python as before.
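A minimal sketch of such a check (my own addition, not from the original answer; urllib2 has no built-in HEAD support, so we override get_method() on the Request):

import urllib2

# Issue a HEAD request and report whether the server advertises byte-range
# support via the Accept-Ranges header.
class HeadRequest(urllib2.Request):
    def get_method(self):
        return "HEAD"

response = urllib2.urlopen(HeadRequest("http://www.python.org/"))
print response.headers.get("accept-ranges", "none")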
If the server returns a range, it must send the Content-Range header (section 14.16 of RFC 2616) along with the response. If this header is present in the response, we know that a range was returned; if it is absent, the entire page was returned.
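For example, a partial response covering bytes 1,000 to 1,500 of a 19,387-byte page would carry a header of this form (an illustrative value; an asterisk stands in for the total size when it is unknown):

Content-Range: bytes 1000-1500/19387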
urllib2 allows us to add headers to a request, and so lets us ask the server for a range rather than the entire page. The following script takes a URL, a start position, and (optionally) a length on the command line, and tries to retrieve the given section of the page.
import sys
import urllib2

# Check command line arguments.
if len(sys.argv) < 3:
    sys.stderr.write("Usage: %s url start [length]\n" % sys.argv[0])
    sys.exit(1)

# Create a request for the given URL.
request = urllib2.Request(sys.argv[1])

# Add the header to specify the range to download.
if len(sys.argv) > 3:
    start, length = map(int, sys.argv[2:])
    request.add_header("range", "bytes=%d-%d" % (start, start + length - 1))
else:
    request.add_header("range", "bytes=%s-" % sys.argv[2])

# Try to get the response. This will raise a urllib2.URLError if there is a
# problem (e.g., invalid URL).
response = urllib2.urlopen(request)

# If a content-range header is present, partial retrieval worked.
if "content-range" in response.headers:
    print "Partial retrieval successful."

    # The header contains the string 'bytes', followed by a space, then the
    # range in the format 'start-end', followed by a slash and then the total
    # size of the page (or an asterisk if the total size is unknown). Let's
    # get the range and total size from this.
    range, total = response.headers['content-range'].split(' ')[-1].split('/')

    # Print a message giving the range information.
    if total == '*':
        print "Bytes %s of an unknown total were retrieved." % range
    else:
        print "Bytes %s of a total of %s were retrieved." % (range, total)

# No header, so partial retrieval was unsuccessful.
else:
    print "Unable to use partial retrieval."

# And for good measure, let's check how much data we downloaded.
data = response.read()
print "Retrieved data size: %d bytes" % len(data)
Using this, I can retrieve the final 2,000 bytes of the Python homepage:
blair@blair-eeepc:~$ python retrieverange.py http://www.python.org/ 17387
Partial retrieval successful.
Bytes 17387-19386 of a total of 19387 were retrieved.
Retrieved data size: 2000 bytes
Or 400 bytes from the middle of the homepage:
blair@blair-eeepc:~$ python retrieverange.py http://www.python.org/ 6000 400
Partial retrieval successful.
Bytes 6000-6399 of a total of 19387 were retrieved.
Retrieved data size: 400 bytes
However, the Google homepage does not support ranges:
blair@blair-eeepc:~$ python retrieverange.py http://www.google.com/ 1000 500
Unable to use partial retrieval.
Retrieved data size: 9621 bytes
In this case, it is necessary to extract the data of interest in Python before any further processing.
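A minimal sketch of that fallback (my own addition, using the same urllib2 API as the script above): fetch the whole page and slice out the section of interest.

import urllib2

# The server ignored the Range header, so download everything and cut out
# bytes 1000-1499 ourselves.
data = urllib2.urlopen("http://www.google.com/").read()
section = data[1000:1500]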
It may work best to write the data to a file (or even to a string, using StringIO), and to seek in that file (or string).
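A minimal sketch of this approach (my own addition; Python 2, to match the answers above): buffer the response in a StringIO object, which supports seek() just like a local file.

import urllib2
from StringIO import StringIO

# Read the whole response into a seekable in-memory buffer.
buf = StringIO(urllib2.urlopen("http://www.python.org/").read())
buf.seek(100)         # jump to byte 100
chunk = buf.read(50)  # read 50 bytes from there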
I did not find any existing implementations of a file-like interface to HTTP URLs with seek(), so I rolled my own simple version: https://github.com/valgur/pyhttpio. It depends on urllib.request but could probably easily be modified to use requests, if necessary; a rough sketch of that swap appears after the usage example below.
The full code:
import cgi
import time
import urllib.error
import urllib.request
from io import IOBase
from sys import stderr


class SeekableHTTPFile(IOBase):
    def __init__(self, url, name=None, repeat_time=-1, debug=False):
        """Allow a file accessible via HTTP to be used like a local file by utilities
        that use `seek()` to read arbitrary parts of the file, such as `ZipFile`.
        Seeking is done via the 'range: bytes=xx-yy' HTTP header.

        Parameters
        ----------
        url : str
            A HTTP or HTTPS URL
        name : str, optional
            The filename of the file.
            Will be filled from the Content-Disposition header if not provided.
        repeat_time : int, optional
            In case of HTTP errors wait `repeat_time` seconds before trying again.
            Negative value or `None` disables retrying and simply passes on the
            exception (the default).
        """
        super().__init__()
        self.url = url
        self.name = name
        self.repeat_time = repeat_time
        self.debug = debug
        self._pos = 0
        self._seekable = True
        # Probe the URL once to learn the content length, whether the server
        # accepts byte ranges, and (optionally) the filename.
        with self._urlopen() as f:
            if self.debug:
                print(f.getheaders())
            self.content_length = int(f.getheader("Content-Length", -1))
            if self.content_length < 0:
                self._seekable = False
            if f.getheader("Accept-Ranges", "none").lower() != "bytes":
                self._seekable = False
            if name is None:
                header = f.getheader("Content-Disposition")
                if header:
                    value, params = cgi.parse_header(header)
                    self.name = params["filename"]

    def seek(self, offset, whence=0):
        # Seeking only moves the internal position; no request is made until
        # the next read().
        if not self.seekable():
            raise OSError
        if whence == 0:
            self._pos = 0
        elif whence == 1:
            pass
        elif whence == 2:
            self._pos = self.content_length
        self._pos += offset
        return self._pos

    def seekable(self, *args, **kwargs):
        return self._seekable

    def readable(self, *args, **kwargs):
        return not self.closed

    def writable(self, *args, **kwargs):
        return False

    def read(self, amt=-1):
        # Each read() issues a fresh ranged request for just the bytes asked for.
        if self._pos >= self.content_length:
            return b""
        if amt < 0:
            end = self.content_length - 1
        else:
            end = min(self._pos + amt - 1, self.content_length - 1)
        byte_range = (self._pos, end)
        self._pos = end + 1
        with self._urlopen(byte_range) as f:
            return f.read()

    def readall(self):
        return self.read(-1)

    def tell(self):
        return self._pos

    def __getattribute__(self, item):
        # In debug mode, wrap every method so each call is logged with its
        # arguments before being executed.
        attr = object.__getattribute__(self, item)
        if not object.__getattribute__(self, "debug"):
            return attr
        if hasattr(attr, '__call__'):
            def trace(*args, **kwargs):
                a = ", ".join(map(str, args))
                if kwargs:
                    if a:
                        a += ", "
                    a += ", ".join("{}={}".format(k, v) for k, v in kwargs.items())
                print("Calling: {}({})".format(item, a))
                return attr(*args, **kwargs)
            return trace
        else:
            return attr

    def _urlopen(self, byte_range=None):
        # Open the URL, optionally with a Range header, retrying on HTTP
        # errors if repeat_time is set to a non-negative value.
        header = {}
        if byte_range:
            header = {"range": "bytes={}-{}".format(*byte_range)}
        while True:
            try:
                r = urllib.request.Request(self.url, headers=header)
                return urllib.request.urlopen(r)
            except urllib.error.HTTPError as e:
                if self.repeat_time is None or self.repeat_time < 0:
                    raise
                print("Server responded with " + str(e), file=stderr)
                print("Sleeping for {} seconds before trying again".format(self.repeat_time),
                      file=stderr)
                time.sleep(self.repeat_time)
A potential usage example:
from zipfile import ZipFile

url = "https://www.python.org/ftp/python/3.5.0/python-3.5.0-embed-amd64.zip"
f = SeekableHTTPFile(url, debug=True)
zf = ZipFile(f)
zf.printdir()
zf.extract("python.exe")
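As mentioned above, the code could probably be adapted to use requests. A rough, untested sketch of what _urlopen might then look like (an assumption on my part, not part of pyhttpio; the getheader/getheaders calls in __init__ would also need adapting to the requests API, and the retry loop is omitted for brevity):

import requests

# Hypothetical requests-based replacement for SeekableHTTPFile._urlopen.
def _urlopen(self, byte_range=None):
    headers = {}
    if byte_range:
        headers["Range"] = "bytes={}-{}".format(*byte_range)
    r = requests.get(self.url, headers=headers, stream=True)
    r.raise_for_status()
    # Return the underlying urllib3 response, a file-like object with read().
    return r.raw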
Edit: There is actually a mostly identical, if slightly more minimal, implementation in this answer: https://stackoverflow.com/a/7852229/2997179