
Python requests library - Strange behavior vs curl

I was comparing these two code snippets:

import subprocess

subprocess.call('curl -XGET http://localhost:81/proxy/bparc/my_key > /dev/null', shell=True)

vs

import requests

response = requests.get('http://localhost:81/proxy/bparc/my_key')
print(len(response.text))

The first one always runs in under 0.01 seconds. But the second one sometimes takes up to 30 seconds, and other times takes less than 0.01 seconds.

Any ideas what could be going on? Is requests doing something fancy that slows things down? Is calling len the problem?

OK, changing to response.content fixed it. response.content returns the raw bytes, while response.text decodes the body to a string; when the server declares no charset, requests falls back to detecting the encoding by scanning the whole payload, which is pointless extra work for binary data and can take seconds on large responses.

response = requests.get('http://localhost:81/proxy/bparc/my_key')
print(len(response.content))
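To see why the two accessors behave so differently, here is a minimal stdlib-only sketch of the kind of logic response.text performs (this is an illustration, not requests' actual implementation; guess_encoding stands in for a chardet-style detector):

```python
def guess_encoding(raw: bytes):
    # Stand-in for chardet-style detection. A real detector scans
    # every byte with statistical models -- that full-body scan is
    # the slow part on large binary payloads.
    try:
        raw.decode("utf-8")
        return "utf-8"
    except UnicodeDecodeError:
        return "latin-1"

def to_text(raw: bytes, declared_encoding=None):
    # Sketch of .text-like behavior: prefer the charset the server
    # declared; otherwise guess one by scanning the whole body.
    # .content-like behavior would simply return `raw` unchanged.
    if declared_encoding:
        return raw.decode(declared_encoding, errors="replace")
    guessed = guess_encoding(raw)
    return raw.decode(guessed or "utf-8", errors="replace")
```

With a declared charset the decode is cheap; without one, every call pays for a scan of the entire body first, which is why .content is the right choice for binary data.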

