
Why can gevent speed up downloads with requests?

I thought requests.get was blocking, so there should be no difference between run and run2.

import sys

import gevent
import requests
from gevent import monkey

monkey.patch_all()  # make blocking stdlib calls (sockets, DNS, etc.) cooperative


def download():
    requests.get('http://www.baidu.com').status_code


def run():
    # spawn 100 greenlets, then wait for all of them to finish
    ls = [gevent.spawn(download) for i in range(100)]
    gevent.joinall(ls)


def run2():
    # plain sequential loop: each request blocks until it completes
    for i in range(100):
        download()


if __name__ == '__main__':
    from timeit import Timer

    t = Timer(stmt="run();", setup="from __main__ import run")
    print('good', t.timeit(3))

    t = Timer(stmt="run2();", setup="from __main__ import run2")
    print('bad', t.timeit(3))

    sys.exit(0)

But the result is:

good 5.006664161000117
bad 29.077525214999696

So can all kinds of reads and writes be sped up by gevent?

PS: I ran this on macOS / Python 3 / requests 2.10.0 / gevent 1.1.2.

From the gevent website:

Fast event loop based on libev (epoll on Linux, kqueue on FreeBSD).

Lightweight execution units based on greenlet.

API that re-uses concepts from the Python standard library (for example there are gevent.event.Events and gevent.queue.Queues).

Cooperative sockets with SSL support.

DNS queries performed through threadpool or c-ares.

Monkey patching utility to get 3rd party modules to become cooperative.
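
The last two items are what make the snippet in the question work. Here is a minimal sketch (assuming gevent 1.1+, as in the question) to check that monkey.patch_all() really has swapped the blocking stdlib socket for gevent's cooperative one:

# Sketch: after monkey patching, the names in the stdlib socket module
# point at gevent's cooperative implementations.
from gevent import monkey
monkey.patch_all()

import socket
import gevent.socket

print(socket.socket is gevent.socket.socket)   # True
print(monkey.is_module_patched('socket'))      # True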

Basically, a plain for loop over a bunch of requests.get() calls is slow because each call blocks until its response arrives before the next one starts, so the network latencies add up one after another. The gevent version isn't slow because monkey.patch_all() makes the underlying socket operations cooperative: each spawned greenlet yields to gevent's event loop while it waits for its response, the other 99 requests proceed in the meantime, and the waits overlap instead of accumulating.
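
To see that the speed-up comes purely from overlapping the time spent waiting, here is a minimal sketch that replaces the real network call with gevent.sleep (a stand-in for the time a patched socket spends blocked on a response); the timings in the comments are what you should roughly observe:

# Sketch: the speed-up comes from overlapping waits, not from faster requests.
import time
import gevent

def fake_download():
    gevent.sleep(0.1)  # cooperative wait, like a patched socket read

def run_concurrent():
    # 100 greenlets all wait at the same time -> ~0.1 s total
    gevent.joinall([gevent.spawn(fake_download) for _ in range(100)])

def run_sequential():
    # 100 waits one after another -> ~10 s total
    for _ in range(100):
        fake_download()

if __name__ == '__main__':
    start = time.time()
    run_concurrent()
    print('concurrent:', time.time() - start)

    start = time.time()
    run_sequential()
    print('sequential:', time.time() - start)

With monkey.patch_all() in place, requests.get behaves the same way: while one greenlet is blocked on its socket, the hub switches to another, so 100 requests cost roughly one round-trip plus the per-request CPU work rather than 100 round-trips.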
