
python3.4 PyQt4 web request asyncio

Is it possible to perform in asynchrone(like with asyncio) web requests under Pyqt4 (QwebPage)? 是否可以在Pyqt4(QwebPage)下以异步方式(与asyncio一样)执行Web请求?

For example, how can I fetch multiple URLs in parallel with this code:

#!/usr/bin/env python3.4

import sys
import signal

from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.QtWebKit import QWebPage

class Crawler( QWebPage ):
    def __init__(self, url):
        QWebPage.__init__( self )
        self._url = url
        self.content = ''

    def crawl( self ):
        signal.signal( signal.SIGINT, signal.SIG_DFL )
        self.connect( self, SIGNAL( 'loadFinished(bool)' ), self._finished_loading )
        self.mainFrame().load( QUrl( self._url ) )

    def _finished_loading( self, result ):
        self.content = self.mainFrame().toHtml()
        print(self.content)
        sys.exit( 0 )

def main():
    app = QApplication( sys.argv )
    crawler = Crawler( 'http://www.example.com' )
    crawler.crawl()
    sys.exit( app.exec_() )

if __name__ == '__main__':
    main()

Thanks

You cannot make self.mainFrame().load(QUrl(self._url)) work through asyncio, sorry -- that method is implemented in Qt itself.

But you can install the quamash event loop and asynchronously call the aiohttp.request coroutine to fetch web pages.

That approach doesn't work with QWebPage, though.
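For plain HTTP fetches (without JavaScript rendering), a minimal sketch of the quamash + aiohttp approach might look like the following. It assumes an aiohttp version contemporary with Python 3.4, where aiohttp.request() is itself a coroutine, and the URL list is only a placeholder:

import sys, asyncio
import aiohttp
from PyQt4.QtGui import QApplication
from quamash import QEventLoop

urls = ['http://www.example.com', 'http://www.example.org']

@asyncio.coroutine
def fetch(url):
    # aiohttp.request() is a coroutine in the old (0.x) API
    response = yield from aiohttp.request('GET', url)
    body = yield from response.read()
    print(url, len(body))

app = QApplication(sys.argv)
loop = QEventLoop(app)       # quamash loop driving asyncio on top of Qt
asyncio.set_event_loop(loop)

with loop:
    # asyncio.async() was the Python 3.4 spelling of ensure_future()
    tasks = [asyncio.async(fetch(u)) for u in urls]
    loop.run_until_complete(asyncio.wait(tasks))

Because the fetches run on the Qt event loop, the GUI stays responsive while the pages download; but you only get the raw HTML, not a rendered QWebPage.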

Requests are already performed asynchronously, so all you need to do is create multiple instances of QWebPage.

Here's a simple demo based on your example script:

import sys, signal
from PyQt4 import QtCore, QtGui, QtWebKit

urls = [
    'http://qt-project.org/doc/qt-4.8/qwebelement.html',
    'http://qt-project.org/doc/qt-4.8/qwebframe.html',
    'http://qt-project.org/doc/qt-4.8/qwebinspector.html',
    'http://qt-project.org/doc/qt-4.8/qwebpage.html',
    'http://qt-project.org/doc/qt-4.8/qwebsettings.html',
    'http://qt-project.org/doc/qt-4.8/qwebview.html',
    ]

class Crawler(QtWebKit.QWebPage):
    def __init__(self, url, identifier):
        super(Crawler, self).__init__()
        self.loadFinished.connect(self._finished_loading)
        self._id = identifier
        self._url = url
        self.content = ''

    def crawl(self):
        self.mainFrame().load(QtCore.QUrl(self._url))

    def _finished_loading(self, result):
        self.content = self.mainFrame().toHtml()
        print('[%d] %s' % (self._id, self._url))
        print(self.content[:250].rstrip(), '...')
        print()
        self.deleteLater()

if __name__ == '__main__':

    app = QtGui.QApplication( sys.argv )
    signal.signal( signal.SIGINT, signal.SIG_DFL)
    # keep references to the pages so they aren't garbage-collected while loading
    crawlers = []
    for index, url in enumerate(urls):
        crawlers.append(Crawler(url, index))
        crawlers[-1].crawl()
    sys.exit( app.exec_() )
