
Scraping websites with JavaScript enabled?

I'm trying to scrape and submit information to websites that rely heavily on JavaScript for most of their actions. The websites won't even work when I disable JavaScript in my browser.

I've searched for solutions on Google and SO, and someone suggested I should reverse engineer the JavaScript, but I have no idea how to do that.

So far I've been using Mechanize, which works on websites that don't require JavaScript.

Is there any way to access websites that use JavaScript with urllib2 or something similar? I'm also willing to learn JavaScript, if that's what it takes.

I wrote a small tutorial on this subject; it might help:

http://koaning.io.s3-website.eu-west-2.amazonaws.com/dynamic-scraping-with-python.html

Basically, you have the Selenium library pretend to be a Firefox browser; the browser waits until all the JavaScript has loaded before passing you the HTML string. Once you have that string, you can parse it with BeautifulSoup.
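A minimal sketch of that pipeline, assuming the selenium and beautifulsoup4 packages plus a local Firefox/geckodriver install (the URL is a placeholder):

```python
from bs4 import BeautifulSoup

def extract_links(html):
    """Pull (text, href) pairs out of an already-rendered HTML string."""
    soup = BeautifulSoup(html, "html.parser")
    return [(a.get_text(strip=True), a.get("href"))
            for a in soup.find_all("a", href=True)]

if __name__ == "__main__":
    # Imported here so the parsing helper above works without a browser.
    from selenium import webdriver

    driver = webdriver.Firefox()           # needs Firefox + geckodriver on PATH
    try:
        driver.get("http://example.com/")  # placeholder URL
        html = driver.page_source          # the HTML *after* JavaScript has run
        print(extract_links(html))
    finally:
        driver.quit()
```

Note that driver.page_source returns the DOM as rendered at that moment, so pages that load data asynchronously may need an explicit wait before you read it.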

You should look into Ghost, a Python library that wraps the PyQt4 + WebKit hack.

This makes g the WebKit client:

import ghost
g = ghost.Ghost()

You can grab a page with g.open(url) and then g.content will evaluate to the document in its current state.

Ghost has other cool features, like injecting JavaScript and some form-filling methods, and you can pass the resulting document to BeautifulSoup and so on: soup = bs4.BeautifulSoup(g.content).

So far, Ghost is the only thing I've found that makes this kind of thing easy in Python. The only limitation I've come across is that you can't easily create more than one instance of the client object, ghost.Ghost, but you could work around that.

I've had exactly the same problem. It is not simple at all, but I finally found a great solution using PyQt4.QtWebKit.

You will find the explanation on this webpage: http://blog.motane.lu/2009/07/07/downloading-a-pages-content-with-python-and-webkit/

I've tested it, I currently use it, and it's great!

Its great advantage is that it can run on a server using only X, without a graphical environment.

Check out crowbar. I haven't had any experience with it, but I was curious about the answer to your question, so I started googling around. I'd like to know if this works out for you.

http://grep.codeconsult.ch/2007/02/24/crowbar-scrape-javascript-generated-pages-via-gecko-and-rest/

Maybe you could use Selenium WebDriver, which has Python bindings, I believe. I think it's mainly used as a tool for testing websites, but I guess it should be usable for scraping too.

I would actually suggest using Selenium. It's mainly designed for testing web applications from a "user perspective"; however, it is basically a "Firefox" driver. I've actually used it for this purpose, although I was scraping a dynamic AJAX webpage. As long as the JavaScript form has a recognizable "anchor text" that Selenium can "click", everything should sort itself out.

Hope that helps.
