Scraping a JavaScript-generated website with Node.js

When I parse a static HTML page, my Node.js app works well. However, when the URL points to a JavaScript-generated page, the app doesn't work. How can I scrape a JavaScript-generated web page?

My app.js

var express = require('express'),
  fs = require('fs'),
  request = require('request'),
  cheerio = require('cheerio'),
  app = express();

app.get('/scrape', function( req, res ) {

  // Declare url with var so it isn't leaked as a global.
  var url = 'http://www.apache.org/';

  request( url, function( error, response, html ) {
    if( error ) {
      return res.send( 'Error fetching ' + url );
    }

    // Load the fetched HTML into cheerio for jQuery-style querying.
    var $ = cheerio.load(html);

    var json = { title : "" };

    // Read the text of the first element with the .panel-title class.
    json.title = $('.panel-title').first().text();

    fs.writeFile('output.json', JSON.stringify(json, null, 4), function(err) {
      if( err ) {
        return console.log( 'Error writing output.json: ' + err );
      }
      console.log( 'File successfully written! - Check your project directory for the output.json file' );
    });

    // Finally, we'll just send out a message to the browser reminding you that this app does not have a UI.
    res.send( 'Check your console!' );
  });
});

app.listen(8081);
console.log('Magic happens on port 8081');
exports = module.exports = app;

Cheerio won't execute the JavaScript on the page, as it's only made for parsing static HTML.
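
To illustrate the point, here's a minimal, self-contained snippet (the markup and selector are invented for this example): cheerio treats the document as static markup and never executes the script tag, so anything the page would build at runtime is simply not there to select.

var cheerio = require('cheerio');

// The .panel-title text would only be filled in by the inline script at runtime.
var html = '<div class="panel-title"></div>' +
           '<script>document.querySelector(".panel-title").textContent = "Hello";</script>';

var $ = cheerio.load(html);
console.log($('.panel-title').text()); // prints "" - the script was never run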

I'd suggest a different approach using something like PhantomJS: http://phantomjs.org/
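
As a rough sketch of that approach (assuming PhantomJS is installed and available on your PATH; the URL and the .panel-title selector are carried over from the question and may need adjusting for a real JavaScript-generated page), a standalone PhantomJS script could look like this:

// scrape.js - run with: phantomjs scrape.js
var page = require('webpage').create();
var url = 'http://www.apache.org/';

page.open(url, function( status ) {
  if( status !== 'success' ) {
    console.log('Failed to load ' + url);
    phantom.exit(1);
    return;
  }

  // Give the page's own JavaScript a moment to render its content.
  window.setTimeout(function() {
    // page.evaluate runs inside the page context, after its scripts have executed.
    var title = page.evaluate(function() {
      var el = document.querySelector('.panel-title');
      return el ? el.textContent : null;
    });

    console.log(JSON.stringify({ title: title }, null, 4));
    phantom.exit();
  }, 1000);
});

If you want to keep the Express route from the question, one option is to spawn this script from Node with child_process.execFile (or a Node-to-PhantomJS bridge module) and capture its stdout instead of calling request.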
