
Is it possible to write a web crawler in JavaScript?

I want to crawl a page, check for the hyperlinks on that page, then follow those hyperlinks and capture data from the pages they lead to.

Generally, browser JavaScript can only crawl within the domain of its origin, because fetching pages would be done via Ajax, which is restricted by the Same-Origin Policy.

If the page running the crawler script is on www.example.com, then that script can crawl all the pages on www.example.com, but not the pages of any other origin (unless some edge case applies, e.g., the Access-Control-Allow-Origin header is set on pages from the other server).
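A minimal sketch of what such a same-origin crawl might look like in browser JavaScript, using fetch and DOMParser (the starting path and page limit here are just illustrative):

// Minimal same-origin crawl sketch: fetch a page, parse it, and queue
// links that share the current origin. The start path and page limit
// are illustrative.
async function crawlSameOrigin(startPath, maxPages = 20) {
  const seen = new Set();
  const queue = [startPath];

  while (queue.length && seen.size < maxPages) {
    const path = queue.shift();
    if (seen.has(path)) continue;
    seen.add(path);

    const res = await fetch(path); // same origin, so not blocked
    const html = await res.text();
    const doc = new DOMParser().parseFromString(html, 'text/html');

    for (const a of doc.querySelectorAll('a[href]')) {
      const url = new URL(a.getAttribute('href'), location.href);
      if (url.origin === location.origin) queue.push(url.pathname);
    }
  }
  return [...seen];
}

// crawlSameOrigin('/').then(pages => console.log(pages));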

If you really want to write a fully-featured crawler in browser JS, you could write a browser extension: for example, Chrome extensions are packaged web applications that run with special permissions, including cross-origin Ajax. The difficulty with this approach is that you'll have to write multiple versions of the crawler if you want to support multiple browsers. (If the crawler is just for personal use, that's probably not an issue.)

If you use server-side JavaScript, it is possible. You should take a look at node.js.

An example of a crawler can be found in the link below:

http://www.colourcoding.net/blog/archive/2010/11/20/a-node.js-web-spider.aspx
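A rough, dependency-free sketch of a Node.js crawler (using only the built-in https module; the regex-based link extraction is deliberately simplistic and a real crawler would use an HTML parser) might look like this:

// Rough sketch: breadth-first crawl with Node's built-in https module.
const https = require('https');

function fetchPage(url) {
  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

async function crawl(startUrl, maxPages = 10) {
  const seen = new Set();
  const queue = [startUrl];
  while (queue.length && seen.size < maxPages) {
    const url = queue.shift();
    if (seen.has(url)) continue;
    seen.add(url);
    const html = await fetchPage(url);
    // naive link extraction; an HTML parser would be more robust
    for (const match of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
      queue.push(match[1]);
    }
  }
  return [...seen];
}

// crawl('https://example.com').then((urls) => console.log(urls));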

We can crawl pages using server-side JavaScript with the help of a headless WebKit. For crawling, there are a few libraries such as PhantomJS and CasperJS; there is also a newer wrapper on PhantomJS called Nightmare JS which makes the work easier.
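As a rough sketch (not taken from the Nightmare docs), extracting the links from a page with Nightmare JS might look like this; the target URL is a placeholder:

const Nightmare = require('nightmare');
const nightmare = Nightmare({ show: false });

nightmare
  .goto('https://example.com')
  // runs in the page context and returns every link's href
  .evaluate(() =>
    Array.from(document.querySelectorAll('a[href]')).map((a) => a.href)
  )
  .end()
  .then((links) => console.log(links))
  .catch((err) => console.error('Crawl failed:', err));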

Google's Chrome team released Puppeteer in August 2017, a Node library which provides a high-level API for both headless and non-headless Chrome (headless Chrome has been available since version 59).

It uses an embedded version of Chromium, so it is guaranteed to work out of the box. If you want to use a specific Chrome version, you can do so by launching Puppeteer with an executable path as a parameter, such as:

const browser = await puppeteer.launch({executablePath: '/path/to/Chrome'});

An example of navigating to a webpage and taking a screenshot of it shows how simple it is (taken from the GitHub page):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({path: 'example.png'});

  await browser.close();
})();
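Since the question is about following hyperlinks, here is a similarly small sketch (my own, not from the Puppeteer docs) that collects the links on a page with page.$$eval; the URL is a placeholder, and the collected links could then be queued and visited in turn:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');

  // collect every href on the page
  const links = await page.$$eval('a[href]', (anchors) => anchors.map((a) => a.href));
  console.log(links);

  await browser.close();
})();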

There are ways to circumvent the same-origin policy with JS. I wrote a crawler for Facebook that gathered information from the profiles of my friends and my friends' friends and allowed filtering the results by gender, current location, age, marital status (you catch my drift). It was simple. I just ran it from the console. That way your script gets the privilege to make requests on the current domain. You can also make a bookmarklet to run the script from your bookmarks.

Another way is to provide a PHP proxy. Your script accesses the proxy on the current domain and requests files from another domain via PHP. Just be careful with those: they might get hijacked and used as a public proxy by a third party if you are not careful.
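On the JavaScript side, a call through such a proxy might look roughly like this; proxy.php is a hypothetical endpoint on your own origin that fetches the target URL server-side and echoes its body back:

// proxy.php is hypothetical: it would fetch the given URL with PHP
// and return the response body.
function fetchViaProxy(targetUrl) {
  return fetch('/proxy.php?url=' + encodeURIComponent(targetUrl))
    .then((res) => res.text());
}

// fetchViaProxy('https://other-domain.example/page.html')
//   .then((html) => console.log(html.length));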

Good luck, maybe you make a friend or two in the process like I did :-)

My typical setup is to use a browser extension with cross-origin privileges set, which injects both the crawler code and jQuery.

Another take on JavaScript crawlers is to use a headless browser like PhantomJS or CasperJS (which boosts Phantom's powers).

This is what you need: http://zugravu.com/products/web-crawler-spider-scraping-javascript-regular-expression-nodejs-mongodb They use NodeJS, MongoDB, and ExtJs for the GUI.

Yes, it is possible:

  1. Use NodeJS (it's server-side JS).
  2. NodeJS has NPM (a package manager that handles 3rd-party modules).
  3. Use PhantomJS in NodeJS (PhantomJS is a third-party module that can crawl through websites); see the sketch below.
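A standalone PhantomJS script (run with the phantomjs binary rather than inside Node) that visits a page and prints its links could look roughly like this; the URL is a placeholder:

// sketch: run as `phantomjs crawl.js`
var page = require('webpage').create();

page.open('http://www.example.com', function (status) {
  if (status === 'success') {
    // runs inside the page context; collect every link's href
    var links = page.evaluate(function () {
      return Array.prototype.map.call(
        document.querySelectorAll('a[href]'),
        function (a) { return a.href; }
      );
    });
    console.log(links.join('\n'));
  } else {
    console.log('Failed to load page');
  }
  phantom.exit();
});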

There is a client-side approach for this, using the Firefox Greasemonkey extension. With Greasemonkey you can create scripts to be executed each time you open specified URLs.

Here is an example:

If you have URLs like these:

http://www.example.com/products/pages/1

http://www.example.com/products/pages/2

then you can use something like this to open all pages containing the product list (execute this manually):

var j = 0;
for (var i = 1; i < 5; i++) {
  setTimeout(function () {
    j = j + 1;
    window.open('http://www.example.com/products/pages/' + j, '_blank');
  }, 15000 * i);
}

Then you can create a script to open all products in a new window for each product-list page, and include this URL pattern in Greasemonkey for that:

http://www.example.com/products/pages/ *

and then a script for each product page to extract the data, call a webservice passing that data, close the window, and so on.
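The per-product-page userscript could look roughly like this; the selectors and the webservice URL are placeholders, not anything from the original site:

// ==UserScript==
// @name     product-page-scraper (sketch)
// @include  http://www.example.com/products/*
// @grant    GM_xmlhttpRequest
// ==/UserScript==

// the selectors and webservice URL are placeholders
var nameEl = document.querySelector('h1');
var priceEl = document.querySelector('.price');
var data = {
  name: nameEl ? nameEl.textContent : '',
  price: priceEl ? priceEl.textContent : ''
};

GM_xmlhttpRequest({
  method: 'POST',
  url: 'http://www.example.com/collect',
  headers: { 'Content-Type': 'application/json' },
  data: JSON.stringify(data),
  onload: function () { window.close(); } // close the tab once the data is sent
});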

I made an example JavaScript crawler on GitHub.

It's event driven and uses an in-memory queue to store all the resources (i.e. URLs).

How to use it in your Node environment:

var Crawler = require('../lib/crawler')
var crawler = new Crawler('http://www.someUrl.com');

// crawler.maxDepth = 4;
// crawler.crawlInterval = 10;
// crawler.maxListenerCurrency = 10;
// crawler.redisQueue = true;
crawler.start();

Here I'm just showing you the two core methods of a JavaScript crawler.

Crawler.prototype.run = function() {
  var crawler = this;
  process.nextTick(() => {
    //the run loop
    crawler.crawlerIntervalId = setInterval(() => {

      crawler.crawl();

    }, crawler.crawlInterval);
    //kick off first one
    crawler.crawl();
  });

  crawler.running = true;
  crawler.emit('start');
}


Crawler.prototype.crawl = function() {
  var crawler = this;

  if (crawler._openRequests >= crawler.maxListenerCurrency) return;


  //go get the item
  crawler.queue.oldestUnfetchedItem((err, queueItem, index) => {
    if (queueItem) {
      // got the item, start the fetch
      crawler.fetchQueueItem(queueItem, index);
    } else if (crawler._openRequests === 0) {
      crawler.queue.complete((err, completeCount) => {
        if (err)
          throw err;
        crawler.queue.getLength((err, length) => {
          if (err)
            throw err;
          if (length === completeCount) {
            // no open requests and no unfetched items: stop the crawler
            crawler.emit("complete", completeCount);
            clearInterval(crawler.crawlerIntervalId);
            crawler.running = false;
          }
        });
      });
    }

  });
};

Here is the GitHub link: https://github.com/bfwg/node-tinycrawler . It is a JavaScript web crawler written in under 1,000 lines of code. This should put you on the right track.

You can make a web crawler driven from a remote JSON file that opens all links from a page in new tabs as soon as each tab loads, except ones that have already been opened. If you set this up with a browser extension running in a basic browser (nothing running except the web browser and an internet config program) and had it shipped and installed somewhere with good internet, you could build a database of webpages with an old computer. You would just need to retrieve the content of each tab. You could do that for about $2,000, contrary to most estimates for search engine costs. You'd basically just need to make your algorithm rank pages based on how often a term appears in the innerText property of the page, in the keywords, and in the description. You could also set up another PC to recrawl old pages from the one-time database and add more. I'd estimate it would take about three months and $20,000, maximum.

Axios + Cheerio

You can do this with axios and cheerio. Check the axios docs for the response format.

const cheerio = require('cheerio');
const axios = require('axios');

// URL to crawl
var url = 'http://amazon.com';

axios.get(url)
  .then((res) => {
    // response format
    var body = res.data;
    var statusCode = res.status;
    var statusText = res.statusText;
    var headers = res.headers;
    var request = res.request;
    var config = res.config;

    // load the HTML into cheerio for jQuery-style selectors
    let $ = cheerio.load(body);

    // example: meta tags
    var title = $('meta[name=title]').attr('content');
    if (title == undefined || title == 'undefined') {
      title = $('title').text();
    }
    var description = $('meta[name=description]').attr('content');
    var keywords = $('meta[name=keywords]').attr('content');
    var author = $('meta[name=author]').attr('content');
    var type = $('meta[http-equiv=content-type]').attr('content');
    var favicon = $('link[rel="shortcut icon"]').attr('href');
  })
  .catch(function (e) {
    console.log(e);
  });
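To actually follow hyperlinks, as the question asks, the same pair of libraries can also collect the links on a page; a rough sketch (the helper name is mine):

const cheerio = require('cheerio');
const axios = require('axios');

// fetch a page and return the absolute URL of every link on it
async function getLinks(url) {
  const res = await axios.get(url);
  const $ = cheerio.load(res.data);
  return $('a[href]')
    .map((i, el) => new URL($(el).attr('href'), url).href)
    .get();
}

// getLinks('http://amazon.com').then((links) => console.log(links));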

Node-Fetch + Cheerio

You can do the same thing with node-fetch and cheerio.

const fetch = require('node-fetch');
const cheerio = require('cheerio');

var url = 'http://amazon.com';

fetch(url, { method: 'GET' })
  .then(function (response) {
    // response body as text (returns a promise)
    return response.text();
  })
  .then(function (html) {
    // load the HTML into cheerio for jQuery-style selectors
    let $ = cheerio.load(html);

    // meta tags
    var title = $('meta[name=title]').attr('content');
    if (title == undefined || title == 'undefined') {
      title = $('title').text();
    }
    var description = $('meta[name=description]').attr('content');
    var keywords = $('meta[name=keywords]').attr('content');
    var author = $('meta[name=author]').attr('content');
    var type = $('meta[http-equiv=content-type]').attr('content');
    var favicon = $('link[rel="shortcut icon"]').attr('href');
  })
  .catch((error) => {
    console.error('Error:', error);
  });
