
How to buffer an Ajax request?

I have a simple Ajax function, something like this:

var x = 0;
var myRequest = [];

function CreateXmlHttpReq(handler) {
    var xmlhttp = null;
    try {
        xmlhttp = new XMLHttpRequest();
    } catch (e) {
        try {
            xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
        } catch (e) {
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
        }
    }
    xmlhttp.onreadystatechange = handler;
    return xmlhttp;
}

function getResults() {
    var r = Math.random();
    var someVar = document.getElementById("myvar").value;
    var myUrl = "url/of/my/phpScript.php?";
    myUrl += "r=" + r;
    //I use encodeURIComponent() instead of escape() when I expect plain text
    myUrl += "&someVar=" + escape(someVar);
    //startLoading just show an overlay with a small rotating gif
    startLoading();
    x++;
    //capture the current index in a local variable: the callback must not
    //read the global x, which will have moved on by the time it fires
    var i = x;
    myRequest[i] = CreateXmlHttpReq(function () {
        printResultHandler(i);
    });
    myRequest[i].open("GET", myUrl);
    myRequest[i].send(null);
}

//example handler
function printResultHandler(x) {
    if (myRequest[x].readyState == 4 && myRequest[x].status == 200) {
        //usually I use innerHTML for quick requests, the DOM for more complex ones
        //div holds the id of the element that receives the output
        document.getElementById(div).innerHTML = myRequest[x].responseText;
        //this will hide the overlay shown with startLoading()
        stopLoading();
    }
}

and that works fine. My only problem is when the returned payload is big (it can be XML, HTML, or whatever): the browser seems to 'fall asleep' for a while. I don't like receiving a big block of text (XML, HTML) all at once; it isn't nice to handle.

I'm wondering if there is some way to buffer that request. Once the request is done and returns a 200 status, is there a way to read the responseText piece by piece (say, 2048 bytes at a time, or line by line)? I imagine something like:

function printResultHandler(x) {
    if (myRequest[x].readyState == 4 && myRequest[x].status == 200) {
        //usually I use innerHTML for quick requests, the DOM for more complex ones
        //document.getElementById(div).innerHTML = myRequest[x].responseText;
        var answer;
        while ((answer = readline(myRequest[x].responseText))) {
            //do something;
        }
        //this will hide the overlay shown with startLoading()
        stopLoading();
    }
}

In short, I'm looking for the equivalent of PHP's readdir() or fread().
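A minimal equivalent can be sketched once the full responseText has arrived, using plain string splitting (processLine is a hypothetical per-line callback, not part of the code above):

```javascript
// Split a finished responseText into lines and hand each one to a callback.
// processLine(line, index) is a hypothetical handler supplied by the caller.
function forEachLine(text, processLine) {
    var lines = text.split(/\r?\n/);
    for (var i = 0; i < lines.length; i++) {
        processLine(lines[i], i);
    }
    return lines.length;
}
```

Note that this still runs in one go, so by itself it won't stop a huge response from freezing the browser; it only tidies up the handling.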

Agreed, buffering a request is not really something you can do.

You can consider staggering a user's request for data over a set of HTTP requests, parsing and processing each response as it comes back.

For example, if the user wishes to request records 1 to 1000, the client could first request records 1 to 100, process, parse and render that, then request records 101 to 200, and so on. The first 100 records would display relatively quickly and, after a short period, the next 100 records would appear. So long as the second 100 records display before the user has dealt with the first 100, it should be fine. The overall time to complete the request will be longer, but the web app will appear more responsive and the perceived task completion time will be lower.
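The staggering described above can be sketched as follows (fetchRange is a hypothetical function that requests one slice of records from the server and invokes a callback with the result):

```javascript
// Compute [start, end] slices of size chunkSize covering first..last inclusive.
function makeRanges(first, last, chunkSize) {
    var ranges = [];
    for (var start = first; start <= last; start += chunkSize) {
        ranges.push([start, Math.min(start + chunkSize - 1, last)]);
    }
    return ranges;
}

// Request each slice in turn, rendering one before asking for the next, so
// the first records appear quickly. fetchRange(start, end, done) and
// render(records) are hypothetical and supplied by the caller.
function staggeredLoad(first, last, chunkSize, fetchRange, render) {
    var ranges = makeRanges(first, last, chunkSize);
    (function next(i) {
        if (i >= ranges.length) return;
        fetchRange(ranges[i][0], ranges[i][1], function (records) {
            render(records);
            next(i + 1);
        });
    })(0);
}
```

Requesting the next slice only after the previous one has rendered keeps at most one request in flight, which is the simplest way to preserve ordering.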

You should also consider switching from XML to JSON if you're not just updating the innerHTML property of an element with data.

To display the response to an AJAX request, the browser must first parse it into a data structure and then render it. Surprisingly, the parse time for XML and JSON is pretty much the same. The difference lies in the time required to traverse and read the resulting data structure.

Browser functions for traversing and accessing the data within the DOM of a parsed response are relatively slow. Browser DOM API methods mask the complexity of what is involved in DOM traversing and make a slow process look nice and simple.

Accessing data in JavaScript objects resulting from the parsing of a JSON-formatted response is much quicker. Traversing a JavaScript object is easily 2 to 3 times faster than traversing a DOM tree for the same set of data.

In recent tests I carried out with Firefox 3.1 beta 2 using 10 MB of source data, traversing the DOM of an XML response took about 30 seconds. Doing the same for a JavaScript object populated from the same large data set took about 15 seconds.

No, there is no way to buffer the request. If you return a huge amount of data and then try to insert it into the page all at once it is always going to take a long time for all that to be parsed.

You might want to consider whether there is another way to get the results you want. Is there a reason you have to insert such a large amount of data into the page with an AJAX request?

You have to do it manually (i.e. code it yourself).

An easy solution is the following (C = client, S = server):

  • C sends the request
  • S prepares the whole output
  • S generates some kind of identifier key for the data (an md5 of the data, for example)
  • S cuts the data into chunks and saves them (determining the chunk count)
  • S returns the data identifier (and maybe the chunk count)
  • C iterates from the first chunk to the last, sending the server the data key (and the chunk number)
  • S returns the requested chunk
  • C displays the chunk (or updates a content-is-loading progress bar)

The caveat is that if you go for instant processing rather than just a progress bar, S has to cut the data into chunks such that each piece is independently well-formed and processable by C.
