JQuery Datatables and .NET WebAPI Performance Considerations

I just wanted to get some community input regarding an observation I made whilst working on a project that used jQuery DataTables to present data pulled from a Web API back end.

So, consider a piece of code on the Web API controller that returns a list of users (GetAllUsers), using the repository pattern:

public List<SampleUser> GetAllUsers()
{
  return Uow.SampleUser.GetAll().ToList();
}

This would return a list of users that could range anywhere from 10 to 10,000 records.

From what I observed, we consume the data on the front end and process it there as-is, meaning that tasks such as paging, searching, filtering and sorting are handled on the client using the aforementioned plugin (jQuery DataTables).

Irrespective of how the data reaches the client, whether via an Ajax call or through an MVC controller, the data sent to the client is essentially the entire set of records (which could run to hundreds or thousands) on every load of a page that requests it.

The setup and initialization of jQuery DataTables is to the point and rather straightforward:

$(document).ready(function () {
        var table = $('#reportTable').DataTable({
            responsive: true,
            "bServerSide": false,   // client-side processing: all rows are held in the browser
            "sAjaxSource": "GetRefugeeApplicant",
            "processing": false,
            "bFilter": true,
            "sDom": "lrtip"
        });
});

As such, paging, filtering, sorting and searching become rather easy to handle. However, the issue is the performance of the initial page load: as the table of data grows, so too will the number of records being pulled down. To clarify, if there are 1,000 user records, then all 1,000 users are pulled down to the client, resulting in more overhead on the initial call; the trade-off is that tasks like paging, searching and filtering do not make calls to the server and instead work with the data pulled down on the initial request.

So the question that arises here is whether it is feasible to load all the data to the client and work with it client side, or alternatively to have calls to the API that handle paging tasks server side and essentially pull down a fixed number of records, say 10, to the client. Like so:

public IHttpActionResult GetAllPersons(Guid id, int page = 0, int pageSize = 0, string searchQuery = "", string orderBy = "")
{
    var response = ResponseMessage(new HttpResponseMessage(HttpStatusCode.InternalServerError));
    var personList = new Person(Uow).GetAllPersons(searchQuery);
    if (personList == null) return response;

    var totalCount = personList.Count;
    // Guard against the default pageSize of 0, which would divide by zero below
    if (pageSize <= 0) pageSize = 10;
    var totalPages = (int)Math.Ceiling((double)totalCount / pageSize);

    switch (orderBy)
    {
        default:
            personList = personList.OrderBy(c => c.PersonId).ToList();
            break;
    }

    personList = personList
        .Skip(pageSize * page)
        .Take(pageSize)
        .ToList();

    var result = new
    {
        personList,
        pagingDetails = new
        {
            totalPages,
            totalCount,
            prevLink = "",
            nextLink = "",
            currentCount = personList.Count
        }
    };

    var responseMessage = Request.CreateResponse(HttpStatusCode.OK, new JsonResponseObject
    {
        Data = result,
        Message = "Person Details Retrieved Successfully.",
        IsSuccessful = true
    });

    response = ResponseMessage(responseMessage);
    return response;
}

However, this approach (1) would not work correctly with jQuery DataTables as configured above, because it returns only the rows required rather than the full set the plugin would usually work with "in memory", and (2) would mean that on every paging action the client has to hit the server to perform the querying tasks.

So I'm a bit divided here as to which would be the best solution, and if I had to stick with jQuery DataTables on the client side, are there any other performance considerations to look at beyond those I've mentioned?

Thanks

You don't want to be pulling more data to your client than you have a reasonable expectation of using in a given session.

This situation is exactly what things like OData are designed to address: OData provides a standardised way of handling sorting, paging and filtering between a client and an API.

Web API has support for OData, as do a number of JavaScript components.
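
For illustration, a queryable Web API endpoint can be sketched roughly as below. This assumes the older Microsoft.AspNet.WebApi.OData package (where the attribute is named [Queryable]; later OData packages rename it [EnableQuery]), and it borrows the Person and Uow names from your question, assuming the repository can return an IQueryable<Person>:

using System.Linq;
using System.Web.Http;
using System.Web.Http.OData; // Microsoft.AspNet.WebApi.OData package

public class PersonsController : ApiController
{
    // Applies $top, $skip, $orderby and $filter from the query string to
    // the IQueryable before it executes, so only the requested page is
    // materialized, e.g. GET /api/Persons?$orderby=PersonId&$skip=20&$top=10
    [Queryable(PageSize = 50)]
    public IQueryable<Person> Get()
    {
        return Uow.Person.GetAll(); // assumed to return IQueryable<Person>, not a List
    }
}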

I definitely recommend implementing server-side paging and filtering. I'm using DataTables over Web API exactly like this and it's working great, with good performance.
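
For reference, DataTables ships with a server-side processing mode ("serverSide": true, or "bServerSide": true in the legacy naming) that has a fixed protocol: on every paging, sorting or search interaction it sends draw, start, length and search[value] to the server and expects recordsTotal, recordsFiltered and data back. Below is a rough sketch of a Web API action speaking that protocol, reusing Person and Uow from your question; GetAllPersonsQuery is a hypothetical repository method assumed to return IQueryable<Person>, and the nested search[value] parameter is assumed to be flattened to a plain search field on the client, e.g. via the DataTables ajax.data option:

public IHttpActionResult GetPersonsPage(int draw, int start, int length, string search = "")
{
    // Hypothetical method returning IQueryable<Person>, so the filtering
    // and paging below are executed in the database.
    IQueryable<Person> query = new Person(Uow).GetAllPersonsQuery();

    var recordsTotal = query.Count(); // row count before filtering

    if (!string.IsNullOrWhiteSpace(search))
        query = query.Where(p => p.Name.Contains(search)); // assumes a searchable Name column

    var recordsFiltered = query.Count(); // row count after filtering

    var data = query
        .OrderBy(p => p.PersonId) // stable order before paging
        .Skip(start)              // DataTables sends a row offset, not a page number
        .Take(length)
        .ToList();

    // draw is echoed back so DataTables can discard out-of-order responses
    return Ok(new { draw, recordsTotal, recordsFiltered, data });
}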

However, a point of caution about your example: as implemented, the GetAllPersons endpoint will read ALL records from the database into memory:

var personList = new Person(Uow).GetAllPersons(searchQuery);

Then it will apply paging:

personList = personList
    .Skip(pageSize * page)
    .Take(pageSize)
    .ToList();

This will already be somewhat faster because you're not sending all the data to the client, but you should apply the paging to the query itself, so that you read only the current page from the database and request the total number of records as a separate Count. This can also be done in a stored procedure that returns a table and a scalar value, but that is somewhat more complicated to map with EF and is not a "pure" repository pattern.
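
A minimal sketch of that change, assuming the repository can hand back the query before materializing it (GetAllPersonsQuery is a hypothetical counterpart of GetAllPersons that returns IQueryable<Person> instead of List<Person>):

// Keep the source as IQueryable so Count, Skip and Take are translated
// to SQL and executed in the database rather than in memory.
IQueryable<Person> query = new Person(Uow).GetAllPersonsQuery(searchQuery);

var totalCount = query.Count();      // SELECT COUNT(*) runs in the database

var personList = query
    .OrderBy(c => c.PersonId)        // a stable order is required before paging
    .Skip(pageSize * page)           // translated to OFFSET ...
    .Take(pageSize)                  // ... FETCH NEXT pageSize ROWS ONLY
    .ToList();                       // only the current page is materialized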
