
How to improve query performance

I have a lot of records in the table. When I execute the following query, it takes a long time. How can I improve its performance?

SET ROWCOUNT 10
SELECT StxnID
      ,Sprovider.description as SProvider
      ,txnID
      ,Request
      ,Raw
      ,Status
      ,txnBal
      ,Stxn.CreatedBy
      ,Stxn.CreatedOn
      ,Stxn.ModifiedBy
      ,Stxn.ModifiedOn
      ,Stxn.isDeleted
  FROM Stxn,Sprovider
  WHERE Stxn.SproviderID = SProvider.Sproviderid
  AND Stxn.SProviderid = ISNULL(@pSProviderID,Stxn.SProviderid)
  AND Stxn.status = ISNULL(@pStatus,Stxn.status)
  AND Stxn.CreatedOn BETWEEN ISNULL(@pStartDate,getdate()-1) and  ISNULL(@pEndDate,getdate())
  AND Stxn.CreatedBy = ISNULL(@pSellerId,Stxn.CreatedBy)  
  ORDER BY StxnID DESC

The stxn table has more than 100,000 records.

The query is run from a report viewer in asp.net c#.

This is my go-to article when I'm trying to do a search query that has several search conditions which might be optional.

http://www.sommarskog.se/dyn-search-2008.html

The biggest problem with your query is the column = ISNULL(@column, column) syntax. SQL Server won't use an index for that. Consider changing it to (@column IS NULL OR column = @column).
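A sketch of that rewrite applied to the predicates from the question (column and parameter names taken from the original query); OPTION (RECOMPILE) is an optional addition that lets the optimizer build a plan for the actual parameter values:

```sql
 WHERE Stxn.SproviderID = SProvider.Sproviderid
   AND (@pSProviderID IS NULL OR Stxn.SProviderid = @pSProviderID)
   AND (@pStatus IS NULL OR Stxn.status = @pStatus)
   AND Stxn.CreatedOn BETWEEN ISNULL(@pStartDate, GETDATE() - 1)
                          AND ISNULL(@pEndDate, GETDATE())
   AND (@pSellerId IS NULL OR Stxn.CreatedBy = @pSellerId)
 ORDER BY StxnID DESC
OPTION (RECOMPILE);  -- compile a fresh plan for these parameter values
```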

You should look at the execution plan and check for missing indexes. Also, how long does it take to execute? What is slow for you?

Maybe you could also return fewer rows, but that is just a guess. Really, we need to see your table and indexes plus the execution plan.

Check sql-tuning-tutorial

For one, use SELECT TOP (10) instead of SET ROWCOUNT - the optimizer has a much better chance of producing a good plan that way. Another suggestion is to use a proper INNER JOIN instead of the old-style table,table join syntax, which makes it easy to end up with an accidental Cartesian product (that is not the case here, but it happens much more easily with the old syntax). It should be:

...
FROM Stxn INNER JOIN Sprovider
  ON Stxn.SproviderID = SProvider.Sproviderid
...
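Putting both suggestions together, the full query might look like this (a sketch that keeps the WHERE clause from the question unchanged):

```sql
SELECT TOP (10)
       Stxn.StxnID
      ,Sprovider.description AS SProvider
      ,Stxn.txnID
      ,Stxn.Request
      ,Stxn.Raw
      ,Stxn.Status
      ,Stxn.txnBal
      ,Stxn.CreatedBy
      ,Stxn.CreatedOn
      ,Stxn.ModifiedBy
      ,Stxn.ModifiedOn
      ,Stxn.isDeleted
  FROM Stxn
       INNER JOIN Sprovider
               ON Stxn.SproviderID = SProvider.Sproviderid
 WHERE Stxn.SProviderid = ISNULL(@pSProviderID, Stxn.SProviderid)
   AND Stxn.status = ISNULL(@pStatus, Stxn.status)
   AND Stxn.CreatedOn BETWEEN ISNULL(@pStartDate, GETDATE() - 1)
                          AND ISNULL(@pEndDate, GETDATE())
   AND Stxn.CreatedBy = ISNULL(@pSellerId, Stxn.CreatedBy)
 ORDER BY Stxn.StxnID DESC;
```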

And if you think 100K rows is a lot, or that this volume is a reason for slowness, you're sorely mistaken. Most likely you have really poor indexing strategies in place, possibly some parameter sniffing, possibly some implicit conversions... hard to tell without understanding the data types, indexes and seeing the plan.

There are a lot of things that could impact the performance of a query, although 100k records really isn't all that many.

Items to consider (in no particular order)

Hardware:

  1. Is SQL Server memory constrained? In other words, does it have enough RAM to do its job? If it is swapping memory to disk, then this is a sure sign that you need an upgrade.
  2. Is the machine disk constrained? In other words, are the drives fast enough to keep up with the queries you need to run? If the server is memory constrained, disk speed becomes an even larger factor.
  3. Is the machine processor constrained? For example, when you execute the query does the processor spike for long periods of time? Or, are there already lots of other queries running that are taking resources away from yours...

Database Structure:

  1. Do you have indexes on the columns used in your WHERE clause? If the tables do not have indexes, then SQL Server will have to do a full scan of both tables to determine which records match.
  2. Eliminate the ISNULL function calls. If this is a direct query, have the calling code validate the parameters and set default values before executing. If it is in a stored procedure, do the checks at the top of the procedure. Unless you execute this with OPTION (RECOMPILE) so the plan is built for the actual parameter values, those functions will have to be evaluated for each row.
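A minimal sketch of that defaulting approach inside a stored procedure (the procedure name and the reduced column list are assumptions for illustration; parameter names come from the question):

```sql
CREATE PROCEDURE dbo.GetStxnReport   -- hypothetical procedure name
    @pStartDate datetime = NULL,
    @pEndDate   datetime = NULL
AS
BEGIN
    -- Resolve defaults once, up front, instead of calling ISNULL() per row
    SET @pStartDate = COALESCE(@pStartDate, DATEADD(DAY, -1, GETDATE()));
    SET @pEndDate   = COALESCE(@pEndDate, GETDATE());

    SELECT TOP (10) Stxn.StxnID, Stxn.Status, Stxn.CreatedOn
      FROM Stxn
     WHERE Stxn.CreatedOn BETWEEN @pStartDate AND @pEndDate
     ORDER BY Stxn.StxnID DESC;
END
```

With the defaults resolved before the SELECT runs, the predicates compare plain columns to plain variables, which the optimizer can match against an index.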

Network:

  1. Is the network slow between you and the server? Depending on the amount of data pulled you could be pulling GB's of data across the wire. I'm not sure what is stored in the "raw" column. The first question you need to ask here is "how much data is going back to the client?" For example, if each record is 1MB+ in size, then you'll probably have disk and network constraints at play.

General:

  1. I'm not sure what "slow" means in your question. Does it mean that the query is taking around 1 second to process or does it mean it's taking 5 minutes? Everything is relative here.

Basically, it is going to be impossible to give a hard answer without asking you a lot of questions. All of these factors will bear out if you profile the queries, understand what and how much data is going back to the client, and watch the interactions among the various parts.

Finally, depending on the amount of data going back to the client, there might not be a way to improve performance short of hardware changes.

Make sure Stxn.SproviderID, Stxn.status, Stxn.CreatedOn, Stxn.CreatedBy, Stxn.StxnID and SProvider.Sproviderid all have indexes defined.

(NB -- you might not need all, but it can't hurt.)

I don't see much that can be done on the query itself, but I can see things being done on the schema :

  • Create an index / PK on Stxn.SproviderID
  • Create an index / PK on SProvider.Sproviderid
  • Create indexes on status, CreatedOn, CreatedBy, StxnID
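A sketch of what those indexes might look like (the index and constraint names are assumptions; in practice, check the execution plan first rather than creating all of them blindly):

```sql
-- Join keys: typically the PK on the parent table and an index on the FK
ALTER TABLE Sprovider ADD CONSTRAINT PK_Sprovider PRIMARY KEY (Sproviderid);
CREATE INDEX IX_Stxn_SproviderID ON Stxn (SproviderID);

-- Filter columns from the WHERE clause
CREATE INDEX IX_Stxn_Status    ON Stxn (status);
CREATE INDEX IX_Stxn_CreatedOn ON Stxn (CreatedOn);
CREATE INDEX IX_Stxn_CreatedBy ON Stxn (CreatedBy);

-- Supports ORDER BY StxnID DESC (often unnecessary if StxnID
-- is already the clustered primary key)
CREATE INDEX IX_Stxn_StxnID ON Stxn (StxnID);
```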

Something to consider: when ROWCOUNT or TOP is used with an ORDER BY clause, the entire result set is created and sorted first, and then the top 10 results are returned.

How does this run without the Order By clause?
