
Bulk Loads, Indexes and Data Truncation: How do you optimize?

I'm bulk loading files so they can be tied to internal data. After I join the data, the loaded rows are no longer needed: I load the file, join the data, and then essentially truncate the file's rows. This is a multi-tenant situation, so data from one job gets truncated while another job's data remains (everything goes into one table, with a job ID to track whose data is whose). The first thing I thought would help was temp tables, but all this work is called by a WCF service running under an administrative account, and (correct me if I'm wrong) the service keeps reusing the connection pool, so my temp table in tempdb just gets dropped between calls. The work is also organized to return progress to the end user, so I'm finding it hard to batch operations together.

So here's my question: Is there a way to optimize bulk loads that will eventually get truncated to avoid high index fragmentation?
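To make the setup concrete, here is a minimal sketch of the pattern described above; dbo.ImportStaging and job ID 42 are hypothetical names:

    -- Sketch of the per-job cleanup described above (hypothetical names).
    -- Deleting one tenant's rows from the shared table is fully logged
    -- and leaves holes in every index on the table, which is where the
    -- fragmentation comes from.
    DELETE FROM dbo.ImportStaging
    WHERE JobId = 42;

    -- The resulting fragmentation can be confirmed with
    -- sys.dm_db_index_physical_stats:
    SELECT index_id, avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(
        DB_ID(), OBJECT_ID(N'dbo.ImportStaging'), NULL, NULL, 'LIMITED');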

You could have a fresh partition (or a separate table) for each load operation. Truncating a partition or dropping a table removes its contents efficiently as a metadata operation, without the logging or the index fragmentation that row-by-row deletes cause.
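As a minimal sketch of the partition variant, assuming SQL Server 2016 or later and a hypothetical staging table dbo.JobStaging partitioned on JobId by a partition function pfJobRange:

    -- Sketch: partitioned staging table, SQL Server 2016+ assumed.
    -- dbo.JobStaging and pfJobRange are hypothetical names.

    -- Find which partition holds job 42's rows.
    SELECT $PARTITION.pfJobRange(42) AS PartitionNumber;

    -- Truncate only that partition (here assumed to be partition 3);
    -- other jobs' partitions are untouched, and the operation is
    -- metadata-only, so it leaves no fragmentation behind.
    TRUNCATE TABLE dbo.JobStaging
        WITH (PARTITIONS (3));

On versions before 2016, the usual equivalent is ALTER TABLE ... SWITCH PARTITION into an empty table of the same shape, which is then truncated.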

You could create them as named tables in tempdb if you can tolerate complete loss of the data at any time (an unexpected restart or failover, which must be assumed able to occur at any moment, wipes tempdb).
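A sketch of that option, again with hypothetical names; a named table in tempdb belongs to no particular session, so it survives the pooled connection being reset between WCF calls, but it vanishes along with the rest of tempdb on restart or failover:

    -- Sketch: a named (non-#temporary) table created directly in tempdb.
    -- Unlike a #temp table it is visible to all sessions and is not
    -- dropped when the pooled connection is reset.
    USE tempdb;
    GO
    CREATE TABLE dbo.JobStaging_42 (   -- one table per job, hypothetical naming
        JobId  int           NOT NULL,
        RawRow nvarchar(max) NULL
    );
    GO
    -- Reference it from the application database with a three-part name:
    INSERT tempdb.dbo.JobStaging_42 (JobId, RawRow) VALUES (42, N'example row');
    -- When the job is done, dropping the table is as cheap as a truncate:
    DROP TABLE tempdb.dbo.JobStaging_42;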

Creating partitions or tables of course requires DDL rights.
