Why is a bulk insert to a memory-optimized non-durable table the same speed as a durable table?
When running a large bulk insert into both a durable and a non-durable memory-optimized table, I'm getting the same speed for both. Shouldn't a bulk insert into a non-durable memory-optimized table be faster than one into a durable memory-optimized table? If so, what am I doing wrong here?
My test is as below; it consistently takes ~30 seconds. This is on SQL Server 2016 SP1. The bulk insert loads 10 million rows from a CSV file that I generated.
SQL
CREATE TABLE Users_ND (
    Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    Username VARCHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

CREATE TABLE Users_D (
    Id INT NOT NULL IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    Username VARCHAR(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
SET STATISTICS TIME ON;
SET NOCOUNT ON;
BULK INSERT Users_ND
FROM 'users-huge.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000000);
BULK INSERT Users_D
FROM 'users-huge.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 1000000);
users-huge.csv
Id, Username
,user1
,user2
...
,user10000000
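For reference, a file of this shape can be produced server-side. The following is a hypothetical sketch, not taken from the original post: the `Staging_Users` name, database name, and the `bcp` export step are assumptions.

```sql
-- Hypothetical sketch: generate 10 million rows server-side, then
-- export with bcp. Each line is an empty Id field, a comma, and a
-- generated username, matching the sample file above.
SELECT TOP (10000000)
    CONCAT(',user', ROW_NUMBER() OVER (ORDER BY (SELECT NULL))) AS Line
INTO Staging_Users
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b
CROSS JOIN sys.all_columns AS c;

-- Export from a command prompt (-c = character mode, -T = trusted connection):
-- bcp "SELECT Line FROM MyDb.dbo.Staging_Users" queryout users-huge.csv -c -T
-- then prepend the "Id, Username" header row (FIRSTROW = 2 skips it on load).
```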
It turned out that the problem was that the source file for the bulk insert was stored on a slow HDD, so reading the file, not the insert itself, was the bottleneck.
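One way to confirm that the file read, rather than table durability, dominates is to take the file out of the test entirely and insert the same row count directly. This is a hedged sketch assuming the two tables from the question already exist; with no file I/O involved, the SCHEMA_ONLY table would be expected to pull ahead:

```sql
-- Sketch: the same 10M-row load with no source file involved. Any
-- remaining gap between the two tables now reflects durability cost
-- (transaction log and checkpoint file I/O), not file-read speed.
SET STATISTICS TIME ON;

INSERT INTO Users_ND (Username)
SELECT TOP (10000000) CONCAT('user', ROW_NUMBER() OVER (ORDER BY (SELECT NULL)))
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b
CROSS JOIN sys.all_columns AS c;

INSERT INTO Users_D (Username)
SELECT TOP (10000000) CONCAT('user', ROW_NUMBER() OVER (ORDER BY (SELECT NULL)))
FROM sys.all_columns AS a
CROSS JOIN sys.all_columns AS b
CROSS JOIN sys.all_columns AS c;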