Deleting millions of rows in MySQL

I recently found and fixed a bug in a site I was working on that resulted in millions of duplicate rows of data in a table that will be quite large even without them (still in the millions). I can easily find these duplicate rows and can run a single delete query to kill them all. The problem is that trying to delete this many rows in one shot locks up the table for a long time, which I would like to avoid if possible. The only ways I can see to get rid of these rows, without taking down the site (by locking up the table), are:

  1. Write a script that will execute thousands of smaller delete queries in a loop. This will theoretically get around the locked-table issue because other queries will be able to make it into the queue and run in between the deletes. But it will still spike the load on the database quite a bit and will take a long time to run.
  2. Rename the table and recreate the existing table (it'll now be empty). Then do my cleanup on the renamed table. Rename the new table, name the old one back, and merge the new rows into the renamed table. This way takes considerably more steps, but should get the job done with minimal interruption. The only tricky part here is that the table in question is a reporting table, so once it's renamed out of the way and the empty one put in its place, all historic reports go away until I put it back in place. Plus the merging process could be a bit of a pain because of the type of data being stored. Overall this is my likely choice right now.

I was just wondering if anyone else has had this problem before and, if so, how you dealt with it without taking down the site and, hopefully, with minimal if any interruption to the users? If I go with number 2, or a different, similar approach, I can schedule the stuff to run late at night and do the merge early the next morning and just let the users know ahead of time, so that's not a huge deal. I'm just looking to see if anyone has any ideas for a better, or easier, way to do the cleanup.

DELETE FROM `table`
WHERE (whatever criteria)
ORDER BY `id`
LIMIT 1000

Wash, rinse, repeat until zero rows are affected. Maybe in a script that sleeps for a second or three between iterations.
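If you would rather keep the loop inside the server, here is a minimal sketch of the same pattern as a MySQL stored procedure. The table, column, and procedure names are all hypothetical, and `is_duplicate = 1` stands in for whatever criteria identify the rows to delete:

DELIMITER //
CREATE PROCEDURE purge_in_batches()
BEGIN
  DECLARE affected INT DEFAULT 1;
  WHILE affected > 0 DO
    -- delete one small batch; ORDER BY keeps the scan predictable
    DELETE FROM my_table
    WHERE is_duplicate = 1
    ORDER BY id
    LIMIT 1000;
    SET affected = ROW_COUNT();  -- rows removed by the DELETE above
    DO SLEEP(1);                 -- pause so queued queries get a turn
  END WHILE;
END //
DELIMITER ;

CALL purge_in_batches();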

I had a use case of deleting 1M+ rows from a 25M+ row table in MySQL. I tried different approaches, like the batch deletes described above.
I found that the fastest way was to copy the required records to a new table:

  1. Create a temporary table that holds just the ids.

CREATE TABLE id_temp_table (temp_id INT);

  2. Insert the ids that should be removed:

INSERT INTO id_temp_table (temp_id) SELECT .....

  3. Create a new table, table_new.

  4. Insert all records from the table into table_new, excluding the unwanted rows that are in id_temp_table:

INSERT INTO table_new .... WHERE table_id NOT IN (SELECT DISTINCT(temp_id) FROM id_temp_table);

  5. Rename the tables.

The whole process took ~1 hr. In my use case, a simple delete of a batch of 100 records took 10 minutes.
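Put together, the steps above might look roughly like this. Every table and column name below, and the selection criteria, are placeholders:

CREATE TABLE id_temp_table (temp_id INT PRIMARY KEY);

-- collect the ids to be removed (criteria assumed)
INSERT INTO id_temp_table (temp_id)
SELECT id FROM my_table WHERE is_duplicate = 1;

-- empty clone of the original table
CREATE TABLE table_new LIKE my_table;

-- copy everything except the unwanted rows
INSERT INTO table_new
SELECT * FROM my_table
WHERE id NOT IN (SELECT temp_id FROM id_temp_table);

-- swap the tables atomically
RENAME TABLE my_table TO my_table_old, table_new TO my_table;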

The following deletes 1,000,000 records, one at a time.

# 1000 iterations x 1000 ids each = 1,000,000 single-row deletes
for i in `seq 1 1000`; do
    # fetch the next 1000 ids (sed strips any table-border pipes, awk's
    # NR>1 skips the header row) and emit one DELETE per id back into mysql
    mysql -e "select id from table_name where (condition) order by id desc limit 1000" | sed 's;|;;g' | awk '{if(NR>1)print "delete from table_name where id = ",$1,";" }' | mysql
done

You could also group them together and do DELETE FROM table_name WHERE id IN (id1, id2, ..., idN), I'm sure, without much difficulty.
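In SQL, that grouped form is one statement per batch (the ids are illustrative):

DELETE FROM table_name WHERE id IN (101, 102, 103, 104);  -- ...up to ~1000 ids per statement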

I'd also recommend adding some constraints to your table to make sure that this doesn't happen to you again. A million rows, at 1000 per shot, will take 1000 repetitions of a script to complete. If the script runs once every 3.6 seconds you'll be done in an hour. No worries. Your clients are unlikely to notice.
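As for the constraint suggestion: a unique key on whatever columns define a duplicate would stop the bug from reintroducing the rows. The table and column names here are hypothetical:

ALTER TABLE my_table ADD UNIQUE KEY uniq_natural_key (col_a, col_b);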

I think the slowness is due to MySQL's "clustered index", where the actual records are stored within the primary key index, in the order of the primary key index. This means access to a record via the primary key is extremely fast, because it requires only one disk fetch: the record on disk is right where the index found the correct primary key.

In other databases without clustered indexes the index itself does not hold the record, but just an "offset" or "location" indicating where the record is located in the table file, and then a second fetch must be made in that file to retrieve the actual data.

You can imagine that when deleting a record in a clustered index (like MySQL uses), all records above that record in the index (= the table) must be moved downwards to avoid massive holes being created in the index (well, that is what I recall from a few years ago at least; version 8.x may have improved this issue).

Armed with knowledge of the above 'under the hood' operations, what we discovered that really sped up deletes in MySQL 5.x was to perform the deletes in reverse order. This produces the least amount of record movement because you are deleting records from the end first, meaning that subsequent deletes have fewer records to relocate. Logical, right?!

Here's the recommended practice:

rows_affected = 0
do {
 rows_affected = do_query(
   "DELETE FROM messages WHERE created < DATE_SUB(NOW(),INTERVAL 3 MONTH)
   LIMIT 10000"
 )
} while rows_affected > 0

Deleting 10,000 rows at a time is typically a large enough task to make each query efficient, and a short enough task to minimize the impact on the server (transactional storage engines might benefit from smaller transactions). It might also be a good idea to add some sleep time between the DELETE statements to spread the load over time and reduce the amount of time locks are held.

Reference: High Performance MySQL

I faced a similar problem. We had a really big table, about 500 GB in size, with no partitioning and only one index on the primary_key column. Our master was a hulk of a machine, 128 cores and 512 GB of RAM, and we had multiple slaves too. We tried a few techniques to tackle the large-scale deletion of rows. I will list them all here, from worst to best:

  1. Fetching and deleting one row at a time. This is the absolute worst that you could do, so we did not even try it.
  2. Fetching the first 'X' rows from the database using a LIMIT query on the primary_key column, then checking the row ids to delete in the application and firing a single delete query with a list of primary_key ids. So, 2 queries per 'X' rows. Now, this approach was fine, but doing this in a batch job deleted about 5 million rows in 10 minutes or so, due to which the slaves of our MySQL DB lagged by 105 seconds: a 105-second lag for 10 minutes of activity. So, we had to stop.
  3. In this technique, we introduced a 50 ms pause between each subsequent batch fetch and deletion of size 'X'. This solved the lag problem, but we were now deleting 1.2-1.3 million rows per 10 minutes, as compared to 5 million with technique #2.
  4. Partitioning the database table and then dropping entire partitions when not needed. This is the best solution we found, but it requires a pre-partitioned table. We followed step 3 because we had a non-partitioned, very old table with indexing only on the primary_key column. Creating a partition would have taken too much time and we were in crisis mode. Here are some links related to partitioning that I found helpful: the official MySQL reference, and Oracle DB daily partitioning.

So, IMO, if you can afford to have the luxury of creating a partition in your table, go for option #4; otherwise, you are stuck with option #3.
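For reference, once a table is partitioned (say, by day), option #4 reduces to a near-instant metadata operation. The table and partition names here are hypothetical:

-- assumes the table was created with daily RANGE partitions,
-- e.g. PARTITION BY RANGE (TO_DAYS(created)) (...)
ALTER TABLE my_table DROP PARTITION p20240101;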

I'd use mk-archiver from the excellent Maatkit utilities package (a bunch of Perl scripts for MySQL management). Maatkit is from Baron Schwartz, the author of the O'Reilly "High Performance MySQL" book.

The goal is a low-impact, forward-only job to nibble old data out of the table without impacting OLTP queries much. You can insert the data into another table, which need not be on the same server. You can also write it to a file in a format suitable for LOAD DATA INFILE. Or you can do neither, in which case it's just an incremental DELETE.

It's already built for archiving your unwanted rows in small batches and, as a bonus, it can save the deleted rows to a file in case you screw up the query that selects the rows to remove.

No installation required; just grab http://www.maatkit.org/get/mk-archiver and run perldoc on it (or read the web site) for documentation.

For us, the DELETE WHERE %s ORDER BY %s LIMIT %d answer was not an option, because the WHERE criteria were slow (a non-indexed column) and would hit the master.

SELECT from a read-replica a list of primary keys that you wish to delete. Export with this kind of format:

00669163-4514-4B50-B6E9-50BA232CA5EB
00679DE5-7659-4CD4-A919-6426A2831F35

Use the following bash script to grab this input and chunk it into DELETE statements [requires bash ≥ 4 because of the mapfile built-in]:

sql-chunker.sh (remember to chmod +x it, and change the shebang to point to your bash 4 executable):

#!/usr/local/Cellar/bash/4.4.12/bin/bash

# Expected input format:
: <<!
00669163-4514-4B50-B6E9-50BA232CA5EB
00669DE5-7659-4CD4-A919-6426A2831F35
!

if [ -z "$1" ]
  then
    echo "No chunk size supplied. Invoke: ./sql-chunker.sh 1000 ids.txt"
fi

if [ -z "$2" ]
  then
    echo "No file supplied. Invoke: ./sql-chunker.sh 1000 ids.txt"
fi

function join_by {
    local d=$1
    shift
    echo -n "$1"
    shift
    printf "%s" "${@/#/$d}"
}

while mapfile -t -n "$1" ary && ((${#ary[@]})); do
    printf "DELETE FROM my_cool_table WHERE id IN ('%s');\n" `join_by "','" "${ary[@]}"`
done < "$2"

Invoke like so:

./sql-chunker.sh 1000 ids.txt > batch_1000.sql

This will give you a file with output formatted like so (I've used a batch size of 2):

DELETE FROM my_cool_table WHERE id IN ('006CC671-655A-432E-9164-D3C64191EDCE','006CD163-794A-4C3E-8206-D05D1A5EE01E');
DELETE FROM my_cool_table WHERE id IN ('006CD837-F1AD-4CCA-82A4-74356580CEBC','006CDA35-F132-4F2C-8054-0F1D6709388A');

Then execute the statements like so:

mysql --login-path=master billing < batch_1000.sql

For those unfamiliar with login-path, it's just a shortcut to log in without typing a password on the command line.

I have had the same case earlier. There were more than 45 million duplicate rows stored during a database migration. Yeah, it happened. :)

What I did was:

  • Created a temporary table holding only the unique rows
  • Truncated the original table
  • Inserted the rows back into the original table from the temporary table
  • After making sure the data was correct, deleted the temporary table

Overall, it took around 2.5 minutes, I guess.

Example:

CREATE TABLE mytable_temp AS SELECT * FROM my_original_table WHERE my_condition;
TRUNCATE TABLE my_original_table;
INSERT INTO my_original_table SELECT * FROM mytable_temp;

Do it in batches of, let's say, 2000 rows at a time. Commit in between. A million rows isn't that much and this will be fast, unless you have many indexes on the table.
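A sketch of one such batch with an explicit commit; the condition is a placeholder, and you repeat until the DELETE affects zero rows:

START TRANSACTION;
DELETE FROM my_table WHERE created < '2020-01-01' LIMIT 2000;
COMMIT;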

I had a really loaded database that needed to delete some older entries all the time. Some of the delete queries started to hang, so I needed to kill them, and if there were too many deletes the whole database became unresponsive, so I needed to restrict the parallel runs. So I created a cron job running every minute that starts this script:

#!/bin/bash

#######################
#
i_size=1000
max_delete_queries=10
sleep_interval=15
min_operations=8
max_query_time=1000

USER="user"
PASS="super_secret_password"

log_max_size=1000000
log_file="/var/tmp/clean_up.log"
#
#######################

touch $log_file
log_file_size=`stat -c%s "$log_file"`
if (( $log_file_size > $log_max_size ))
then
    rm -f "$log_file"
fi 

delete_queries=`mysql -u $USER -p$PASS -e  "SELECT * FROM information_schema.processlist WHERE Command = 'Query' AND INFO LIKE 'DELETE FROM big.table WHERE result_timestamp %';"| grep Query|wc -l`

## -- here the hanging DELETE queries will be stopped
mysql-u $USER -p$PASS -e "SELECT ID FROM information_schema.processlist WHERE Command = 'Query' AND INFO LIKE 'DELETE FROM big.table WHERE result_timestamp %'and TIME>$max_query_time;" |grep -v ID| while read -r id ; do
    echo "delete query stopped on `date`" >>  $log_file
    mysql -u $USER -p$PASS -e "KILL $id;"
done

if (( $delete_queries > $max_delete_queries ))
then
  sleep $sleep_interval

  delete_queries=`mysql -u $USER -p$PASS -e  "SELECT * FROM information_schema.processlist WHERE Command = 'Query' AND INFO LIKE 'DELETE FROM big.table WHERE result_timestamp %';"| grep Query|wc -l`

  if (( $delete_queries > $max_delete_queries ))
  then

      sleep $sleep_interval

      delete_queries=`mysql -u $USER -p$PASS -e  "SELECT * FROM information_schema.processlist WHERE Command = 'Query' AND INFO LIKE 'DELETE FROM big.table WHERE result_timestamp %';"| grep Query|wc -l`

      # -- if there are too many delete queries after the second wait
      #  the table will be cleaned up by the next cron job
      if (( $delete_queries > $max_delete_queries ))
        then
            echo "clean-up skipped on `date`" >> $log_file
            exit 1
        fi
  fi

fi

running_operations=`mysql -u $USER -p$PASS -e "SELECT * FROM INFORMATION_SCHEMA.PROCESSLIST WHERE COMMAND != 'Sleep';"| wc -l`

if (( $running_operations < $min_operations ))
then
    # -- if the database is not too busy this bigger batch can be processed
    batch_size=$(($i_size * 5))
else 
    batch_size=$i_size
fi

echo "starting clean-up on `date`" >>  $log_file

mysql -u $USER -p$PASS -e 'DELETE FROM big.table WHERE result_timestamp < UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 31 DAY))*1000 limit '"$batch_size"';'

if [ $? -eq 0 ]; then
    # -- if the sql command exited normally the exit code will be 0
    echo "delete finished successfully on `date`" >>  $log_file
else
    echo "delete failed on `date`" >>  $log_file
fi

With this I've achieved around 2 million deletes per day, which was OK for my use case.

I have faced a similar issue while deleting multiple records from a transaction table after moving them to an archival table.

I used a temporary table to identify the records to be deleted.

The temporary table, 'archive_temp', that I used to store the ids was created in memory, without any indexes.

Hence, while deleting records from the original transaction table, e.g. DELETE FROM tat WHERE id IN (SELECT id FROM archive_temp);, the query used to return the error "Lost connection to server".

After creating that temporary table, I created an index on it as follows: ALTER TABLE archive_temp ADD INDEX (id);

After this, my delete query executed in seconds, irrespective of the number of records to be deleted from the transaction table.

Hence it would be better to check your indexes. Hope this helps.
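Pulled together, the sequence described above would be roughly the following. The archival criteria in the SELECT are assumed:

CREATE TEMPORARY TABLE archive_temp (id INT) ENGINE=MEMORY;

INSERT INTO archive_temp (id)
SELECT id FROM tat WHERE created < '2020-01-01';

-- the index that made the difference
ALTER TABLE archive_temp ADD INDEX (id);

DELETE FROM tat WHERE id IN (SELECT id FROM archive_temp);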

These queries empty a BIG table in seconds:

CREATE TABLE <my_table_temp> LIKE <my_table>;

-- swap both names in one atomic statement so the table never disappears
RENAME TABLE <my_table> TO <my_table_delete>, <my_table_temp> TO <my_table>;

DROP TABLE <my_table_delete>;

Based on @rich's answer, I wrote this single-line command:

for i in {1..1000}; do mysql -vv --user=THE_USER --password=THE_PWD --host=YOUR_DB_HOST THE_DB_NAME -e "DELETE FROM THE_DB_NAME.THE_TABLE WHERE \`date\` < NOW() - INTERVAL 4 MONTH LIMIT 10000;"; sleep 1; done;
  • -vv : displays the DELETE result, so I can check the deleted row count
  • --host : I'm running the request from another server, so I have to specify the MySQL host address
  • `date` : the column name needs backtick quoting (backslash-escaped in the command so the shell doesn't treat the backticks as command substitution)
  • NOW() - INTERVAL 4 MONTH : delete only old entries (more than 4 months old)
  • sleep 1 : wait one second between batches to avoid crashing the server

I have not scripted anything to do this, and doing it properly would absolutely require a script, but another option is to create a new, duplicate table and select all the rows you want to keep into it. Use a trigger to keep it up to date while this process completes. When it is in sync (minus the rows you want to drop), rename both tables in a single atomic RENAME TABLE statement, so that the new one takes the place of the old. Drop the old table, and voila!

This (obviously) requires a lot of extra disk space, and may tax your I/O resources, but otherwise can be much faster.

Depending on the nature of the data, or in an emergency, you could rename the old table, create a new, empty table in its place, and select the "keep" rows into the new table at your leisure...
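A rough sketch of the trigger-plus-swap idea. Every table and column name here is hypothetical, and a real version would need UPDATE and DELETE triggers as well:

CREATE TABLE my_table_copy LIKE my_table;

-- mirror new inserts into the copy while the backfill runs
CREATE TRIGGER my_table_mirror_ins AFTER INSERT ON my_table
FOR EACH ROW
  INSERT INTO my_table_copy (id, payload) VALUES (NEW.id, NEW.payload);

-- backfill the rows worth keeping; IGNORE skips ids the trigger already copied
INSERT IGNORE INTO my_table_copy
SELECT * FROM my_table WHERE is_duplicate = 0;

-- swap atomically, then discard the old data (and its trigger)
RENAME TABLE my_table TO my_table_old, my_table_copy TO my_table;
DROP TABLE my_table_old;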

According to the MySQL documentation, TRUNCATE TABLE is a fast alternative to DELETE FROM. Try this:

TRUNCATE TABLE table_name

I tried this on 50M rows and it was done within two minutes.

Note: Truncate operations are not transaction-safe; an error occurs when attempting one in the course of an active transaction or active table lock.
