Why is Hibernate splitting my batch insert into 3 queries?
I am currently trying to implement batch inserts with Hibernate. Here are the pieces I have in place:
1. The entity
@Entity
@Table(name = "my_bean_table")
@Data
public class MyBean {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "seqGen")
    @SequenceGenerator(name = "seqGen", sequenceName = "bean_c_seq", allocationSize = 50)
    @Column(name = "my_bean_id")
    private Long id;

    @Column(name = "my_bean_name")
    private String name;

    @Column(name = "my_bean_age")
    private int age;

    public MyBean(String name, int age) {
        this.name = name;
        this.age = age;
    }
}
2. application.properties
Hibernate and the datasource are configured as follows:
spring.datasource.url=jdbc:postgresql://{ip}:{port}/${db}?reWriteBatchedInserts=true&loggerLevel=TRACE&loggerFile=pgjdbc.log
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.jdbc.batch_size=50
spring.jpa.properties.hibernate.order_inserts=true
Note: &loggerLevel=TRACE&loggerFile=pgjdbc.log is there for debugging purposes.
3. The table and sequence in my PostgreSQL database
CREATE TABLE my_bean_table
(
    my_bean_id bigint NOT NULL DEFAULT nextval('my_bean_seq'::regclass),
    my_bean_name character varying(100) NOT NULL,
    my_bean_age smallint NOT NULL,
    CONSTRAINT my_bean_table_pkey PRIMARY KEY (my_bean_id)
);

CREATE SEQUENCE my_bean_seq
    INCREMENT 50
    START 1
    MINVALUE 1
    MAXVALUE 9223372036854775807
    CACHE 1;
Edit: adding the ItemWriter
public class MyBeanWriter implements ItemWriter<MyBean> {

    private final Logger logger = LoggerFactory.getLogger(MyBeanWriter.class);

    @Autowired
    MyBeanRepository repository;

    @Override
    public void write(List<? extends MyBean> items) throws Exception {
        repository.saveAll(items);
    }
}
The commit interval is also set to 50.
In the log file produced by the JDBC driver, I get the following lines:
avr. 10, 2020 7:26:48 PM org.postgresql.core.v3.QueryExecutorImpl execute
FINEST: batch execute 3 queries, handler=org.postgresql.jdbc.BatchResultHandler@1317ac2c, maxRows=0, fetchSize=0, flags=5
avr. 10, 2020 7:26:48 PM org.postgresql.core.v3.QueryExecutorImpl sendParse
FINEST: FE=> Parse(stmt=null,query="insert into my_bean_table (my_bean_age, my_bean_name, my_bean_id) values ($1, $2, $3),($4, $5, $6),($7, $8, $9),($10, $11, $12),($13, $14, $15),($16, $17, $18),($19, $20, $21),($22, $23, $24),($25, $26, $27),($28, $29, $30),($31, $32, $33),($34, $35, $36),($37, $38, $39),($40, $41, $42),($43, $44, $45),($46, $47, $48),($49, $50, $51),($52, $53, $54),($55, $56, $57),($58, $59, $60),($61, $62, $63),($64, $65, $66),($67, $68, $69),($70, $71, $72),($73, $74, $75),($76, $77, $78),($79, $80, $81),($82, $83, $84),($85, $86, $87),($88, $89, $90),($91, $92, $93),($94, $95, $96)",oids={23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20})
...
FINEST: FE=> Execute(portal=null,limit=1)
avr. 10, 2020 7:26:48 PM org.postgresql.core.v3.QueryExecutorImpl sendParse
FINEST: FE=> Parse(stmt=null,query="insert into my_bean_table (my_bean_age, my_bean_name, my_bean_id) values ($1, $2, $3),($4, $5, $6),($7, $8, $9),($10, $11, $12),($13, $14, $15),($16, $17, $18),($19, $20, $21),($22, $23, $24),($25, $26, $27),($28, $29, $30),($31, $32, $33),($34, $35, $36),($37, $38, $39),($40, $41, $42),($43, $44, $45),($46, $47, $48)",oids={23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20,23,1043,20})
...
avr. 10, 2020 7:26:48 PM org.postgresql.core.v3.QueryExecutorImpl sendParse
FINEST: FE=> Parse(stmt=null,query="insert into my_bean_table (my_bean_age, my_bean_name, my_bean_id) values ($1, $2, $3),($4, $5, $6)",oids={23,1043,20,23,1043,20})
Here is my question: why is the batch insert split into 3 queries?
Note: I also tried setting the batch size to 100 and to 200, and I still got 3 distinct queries.
While debugging the PgPreparedStatement class and its transformQueriesAndParameters() method, I found the following:
@Override
protected void transformQueriesAndParameters() throws SQLException {
    ...
    BatchedQuery originalQuery = (BatchedQuery) preparedQuery.query;
    // Single query cannot have more than {@link Short#MAX_VALUE} binds, thus
    // the number of multi-values blocks should be capped.
    // Typically, it does not make much sense to batch more than 128 rows: performance
    // does not improve much after updating 128 statements with 1 multi-valued one, thus
    // we cap maximum batch size and split there.
    ...
    final int highestBlockCount = 128;
    final int maxValueBlocks = bindCount == 0 ? 1024 /* if no binds, use 1024 rows */
        : Integer.highestOneBit( // deriveForMultiBatch supports powers of two only
            Math.min(Math.max(1, (Short.MAX_VALUE - 1) / bindCount), highestBlockCount));
}
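As the comment in that excerpt notes, deriveForMultiBatch only supports power-of-two row counts. That explains the three Parse messages: a 50-row batch is decomposed into chunks of 32, 16 and 2 rows (96, 48 and 6 bind parameters at 3 columns per row). A minimal sketch of that decomposition (my own illustration, not the actual pgjdbc code):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplit {

    // Decompose a batch into power-of-two chunks, capped at maxValueBlocks,
    // mimicking how the driver's multi-value rewrite splits a batch.
    static List<Integer> split(int batchSize, int maxValueBlocks) {
        List<Integer> chunks = new ArrayList<>();
        int remaining = batchSize;
        while (remaining > 0) {
            int chunk = Math.min(Integer.highestOneBit(remaining), maxValueBlocks);
            chunks.add(chunk);
            remaining -= chunk;
        }
        return chunks;
    }

    public static void main(String[] args) {
        System.out.println(split(50, 128));  // [32, 16, 2]  -> the 3 observed queries
        System.out.println(split(100, 128)); // [64, 32, 4]  -> still 3 queries
        System.out.println(split(128, 128)); // [128]        -> a single query
    }
}
```

This also matches the observation that batch sizes of 100 and 200 still produce 3 queries, while a power of two such as 128 produces just one.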
I am now using 128 both as the sequence increment on the database side and as the batch-size parameter on the client side, and it works like a charm.
I don't have a definitive answer, but this behaviour looks very similar to batch fetching, and probably exists for the same reason.
It uses distinct statements whose number of parameter sets is a power of two, in order to minimise the number of distinct statements executed. The database has to parse each statement and keeps the parsed statements in a cache. If a client executes a large number of statements that do essentially the same thing but differ only in the number of parameter sets, that would render the cache useless.
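To illustrate the caching argument: with power-of-two chunking, every batch size from 1 to 128 can be served by at most 8 distinct statement shapes (1, 2, 4, ..., 128 rows), so the statement cache stays small. A quick check (my own illustration):

```java
import java.util.Set;
import java.util.TreeSet;

public class StatementShapes {

    // Distinct chunk sizes needed to cover every batch size 1..maxBatch
    // when each batch is split into powers of two.
    static Set<Integer> shapes(int maxBatch) {
        Set<Integer> shapes = new TreeSet<>();
        for (int batch = 1; batch <= maxBatch; batch++) {
            int remaining = batch;
            while (remaining > 0) {
                int chunk = Integer.highestOneBit(remaining);
                shapes.add(chunk);
                remaining -= chunk;
            }
        }
        return shapes;
    }

    public static void main(String[] args) {
        System.out.println(shapes(128)); // [1, 2, 4, 8, 16, 32, 64, 128]
    }
}
```

Without the power-of-two restriction, the same range of batch sizes could require up to 128 distinct statement shapes.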
On the other hand, I have only ever seen this for batch fetch operations, not for batch inserts. I have a few guesses as to why it happens here:
- Your ids are generated by the database, so the ids have to be queried from the database sequence before the rows can be written. Maybe the select behaviour leaks into the insert.
- It could be an optimisation done by the JDBC driver, which rewrites such statements.
- Hibernate does this all the time and I simply never noticed, although I would find it odd to do this when the number of parameter sets equals the batch size.