Prevent violating a UNIQUE constraint with Hibernate
I have a table like (id INTEGER, sometext VARCHAR(255), ....) with id as the primary key and a UNIQUE constraint on sometext. It gets used in a web server, where a request needs to find the id corresponding to a given sometext if it exists; otherwise a new row gets inserted.
This is the only operation on this table. There are no updates and no other operations on it. Its sole purpose is to persistently number the encountered values of sometext. This means that I can't drop the id and use sometext as the PK.
I do the following: I look up the row by the given sometext; if it exists, I'm done. Otherwise I insert a new row. Usually, this works and again, I'm done. This works fine, except when there are two overlapping requests with the same sometext. Then a ConstraintViolationException results.
I'd need something like INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE (MySQL syntax) or MERGE (Firebird syntax).
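For reference, the dialect-specific statements mentioned above look roughly like this (the table name `mytable` is illustrative, not from the question):

```sql
-- MySQL: silently skip the insert if sometext already exists
INSERT IGNORE INTO mytable (sometext) VALUES ('foo');

-- MySQL: insert, or touch the existing row on a duplicate key
INSERT INTO mytable (sometext) VALUES ('foo')
    ON DUPLICATE KEY UPDATE sometext = sometext;

-- Firebird: MERGE, inserting only when no matching row exists
MERGE INTO mytable t
    USING (SELECT CAST('foo' AS VARCHAR(255)) AS sometext
           FROM rdb$database) s
    ON t.sometext = s.sometext
    WHEN NOT MATCHED THEN INSERT (sometext) VALUES (s.sometext);
```

None of these are portable across databases, which is part of why they are awkward to use from plain Hibernate.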
I wonder what the options are?
AFAIK, Hibernate merge works on the PK only, so it's inappropriate here. I guess a native query might or might not help, as it may or may not be committed when the second INSERT takes place.
Just let the database handle the concurrency. Start a secondary transaction purely for inserting the new row. If it fails with a ConstraintViolationException, just roll that transaction back and read the new row.
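The control flow of that pattern can be sketched with an in-memory map standing in for the table: `putIfAbsent` plays the role of the UNIQUE constraint, and the branch where another writer won corresponds to rolling the secondary transaction back and re-reading. The class and method names here are made up for illustration, not Hibernate API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative stand-in for the table: sometext -> id.
// In the real setup, step 2 runs in its own short transaction,
// and the "raced" branch is the rollback-and-reselect path.
public class FindOrInsert {
    private final Map<String, Integer> table = new ConcurrentHashMap<>();
    private final AtomicInteger nextId = new AtomicInteger(1);

    public int idFor(String sometext) {
        Integer existing = table.get(sometext);   // 1. SELECT by sometext
        if (existing != null) {
            return existing;                      // found: done
        }
        int candidate = nextId.getAndIncrement(); // 2. try to INSERT
        Integer raced = table.putIfAbsent(sometext, candidate);
        // 3. a concurrent request inserted first ("constraint violation"):
        //    discard our attempt and use the row that won
        return raced != null ? raced : candidate;
    }
}
```

The key point is that the caller never sees the conflict; duplicates are resolved inside the find-or-insert operation.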
I'm not sure this scales well if the likelihood of a duplicate is high: it's a lot of extra work if some percentage of transactions (depending on the database) have to fail the insert and then reselect. A secondary transaction minimizes the length of time the transaction that adds the new text takes. Assuming the database supports it correctly, it might be possible for the thread 1 transaction to cause the thread 2 select/insert to hang until the thread 1 transaction is committed or rolled back. Overall database design might also affect transaction throughput.
I don't necessarily question why sometext can't be a PK, but I wonder why you need to break it out at all. Of course, if the sometext records are large, this might save substantial space at volume; it almost seems like you're trying to emulate a Lucene index to give you a complete list of text values.