rejected from slick.util.AsyncExecutor on "large" Future.sequence
I spent all day trying to figure out how to solve this issue.
The purpose is to insert several sequences of strings into a single column of a table.
I have a method like this:
case class Column(strings: Seq[String])

def insertColumns(columns: Seq[Column]) = for {
  _ <- Future.sequence(columns.map(col => insert(col)))
} yield ()

private def insert(column: Column) =
  db.run(stringTable ++= column.strings) // Slick batch insert
This works up to a point. I tested it with a sequence of 2100 columns (with 100 strings in each), and it works fine. But as soon as I increase the number of columns to 3100+, I get this error:
Task slick.basic.BasicBackend$DatabaseDef$$anon$3@293ce053 rejected from slick.util.AsyncExecutor$$anon$1$$anon$2@3e423930[Running, pool size = 10, active threads = 10, queued tasks = 1000, completed tasks = 8160]
I have read in several places that doing something like this would help:

case class Column(strings: Seq[String])

val f = Future.sequence(columns.map(col => insert(col)))

def insertColumns(columns: Seq[Column]) = for {
  _ <- f
} yield ()

private def insert(column: Column) =
  db.run(stringTable ++= column.strings) // Slick batch insert

It does not.
I tried several combinations of changes inside insert:
Future.sequence(
  rows.grouped(500).toSeq.map(group => db.run(DBIO.seq(stringTable ++= group)))
)

Source(rows).buffer(500, OverflowStrategy.backpressure)
  .via(
    Slick.flow(row => stringTable += row)
  )
  .log("nr-of-inserted-rows")
  .runWith(Sink.ignore)

Source(rows)
  .runWith(Slick.sink(1, row => stringTable += row))
I tried:

reWriteBatchedInserts=true inside my config

the (dataColumnStringsTable ++= rows).transactionally option

implicit val ec: ExecutionContext = ExecutionContext.fromExecutor(Executors.newFixedThreadPool(1)) to try to execute the futures sequentially

I don't have any other idea than reworking my subscriber to receive and block my messages (sequences of strings) and handle the back pressure on the message-queue side.
I am using Slick (with alpakka-slick) 3.3.3 / HikariCP 3.2.0 / Postgres 13.2.
My config is as follows:
slick {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    connectionPool = "HikariCP"
    dataSourceClass = "slick.jdbc.DriverDataSource"
    properties = {
      driver = "org.postgresql.Driver"
      user = "postgres"
      password = "password"
      url = "jdbc:postgresql://"${slick.db.host}":5432/slick?reWriteBatchedInserts=true"
    }
    host = "localhost"
    numThreads = 10
    maxConnections = 100
    minConnections = 1
  }
}
Thank you for your help.
You shouldn't use Future.sequence with collections of more than a few elements. Every Future is a computation running in the background. So when you run this for a collection of, let's say, 3000 columns:

Future.sequence(columns.map(col => insert(col)))

you effectively spawn 3000 operations at once. As a result, the executor may start rejecting new tasks.
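As a side note, the "queued tasks = 1000" in the error message corresponds to Slick's queueSize setting, which defaults to 1000: once the AsyncExecutor's queue is full, new tasks are rejected. You can raise it in the config, but with an unbounded burst of inserts that only postpones the rejection rather than fixing it (the value below is just an illustration):

```
slick.db {
  queueSize = 5000  // default is 1000; a bigger queue only delays rejection
}
```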
The solution is to process the input collection with Akka Streams. In your case, this means creating a Source from columns (not from rows). This will ensure that the executor is not overwhelmed with too many parallel operations. I haven't used alpakka-slick, but looking at the docs, the solution should look something like this:

Source(columns)
  .via(
    Slick.flow(column => stringTable ++= column.strings)
  )
  // further processing here
What's more, if "columns" are coming from a message queue, it's possible that you don't even need an intermediate Seq[Column]. You may simply need to define a Source of Column that reads from the queue, and process it with a Slick flow.