How can I prevent database connections from timing out in Rails?

I have a Rails system in which, every half hour, the following happens:

  • There are 15 clients somewhere else on the network
  • The server creates a record called Measurement for each of these clients
  • The measurement records are configured and then run asynchronously via Sidekiq, using MeasurementWorker.perform_async(m.id) (a minimal worker sketch follows this list)
  • The connection to each client is made with Celluloid actors and a WebSocket client
  • Each measurement, when run, creates a number of event records that are stored in the database
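
For reference, here is a minimal sketch of what such a worker might look like. Only the class name, the perform_async(m.id) call, and the retry/backtrace options (visible in the log below) come from the question; the body of perform is an assumption.

class MeasurementWorker
  include Sidekiq::Worker
  sidekiq_options queue: :default, retry: false, backtrace: true

  def perform(measurement_id)
    measurement = Measurement.find(measurement_id)
    # Connect to the client via Celluloid/WebSocket and store the resulting
    # event records; the run! method name here is hypothetical.
    measurement.run!
  end
end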

The system has been running well with 5 clients, but now that I am at 15, many of the measurements no longer run when I start them all at the same time; they fail with the following error:

2015-02-04T07:30:10.410Z 35519 TID-owd4683iw MeasurementWorker JID-15f6b396ae9e3e3cb2ee3f66 INFO: fail: 5.001 sec
2015-02-04T07:30:10.412Z 35519 TID-owd4683iw WARN: {"retry"=>false, "queue"=>"default", "backtrace"=>true, "class"=>"MeasurementWorker", "ar
gs"=>[6504], "jid"=>"15f6b396ae9e3e3cb2ee3f66", "enqueued_at"=>1423035005.4078047}
2015-02-04T07:30:10.412Z 35519 TID-owd4683iw WARN: could not obtain a database connection within 5.000 seconds (waited 5.000 seconds)
2015-02-04T07:30:10.412Z 35519 TID-owd4683iw WARN: /home/webtv/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/activerecord-4.1.4/lib/active_
record/connection_adapters/abstract/connection_pool.rb:190:in `block in wait_poll'
....
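
For context, the 5-second wait in this error is ActiveRecord's connection pool checkout timeout, which defaults to 5 seconds and can be raised per environment in database.yml. Raising it only postpones the failure if the pool is genuinely exhausted; the value below is purely illustrative.

production:
  # ... existing settings ...
  pool: 50
  checkout_timeout: 10  # seconds to wait for a free connection (default is 5)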

Now, my production environment looks like this:

config/sidekiq.yml

production:
  :verbose: false
  :logfile: ./log/sidekiq.log
  :poll_interval: 5
  :concurrency: 50

config/unicorn.rb

...
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 60
...

config/database.yml

production:
  adapter: postgresql
  database: ***
  username: ***
  password: ***
  host: 127.0.0.1
  pool: 50

postgresql.conf

max_connections = 100 # default

As you see, I've already increased the concurrency of Sidekiq to 50, to cater for a high number of possible concurrent measurements. I've set the database pool to 50, which already looks like overkill to me.

I should add that the server itself is quite powerful, with 8 GB RAM and a quad-core Xeon E5-2403 1.8 GHz.

What should these values ideally be set to? What formula can I use to calculate them? (E.g. maximum number of DB connections = Unicorn workers × Sidekiq concurrency × N)

It looks to me like your pool configuration of 100 is not taking effect. Each process will need a max of 50, so change 100 to 50. I don't know if you are using Heroku, but it is notoriously tough to configure the pool size there.

Inside your database, your max connection count should look like this:

((Unicorn processes) * 1) + ((sidekiq processes) * 50)
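
Plugging in the numbers from the question (3 Unicorn workers, and what appears to be a single Sidekiq process with concurrency 50), that works out to:

(3 * 1) + (1 * 50) = 53 connections

which fits comfortably under max_connections = 100, provided nothing else on the machine holds Postgres connections open.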

Unicorn is single-threaded and never needs more than one connection per worker, unless you are spinning up your own threads in your Rails app for some reason.

I'm sure the creator of Sidekiq, @MikePerham, is more than suited to the task of fixing your Sidekiq issues, but as a Ruby dev two things stand out to me.

If you're doing a lot of database operations via Ruby, can you push some of them into the database as triggers? You could still kick them off on the app side with a Sidekiq process, of course. :)
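
As an illustration only, a trigger could be installed from a Rails migration like the one below; the events and measurements table names and the touch_measurement function are assumptions, not something from the question.

class AddMeasurementTouchTrigger < ActiveRecord::Migration
  def up
    # Keep the parent measurement's timestamp current whenever an event row
    # is inserted, without a round trip through Ruby.
    execute <<-SQL
      CREATE OR REPLACE FUNCTION touch_measurement() RETURNS trigger AS $$
      BEGIN
        UPDATE measurements SET updated_at = now() WHERE id = NEW.measurement_id;
        RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER events_touch_measurement
      AFTER INSERT ON events
      FOR EACH ROW EXECUTE PROCEDURE touch_measurement();
    SQL
  end

  def down
    execute "DROP TRIGGER events_touch_measurement ON events;"
    execute "DROP FUNCTION touch_measurement();"
  end
end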

Second, "every half hour" screams rake task run via cron to me; hope you're doing that too. FWIW, I usually use the Whenever gem to create the cron line I have to drop into the crontab of the user running the app. Note that it's designed to auto-create the cron task in a scripted deploy, but in a non-scripted one you can still leverage it, via the whenever command, to give you the lines you have to paste into your crontab.
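
A minimal config/schedule.rb for Whenever could look like this; the measurements:run task name is made up for illustration. Running whenever prints the generated crontab line, and whenever --update-crontab installs it for the current user.

every 30.minutes do
  rake "measurements:run"  # enqueues a MeasurementWorker job per client
end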

Also, you mention this is for measurements.

Have you considered leveraging something like Elasticsearch and the searchkick gem? It is a bit more of a complex setup (be sure to firewall the server you install ES on), but it might make your code a lot more manageable as you grow. It also gives you a good search mechanism almost for free, and it's distributed and more language-agnostic (e.g. Bloodhound, Java). :) Plus Kibana gives you a nice window into the ES records. :)
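
For a rough idea of the moving parts, Searchkick is enabled per model and queried roughly like this; the Event model and the query string are assumptions.

class Event < ActiveRecord::Base
  searchkick  # index this model into Elasticsearch
end

Event.reindex                      # build (or rebuild) the index
Event.search "connection timeout"  # full-text query served by Elasticsearch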
