
Handling mysql db connection

I am working on a high-performance real-time app, with node.js and mysql as the backend.

In order to increase performance, I have a separate node.js process updating the underlying mysql db. Update requests are enqueued to guarantee sequential execution (a must).
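The sequencing requirement above can be sketched in-process as a promise chain: each enqueued update only starts once the previous one has settled. This is a minimal sketch, not the poster's actual queue; the db write itself is only referenced as a hypothetical `runUpdate` in the comments:

```javascript
// Minimal in-process FIFO queue: each enqueued task starts only after
// the previous one has settled, guaranteeing sequential execution.
function createSerialQueue() {
  let tail = Promise.resolve();
  return function enqueue(task) {
    const next = tail.then(() => task());
    // Keep the chain alive even if a task rejects, so one failed
    // update does not block all later ones.
    tail = next.catch(() => {});
    return next;
  };
}

// Usage: enqueue(() => runUpdate(params)) — runUpdate stands in for
// the real mysql UPDATE; calls execute strictly in enqueue order.
```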

I am thinking about keeping a permanently open db connection in this process, in order not to lose time opening one on each request.

Other db requests (updates or reads) are served from the web-server node.js instance directly, possibly in parallel. These db connections are of course created/freed within each request.

Do you see any cons to this approach?

UPDATE:

Important additional information. I've chosen this stand-alone process solution basically for the following reason:

the logic that must be executed before each update is relatively complex and depends on the data structure in the database. Several additional queries would be necessary before each update. This stand-alone process holds the complete data structure in memory and can perform these checks very fast, with no db access (performance boost).

Another con to your approach.

MySQL is notorious for closing connections that have been open for a long time, based on timeouts (the server-side `wait_timeout` setting):

Lost connection to MySQL server during query

@duffymo is correct: short-lived connections drawn from a pool are much more likely to keep working for hundreds of hours than one long-opened connection.

node.js + mysql connection pooling
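The pooled alternative looks roughly like this with the `mysql` npm package. This is a sketch under assumptions: the host, credentials, and database name are placeholders, and `connectionLimit` is something you would tune to your workload:

```javascript
const mysql = require('mysql');

// A pool keeps a small set of connections warm and hands one out per
// query, so each request pays neither the connect cost nor the
// idle-timeout risk of a single long-lived connection.
const pool = mysql.createPool({
  connectionLimit: 10,   // assumption: tune to your workload
  host: 'localhost',     // placeholder credentials
  user: 'app',
  password: 'secret',
  database: 'mydb',
});

// pool.query acquires a connection, runs the query, and releases the
// connection back to the pool automatically.
pool.query('SELECT 1 + 1 AS two', (err, rows) => {
  if (err) throw err;
  console.log(rows[0].two);
});
```

Because the pool transparently replaces connections the server has dropped, the `wait_timeout` problem above largely disappears.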

I wonder: you say that sequential execution is a must. Large-scale DBMSs (including MySQL on a large server) are very competent at handling concurrent queries from multiple connections without corrupting data. Your system will probably be more robust if you can work out exactly what is mandatory about the sequencing of your updates. If you can implement that sequencing in the SQL itself, or possibly in transactions, you'll have a more failure-resistant system than if you insist that only one process do the updates.

Single-purpose processes like the one you mention aren't easy to debug in system test: they are notorious for failing after hundreds of hours for all kinds of reasons. When they fail in production, everybody is scrambling to restore them, so nobody has time to troubleshoot them.

I see cons.

Transactions are the biggest one. What happens if an UPDATE fails? Do you roll back and try again? Put the message back on the queue?
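With the `mysql` npm package, that rollback question maps onto an explicit transaction. A sketch under assumptions: the `items` table, its columns, and the retry-vs-requeue policy are all hypothetical, only the `beginTransaction`/`commit`/`rollback` calls are the driver's API:

```javascript
// Wrap the UPDATE in a transaction: on error, roll back and let the
// caller decide whether to retry or requeue the message.
function applyUpdate(connection, id, value, done) {
  connection.beginTransaction((err) => {
    if (err) return done(err);
    connection.query(
      'UPDATE items SET value = ? WHERE id = ?', // assumed schema
      [value, id],
      (err) => {
        if (err) {
          // UPDATE failed: roll back, then surface the error so the
          // caller can retry or put the message back on the queue.
          return connection.rollback(() => done(err));
        }
        connection.commit((commitErr) => {
          if (commitErr) return connection.rollback(() => done(commitErr));
          done(null);
        });
      }
    );
  });
}
```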

I think a better approach would be to open, use, and close connections in the narrowest scope possible. Create and maintain a pool of connections to amortize the cost of creation over many transactions.

I'd recommend looking into vert.x instead of node.js. It's node.js for the JVM. It uses non-blocking I/O and a ring-buffer event bus to guarantee sequential, fast operations.
