
Node.js web sockets server: Is my idea for data management stable/scalable?

I'm developing an HTML5 browser multi-player RPG with Node.js running in the backend and a web sockets plug-in for client data transfer. The problem I'm facing is accessing and updating user data; as you can imagine, this will be happening many times a second even with only a few users connected.

I've done some searching and found only two plug-ins for Node.js that enable MySQL capabilities, but they are both in early development, and I've figured that querying the database for every little action the user makes is not efficient.

My idea is to have Node.js access the database through PHP when a user connects and retrieve all the information related to that user. The information collected will then be stored in a JavaScript object in Node.js. This will happen for every user playing. Updates will then be applied to the object. When a user logs off, the data stored in the object will be written back to the database and deleted from the object.
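A minimal sketch of that connect/disconnect caching pattern might look like the following. The `/get_user.php` and `/save_user.php` endpoints, the socket.io event names, and the field layout are all assumptions made for illustration (and the global `fetch` assumes Node 18+), not anything specified in the question.

```javascript
// Sketch: in-memory user cache, loaded via a PHP endpoint on connect
// and flushed back on disconnect. URLs and field names are hypothetical;
// error handling is kept minimal.
const { Server } = require('socket.io');          // assumes socket.io is installed
const io = new Server(3000);

const users = new Map();                          // userId -> user data object

io.on('connection', async (socket) => {
  const userId = socket.handshake.query.userId;   // however you identify the player

  // Load everything for this user once, via the existing PHP layer.
  const res = await fetch(`http://localhost/get_user.php?id=${userId}`);
  users.set(userId, await res.json());

  // All gameplay updates touch only the in-memory object.
  socket.on('move', (pos) => {
    const u = users.get(userId);
    if (u) { u.x = pos.x; u.y = pos.y; }
  });

  // On logout, persist the object back through PHP and drop it.
  socket.on('disconnect', async () => {
    const u = users.get(userId);
    if (!u) return;
    await fetch('http://localhost/save_user.php', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(u),
    });
    users.delete(userId);
  });
});
```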

A few things to note: I will separate different types of data into different objects so that more commonly accessed data isn't mixed in with data that would slow down lookups. Theoretically, if this project gained a lot of users, I would cap how many users can log onto a single server at a time, for obvious reasons.
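As a rough illustration of that split (the particular field groupings are hypothetical), the hot, frequently written state could live in one map while the rarely touched data lives in another:

```javascript
// Hypothetical split of per-user state: hot data that changes every tick
// vs. cold data that is only read occasionally and written at logout.
const hotState  = new Map();  // userId -> { x, y, hp }         (updated many times/sec)
const coldState = new Map();  // userId -> { inventory, stats } (rarely touched)

function handleMove(userId, pos) {
  const s = hotState.get(userId);
  if (s) { s.x = pos.x; s.y = pos.y; }   // lookups never scan the large cold objects
}
```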

I would like to know if this is a good idea. Would having large objects considerably slow down the Node.js server? If you have any ideas for other possible solutions to my situation, I welcome them.

Thanks

As far as your strategy goes, keeping the data in intermediate objects and going through PHP adds a very high level of complexity to your application.

Just the communication between Node.js and PHP seems complex, and there is no guarantee this will be any faster than putting things straight into MySQL. Putting any unneeded barrier between you and your data is going to make things more difficult to manage.

It seems like you need a more rapid data solution. You could consider using an asynchronous database like MongoDB or Redis that will read and write quickly (Redis writes to memory, so it should be incredibly fast).

Both are commonly used with Node.js precisely because they can handle real-time data loads.

Actually, Redis is what you're really asking for: it stores things in memory and then persists them to disk periodically. You can't get any faster than that, but you will need enough RAM. If RAM looks like an issue, go with MongoDB, which is still really fast.
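A minimal sketch of keeping per-user state in Redis from Node.js, using the `redis` npm client (v4+); the key and field names are assumptions used only for illustration:

```javascript
// Sketch: per-user game state in a Redis hash, read and written from Node.js.
// Redis keeps the data in memory and persists to disk on its own schedule (RDB/AOF).
const { createClient } = require('redis');

async function main() {
  const client = createClient();            // defaults to localhost:6379
  await client.connect();

  // Write the player's current state.
  await client.hSet('user:42', { x: '10', y: '7', hp: '100' });

  // Read it back on the next request; this is an in-memory lookup.
  const state = await client.hGetAll('user:42');
  console.log(state);                       // { x: '10', y: '7', hp: '100' }

  await client.quit();
}

main().catch(console.error);
```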

The disadvantage is that you will need to relearn your ideas about data persistence, and that is hard. I'm in the process of doing that myself!

I have an application doing almost what you describe. I chose to do it that way since the MySQL drivers for Node were unstable/undocumented at the time of development.

I have 200 connected users requesting data 3-5 times each second, and I fetch entire tables through PHP pages (every 200-800 ms) returning JSON from Apache, with approximately 1000 rows, and put the contents into arrays. I loop through the arrays and find the relevant data on request. It works, and it's fast, putting no significant load on CPU or memory.
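A rough sketch of that polling approach, assuming a hypothetical `/players.php` page that returns the table as a JSON array (and Node 18+ for the global `fetch`):

```javascript
// Sketch: periodically pull a whole table as JSON from a PHP page served by
// Apache, keep it in an array, and answer client requests from memory.
// The URL, interval, and row fields are hypothetical.
let rows = [];                               // latest snapshot of the table

async function refresh() {
  const res = await fetch('http://localhost/players.php');
  rows = await res.json();                   // e.g. ~1000 rows of player data
}

setInterval(() => refresh().catch(console.error), 500);  // every 200-800 ms in practice

// On a websocket request, scan the in-memory array instead of hitting MySQL.
function findPlayer(id) {
  return rows.find((r) => r.id === id);
}
```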

All data insertion/updating, which is limited, goes through PHP/MySQL.

Advantages:
1. It's a simple solution, with known stable services.
2. Only 1 client connects to Apache/PHP/MySQL every 200-800 ms.
3. All Node clients get the benefit of non-blocking I/O.
4. Runs on 2 small "pc-style" servers and handles about 8000 req/second (apache bench).

Disadvantages:
1. Many, but it gets the job done.

I found that my Node script could stop 1-2 times a week, maybe due to some connection problems (unsolved), but combined with Upstart and Monit it restarts and alerts with no problems.
