
Synchronize multiple processes across multiple machines through the file system

I'm writing an application that receives packages over the network; consumer applications then pick these packages up from it. Now I would like to make the architecture scalable. Packages are stored temporarily on the file system until a consumer picks them up. If I store ready-to-send packages in a folder, I need some way to distinguish packages that are currently being processed from packages that are still waiting to be processed. The idea is to create a special lock file in each package folder and keep it locked until the package transfer is completed (on completion, delete the lock file and the package folder itself). The alternative is to use some kind of shared resource, for example a message queue or a database. The question is whether using the file system for this kind of interprocess communication is safe enough.
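For reference, here is a minimal sketch of the lock-file idea, assuming Python on a POSIX-like system. Instead of holding an advisory lock, this variant relies on atomic exclusive file creation as the "being processed" marker; the spool directory path, the `.lock` file name, and all helper names are hypothetical. Note that exclusive creation is not always reliable on older network filesystems, and a crashed consumer leaves a stale lock file behind.

```python
import os
import shutil

PACKAGE_ROOT = "/var/spool/packages"  # hypothetical spool directory


def try_claim(package_dir):
    """Try to claim a package by atomically creating a lock file.

    Returns an open file descriptor on success, or None if another
    consumer has already claimed this package.
    """
    lock_path = os.path.join(package_dir, ".lock")
    try:
        # O_CREAT | O_EXCL makes creation fail if the file already exists,
        # so only one consumer can claim a given package folder.
        return os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return None


def release(package_dir, lock_fd):
    """Finish processing: close the lock and remove the whole package folder."""
    os.close(lock_fd)
    shutil.rmtree(package_dir)


def next_unclaimed():
    """Scan the spool directory for a package that nobody has claimed yet."""
    for name in sorted(os.listdir(PACKAGE_ROOT)):
        package_dir = os.path.join(PACKAGE_ROOT, name)
        if not os.path.isdir(package_dir):
            continue
        fd = try_claim(package_dir)
        if fd is not None:
            return package_dir, fd
    return None
```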

You can use any medium to store your data (it will end up on the filesystem one way or another anyway). The main question you should ask yourself is what functionality you want to have at hand: it might be quite problematic to scale raw files, it might be hard to implement layers of security (role-based access, for example), etc.

Your case seems to be nicely solved by RabbitMQ, as it provides a versatile queuing mechanism as well as persistence and scalability. I don't know the details of your project, but I'd also advise looking at Redis for similar functionality plus the ability to modify objects in the "queue".
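To illustrate what the queue-based handoff could look like, here is a minimal sketch using the pika Python client for RabbitMQ: the receiving application publishes the path of a stored package to a durable queue, and a consumer acknowledges the message only after the transfer finishes, so unacknowledged packages are redelivered to another consumer. The queue name, broker host, and the caller-supplied transfer function are assumptions, not anything from the original question.

```python
import pika

QUEUE = "packages"   # hypothetical queue name
HOST = "localhost"   # hypothetical broker host


def announce_package(package_dir):
    """Producer side: publish the path of a stored package to a durable queue."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=HOST))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_publish(
        exchange="",
        routing_key=QUEUE,
        body=package_dir,
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()


def consume_packages(transfer):
    """Consumer side: process one package at a time, ack only after success."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=HOST))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.basic_qos(prefetch_count=1)  # at most one in-flight package per consumer

    def on_package(ch, method, properties, body):
        transfer(body.decode())                        # caller-supplied transfer step
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue=QUEUE, on_message_callback=on_package)
    channel.start_consuming()
```

The acknowledgement-after-transfer pattern is what replaces the lock file: the broker, rather than the filesystem, tracks which packages are in flight.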

In any case, file-based storage might be a good idea to experiment with (to build an MVP), but I'd advise against it in the long run.
