
PHP application to Desktop Application

I have developed a PHP application for my company, and I have just discovered that the application must also work offline. The application works like this: some data is loaded from a MySQL database, then you have to fill in some checklists, insert new data into the database and, in the end, generate a JSON file.

The application will be used by a lot of people in our company, so we thought about installing a web server (Apache) on their computers and making the application run on their machines. The problem is that, if we decide to go this way, we have to:

  • Download all the data from MySQL BEFORE starting the application (while the user has internet access) and save it into a JSON file (a rough sketch of this export follows the list)
  • Change all the queries in the project so they read the data from the JSON file instead of the database
  • Rework the many functions which insert data into the database in real time, probably by writing to SQLite locally and later transferring the data to the MySQL database
  • Accept that the people who use this program would have access to ALL the PHP files and could modify them at any time
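
For reference, the pre-download step would look roughly like this. It is only a sketch: the connection details and the checklists table are placeholders for our real schema.

    <?php
    // Rough sketch of the pre-download step: while the user is online,
    // dump the data needed offline into a local JSON file.
    // DSN, credentials and the `checklists` table are placeholders.
    $pdo = new PDO('mysql:host=central.example.com;dbname=app', 'user', 'secret', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $rows = $pdo->query('SELECT * FROM checklists')->fetchAll(PDO::FETCH_ASSOC);

    // The offline application later reads this snapshot instead of querying MySQL.
    file_put_contents(
        __DIR__ . '/offline-data/checklists.json',
        json_encode($rows, JSON_PRETTY_PRINT | JSON_UNESCAPED_UNICODE)
    );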

We don't have the time to build a real desktop Java application, because this app will be in use starting from January, so there is no time to develop one.

Have you got any suggestions? Is there something I'm not thinking about, or a technology which could help me? Thank you!

PS. I have considered programs like Nightrain or PHP Desktop, but they only save me from installing Apache, nothing more...

Introduction

Since you obviously need a fast solution, I'll give you one, based on the information we have. Warning: this solution is not elegant, and you WILL NEED to replace it when you get the chance.

Solution

  1. Drop all of your primary and foreign keys.
  2. Replace them with BINARY(16) columns carrying a plain (non-unique) index.

Every record will need its pseudo-primary key to be randomly generated with a CSPRNG (cryptographically secure random number generator); BINARY(16) is simply convenient because it matches the size of a UUID. This ensures each new record remains uniquely indexed even though no copy of the database knows which keys the other copies have generated.

Your tables won't have primary key indexes, because a primary key must be unique, and since the database will be distributed there is no way to check uniqueness at insert time anyway, so there is no point enforcing it.
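
As a minimal sketch (table and column names are assumptions, not your real schema): PHP's random_bytes() is a CSPRNG and produces exactly the 16 bytes a BINARY(16) column expects.

    <?php
    // Sketch: generate the pseudo-primary key client-side with a CSPRNG.
    // random_bytes() is cryptographically secure; 16 bytes matches BINARY(16).
    function newBinaryKey(): string
    {
        return random_bytes(16); // 128 random bits, collisions are negligible
    }

    // Example of binding the key when writing to the local offline store.
    // The checklist_entries table and its columns are illustrative only.
    $local = new PDO('sqlite:' . __DIR__ . '/offline.sqlite');
    $local->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $stmt = $local->prepare(
        'INSERT INTO checklist_entries (id, payload, created_at) VALUES (:id, :payload, :created_at)'
    );
    $stmt->execute([
        ':id'         => newBinaryKey(),
        ':payload'    => json_encode(['pump_pressure' => 'OK']),
        ':created_at' => date('Y-m-d H:i:s'),
    ]);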

  3. Each laptop will need a copy of the entire database.
  4. Each laptop will only be allowed to add new data, never delete or modify base data.

In fact, as a rule, all data in the central database will be write-once/read-only from now on. No matter how erroneous the newly merged data is, it must never be deleted or modified.

  5. New data should be regarded as "updates" based on its timestamp.

So every table will need a timestamp column.
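
To illustrate, a table under this scheme would look roughly like the following (table name and payload column are just placeholders): a non-unique index on the random BINARY(16) key, plus a timestamp on every row.

    <?php
    // Illustrative only: one possible shape for a table under this scheme.
    // Random BINARY(16) key with a plain (non-unique) index, and a timestamp
    // on every row so that newer data can supersede older data.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $pdo->exec('
        CREATE TABLE checklist_entries (
            id         BINARY(16) NOT NULL,
            payload    JSON       NOT NULL,
            created_at TIMESTAMP  NOT NULL DEFAULT CURRENT_TIMESTAMP,
            INDEX idx_checklist_entries_id (id)
        ) ENGINE=InnoDB
    ');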

  6. Finally, keep a record of when each database copy was distributed, so you know which data needs to be merged back into the central database.
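
When a laptop comes back online, the merge could look roughly like this. Connection details, the checklist_entries table and the recorded distribution timestamp are all assumptions for the sake of the sketch.

    <?php
    // Sketch of the merge-back step: copy every local row created after the
    // recorded distribution timestamp into the central database, append-only.
    // Connection details, table name and the bookkeeping value are assumptions.
    $local   = new PDO('sqlite:' . __DIR__ . '/offline.sqlite');
    $central = new PDO('mysql:host=central.example.com;dbname=app', 'user', 'secret');
    $central->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $distributedAt = '2024-01-02 08:00:00'; // recorded when this copy was handed out

    $rows = $local->prepare(
        'SELECT id, payload, created_at FROM checklist_entries WHERE created_at > :cutoff'
    );
    $rows->execute([':cutoff' => $distributedAt]);

    $insert = $central->prepare(
        'INSERT INTO checklist_entries (id, payload, created_at) VALUES (:id, :payload, :created_at)'
    );

    foreach ($rows as $row) {
        // Append only: rows already in the central database are never touched.
        $insert->execute([
            ':id'         => $row['id'],
            ':payload'    => $row['payload'],
            ':created_at' => $row['created_at'],
        ]);
    }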

What you are left with is a central database that takes in all data, where changes to data are represented by the presence of newer records.
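
Reading the "current" state then means picking the newest row per key. A rough sketch, assuming the checklist_entries table from the earlier sketches and that an "update" reuses the same BINARY(16) id with a newer timestamp:

    <?php
    // Sketch: resolve the current state by keeping only the newest row per key.
    // Assumes updates reuse the same BINARY(16) id with a newer timestamp;
    // the table and columns follow the earlier illustrative schema.
    $pdo = new PDO('mysql:host=central.example.com;dbname=app', 'user', 'secret');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $sql = '
        SELECT e.*
        FROM checklist_entries e
        JOIN (
            SELECT id, MAX(created_at) AS latest
            FROM checklist_entries
            GROUP BY id
        ) newest
          ON newest.id     = e.id
         AND newest.latest = e.created_at
    ';

    foreach ($pdo->query($sql) as $row) {
        // $row is the most recent version of each logical record.
    }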

Conclusion

I'd only use this solution if I really had to. In fact, I'd estimate only an 80% chance of it even working, and at sub-standard quality. It also assumes that you can devote all remaining development time to refactoring the data-insertion methods.

You are going to have to accept that a LOT of administration work will be needed on the central database to keep the data consistent, and you will have to work on the assumption that you can't change the format of the input being merged from the laptops.

Every new feature will need to be backwards compatible with old data.
