
Mainframe to Cloud - File Based Processing

We are working on migrating a high-volume mainframe batch application to a distributed/cloud platform using Java/Java Batch. The current application deals with a lot of files (VSAM and flat, 100+ altogether from different sources) and IO modules. We are thinking of loading this data into an Oracle database and then retrieving and processing it there, so there will be millions of transactions hitting the Oracle DB.

We are concerned about performance on Oracle because of millions of transactions hitting the DB during the batch window.
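For illustration, here is a rough sketch of the load step we have in mind, assuming plain JDBC with batched inserts into Oracle (the connection details, staging table, and record layout are invented for the example; in practice this would likely live inside a Java Batch chunk step):

import java.io.BufferedReader;
import java.math.BigDecimal;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class FlatFileToOracleLoader {

    private static final int BATCH_SIZE = 1_000; // flush/commit interval

    public static void main(String[] args) throws Exception {
        Path inputFile = Path.of(args[0]);

        // Hypothetical connection details; the Oracle JDBC driver (ojdbc) must be on the classpath
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/BATCHDB", "batch_user", "secret");
             BufferedReader reader = Files.newBufferedReader(inputFile)) {

            conn.setAutoCommit(false); // commit per batch, not per row
            try (PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO STAGING_TXN (ACCT_ID, TXN_AMT, TXN_DATE) VALUES (?, ?, ?)")) {

                String line;
                int count = 0;
                while ((line = reader.readLine()) != null) {
                    // Assume a fixed-length record layout; the field positions are illustrative only
                    ps.setString(1, line.substring(0, 10).trim());
                    ps.setBigDecimal(2, new BigDecimal(line.substring(10, 21).trim()));
                    ps.setString(3, line.substring(21, 29).trim()); // date kept as text in the staging table
                    ps.addBatch();

                    if (++count % BATCH_SIZE == 0) {
                        ps.executeBatch(); // one round trip for the whole batch
                        conn.commit();
                    }
                }
                ps.executeBatch();
                conn.commit();
            }
        }
    }
}

Batching and committing every few thousand rows keeps round trips and commits well below one per record, which is where most of the per-transaction overhead on the database side comes from.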

The other approach we are considering is consuming the files themselves.

With NAS storage, the argument is that even reading/writing a file goes over the network.

Will there be any downsides to file-based processing in a cloud environment?
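To make the comparison concrete, here is a minimal sketch of consuming a file directly, assuming fixed-length records read sequentially with buffered IO (the record layout and field positions are invented):

import java.io.BufferedReader;
import java.nio.file.Files;
import java.nio.file.Path;

public class DirectFileProcessor {

    public static void main(String[] args) throws Exception {
        Path inputFile = Path.of(args[0]);

        long total = 0;
        try (BufferedReader reader = Files.newBufferedReader(inputFile)) {
            String line;
            while ((line = reader.readLine()) != null) {
                // Hypothetical fixed-length layout: account id, then amount in cents
                String acctId = line.substring(0, 10).trim();
                long amountCents = Long.parseLong(line.substring(10, 21).trim());

                // Real business logic would go here; the sketch just aggregates
                total += amountCents;
            }
        }
        System.out.println("Processed total (cents): " + total);
    }
}

Even on NAS, a sequential buffered read streams the file in large chunks rather than making one network call per record, so the network cost is amortised in much the same way a database bulk load amortises it.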

How can we scale the application depending on the size/load?
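One scaling pattern that applies to either storage choice is partitioning: split the input by file (or by key range within a file) and process the partitions in parallel, either as threads in one JVM or as separate instances. Below is a minimal single-JVM sketch using an ExecutorService; the partitioning by file and the per-partition work are purely illustrative:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PartitionedRunner {

    public static void main(String[] args) throws Exception {
        Path inputDir = Path.of(args[0]);
        int workers = Integer.parseInt(args[1]); // size this to the load and the core count

        // Partition by file: each worker task handles one input file
        List<Path> files;
        try (Stream<Path> s = Files.list(inputDir)) {
            files = s.filter(Files::isRegularFile).collect(Collectors.toList());
        }

        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Callable<Long>> tasks = files.stream()
                    .map(PartitionedRunner::taskFor)
                    .collect(Collectors.toList());

            long grandTotal = 0;
            for (Future<Long> f : pool.invokeAll(tasks)) {
                grandTotal += f.get(); // surfaces any worker failure
            }
            System.out.println("Records processed: " + grandTotal);
        } finally {
            pool.shutdown();
        }
    }

    // Hypothetical per-partition work: counting lines stands in for real processing
    private static Callable<Long> taskFor(Path file) {
        return () -> {
            try (Stream<String> lines = Files.lines(file)) {
                return lines.count();
            }
        };
    }
}

The same split carries over to Java Batch (JSR-352) partitioned steps, or to running one container instance per partition, so the degree of parallelism can follow the input size.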

In the modern world, how are these kinds of applications migrated/rearchitected to be cloud-based/cloud-friendly?

Using Hadoop/Spark clusters is not an option, for whatever reason.

Any suggestions? Thank you!!

Why not convert some of the workload over to zLinux? Load the data into Db2 on z/OS, but use zLinux hosts for consumption. zLinux can run under z/VM, so you could have many instances. This helps maximize your hardware investment.
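If you go that route, the zLinux consumers would reach Db2 for z/OS over the type 4 JDBC driver (db2jcc). A minimal connection sketch; the host, port, location, table, and credentials below are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class Db2ZosReader {

    public static void main(String[] args) throws Exception {
        // Type 4 (DRDA over TCP/IP) URL: jdbc:db2://<host>:<port>/<location>
        String url = "jdbc:db2://zoshost.example.com:446/DB2LOC01"; // placeholder values

        try (Connection conn = DriverManager.getConnection(url, "batchusr", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT ACCT_ID, TXN_AMT FROM STAGING_TXN FETCH FIRST 10 ROWS ONLY");
             ResultSet rs = ps.executeQuery()) {

            while (rs.next()) {
                System.out.println(rs.getString("ACCT_ID") + " " + rs.getBigDecimal("TXN_AMT"));
            }
        }
    }
}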
