How does data locality work with OpenStack Swift on IBM Bluemix?

I'm currently playing around with the Apache Spark Service in IBM Bluemix. Since the IBM Cloud relies on OpenStack Swift as the data storage for this service, I'm wondering whether any data locality is possible with that architecture.

If I understand correctly, with HDFS the Spark driver asks the HDFS NameNode which DataNodes hold the various blocks of a file, and then schedules the work onto the Spark workers accordingly.
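
For illustration, here is a minimal sketch of how that location information can be queried over WebHDFS (assuming Hadoop 3+, where the GETFILEBLOCKLOCATIONS operation is exposed; the NameNode address and file path are placeholders):

    import requests

    # Hypothetical NameNode address and file path.
    NAMENODE = "http://namenode.example.com:9870"
    PATH = "/data/input.csv"

    # Ask the NameNode which DataNodes hold each block of the file --
    # the same information the Spark driver uses to schedule work locally.
    resp = requests.get(f"{NAMENODE}/webhdfs/v1{PATH}",
                        params={"op": "GETFILEBLOCKLOCATIONS"})
    resp.raise_for_status()

    for block in resp.json()["BlockLocations"]["BlockLocation"]:
        # Each entry gives one block's byte offset, length, and the
        # hosts that store a replica of it.
        print(block["offset"], block["length"], block["hosts"])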

So I've checked the Swift API: there is a Range parameter that would at least allow each Spark worker to read only its own blocks of an object. But how can the Spark driver find out these ranges?
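
The Range parameter is just the standard HTTP Range header, so in principle each worker could fetch only its own byte range, along these lines (a sketch with a placeholder object URL and auth token):

    import requests

    # Hypothetical Swift object URL and auth token.
    SWIFT_URL = "https://swift.example.com/v1/AUTH_acct/container/object"
    TOKEN = "AUTH_tk0123456789"

    # Fetch only the first 1 MiB of the object; Swift answers with
    # 206 Partial Content containing just that slice.
    resp = requests.get(SWIFT_URL,
                        headers={"X-Auth-Token": TOKEN,
                                 "Range": "bytes=0-1048575"})
    resp.raise_for_status()
    chunk = resp.content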

Any ideas?

This is the disaggregation of compute and storage: the Spark compute nodes are not co-located with the Swift storage nodes at all. This lets compute scale independently of storage, and vice versa. But in this model you cannot have data locality, by definition.

So how this works, roughly, is that each Spark executor pulls its own range of blocks of the object from the Swift cluster, so that no executor needs to pull in all of the object's data just to operate on its own portion, which would be inefficient. But the blocks are still pulled from the remote Swift cluster, so they are not local. The only question is how long it takes to pull the blocks into each executor, and whether that slows you down. In the case of the Bluemix Apache Spark Service and the Bluemix or SoftLayer Object Storage service, there is low latency and a fast network between them.
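
As for how the driver can derive those ranges in the first place: one straightforward approach is to HEAD the object for its total size and split it into even byte ranges, one per partition. A minimal sketch, with the same placeholder URL and token as above:

    import requests

    SWIFT_URL = "https://swift.example.com/v1/AUTH_acct/container/object"
    TOKEN = "AUTH_tk0123456789"
    NUM_PARTITIONS = 4

    # HEAD returns the object's metadata, including its total size.
    head = requests.head(SWIFT_URL, headers={"X-Auth-Token": TOKEN})
    head.raise_for_status()
    size = int(head.headers["Content-Length"])

    # Split [0, size) into NUM_PARTITIONS contiguous byte ranges;
    # each executor then issues a Range request for its own slice.
    step = -(-size // NUM_PARTITIONS)  # ceiling division
    ranges = [(start, min(start + step, size) - 1)
              for start in range(0, size, step)]
    print(ranges)

In practice the Swift connectors for Hadoop/Spark do essentially this partitioning on your behalf; the sketch just makes the mechanism explicit.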

re: "Since the IBM Cloud relies on OpenStack Swift as Data Storage for this service". There will be other data sources available to the spark service as the beta progresses, so it will not be 100% reliance.
