I'm using MirroredStrategy to perform multi-GPU training, and it doesn't appear to be properly sharding the data. How do you go about manually sharding data?

I know that I could use the shard method of a tf.data dataset, but for that I need access to the worker ID, and I can't figure out how to get it. How do I access the worker IDs?
MirroredStrategy runs on a single worker (for multiple workers there is MultiWorkerMirroredStrategy). Because it runs on only one worker, MirroredStrategy runs a single Dataset pipeline without any data sharding: at each step it takes one batch from that pipeline and divides it among the replicas (one per GPU), so there are no worker IDs to retrieve.
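If you do switch to MultiWorkerMirroredStrategy and want to shard manually, the worker's input-pipeline ID is exposed through tf.distribute.InputContext, which is passed to the function you hand to strategy.distribute_datasets_from_function. A minimal sketch (the toy range dataset and batch size are made up for illustration; here InputContext objects are built directly to show the sharding, whereas under a real strategy they are supplied for you):

```python
import tensorflow as tf

def dataset_fn(input_context):
    # Shard the dataset so each input pipeline (worker) sees a
    # disjoint subset of the elements.
    ds = tf.data.Dataset.range(8)
    ds = ds.shard(
        num_shards=input_context.num_input_pipelines,
        index=input_context.input_pipeline_id,
    )
    return ds.batch(2)

# Under MultiWorkerMirroredStrategy you would instead call
# strategy.distribute_datasets_from_function(dataset_fn).
ctx0 = tf.distribute.InputContext(num_input_pipelines=2, input_pipeline_id=0)
ctx1 = tf.distribute.InputContext(num_input_pipelines=2, input_pipeline_id=1)

shard0 = [int(x) for batch in dataset_fn(ctx0) for x in batch]
shard1 = [int(x) for batch in dataset_fn(ctx1) for x in batch]
print(shard0)  # elements assigned to worker 0: 0, 2, 4, 6
print(shard1)  # elements assigned to worker 1: 1, 3, 5, 7
```

Note that Dataset.shard hands element i to shard i % num_shards, so the two workers above get the even and odd elements respectively.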