
tf.keras manual device placement

Migrating to TF 2.0, I'm trying to use the tf.keras approach for solving things. In standard TF, I can use with tf.device(...) to control where ops are placed.
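For instance, outside of Keras the pattern is simply the following (a minimal sketch; the tensor names are illustrative, not from the original post):

import tensorflow as tf

# Ops created under a device scope are pinned to that device.
with tf.device("/CPU:0"):
    a = tf.random.uniform((4, 4))  # lives on the CPU
with tf.device("/GPU:0"):
    b = tf.matmul(a, a)            # runs on the GPU, if one is present
print(b.device)                    # e.g. ".../device:GPU:0"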

For example, I might have a model something like:


model = tf.keras.Sequential([tf.keras.layers.Input(...),
                             tf.keras.layers.Embedding(...),
                             tf.keras.layers.LSTM(...),
                             ...])

Assuming I want to have the network up to and including the Embedding layer on the CPU, and everything from there on on the GPU, how would I go about that? (This is just an example; the layers could have nothing to do with embeddings.)

If the solution involves subclassing tf.keras.Model, that is OK too; I don't mind not using Sequential.

You can use the Keras functional API, placing each layer call under the device scope you want:

inputs = tf.keras.layers.Input(...)
with tf.device("/CPU:0"):
    x = tf.keras.layers.Embedding(...)(inputs)
with tf.device("/GPU:0"):
    outputs = tf.keras.layers.LSTM(...)(x)

model = tf.keras.Model(inputs=inputs, outputs=outputs)
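A filled-in, runnable version of the same idea, with the devices matching the asked-for split (embedding on CPU, LSTM on GPU); the hyperparameters VOCAB_SIZE, EMBED_DIM, LSTM_UNITS, and SEQ_LEN are assumed values for illustration, not from the original post:

import tensorflow as tf

# Assumed hyperparameters, chosen only so the example runs.
VOCAB_SIZE, EMBED_DIM, LSTM_UNITS, SEQ_LEN = 10000, 128, 64, 50

inputs = tf.keras.layers.Input(shape=(SEQ_LEN,), dtype=tf.int32)
with tf.device("/CPU:0"):
    x = tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM)(inputs)  # CPU part
with tf.device("/GPU:0"):
    outputs = tf.keras.layers.LSTM(LSTM_UNITS)(x)  # GPU part

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()

If you prefer subclassing tf.keras.Model, as the question allows, the same device scopes can go inside call(). A minimal sketch (the class name and constructor arguments are illustrative):

class SplitDeviceModel(tf.keras.Model):
    def __init__(self, vocab_size, embed_dim, lstm_units):
        super().__init__()
        self.embedding = tf.keras.layers.Embedding(vocab_size, embed_dim)
        self.lstm = tf.keras.layers.LSTM(lstm_units)

    def call(self, inputs):
        with tf.device("/CPU:0"):
            x = self.embedding(inputs)  # embedding lookup on the CPU
        with tf.device("/GPU:0"):
            return self.lstm(x)         # recurrent part on the GPU

Note that if no GPU is actually available, "/GPU:0" raises an error in eager mode by default; tf.config.set_soft_device_placement(True) lets TensorFlow fall back to an available device instead.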
