
TF.Keras model.predict is slower than straight Numpy?

Thanks everyone for helping me understand the issue below. I have updated the question and produced a CPU-only run and a GPU-only run. In either case, the straight numpy computation is hundreds of times faster than model.predict(). Hopefully this clarifies that this does not appear to be a CPU vs GPU issue (and if it is, I would love an explanation).


Let's create a trained model with keras.

import tensorflow as tf

(X,Y),(Xt,Yt) = tf.keras.datasets.mnist.load_data()

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1000,'relu'),
    tf.keras.layers.Dense(100,'relu'),
    tf.keras.layers.Dense(10,'softmax'),
])
model.compile('adam','sparse_categorical_crossentropy')
model.fit(X,Y,epochs=20,batch_size=1024)

Now let's recreate model.predict using numpy:

import numpy as np

W = model.get_weights()

def predict(X):
    X      = X.reshape((X.shape[0],-1))           #Flatten
    X      = X @ W[0] + W[1]                      #Dense
    X[X<0] = 0                                    #Relu
    X      = X @ W[2] + W[3]                      #Dense
    X[X<0] = 0                                    #Relu
    X      = X @ W[4] + W[5]                      #Dense
    X      = np.exp(X)/np.exp(X).sum(1)[...,None] #Softmax
    return X

We can easily verify that these are the same function (modulo machine error in the implementations).

print(model.predict(X[:100]).argmax(1))
print(predict(X[:100]).argmax(1))

We can also test how fast these functions run. Using ipython:

%timeit model.predict(X[:10]).argmax(1) # 10   loops takes 37.7 ms
%timeit predict(X[:10]).argmax(1)       # 1000 loops takes 356 µs

I find that predict runs on the order of 10,000x faster than model.predict at small batch sizes, dropping to around 100x at larger batch sizes. In any case, why is predict so much faster? In fact, predict isn't even optimized; we could use numba, or even rewrite predict directly in C and compile it.
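
For instance, a rough numba sketch of that idea might look like the following (hypothetical code: it assumes numba is installed, together with scipy, which numba's 2-D matmul support relies on, and that the inputs are contiguous float32 arrays like the Keras weights; any gain over plain numpy would likely be modest, since the heavy lifting already happens in BLAS).

import numba
import numpy as np

@numba.njit(cache=True)
def dense_relu(x, w, b):
    # 2-D float32 matmul is compiled down to a BLAS call; dtypes of x, w, b must match
    return np.maximum(x @ w + b, np.float32(0.0))

def predict_numba(X, W):
    X = np.ascontiguousarray(X.reshape((X.shape[0], -1)), dtype=np.float32)
    X = dense_relu(X, W[0], W[1])
    X = dense_relu(X, W[2], W[3])
    X = X @ W[4] + W[5]
    E = np.exp(X - X.max(1, keepdims=True))  # numerically stable softmax
    return E / E.sum(1, keepdims=True)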

For deployment purposes, why is manually pulling the weights out of the model and rewriting the function thousands of times faster than what keras does internally? It would also mean that writing a script that uses an .h5 file, or similar, could be much slower than rewriting the prediction function by hand. In general, is that true?
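
On the deployment point, one keras-free route is to export the weights once and ship only numpy plus the weight file; a minimal sketch (the file name weights.npz is hypothetical) is:

import numpy as np

# export once, wherever the trained model is available
np.savez('weights.npz', *model.get_weights())

# later, at inference time, no tensorflow/keras import is needed
data = np.load('weights.npz')
W = [data[f'arr_{i}'] for i in range(len(data.files))]

The predict() above can then run against this W.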


Ipython Output (CPU):

Python 3.8.5 (default, Sep  3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.
PyDev console: using IPython 7.19.0
Python 3.8.5 (default, Sep  3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] on win32
import os
os.environ["CUDA_VISIBLE_DEVICES"]="-1"    
import tensorflow as tf
(X,Y),(Xt,Yt) = tf.keras.datasets.mnist.load_data()
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1000,'relu'),
    tf.keras.layers.Dense(100,'relu'),
    tf.keras.layers.Dense(10,'softmax'),
])
model.compile('adam','sparse_categorical_crossentropy')
model.fit(X,Y,epochs=20,batch_size=1024)
2021-04-19 15:10:58.323137: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
2021-04-19 15:11:01.990590: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library nvcuda.dll
2021-04-19 15:11:02.039285: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2021-04-19 15:11:02.042553: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: DESKTOP-G0U8S3P
2021-04-19 15:11:02.043134: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: DESKTOP-G0U8S3P
2021-04-19 15:11:02.128834: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:127] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/20
59/59 [==============================] - 4s 60ms/step - loss: 35.3708
Epoch 2/20
59/59 [==============================] - 3s 58ms/step - loss: 0.8671
Epoch 3/20
59/59 [==============================] - 3s 56ms/step - loss: 0.5641
Epoch 4/20
59/59 [==============================] - 3s 56ms/step - loss: 0.4359
Epoch 5/20
59/59 [==============================] - 3s 56ms/step - loss: 0.3447
Epoch 6/20
59/59 [==============================] - 3s 56ms/step - loss: 0.2891
Epoch 7/20
59/59 [==============================] - 3s 56ms/step - loss: 0.2371
Epoch 8/20
59/59 [==============================] - 3s 57ms/step - loss: 0.1977
Epoch 9/20
59/59 [==============================] - 3s 57ms/step - loss: 0.1713
Epoch 10/20
59/59 [==============================] - 3s 57ms/step - loss: 0.1381
Epoch 11/20
59/59 [==============================] - 4s 61ms/step - loss: 0.1203
Epoch 12/20
59/59 [==============================] - 3s 57ms/step - loss: 0.1095
Epoch 13/20
59/59 [==============================] - 3s 56ms/step - loss: 0.0877
Epoch 14/20
59/59 [==============================] - 3s 57ms/step - loss: 0.0793
Epoch 15/20
59/59 [==============================] - 3s 56ms/step - loss: 0.0727
Epoch 16/20
59/59 [==============================] - 3s 56ms/step - loss: 0.0702
Epoch 17/20
59/59 [==============================] - 3s 56ms/step - loss: 0.0701
Epoch 18/20
59/59 [==============================] - 3s 57ms/step - loss: 0.0631
Epoch 19/20
59/59 [==============================] - 3s 56ms/step - loss: 0.0539
Epoch 20/20
59/59 [==============================] - 3s 58ms/step - loss: 0.0493
Out[3]: <tensorflow.python.keras.callbacks.History at 0x143069fdf40>

import numpy as np
W = model.get_weights()
def predict(X):
    X      = X.reshape((X.shape[0],-1))           #Flatten
    X      = X @ W[0] + W[1]                      #Dense
    X[X<0] = 0                                    #Relu
    X      = X @ W[2] + W[3]                      #Dense
    X[X<0] = 0                                    #Relu
    X      = X @ W[4] + W[5]                      #Dense
    X      = np.exp(X)/np.exp(X).sum(1)[...,None] #Softmax
    return X

%timeit model.predict(X[:10]).argmax(1) # 10   loops takes 37.7 ms
%timeit predict(X[:10]).argmax(1)       # 1000 loops takes 356 µs

52.8 ms ± 2.13 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
640 µs ± 10.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Ipython Output (GPU):

Python 3.7.7 (default, Mar 26 2020, 15:48:22) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.4.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import tensorflow as tf 
   ...:  
   ...: (X,Y),(Xt,Yt) = tf.keras.datasets.mnist.load_data() 
   ...:  
   ...: model = tf.keras.models.Sequential([ 
   ...:     tf.keras.layers.Flatten(), 
   ...:     tf.keras.layers.Dense(1000,'relu'), 
   ...:     tf.keras.layers.Dense(100,'relu'), 
   ...:     tf.keras.layers.Dense(10,'softmax'), 
   ...: ]) 
   ...: model.compile('adam','sparse_categorical_crossentropy') 
   ...: model.fit(X,Y,epochs=20,batch_size=1024)                                                                                                                                                                   
2020-07-01 15:50:46.008518: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-07-01 15:50:46.054495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:05:00.0
2020-07-01 15:50:46.059582: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-07-01 15:50:46.114562: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-07-01 15:50:46.142058: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-07-01 15:50:46.152899: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-07-01 15:50:46.217725: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-07-01 15:50:46.260758: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-07-01 15:50:46.374328: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-01 15:50:46.376747: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-07-01 15:50:46.377688: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX FMA
2020-07-01 15:50:46.433422: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 4018875000 Hz
2020-07-01 15:50:46.434383: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563e4d0d71c0 executing computations on platform Host. Devices:
2020-07-01 15:50:46.435119: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
2020-07-01 15:50:46.596077: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x563e4a9379f0 executing computations on platform CUDA. Devices:
2020-07-01 15:50:46.596119: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): GeForce RTX 2080 Ti, Compute Capability 7.5
2020-07-01 15:50:46.597894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:05:00.0
2020-07-01 15:50:46.597961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-07-01 15:50:46.597988: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-07-01 15:50:46.598014: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-07-01 15:50:46.598040: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-07-01 15:50:46.598065: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-07-01 15:50:46.598090: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-07-01 15:50:46.598115: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-07-01 15:50:46.599766: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2020-07-01 15:50:46.600611: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-07-01 15:50:46.603713: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-07-01 15:50:46.603751: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2020-07-01 15:50:46.603763: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2020-07-01 15:50:46.605917: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10311 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:05:00.0, compute capability: 7.5)
Train on 60000 samples
Epoch 1/20
2020-07-01 15:50:49.995091: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
60000/60000 [==============================] - 2s 26us/sample - loss: 9.9370
Epoch 2/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.6094
Epoch 3/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.3672
Epoch 4/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.2720
Epoch 5/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.2196
Epoch 6/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.1673
Epoch 7/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.1367
Epoch 8/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.1082
Epoch 9/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0895
Epoch 10/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0781
Epoch 11/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0666
Epoch 12/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0537
Epoch 13/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0459
Epoch 14/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0412
Epoch 15/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0401
Epoch 16/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0318
Epoch 17/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0275
Epoch 18/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0237
Epoch 19/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0212
Epoch 20/20
60000/60000 [==============================] - 0s 4us/sample - loss: 0.0199
Out[1]: <tensorflow.python.keras.callbacks.History at 0x7f7c9000b550>

In [2]: import numpy as np 
   ...:  
   ...: W = model.get_weights() 
   ...:  
   ...: def predict(X): 
   ...:     X      = X.reshape((X.shape[0],-1))           #Flatten 
   ...:     X      = X @ W[0] + W[1]                      #Dense 
   ...:     X[X<0] = 0                                    #Relu 
   ...:     X      = X @ W[2] + W[3]                      #Dense 
   ...:     X[X<0] = 0                                    #Relu 
   ...:     X      = X @ W[4] + W[5]                      #Dense 
   ...:     X      = np.exp(X)/np.exp(X).sum(1)[...,None] #Softmax 
   ...:     return X 
   ...:                                                                                                                                                                                                            

In [3]: print(model.predict(X[:100]).argmax(1)) 
   ...: print(predict(X[:100]).argmax(1))                                                                                                                                                                          
[5 0 4 1 9 2 1 3 1 4 3 5 3 6 1 7 2 8 6 9 4 0 9 1 1 2 4 3 2 7 3 8 6 9 0 5 6
 0 7 6 1 8 7 9 3 9 8 5 9 3 3 0 7 4 9 8 0 9 4 1 4 4 6 0 4 5 6 1 0 0 1 7 1 6
 3 0 2 1 1 7 5 0 2 6 7 8 3 9 0 4 6 7 4 6 8 0 7 8 3 1]
/home/bobbyocean/anaconda3/bin/ipython3:12: RuntimeWarning: overflow encountered in exp
/home/bobbyocean/anaconda3/bin/ipython3:12: RuntimeWarning: invalid value encountered in true_divide
[5 0 4 1 9 2 1 3 1 4 3 5 3 6 1 7 2 8 6 9 4 0 9 1 1 2 4 3 2 7 3 8 6 9 0 5 6
 0 7 6 1 8 7 9 3 9 8 5 9 3 3 0 7 4 9 8 0 9 4 1 4 4 6 0 4 5 6 1 0 0 1 7 1 6
 3 0 2 1 1 7 5 0 2 6 7 8 3 9 0 4 6 7 4 6 8 0 7 8 3 1]

In [4]: %timeit model.predict(X[:10]).argmax(1)                                                                                                                                                                    
37.7 ms ± 806 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [5]: %timeit predict(X[:10]).argmax(1)                                                                                                                                                                          
361 µs ± 13.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

The main cause we observe here is Eager Execution mode. Below is a shallow look at your code and the corresponding results on a CPU and a GPU basis. It's true that numpy does not run on the GPU, so unlike tf-gpu it does not incur any data-transfer overhead.

However, it is also quite noticeable how much faster the computation done by the np-defined predict method is compared to model.predict of tf.keras, even though the input test set is only 10 samples. We won't do any deep analysis here, though; there are in-depth write-ups on this that you may enjoy reading.


My setup is as follows. I'm using a Colab environment and checking both CPU and GPU modes.

TensorFlow 1.15.2
Keras 2.3.1
Numpy 1.19.5

TensorFlow 2.4.1
Keras 2.4.0
Numpy 1.19.5

TF 1.15.2 - CPU

%tensorflow_version 1.x

import os
os.environ["CUDA_VISIBLE_DEVICES"]="-1"   

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
print('A: ', tf.test.is_built_with_cuda)
print('B: ', tf.test.gpu_device_name())
local_device_protos = device_lib.list_local_devices()
([x.name for x in local_device_protos if x.device_type == 'GPU'], 
 [x.name for x in local_device_protos if x.device_type == 'CPU'])
TensorFlow 1.x selected.
1.15.2
A:  <function is_built_with_cuda at 0x7f122d58dcb0>
B:  
([], ['/device:CPU:0'])

Now, run your code:

import tensorflow as tf
import keras
print(tf.executing_eagerly()) # False

(X,Y),(Xt,Yt) = keras.datasets.mnist.load_data()
model = keras.models.Sequential([...])  # same layers as in the question
model.compile(...)                      # same compile / fit settings as in the question
model.fit(...)

%timeit model.predict(X[:10]).argmax(1) # yours: 10   loops takes 37.7 ms
%timeit predict(X[:10]).argmax(1)       # yours: 1000 loops takes 356 µs

1000 loops, best of 5: 1.07 ms per loop
1000 loops, best of 5: 1.48 ms per loop

We can see that the execution times are comparable to the old keras. Now, let's test with the GPU as well.


TF 1.15.2 - GPU

%tensorflow_version 1.x

import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"   

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
print('A: ', tf.test.is_built_with_cuda)
print('B: ', tf.test.gpu_device_name())
local_device_protos = device_lib.list_local_devices()
([x.name for x in local_device_protos if x.device_type == 'GPU'], 
 [x.name for x in local_device_protos if x.device_type == 'CPU'])
1.15.2
A:  <function is_built_with_cuda at 0x7f0b5ad46830>
B:  /device:GPU:0
(['/device:GPU:0'], ['/device:CPU:0'])
...
...
%timeit model.predict(X[:10]).argmax(1) # yours: 10   loops takes 37.7 ms
%timeit predict(X[:10]).argmax(1)       # yours: 1000 loops takes 356 µs

1000 loops, best of 5: 1.02 ms per loop
1000 loops, best of 5: 1.44 ms per loop

Here too, the execution times are comparable to the old keras with no Eager mode. Now let's look at the new tf.keras, first in Eager mode and then with Eager mode disabled.


TF 2.4.1 - CPU

Eager mode enabled

import os
os.environ["CUDA_VISIBLE_DEVICES"]="-1"   

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
print('A: ', tf.test.is_built_with_cuda)
print('B: ', tf.test.gpu_device_name())
local_device_protos = device_lib.list_local_devices()
([x.name for x in local_device_protos if x.device_type == 'GPU'], 
 [x.name for x in local_device_protos if x.device_type == 'CPU'])
2.4.1
A:  <function is_built_with_cuda at 0x7fed85de3560>
B:  
([], ['/device:CPU:0'])

Now, run the code in Eager mode:

import tensorflow as tf
import keras

print(tf.executing_eagerly())  # True
(X,Y),(Xt,Yt) = keras.datasets.mnist.load_data()

model = keras.models.Sequential([...])  # same layers as in the question
model.compile(...)                      # same compile / fit settings as in the question
model.fit(...)

%timeit model.predict(X[:10]).argmax(1) # yours: 10   loops takes 37.7 ms
%timeit predict(X[:10]).argmax(1)       # yours: 1000 loops takes 356 µs

10 loops, best of 5: 28 ms per loop
1000 loops, best of 5: 1.73 ms per loop

Eager mode disabled

Now, if we disable Eager mode and run the same code, we get:

import tensorflow as tf
import keras

# Disables eager execution
tf.compat.v1.disable_eager_execution()
# or,
# Disables eager execution of tf.functions.
# tf.config.run_functions_eagerly(False)
print(tf.executing_eagerly()) # False
(X,Y),(Xt,Yt) = keras.datasets.mnist.load_data()
model = keras.models.Sequential([...])  # same layers as in the question
model.compile(...)                      # same compile / fit settings as in the question
model.fit(...)

%timeit model.predict(X[:10]).argmax(1) # yours: 10   loops takes 37.7 ms
%timeit predict(X[:10]).argmax(1)       # yours: 1000 loops takes 356 µs

1000 loops, best of 5: 1.37 ms per loop
1000 loops, best of 5: 1.57 ms per loop

We can see that with Eager mode disabled, the execution times in the new tf.keras are comparable. Now, let's also test in GPU mode.


TF 2.4.1 - GPU

Eager mode enabled

import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"   

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
print('A: ', tf.test.is_built_with_cuda)
print('B: ', tf.test.gpu_device_name())
local_device_protos = device_lib.list_local_devices()
([x.name for x in local_device_protos if x.device_type == 'GPU'], 
 [x.name for x in local_device_protos if x.device_type == 'CPU'])
2.4.1
A:  <function is_built_with_cuda at 0x7f16ad88f680>
B:  /device:GPU:0
(['/device:GPU:0'], ['/device:CPU:0'])
import tensorflow as tf
import keras

print(tf.executing_eagerly()) # True
(X,Y),(Xt,Yt) = keras.datasets.mnist.load_data()
model = keras.models.Sequential([...])  # same layers as in the question
model.compile(...)                      # same compile / fit settings as in the question
model.fit(...)

%timeit model.predict(X[:10]).argmax(1) # yours: 10   loops takes 37.7 ms
%timeit predict(X[:10]).argmax(1)       # yours: 1000 loops takes 356 µs

10 loops, best of 5: 26.3 ms per loop
1000 loops, best of 5: 1.48 ms per loop

Eager mode disabled

Finally, if we disable Eager mode and run the same code, we get:

# Disables eager execution
tf.compat.v1.disable_eager_execution()
# or, 
# Disables eager execution of tf.functions.
# tf.config.run_functions_eagerly(False)
print(tf.executing_eagerly()) # False 

(X,Y),(Xt,Yt) = keras.datasets.mnist.load_data()
model = keras.models.Sequential([...])  # same layers as in the question
model.compile(...)                      # same compile / fit settings as in the question
model.fit(...)

%timeit model.predict(X[:10]).argmax(1) # yours: 10   loops takes 37.7 ms
%timeit predict(X[:10]).argmax(1)       # yours: 1000 loops takes 356 µs

1000 loops, best of 5: 1.12 ms per loop
1000 loops, best of 5: 1.45 ms per loop

As before, with Eager mode disabled the execution times in the new tf.keras are comparable. That is why Eager mode is the main reason model.predict in tf.keras is slower here than straight numpy.
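
As a side note, a small sketch (TF 2.x, assuming the model and X from the question are in scope) of getting graph-mode speed for small batches without globally disabling eager execution: wrap the forward pass in a tf.function and call the model directly instead of going through model.predict.

import tensorflow as tf

@tf.function
def fast_predict(x):
    return model(x, training=False)

x = tf.constant(X[:10].astype('float32'))
out = fast_predict(x).numpy().argmax(1)  # the first call traces a graph; later calls reuse it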

The other answer is more useful for "how to make tf keras predict faster", but I think the following can help more with "what is it spending all that time on?". Even with eager mode disabled, you may still want to know what the execution looks like (for example, with or without passing batch_size, and so on).

To answer that, you may find a tracing profiler useful. Tracing the execution adds a lot of overhead (especially where there are lots of very lightweight python calls), but overall it should give you a fairly good picture of which parts of the python code are being executed, because, well, it simply records exactly what is happening. You could try pytracing, since it produces files that the Chrome browser visualizes nicely on its built-in chrome://tracing page. To use it, for example in google colab, you can do the following:

First, install pytracing:

!pip install pytracing

Then generate a trace:

from pytracing import TraceProfiler
tp = TraceProfiler(output=open('/root/trace.out', 'wt'))
with tp.traced():
  for i in range(2): 
    model.predict(X[:1000], batch_size=1000)

Then download the trace:

from google.colab import files
files.download('/root/trace.out') 

After that, open the chrome://tracing page in the Chrome browser, click the "Load" button, and select the trace.out file you just downloaded.

You will see something like the following; you can click on any element to see the full name of the python function, the file it comes from, and the time it took (again, all of it higher than a normal run because of the tracing overhead): [trace viewer screenshot]

You can see how disabling/enabling eager execution or changing the batch size changes the output, and judge for yourself where the most time is spent. From what I've seen so far (in non-eager mode, calling model.predict(X[:1000], batch_size=1000)), quite a lot of time is spent on:

Standardizing your data (whatever that means): ~2.5 ms (including tracing overhead):

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training_v1.py:2336:_standardize_user_data

Configuring callbacks (even though we didn't set any): ~2 ms (including tracing overhead):

/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/callbacks.py:133:configure_callbacks

As for the claim that the numpy version is not optimized, I disagree. The numpy implementation here is quite optimized: python is not making any pure-python calls inside it (executing predict only results in calls to functions implemented in C; I couldn't believe it at first, but that seems to be the case), so the overhead from Python is really small. You might gain a little by optimizing the way the ReLU is done and eliminating some extra allocations/deallocations, but that would only yield a very small performance improvement.
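
For what it's worth, a small sketch of those minor tweaks (same W and np as in the question; the max-subtraction also avoids the exp overflow warning seen in the GPU run, and the speed-up, if any, should indeed be tiny):

def predict_tweaked(X):
    X = X.reshape((X.shape[0], -1)).astype(np.float32)  # Flatten, match the weight dtype
    X = X @ W[0] + W[1]                                  # Dense
    np.maximum(X, 0, out=X)                              # in-place Relu, no boolean-mask temporary
    X = X @ W[2] + W[3]                                  # Dense
    np.maximum(X, 0, out=X)                              # in-place Relu
    X = X @ W[4] + W[5]                                  # Dense
    X -= X.max(1, keepdims=True)                         # numerically stable Softmax
    np.exp(X, out=X)
    X /= X.sum(1, keepdims=True)
    return X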

As others have pointed out, the Tensorflow binary in question is compiled with GPU optimizations: while GPUs excel at heavy number crunching thanks to their extremely high number of compute cores, they are very slow at moving data back and forth.

When the model executes on the graphics card, all the necessary data has to be transferred over to the GPU; it cannot access the host system's RAM (and the host system cannot access video memory either). Once the GPU finishes processing, all the results have to be shipped back to the host system.

All of this data movement takes a lot of time. In addition, as far as I know, Tensorflow binaries compiled for GPU/CUDA execution do not include the standard optimizations for running on the CPU (such as using faster extended instruction sets like AVX, AVX2, and so on).

So you are comparing a highly CPU-optimized scientific library, which can process data without even going back to RAM half the time (CPU registers and on-chip cache), against code that has to gather every last bit of data it needs before sending it all to the graphics card and back. And I am leaving out all the data manipulation Tensorflow does under the hood; after all, it works with its own data structures.
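
As a rough, illustrative check of the transfer cost being described (a sketch assuming TF 2.x with a visible GPU and the X from the question in scope; the timing also includes eager dispatch overhead, so treat it as an upper bound rather than a pure copy time):

import time
import tensorflow as tf

x_host = X[:10].astype('float32')                # small batch sitting in host RAM
t0 = time.perf_counter()
with tf.device('/GPU:0'):
    x_dev = tf.identity(tf.constant(x_host))     # host -> device copy
back = x_dev.numpy()                             # device -> host copy, forces completion
print('round trip + dispatch: %.3f ms' % ((time.perf_counter() - t0) * 1e3))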

Eager execution, I suppose, is yet another layer of inefficiency.

As for best practices for deploying Keras models, I think it's like everything else in software: premature optimization is the root of all evil. If you don't need it fast and lean, keep it slow, modular, reusable, and intuitive. But hey, if you need or want efficiency, more power to you. Keras is designed for fast development and research, not for production code.

In short, the answer is the same reason C++ is faster than Python: the Python interpreter has more overhead.

Instead of model.predict(input), try simply model(input).
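
For example (a sketch assuming the model and X from the question are in scope; the direct call skips the per-call input-pipeline and callback setup that model.predict goes through, as the traces above show):

import tensorflow as tf

x = tf.constant(X[:10].astype('float32'))
%timeit model(x, training=False)  # direct call
%timeit model.predict(X[:10])     # predict() path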
