
Correct way of measuring execution time of Keras layers

I am trying to check the execution speed of the different layers of a Keras model (using Keras from TensorFlow 2.3.0).

I took the code from this repo and just modified it to measure the time with timer() from the timeit module (from timeit import default_timer as timer):

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from timeit import default_timer as timer

def time_per_layer(model):
    times = np.zeros((len(model.layers), 2))
    inp = np.ones((70, 140, 1))
    for i in range(1, len(model.layers)):
        # Sub-model that runs everything up to the i-th layer from the end
        new_model = tf.keras.models.Model(inputs=[model.input], outputs=[model.layers[-i].output])
        # new_model.summary()
        new_model.predict(inp[None, :, :, :])  # warm-up call
        t_s = timer()
        new_model.predict(inp[None, :, :, :])
        t_e2 = timer() - t_s
        times[i, 1] = t_e2
        del new_model
    # Per-layer time: difference between consecutive sub-model times
    for i in range(0, len(model.layers) - 1):
        times[i, 0] = abs(times[i + 1, 1] - times[i, 1])
    times[-1, 0] = times[-1, 1]
    return times


times = time_per_layer(model)
plt.style.use('ggplot')
x = [model.layers[-i].name for i in range(1, len(model.layers))]
# x = [i for i in range(1, len(model.layers))]
g = [times[i, 0] for i in range(1, len(times))]
x_pos = np.arange(len(x))
plt.bar(x, g, color='#7ed6df')
plt.xlabel("Layers")
plt.ylabel("Processing Time (s)")
plt.title("Processing Time of each Layer")
plt.xticks(x_pos, x, rotation=90)

plt.show()

Is this the right way of measuring the execution time of different layers?

I would say that there is no right way to measure the execution time of individual layers like that, because:

  1. Neural networks work as a whole (the whole is more than the sum of its parts). You cannot unplug a layer from the middle of a trained network without breaking the system, so measuring how long it takes to process something on its own is not particularly useful.

  2. The execution time of a layer also depends on the previous layer. If you change the previous layer from having 1 neuron to having [insert large number] neurons, the execution time of the following layer will change even if the layer itself stays unchanged (the sketch below illustrates this). So it is basically impossible to measure the execution time of a layer in isolation.
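
Here is a minimal sketch of point 2; the layer sizes, input shape and repeat count are made up for illustration. The same Dense(10) head gets slower purely because the layer in front of it got wider, since its own weight matrix grows with its input dimension.

import numpy as np
import tensorflow as tf
from timeit import default_timer as timer

def timed_predict(m, x, repeats=20):
    m.predict(x)                      # warm-up (graph building / tracing)
    t_s = timer()
    for _ in range(repeats):
        m.predict(x)
    return (timer() - t_s) / repeats

x = np.ones((1, 128))
for hidden in (1, 4096):
    m = tf.keras.Sequential([
        tf.keras.layers.Dense(hidden, activation='relu', input_shape=(128,)),
        tf.keras.layers.Dense(10),    # "unchanged" layer whose cost still varies
    ])
    print(hidden, timed_predict(m, x))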

One reasonable thing to measure is how much the execution time changes if you add an additional layer: compare the overall execution time of the network with the layer against the network without it. But this requires you to retrain the model.
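
A rough sketch of that comparison (the architecture and layer sizes are invented here; as said above, the variant without the extra layer would need to be retrained before it is a usable model, the sketch only compares raw inference time of the two architectures):

import numpy as np
import tensorflow as tf
from timeit import default_timer as timer

def build(extra_layer):
    layers = [tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(70, 140, 1))]
    if extra_layer:
        layers.append(tf.keras.layers.Conv2D(32, 3, activation='relu'))
    layers += [tf.keras.layers.Flatten(), tf.keras.layers.Dense(10)]
    return tf.keras.Sequential(layers)

x = np.ones((1, 70, 140, 1))
for extra in (False, True):
    m = build(extra)
    m.predict(x)                      # warm-up
    t_s = timer()
    for _ in range(20):
        m.predict(x)
    print('with extra layer:' if extra else 'without extra layer:', (timer() - t_s) / 20)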

Another thing you could measure is how much the execution time changes when you add an additional layer to the base of the network (similar to what you are doing, but comparing the overall execution time of the first N layers to that of the first N+1 layers). This might be slightly useful when you are deciding how many base layers to keep for transfer learning (assuming the architecture allows for that), but even then accuracy is probably going to be the deciding factor.
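
A sketch of that first-N-layers comparison, reusing the model from the question (input shape assumed to be (70, 140, 1) as above):

import numpy as np
import tensorflow as tf
from timeit import default_timer as timer

def cumulative_times(model, x, repeats=10):
    cum = []
    for n in range(len(model.layers)):
        # Sub-model that runs layers 0..n only
        sub = tf.keras.models.Model(inputs=model.input, outputs=model.layers[n].output)
        sub.predict(x)                # warm-up
        t_s = timer()
        for _ in range(repeats):
            sub.predict(x)
        cum.append((timer() - t_s) / repeats)
    return cum

x = np.ones((1, 70, 140, 1))
cum = cumulative_times(model, x)
# Incremental cost of layer n: time of the first n+1 layers minus time of the first n
incremental = [cum[0]] + [cum[n] - cum[n - 1] for n in range(1, len(cum))]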
