
How to make a neural network function faster?

I have this function for a neural network; it computes the next layer from a list of inputs and a list of weights. Is there any way to make it faster or more efficient? The argument inp is the input, weights are the weights, layerlength is the length of the next layer, and rounds is the number of decimal places to round the output to.

def output(inp, weights, layerlength, rounds):
    layer = []
    count = 0
    lappend = layer.append
    for a in range(layerlength):
        # weighted sum of all inputs for output neuron a
        total = 0
        for b in range(len(inp)):
            total += inp[b] * weights[count]
            count += 1
        lappend(round(total, rounds))
    return layer

In general, try not to use Python for loop constructs for numerical work; they are extremely slow. Use matrix operations from numpy instead, so the loops run under the hood in compiled C code (roughly 50 to 100 times faster). You can easily reformulate your above piece of code without any Python for loops by defining your layer and inp vectors and your weights matrix all as numpy.array() and then performing matrix multiplication on them.

EDIT: I hope I am not helping you cheat on your homework here ;)

import numpy as np
# 10 dimensional input
inpt = np.arange(10)
# 20 neurons in the first (fully connected) layer
weights = np.random.rand(10, 20)
# matmul: to compute the input to the non-linearity of the first layer,
# multiply each input dimension by all the weights assigned to a specific
# neuron, then sum; doing this for every neuron in the layer is exactly
# a single matrix multiplication
layer = np.matmul(inpt, weights)
print(inpt.shape)
print()
print(weights.shape)
print()
print(layer.shape)
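
For the question's exact function, where weights is a flat list consumed in row order (one run of len(inp) weights per output neuron, as the count-based indexing implies), a minimal vectorized drop-in might look like the sketch below; output_np is just an illustrative name:

import numpy as np

def output_np(inp, weights, layerlength, rounds):
    # reshape the flat weight list into a (layerlength, len(inp)) matrix,
    # mirroring the count-based indexing in the original loop
    W = np.asarray(weights, dtype=float).reshape(layerlength, len(inp))
    x = np.asarray(inp, dtype=float)
    # one matrix-vector product replaces both Python loops
    return np.round(W @ x, rounds).tolist()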

So I'm assuming you're computing the activations of one layer.

Make sure you use linear algebra libraries like numpy (or TensorFlow, PyTorch, etc.). These will make your computations run much more efficiently on the CPU (or GPU). Plain Python for loops typically add a lot of computational overhead.

For example, in numpy you can write your feedforward pass for one layer as:

output = inp.dot(weights)

Here inp is your n by m input matrix and weights is your m by k weight matrix. output will then be an n by k matrix of your forward-step activations.
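
A quick shape check, assuming a batch of 32 samples with 10 features feeding a layer of 20 neurons (the sizes here are purely illustrative):

import numpy as np

inp = np.random.rand(32, 10)      # n = 32 samples, m = 10 input features
weights = np.random.rand(10, 20)  # m = 10 inputs, k = 20 neurons
output = inp.dot(weights)
print(output.shape)               # (32, 20): one activation vector per sample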
