In the two versions of the code below, both v1 and v2 are large vectors (lengths ranging from 1,000 to 1,000,000, with len(v1) == len(v2)). I expected code 2 to be much faster than code 1, but it turns out code 1 is much faster and I do not know why. Could you please explain why code 2 is slow? Thank you.
Code 1:
import math
import numpy as np

norm1 = math.sqrt(np.dot(v1, v1))
norm2 = math.sqrt(np.dot(v2, v2))
kern = np.dot(v1, v2) / (norm1 * norm2)
Code 2:
kern = 0
for i in range(0, len(v1)):
    kern += min(v1[i], v2[i])
The np.dot() calls also loop over the vectors, but those loops are implemented natively, typically in C or C++. Loops written explicitly in Python (as in your code 2) are notoriously slow in comparison to such compiled loops.
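To make the difference concrete, here is a small sketch (not from the original post; the vectors here are random stand-ins for v1 and v2) showing that the explicit Python loop and a vectorized NumPy expression compute the same result, with the per-element work moved into compiled code:

```python
import numpy as np

# Hypothetical stand-ins for the question's v1 and v2.
rng = np.random.default_rng(0)
v1 = rng.random(1_000_000)
v2 = rng.random(1_000_000)

# Code 2 style: one interpreted Python iteration per element.
kern_loop = 0.0
for i in range(0, len(v1)):
    kern_loop += min(v1[i], v2[i])

# Vectorized equivalent: the element-wise min and the sum both
# run inside NumPy's compiled C loops.
kern_vec = np.minimum(v1, v2).sum()
```

Both expressions produce the same value (up to floating-point rounding), but the vectorized form avoids a million trips through the interpreter.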
In the first code, you are using functions from NumPy, a highly optimised library whose core routines are implemented in C, so the per-element computation happens in compiled code rather than in the Python interpreter.
In the second code, on top of the overhead of the for loop itself, you pay the overhead of a Python-level function call to min(v1[i], v2[i]) on every iteration, and don't forget the overhead of range(0, len(v1)).
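You can measure the gap yourself with timeit. The sketch below (illustrative, with random vectors standing in for v1 and v2) times the explicit loop against the equivalent vectorized expression:

```python
import timeit
import numpy as np

# Hypothetical stand-ins for the question's v1 and v2.
rng = np.random.default_rng(0)
v1 = rng.random(100_000)
v2 = rng.random(100_000)

def python_loop():
    # Code 2: interpreted loop with a min() call per iteration.
    kern = 0.0
    for i in range(0, len(v1)):
        kern += min(v1[i], v2[i])
    return kern

def numpy_vectorized():
    # Same computation pushed into NumPy's C loops.
    return float(np.minimum(v1, v2).sum())

t_loop = timeit.timeit(python_loop, number=10)
t_vec = timeit.timeit(numpy_vectorized, number=10)
print(f"Python loop: {t_loop:.4f}s  NumPy: {t_vec:.4f}s")
```

On a typical machine the vectorized version is faster by a large factor; the exact ratio depends on the vector length and the NumPy build.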