For my neural network, I am trying to create a cost function. I am using the following cost function:

C = sum((an - yn)^2)
# C = cost, sum = sigma, an = actual output, yn = desired output
Here is how I implemented it in Python:

def cost(actual_outputs, desired_outputs):
    # actual_outputs and desired_outputs are numpy arrays
    costs = [(actual_output - desired_output) ** 2
             for actual_output, desired_output in zip(actual_outputs, desired_outputs)]
    return sum(costs)
Is there a more efficient way of doing this using numpy (or any other method)?
You could use np.linalg.norm:

import numpy as np

def cost(actual_outputs, desired_outputs):
    return np.linalg.norm(np.array(actual_outputs) - np.array(desired_outputs)) ** 2
This answer assumes your inputs are not numpy arrays; otherwise you can compute with actual_outputs and desired_outputs directly.
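Since the Euclidean norm is defined as ||v|| = sqrt(sum(v_i^2)), squaring it gives exactly the sum-of-squares cost from the question. A quick sanity check (the sample values below are made up for illustration):

```python
import numpy as np

def cost(actual_outputs, desired_outputs):
    # Squared Euclidean norm of the error vector: ||a - y||^2 == sum((a_i - y_i)^2)
    return np.linalg.norm(np.array(actual_outputs) - np.array(desired_outputs)) ** 2

actual = [0.5, 0.2, 0.9]
desired = [0.0, 0.0, 1.0]

# Compare against the plain sum-of-squares definition
expected = sum((a - d) ** 2 for a, d in zip(actual, desired))
print(np.isclose(cost(actual, desired), expected))  # True
```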
import timeit
import random
import numpy as np

def cost(actual_outputs, desired_outputs):
    # actual_outputs and desired_outputs are numpy arrays
    costs = [(actual_output - desired_output) ** 2
             for actual_output, desired_output in zip(actual_outputs, desired_outputs)]
    return sum(costs)

def cost2(actual_outputs, desired_outputs):
    return ((actual_outputs - desired_outputs) ** 2).sum()

actual = [random.random() for _ in range(1000)]
desired = [random.random() for _ in range(1000)]
actual2 = np.array(actual)
desired2 = np.array(desired)

if __name__ == "__main__":
    print(timeit.timeit('cost(actual, desired)', 'from __main__ import cost, actual, desired', number=10))
    # 0.00271458847557
    print(timeit.timeit('cost2(actual2, desired2)', 'from __main__ import cost2, actual2, desired2', number=10))
    # 0.000187942916669
The vectorized version looks faster, assuming the inputs are already numpy arrays; it will probably take longer if you first have to convert a list to a numpy array. The bigger the input, the greater the gains.
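If you want to squeeze a little more out of the vectorized version, taking the dot product of the difference vector with itself computes the same sum of squares and avoids materializing an intermediate squared array. This is a sketch, not a benchmark; measure on your own data before switching:

```python
import numpy as np

def cost_dot(actual_outputs, desired_outputs):
    # dot(d, d) == sum(d_i ** 2), the same quantity as ((d) ** 2).sum()
    diff = actual_outputs - desired_outputs
    return diff.dot(diff)

actual = np.array([0.5, 0.2, 0.9])
desired = np.array([0.0, 0.0, 1.0])
print(cost_dot(actual, desired))  # approximately 0.3 (= 0.25 + 0.04 + 0.01)
```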