Suppose I have a Neural Network. I can perform a handful of operations in it, like so:
def one_iteration(my_op, data, ...):
    for i in range(...):
        ...
        my_op(data)
        ...
Imagine that `for` loop is doing a bit more than just being a for loop; it's how I iterate over a collection of objects. Now, if I pass a training function as `my_op`, I'm set. But if I want to pass an `evaluate` function as `my_op`, I now want to keep my intermediate results so I know how I'm doing. So I would do:
def one_iteration(my_op, data, ...):
    results = []
    for i in range(...):
        ...
        result = my_op(data)
        results.append(result)
    return results
And I would have a list/array with the results, so I can evaluate how the network performed over that iteration. Now, suppose I want `my_op` to be `validate`, and for that I need even more intermediate information. I could maybe do something like this:
def one_iteration(my_op, data, is_validation, ...):
    results = []
    if is_validation:
        more_results = []
    for i in range(...):
        ...
        if is_validation:
            result, other_result = my_op(data)
            more_results.append(other_result)
        else:
            result = my_op(data)
        results.append(result)
    if is_validation:
        return results, more_results
    return results
But it starts getting all sorts of messy at this point. I know I could implement separate `train`, `evaluate`, and `validate` methods, and within each one do what's needed, but the problem I see is that each method would repeat the way it iterates over the object (in this case, the for loop). So if I change how I iterate, I have to change it in three different places. Is there a design pattern I'm missing here?
So pass two functions: one is the meta-operation, which accepts the operation:
def one_iteration(meta_op, my_op, data, ...):
    for i in range(...):
        ...
        meta_op(my_op, data)
        ...
And let `meta_op` decide how to handle the results.
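Concretely, the idea might look like the sketch below. The loop lives in one place, and each meta-operation decides what (if anything) to collect from `my_op`. All the specific names here (`train_meta`, `make_eval_meta`, and so on) are hypothetical, not from the original post:

```python
def one_iteration(meta_op, my_op, data_batches):
    # The single, shared iteration logic: change it once, here.
    for batch in data_batches:
        meta_op(my_op, batch)

# Training: run the op and discard the result.
def train_meta(my_op, batch):
    my_op(batch)

# Evaluation: collect one result per batch via a closure.
def make_eval_meta(results):
    def eval_meta(my_op, batch):
        results.append(my_op(batch))
    return eval_meta

# Validation: collect two results per batch.
def make_validate_meta(results, more_results):
    def validate_meta(my_op, batch):
        result, other = my_op(batch)
        results.append(result)
        more_results.append(other)
    return validate_meta

# Usage with a dummy op standing in for evaluate:
data = [1, 2, 3]
results = []
one_iteration(make_eval_meta(results), lambda x: x * 2, data)
# results is now [2, 4, 6]
```

The closures keep `one_iteration` free of `is_validation`-style flags: adding a new mode means writing a new meta-operation, not touching the loop.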