I did this to share the learning_rate between all my neurons:
class neural_network {
public:
    neural_network(float learning_rate = 0.005f)
        : learning_rate(new float(learning_rate)) {}
    shared_ptr<float> learning_rate;
private:
    vector<neuron> neurons;
};

class neuron {
public:
    neuron(const float learning_rate) {
        this->learning_rate = make_shared<float>(learning_rate);
    }
private:
    const shared_ptr<const float> learning_rate;
};
Is this a good solution to have the same learning_rate in all my neurons?
shared_ptr is reasonably expensive, and I don't see the need for it here: only the network needs to "own" the learning rate. Don't be afraid to use raw pointers where appropriate, just avoid new and delete:
class neuron {
public:
    neuron(const float& learning_rate)
        : learning_rate(&learning_rate) {}
private:
    const float* learning_rate;
};

class neural_network {
public:
    neural_network(float learning_rate = 0.005f)
        : learning_rate(learning_rate) {}
    float learning_rate;
    void make_neuron()
    {
        neurons.push_back(neuron(learning_rate));
    }
private:
    vector<neuron> neurons;
};
shared_ptr is for sharing ownership, not for "sharing an instance".
There is a well-defined relation between the lifetime of some instance X and its members. In the simplest case, the members stay alive until they are destroyed in X's destructor. Members typically do not outlive X. Hence, there is no need for shared ownership. You could use raw pointers to emphasize that the neurons do not participate in the ownership of learning_rate.
class neuron
{
public:
    neuron(const float* learning_rate) : learning_rate(learning_rate) {}
private:
    const float* learning_rate;
};

class neural_network
{
public:
    neural_network(float learning_rate = 0.005f)
        : learning_rate(learning_rate) {}
    float learning_rate;
private:
    vector<neuron> neurons;
};
PS: I'm not sure, but I think I would use a rather different design. Make learning_rate a (non-const, non-pointer) member of neuron. Then, when the neural_network changes the learning_rate, it calls each neuron's set_learning_rate method to update its learning rate. This way, neurons have a chance to react when the learning rate changes.
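A minimal sketch of this push-update design, assuming names like set_learning_rate and the getters, which are not in the original code:

```cpp
#include <cassert>
#include <vector>

// Each neuron owns a plain float copy of the rate; the network pushes
// changes to all neurons, giving them a chance to react.
class neuron {
public:
    explicit neuron(float learning_rate) : learning_rate(learning_rate) {}
    void set_learning_rate(float rate) {
        learning_rate = rate;
        // ...a neuron could also react here, e.g. rescale internal state...
    }
    float get_learning_rate() const { return learning_rate; }
private:
    float learning_rate;
};

class neural_network {
public:
    explicit neural_network(float learning_rate = 0.005f)
        : learning_rate(learning_rate) {}
    void make_neuron() { neurons.emplace_back(learning_rate); }
    void set_learning_rate(float rate) {
        learning_rate = rate;
        for (auto& n : neurons) n.set_learning_rate(rate);  // push the change
    }
    const std::vector<neuron>& get_neurons() const { return neurons; }
private:
    float learning_rate;
    std::vector<neuron> neurons;
};
```

Calling net.set_learning_rate(0.001f) then updates every existing neuron in one pass, with no pointers or shared state involved.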
For a single float, I really think all of this is overkill. If your learning rate could grow more complex, that's one thing, but in the code given here? I suggest just going with a float member in neural_network, and a const neural_network* owner; in neuron, set during construction.
Then you slap a public getLearningRate() on neural_network and you're done. And you might have all sorts of network-wide state to track, so the individual neurons can get a lot of utility out of that one pointer. Examples might be logs, a serialization stream, or maybe a dirty flag.
Bonus: no dynamic allocations, which is always a nice efficiency gain when you can get it. No pointer-related cache misses, no newing or deleteing.
Furthermore, I think your call to make_shared() in neuron's constructor creates a new shared pointer, pointing to a new float instance holding the same value. The result is that changes to the root learning_rate do not affect the existing neuron instances at all (plus a lot of extra, unwanted memory allocation).