
Is using a shared_ptr to share variables between class and its member objects a good solution?

I did this to share the learning_rate between all my neurons:

class neural_network {
public:
  neural_network(float learning_rate = 0.005f)
      : learning_rate(new float(learning_rate)){};
  shared_ptr<float> learning_rate;

private:
  vector<neuron> neurons;
};

class neuron {
public:
  neuron(const float learning_rate)
      : learning_rate(make_shared<float>(learning_rate)) {}

private:
  const shared_ptr<const float> learning_rate;
};

Is this a good solution to have the same learning_rate in all my neurons?

shared_ptr is reasonably expensive, and I don't see the need for it here: only the network needs to "own" the learning rate. Don't be afraid to use raw pointers where appropriate, just avoid new and delete:

class neuron {
public:
    neuron(const float& learning_rate)
        : learning_rate(&learning_rate) {}

private:
    const float* learning_rate;  // non-owning: the network owns the float
};

class neural_network {
public:
    neural_network(float learning_rate = 0.005f)
        : learning_rate(learning_rate) {}
    float learning_rate;

    void make_neuron()
    {
        // each neuron gets a stable pointer to this network's learning_rate
        neurons.push_back(neuron(learning_rate));
    }

private:
    vector<neuron> neurons;
};

shared_ptr is to share ownership, not to "share an instance".

There is a well-defined relation between the lifetime of some instance X and its members. In the easiest case, the members stay alive until they are destroyed in X's destructor; members typically do not outlive X. Hence, there is no need for shared ownership. You could use raw pointers to emphasize that the neurons do not participate in the ownership of learning_rate.

class neuron
{
public:
    neuron(const float* learning_rate) : learning_rate(learning_rate) {}
private:
    const float* learning_rate;
};

class neural_network
{
public:
    neural_network(float learning_rate = 0.005f)
        : learning_rate(learning_rate) {}
    float learning_rate;
private:
    vector<neuron> neurons;
};

PS: Not sure, but I think I would apply a rather different design: make learning_rate a (non-const, non-pointer) member of the neurons. Then, if the neural_network changes the learning_rate, it calls each neuron's set_learning_rate method to update its learning rate. This way the neurons have a chance to react when the learning rate changes; a sketch of this is below.
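A minimal sketch of that alternative, assuming a simple broadcast from the network (the set_learning_rate name is from the answer above; the rest of the wiring is my assumption):

#include <vector>

class neuron {
public:
    explicit neuron(float learning_rate) : learning_rate(learning_rate) {}

    // Called by the owning network; this is also the natural hook for a
    // neuron to react to the change (e.g. reset some internal state).
    void set_learning_rate(float rate) { learning_rate = rate; }

private:
    float learning_rate;  // plain value, no pointers or shared ownership
};

class neural_network {
public:
    explicit neural_network(float learning_rate = 0.005f)
        : learning_rate(learning_rate) {}

    void set_learning_rate(float rate) {
        learning_rate = rate;
        for (auto& n : neurons)  // push the new rate to every neuron
            n.set_learning_rate(rate);
    }

private:
    float learning_rate;
    std::vector<neuron> neurons;
};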

For a single float, I really think all this is overkill. If your learning rate could grow more complex, that's one thing, but in the code given here? I suggest just going with a float member in neural_network, and a const neural_network* owner; member in neuron, set during construction.

Then you slap a public getLearningRate() on neural_network and you're done (a sketch of this layout is below). And you might have all sorts of network-wide state to track, so the individual neurons can get a lot of utility out of that one pointer. Examples might be logs, a serialization stream, or maybe a dirty flag.

Bonus: No dynamic allocations, which is always a nice efficiency gain when you can get it. No pointer-related cache misses, no new-ing or delete-ing.
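A sketch of that suggestion, assuming the getLearningRate() accessor mentioned above (the rate() helper on neuron is hypothetical, added only to show the lookup):

#include <vector>

class neural_network;  // forward declaration for the back-pointer

class neuron {
public:
    explicit neuron(const neural_network* owner) : owner(owner) {}
    float rate() const;  // defined below, once neural_network is complete

private:
    const neural_network* owner;  // non-owning back-pointer, set at construction
};

class neural_network {
public:
    explicit neural_network(float learning_rate = 0.005f)
        : learning_rate(learning_rate) {}

    float getLearningRate() const { return learning_rate; }

    void make_neuron() { neurons.push_back(neuron(this)); }

private:
    float learning_rate;
    std::vector<neuron> neurons;
};

inline float neuron::rate() const { return owner->getLearningRate(); }

One caveat with this layout: each neuron stores a pointer to its owner, so the network should not be copied or moved after neurons are created, unless its copy/move operations re-seat those pointers.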

Furthermore, I'd think your call to make_shared() in neuron's constructor will create a new shared pointer, pointing to a new instance of that same float value. This results in changes to the root learning_rate not affecting the existing neuron instances at all (and a lot of extra, unwanted memory allocation).
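A standalone snippet that mirrors what the neuron constructor does makes the problem visible (the variable names are mine, not from the original code):

#include <iostream>
#include <memory>

int main() {
    auto root = std::make_shared<float>(0.005f);  // the network's learning rate
    auto copy = std::make_shared<float>(*root);   // what each neuron constructor does

    *root = 0.1f;                                   // "update" the learning rate
    std::cout << *root << " vs " << *copy << '\n';  // prints: 0.1 vs 0.005
}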
