
Is it appropriate to call a virtual function in a derived class destructor?

I have an inheritance hierarchy in which the destructor of the base class applies the Template Method Pattern. The destructor has to do some work before calling the virtual cleanup function, and some more work after calling it.

We know that we should Never Call Virtual Functions during Construction or Destruction. So the following code definitely does not work:

class base
{
public:
    virtual ~base()
    {
        // ... do something before do_clear()
        do_clear();
        // ... do something after do_clear()
    }

private:
    virtual void do_clear() = 0;
};

class d1
    : public base
{
public:
    d1() : m_x(new int) {}
    ~d1() {}

private:
    virtual void do_clear()
    {
        delete m_x;
    }
    int *m_x;
};

But what if I move the cleanup process into the destructor of the derived class, for example:

class base
{
public:
    virtual ~base()
    {
    }

protected:
    void clear()
    {
        // ... do something before do_clear()
        do_clear();
        // ... do something after do_clear()
    }

private:
    virtual void do_clear() = 0;
};

class d1
    : public base
{
public:
    d1() : m_x(new int) {}
    ~d1()
    {
        clear();
    }

private:
    virtual void do_clear()
    {
        delete m_x;
    }
    int *m_x;
};

If the client writes:

base *x = new d1;
delete x;

It will call ~d1(), which calls base::clear(), which in turn calls the virtual function d1::do_clear() properly, because inside the body of ~d1() the object is still a d1.

base::clear() could instead be made public, and the client could make things safe by calling base::clear() before destruction. But that only works if the client knows about it and never forgets to call it; I think that is inconvenient and breaks encapsulation.

My question is:

  1. Is this design dangerous or risky?
  2. Is there a better design for this?

There are two problems with your current design. The first is that it violates the rule of five/rule of zero: d1 manages a raw pointer but does not define (or delete) its copy operations, so ordinary usage of these classes is almost certain to result in memory leaks or double deletions.

The second problem is that you are using inheritance to model something that is probably better modelled with composition. base wants d1 to provide some extra functionality for its destructor, with the exact form of that functionality specified at run time. The use of a replaceable interface is therefore internal to base, and so should not be externally visible.

Here is how I would write this code (using wheels::value_ptr):

struct D_interface {
    //Providing virtual functions for run-time modifiable behaviour
    virtual D_interface *clone() const = 0;
    virtual ~D_interface(){}
};

struct D_Delete {
    //Here is the code to call through to the virtual `D` object behaviour:
    void operator()(D_interface *p) {
        // ... do something before delete p;
        delete p;
        // ... do something after delete p;
    }
    //Put (pointers to) relevant data here,
    //initialise them when constructing the `value_ptr`, or
    //at a later stage with get_deleter
};

struct d1 : D_interface {
    wheels::value_ptr<int> mx;
    virtual D_interface *clone() const {
        return new d1(*this);
    }
};

//Nothing derives from `base`, because the polymorphism that is needed is internal
//to the implementation of `base`
//To add new functionality, add a new `D_interface` implementation.
class base
{
    wheels::value_ptr<D_interface, wheels::DefaultCloner<D_interface>, D_Delete> D_impl;
    public:
    base(D_interface *D_impl)
        : D_impl(D_impl)
    {
    }
};

For my part, I think this pattern is safe, because you are required to implement the virtual function in the derived classes. That is the whole philosophy of abstract classes.

The technical post webpages of this site follow the CC BY-SA 4.0 protocol. If you need to reprint, please indicate the site URL or the original address.Any question please contact:yoyou2525@163.com.
