
C++ vector of objects vs. vector of pointers to objects

I am writing an application using openFrameworks, but my question is not specific to oF; rather, it is a general question about C++ vectors.

I wanted to create a class that contains multiple instances of another class, while also providing an intuitive interface for interacting with those objects. Internally, my class used a vector of that other class, but when I tried to manipulate an object retrieved with vector.at(), the program would compile but not work properly (in my case, it would not display a video).

// instantiate object dynamically, do something, then append to vector
vector<ofVideoPlayer> videos;
ofVideoPlayer *video = new ofVideoPlayer;
video->loadMovie(filename);
videos.push_back(*video); // copies *video into the vector; the heap object itself is never deleted (leak)

// access object in vector and do something; compiles but does not work properly
// without going into specific openFrameworks details, the problem was that the video would
// not draw to screen
videos.at(0).draw();

Somewhere, it was suggested that I make a vector of pointers to objects of that class instead of a vector of those objects themselves. I implemented this and indeed it worked like a charm.

vector<ofVideoPlayer*> videos;
ofVideoPlayer *video = new ofVideoPlayer;
video->loadMovie(filename);
videos.push_back(video);
// now dereference pointer to object and call draw
videos.at(0)->draw();

I was allocating memory for the objects dynamically, i.e. ofVideoPlayer *video = new ofVideoPlayer;

My question is simple: why did using a vector of pointers work, and when would you create a vector of objects versus a vector of pointers to those objects?

What you have to know about vectors in C++ is that they use the copy constructor of your objects' class to put elements into the vector. If those objects allocate memory that is automatically deallocated when the destructor is called, that could explain your problem: your object was copied into the vector and then destroyed.

If you have, in your object class, a pointer to an allocated buffer, a copy of this object will point to the same buffer (if you use the default copy constructor). If the destructor deallocates the buffer, then when the copy's destructor is called, the original buffer will be deallocated, so your data won't be available anymore.

This problem doesn't happen if you use pointers, because you control the lifetime of your elements via new/delete, and the vector functions only copy pointers to your elements.
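
A minimal sketch of that failure mode, using a hypothetical Buffer class (not from openFrameworks):

#include <vector>

// Hypothetical class with the shallow-copy problem described above:
// the default copy constructor copies the pointer, not the buffer.
class Buffer
{
public:
    Buffer()  { data = new int[16]; }
    ~Buffer() { delete[] data; }   // frees the buffer on destruction
private:
    int *data;
};

int main()
{
    std::vector<Buffer> buffers;
    Buffer b;
    buffers.push_back(b);   // the copy in the vector shares b's data pointer
    // When b and the element are both destroyed, delete[] runs twice on
    // the same allocation: undefined behavior, and the data dies early.
}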

My question is simple: why did using a vector of pointers work, and when would you create a vector of objects versus a vector of pointers to those objects?

std::vector is like a raw array allocated with new, reallocated when you try to push in more elements than its current capacity allows.

So, if it contains A* pointers, it's as if you were manipulating an array of A*. When it needs to resize (you push_back() an element while it's already filled to capacity), it allocates another array of A* and copies the pointers over from the previous one.

If it contains A objects, then it's as if you were manipulating an array of A, so A must be copy-constructible in case automatic reallocation occurs. In that case, the whole A objects get copied into the new array as well.

See the difference? The A objects in a std::vector<A> can change address if you do manipulations that require resizing the internal array. That's where most problems with holding objects in a std::vector come from.
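
For example, a pointer taken into the vector before a reallocation can be left dangling (a minimal sketch with a made-up struct A):

#include <vector>

struct A { int value; };

int main()
{
    std::vector<A> v;
    v.push_back(A{1});
    A *first = &v[0];      // points into the vector's internal array
    v.push_back(A{2});     // may reallocate: the whole array can move
    // 'first' may now dangle; dereferencing it is undefined behavior.
}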

A way to use std::vector without running into such problems is to allocate a large enough buffer from the start. The keyword here is "capacity". The std::vector capacity is the real size of the memory buffer into which it will put the objects. So, to set up the capacity, you have two choices:

1) size your std::vector on construction to build all the objects from the start, with the maximum number of objects you'll need - that will call the constructor of each object.

2) once the std::vector is constructed (but still empty), use its reserve() function: the vector will allocate a buffer large enough for the maximum size you provide and set the capacity accordingly. If you push_back() objects into this vector, or resize() it below the limit you gave to reserve(), it will never reallocate the internal buffer, and your objects will not change location in memory, so pointers to those objects remain valid (adding assertions to check that the capacity never changes is excellent practice).
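
A short sketch of option 2, assuming a known maximum of 100 elements (struct A is a placeholder):

#include <cassert>
#include <vector>

struct A { int value; };

int main()
{
    std::vector<A> v;
    v.reserve(100);                  // one allocation up front
    const auto cap = v.capacity();   // cap >= 100
    for (int i = 0; i < 100; ++i)
        v.push_back(A{i});           // stays within the reserved capacity
    assert(v.capacity() == cap);     // capacity never changed, so no
                                     // element ever changed address
}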

If you are allocating memory for the objects using new , you are allocating it on the heap. In this case, you should use pointers. However, in C++, the convention is generally to create all objects on the stack and pass copies of those objects around instead of passing pointers to objects on the heap.

Why is this better? It is because C++ does not have garbage collection, so memory for objects on the heap will not be reclaimed unless you specifically delete the object. However, objects on the stack are always destroyed when they leave scope. If you create objects on the stack instead of the heap, you minimize your risk of memory leaks.

If you do use the stack instead of the heap, you will need to write good copy constructors and destructors. Badly written copy constructors or destructors can lead to either memory leaks or double frees.
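
As an illustration, a class that owns a raw buffer needs the classic "rule of three" (destructor, copy constructor, copy assignment); a minimal sketch with a hypothetical Buffer class, this time deep-copying what the shallow-copy version sketched earlier shared:

#include <algorithm>

class Buffer
{
public:
    Buffer() : data(new int[16]()) {}
    ~Buffer() { delete[] data; }
    // Deep copy: each Buffer owns its own allocation.
    Buffer(const Buffer &other) : data(new int[16])
    {
        std::copy(other.data, other.data + 16, data);
    }
    Buffer &operator=(const Buffer &other)
    {
        if (this != &other)
            std::copy(other.data, other.data + 16, data);
        return *this;
    }
private:
    int *data;
};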

If your objects are too large to be copied efficiently, then it is acceptable to use pointers. However, you should use smart pointers (such as the C++11 std::shared_ptr or std::unique_ptr, or one of the Boost smart pointer classes) to avoid memory leaks.

Vector insertion and internal housekeeping make copies of the original object - if taking a copy is very expensive or impossible, then using a pointer is preferable.

If you make the vector member a pointer, use a smart pointer to simplify your code and minimize the risk of leaks.
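
For example, the question's snippet could store std::unique_ptr (C++11) so that the vector owns the players and deletes them automatically; a sketch reusing the ofVideoPlayer calls from the question:

#include <memory>
#include <vector>

std::vector<std::unique_ptr<ofVideoPlayer>> videos;
std::unique_ptr<ofVideoPlayer> video(new ofVideoPlayer);
video->loadMovie(filename);
videos.push_back(std::move(video));   // the vector now owns the player
videos.at(0)->draw();
// No explicit delete: destroying the vector destroys every player.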

Maybe your class does not do proper (i.e. deep) copy construction/assignment? If so, pointers would work where object instances as the vector member would not.

Usually I don't store objects directly in a std::vector. The reason is simple: you may not know whether the object is really an instance of a derived class, and storing it by value would slice the derived part off.

E.g.:

In headers:

#include <memory>
#include <vector>

using std::shared_ptr;
using std::vector;

class base
{
public:
  virtual base * clone() { return new base(*this); }
  virtual ~base() {}
};
class derived : public base
{
public:
  virtual base * clone() { return new derived(*this); }
};
void some_code(void);
void work_on_some_class( base &_arg );
void some_code(void);
void work_on_some_class( base &_arg );

In source:

void some_code(void)
{
  ...
  derived instance;
  work_on_some_class(instance);
  ...
}

void work_on_some_class( base &_arg )
{
  vector<base> store;
  ...
  store.push_back(*_arg.clone());
  // Issue!
  // clone() returns a derived *, but the vector stores base by value:
  // the pushed copy is sliced down to base, and the heap clone leaks.
}

So I prefer to use shared_ptr:

void work_on_some_class( base &_arg )
{
  vector<shared_ptr<base> > store;
  ...
  store.push_back(shared_ptr<base>(_arg.clone())); // the shared_ptr owns the clone
  // no issue :)
}

The main idea of using a vector is to store objects in contiguous memory; when you use pointers or smart pointers, that is no longer true for the objects themselves.

You also need to keep in mind how the CPU accesses memory (cache locality).

  • std::vector<Object> guarantees that the elements are stored in one contiguous memory block.
  • std::vector<std::unique_ptr<Object>> keeps the smart pointers in contiguous memory, but the memory blocks for the objects themselves can end up at scattered positions in RAM.

So I would guess that std::vector<Object> will be faster in cases where the size of the vector is reserved and known in advance. However, std::vector<std::unique_ptr<Object>> will be faster if we don't know the planned size or plan to change the order of the objects.
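
A small sketch contrasting the two layouts (Object here is a placeholder type):

#include <memory>
#include <vector>

struct Object { int payload[64]; };

int main()
{
    // Objects side by side in one contiguous block: cache-friendly to
    // iterate, but reordering moves whole objects around.
    std::vector<Object> dense(100);

    // Only the pointers are contiguous; each Object lives wherever the
    // heap placed it, but swapping two elements swaps just two pointers.
    std::vector<std::unique_ptr<Object>> sparse;
    for (int i = 0; i < 100; ++i)
        sparse.emplace_back(new Object());
}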
