Thread-safe vector implementation

I'm working on a thread-safe std::vector implementation, and the following is a complete preliminary attempt:

    #ifndef THREADSAFEVECTOR_H
    #define THREADSAFEVECTOR_H
    #include <iostream>
    #include <vector>
    #include <mutex>
    #include <cstdlib>
    #include <memory>
    #include <iterator>
    #include <algorithm>
    #include <initializer_list>
    #include <functional>
    #include <utility>
    template <class T, class Alloc=std::allocator<T>>
    class ThreadSafeVector
    {
        private:
            std::vector<T> threadSafeVector;
            mutable std::mutex vectorMutex; //mutable so that const member functions can still lock it

        public:
            /*typename is needed here because std::vector<T>::size_type, std::vector<T>::value_type, std::vector<T>::iterator,
            std::vector<T>::const_reverse_iterator, etc. are 'dependent names': since this is a class template, what these
            expressions refer to may depend on the template arguments*/

            typedef typename std::vector<T>::size_type size_type;

            typedef typename std::vector<T>::value_type value_type;

            typedef typename std::vector<T>::iterator iterator;

            typedef typename std::vector<T>::const_iterator const_iterator;

            typedef typename std::vector<T>::reverse_iterator reverse_iterator;

            typedef typename std::vector<T>::const_reverse_iterator const_reverse_iterator;

            typedef typename std::vector<T>::reference reference;

            typedef typename std::vector<T>::const_reference const_reference;

            /*wrappers for the three assign() functions*/
            template <class InputIterator>
            void assign(InputIterator first, InputIterator last)
            {
                //using a local lock_guard to lock mutex guarantees that the mutex will be unlocked on destruction and in the case of an exception being thrown
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.assign(first, last);
            }

            void assign(size_type n, const value_type& val)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.assign(n, val);
            }

            void assign(std::initializer_list<value_type> il)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.assign(il.begin(), il.end());
            }

            /*wrappers for at() functions*/
            reference at(size_type n)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.at(n);
            }

            const_reference at(size_type n) const
            {
                return threadSafeVector.at(n);
            }   

            /*wrappers for back() functions*/
            reference back()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.back();
            }

            const_reference back() const
            {
                return threadSafeVector.back();
            }

            /*wrappers for begin() functions*/
            iterator begin()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.begin();
            }

            const_iterator begin() const noexcept
            {
                return threadSafeVector.begin();
            }

            /*wrapper for capacity() function*/
            size_type capacity() const noexcept
            {
                return threadSafeVector.capacity();
            }

            /*wrapper for cbegin() function*/
            const_iterator cbegin() const noexcept
            {
                return threadSafeVector.cbegin();
            }

            /*wrapper for cend() function*/
            const_iterator cend() const noexcept
            {
                return threadSafeVector.cend();
            }

            /*wrapper for clear() function*/
            void clear()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.clear();
            }

            /*wrapper for crbegin() function*/
            const_reverse_iterator crbegin() const noexcept
            {
                return threadSafeVector.crbegin();
            }

            /*wrapper for crend() function*/
            const_reverse_iterator crend() const noexcept
            {
                return threadSafeVector.crend();
            }

            /*wrappers for data() functions*/
            value_type* data()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.data();
            }

            const value_type* data() const noexcept
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.data();
            }

            /*wrapper for emplace() function*/
            template <class... Args>
            void emplace(const_iterator position, Args&&... args)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.emplace(position, std::forward<Args>(args)...);
            }

            /*wrapper for emplace_back() function*/
            template <class... Args>
            void emplace_back(Args&&... args)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.emplace_back(std::forward<Args>(args)...);
            }

            /*wrapper for empty() function*/
            bool empty() const noexcept
            {
                return threadSafeVector.empty();
            }

            /*wrappers for end() functions*/
            iterator end()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.end();
            }

            const_iterator end() const noexcept
            {
                return threadSafeVector.end();
            }

            /*wrapper functions for erase()*/
            iterator erase(const_iterator position)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.erase(position);
            }

            iterator erase(const_iterator first, const_iterator last)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.erase(first, last);
            }

            /*wrapper functions for front()*/
            reference front()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.front();
            }

            const_reference front() const
            {
                return threadSafeVector.front();
            }

            /*wrapper function for get_allocator()*/
            typename std::vector<T>::allocator_type get_allocator() const noexcept
            {
                return threadSafeVector.get_allocator();
            }

            /*wrapper functions for insert*/
            iterator insert(const_iterator position, const value_type& val)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.insert(position, val);
            }

            iterator insert(const_iterator position, size_type n, const value_type& val)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.insert(position, n, val);
            }

            template <class InputIterator>
            iterator insert(const_iterator position, InputIterator first, InputIterator last)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.insert(position, first, last);
            }

            iterator insert(const_iterator position, value_type&& val)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.insert(position, std::move(val));
            }

            iterator insert(const_iterator position, std::initializer_list<value_type> il)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.insert(position, il.begin(), il.end());
            }

            /*wrapper function for max_size*/
            size_type max_size() const noexcept
            {
                return threadSafeVector.max_size();
            }

            /*wrapper functions for operator =*/
            ThreadSafeVector& operator= (const std::vector<T>& x)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector = x;

                return *this;
            }

            ThreadSafeVector& operator= (std::vector<T>&& x)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector = std::move(x);

                return *this;
            }

            ThreadSafeVector& operator= (std::initializer_list<value_type> il)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.assign(il.begin(), il.end());

                return *this; //is this safe to do?
            }

            /*wrapper functions for operator []*/
            reference operator[] (size_type n)
            {
                return threadSafeVector[n];
            }

            const_reference operator[] (size_type n) const
            {
                return threadSafeVector[n];
            }

            /*wrapper function for pop_back()*/
            void pop_back()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.pop_back();
            }

            /*wrapper functions for push_back*/
            void push_back(const value_type& val)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.push_back(val);
            }

            void push_back(value_type&& val)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.push_back(std::move(val));
            }

            /*wrapper functions for rbegin()*/
            reverse_iterator rbegin() noexcept
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.rbegin();
            }

            const_reverse_iterator rbegin() const noexcept
            {
                return threadSafeVector.rbegin();
            }

            /*wrapper functions for rend()*/
            reverse_iterator rend() noexcept
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                return threadSafeVector.rend();
            }

            const_reverse_iterator rend() const noexcept
            {
                return threadSafeVector.rend();
            }

            /*wrapper function for reserve()*/
            void reserve(size_type n)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.reserve(n);
            }

            /*wrapper functions for resize()*/      
            void resize(size_type n)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.resize(n);
            }

            void resize(size_type n, const value_type& val)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.resize(n, val);
            }

            void shrink_to_fit()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.shrink_to_fit();
            }

            /*wrapper function for size()*/
            size_type size() const noexcept
            {
                return threadSafeVector.size();
            }

            /*wrapper function for swap()*/
            void swap(std::vector<T>& x)
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                threadSafeVector.swap(x);
            }

            void print()
            {
                std::lock_guard<std::mutex> vectorLockGuard(vectorMutex);

                for(const auto & element : threadSafeVector)
                {
                    std::cout << element << std::endl;
                }

                std::cout << std::endl;
            }
    };
    #endif
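
For reference, here is a minimal usage sketch of the class above (the file name, thread count, and element counts are assumptions made for the example); each individual push_back() call is protected by the internal mutex, though sequences of calls are not:

    // Minimal usage sketch; assumes the header above is saved as ThreadSafeVector.h.
    #include <iostream>
    #include <thread>
    #include "ThreadSafeVector.h"

    int main()
    {
        ThreadSafeVector<int> v;

        // Each push_back() locks the internal mutex, so concurrent pushes don't race.
        std::thread producer1([&v] { for (int i = 0;   i < 100; ++i) v.push_back(i); });
        std::thread producer2([&v] { for (int i = 100; i < 200; ++i) v.push_back(i); });

        producer1.join();
        producer2.join();

        std::cout << v.size() << " elements" << std::endl; // 200
        return 0;
    }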

I have based my implementation on the description of the vector class and its member functions found on cplusplus.com, with help from the STL's implementation of the vector class. Now, a few questions about the code I've written so far:

  1. When returning iterators, I wasn't sure whether I should lock the mutex and then return the iterator, because the validity of the iterator(s) might change when multiple threads access the vector, so I went ahead and locked the mutex in all of the non-const iterator functions. Is this the right approach?

  2. It is my understanding that one should not return pointers from functions when dealing with multithreaded code, since this provides a "backdoor" (for lack of a better term) for user code to perform some potentially questionable activity. So, for the implementation of the assignment operator, is there another way to write these functions so that they don't return *this?

  3. I opted to use local instances of lock_guard everywhere instead of having one as a private data member. Would it be better to have one as a private data member instead?

Many thanks in advance :-)

Synchronization between threads is a global problem; it can't be solved locally. So the right answer is to unask the question.

This approach is simply the wrong level of granularity. Preventing conflicting simultaneous calls to member functions does not make a container thread-safe in any useful sense; users still have to ensure that sequences of operations are thread-safe, and that means holding a lock for as long as a sequence of operations is going on.

For a simple example, consider

    void swap(std::vector<int>& v, int idx0, int idx1) {
        int temp = v[idx0];
        v[idx0] = v[idx1];
        v[idx1] = temp;
    }

Now, what happens if, after copying v[idx1] into v[idx0], some other thread comes along and erases all the data in the vector? The assignment to v[idx1] writes into random memory. That's not a good thing. To prevent this, user code must ensure that throughout the execution of swap no other thread is messing with the vector. The implementation of vector can't do that.
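
To make that concrete, here is a minimal sketch of what the caller-side locking has to look like; the free-standing v_mutex, and the convention that every thread takes it before touching v, are assumptions made for the example:

    #include <cstddef>
    #include <mutex>
    #include <utility>
    #include <vector>

    std::vector<int> v;  //shared vector (illustrative)
    std::mutex v_mutex;  //assumed to guard every access to v, in every thread

    void swap_elements(std::size_t idx0, std::size_t idx1)
    {
        //the whole read-modify-write sequence runs under one lock, so no other
        //thread can erase elements or reallocate between the reads and the writes
        std::lock_guard<std::mutex> lock(v_mutex);
        std::swap(v[idx0], v[idx1]);
    }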

If you want a consistent implementation without too much complication, you need at least two new member functions, for example disable_write() and enable_write().

Code using your vector can then choose whether it wants a consistent state during some reading code block or not. It calls disable_write() at the beginning of the read block and calls enable_write() when it finishes the block where a consistent vector state was needed.

Add another "write_lock" mutex, and use it in each member function that makes changes.

While the write lock is active, read operations should still be freely permitted, so there is no need for the "write_lock" mutex in member functions that only read data; the mutex you already have is enough.

Also, you could add another class for write-locking your vector, some equivalent of lock_guard, and even make disable_write() and enable_write() private and friends of that class, to prevent an accidental write lock that is never released.
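
A rough sketch of that scheme is below. The names disable_write() and enable_write() come from this answer; the class name, the second mutex, and the RAII ReadBlock guard are assumptions made for illustration. Note that with a plain std::mutex only one read block can be active at a time (a std::shared_mutex would lift that restriction):

    #include <mutex>
    #include <vector>

    /*rough sketch of the disable_write()/enable_write() idea; the class name,
    the second mutex, and the ReadBlock guard are assumptions for illustration*/
    template <class T>
    class WriteLockableVector
    {
        private:
            std::vector<T> data;
            mutable std::mutex elementMutex; //the per-call mutex the original class already has
            std::mutex writeMutex;           //the extra "write_lock" mutex

            void disable_write() { writeMutex.lock(); }
            void enable_write()  { writeMutex.unlock(); }

            template <class U> friend class ReadBlock; //only the guard may pause writers

        public:
            //mutating members wait until no read block is active
            void push_back(const T& val)
            {
                std::lock_guard<std::mutex> writeLock(writeMutex);
                std::lock_guard<std::mutex> lock(elementMutex);
                data.push_back(val);
            }

            //read-only members need only the existing mutex
            typename std::vector<T>::size_type size() const
            {
                std::lock_guard<std::mutex> lock(elementMutex);
                return data.size();
            }
    };

    /*lock_guard-style RAII guard so enable_write() can never be forgotten*/
    template <class T>
    class ReadBlock
    {
        private:
            WriteLockableVector<T>& v;

        public:
            explicit ReadBlock(WriteLockableVector<T>& vec) : v(vec) { v.disable_write(); }
            ~ReadBlock() { v.enable_write(); }

            ReadBlock(const ReadBlock&) = delete;
            ReadBlock& operator=(const ReadBlock&) = delete;
    };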

Vectors are inherently not thread-safe. That can seem brutal, but the simplest way to synchronize a vector, for me, is to encapsulate it in another object that is thread-safe...

    struct guardedvector {
        std::mutex guard;
        std::vector<int> myvector; //element type chosen just for the example
    };

    guardedvector v;
    v.guard.lock();
    //... use v.myvector
    v.guard.unlock();

In Win32 you can also use a slim reader/writer lock (SRW). These are light and fast locks that can work with multiple readers and one writer. In that case, you replace the guard with an SRW lock. The caller code has the responsibility to call the correct locking methods. Finally, you can template the struct and inline it.
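
One note on the snippet above: manual lock()/unlock() calls are not exception-safe. Here is a small sketch of the same idea using std::lock_guard over the exposed mutex (the struct is repeated, with int as an assumed element type, so the example is self-contained):

    #include <mutex>
    #include <vector>

    //same struct as above, repeated here so the sketch is self-contained
    struct guardedvector {
        std::mutex guard;
        std::vector<int> myvector;
    };

    void append_and_read(guardedvector& v)
    {
        std::lock_guard<std::mutex> lock(v.guard); //released automatically, even on exceptions
        v.myvector.push_back(42);
        if (!v.myvector.empty()) {
            int last = v.myvector.back();
            (void)last; //use the value here, still under the lock
        }
    }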

After discussing with myself, the best way to make a vector thread-safe is to inherit from vector and add a guard.

    template <class t> class guardedvector : public std::vector<t> {
        std::mutex guard;

    public:
        void lock()   { guard.lock(); }
        void unlock() { guard.unlock(); }
    };

    guardedvector<int> v; //int just as an example element type

    v.lock();
    //...iterate, add, remove...
    v.unlock();

You can use an SRW lock in place of the mutex. But if you have a lot of threads reading the vector, that can introduce latency for the writer thread. In certain cases you can build a custom threading model using just atomic operations with a priority model for the writer, but this requires deep knowledge of how the processor works. For simplicity, you can change the priority level of the writer thread.
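
SRW locks are Win32-specific; std::shared_mutex (C++17) is a portable equivalent. Below is a minimal sketch of the inherited-vector idea above using it, so many readers can proceed concurrently while a writer gets exclusive access; the class and member names here are invented for the example:

    #include <shared_mutex>
    #include <vector>

    //sketch of the inherited guardedvector with a readers/writer lock;
    //std::shared_mutex is used as a portable stand-in for a Win32 SRW lock
    template <class t>
    class shared_guardedvector : public std::vector<t>
    {
        private:
            mutable std::shared_mutex guard;

        public:
            std::shared_lock<std::shared_mutex> read_lock() const //many readers at once
            {
                return std::shared_lock<std::shared_mutex>(guard);
            }

            std::unique_lock<std::shared_mutex> write_lock() //one writer, no readers
            {
                return std::unique_lock<std::shared_mutex>(guard);
            }
    };

    void example(shared_guardedvector<int>& v)
    {
        {
            auto lock = v.write_lock(); //exclusive: safe to modify
            v.push_back(1);
        }
        {
            auto lock = v.read_lock(); //shared: other readers may run concurrently
            if (!v.empty())
            {
                int first = v.front();
                (void)first;
            }
        }
    }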
