Merging a lot of containers with STL Algorithms
I have a lot of lists, vectors, sets ... (whatever you prefer) of pointers, called RealAlgebraicNumberPtr, to a certain class. They are sorted.
I want to merge them, and of course I want to do it fast and efficiently.
What is the best choice? std::merge? Or maybe std::set? I can provide both an < and an == ordering.
Any ideas?
As mentioned, std::merge is OK.
Only for std::list can you profit from the optimization that the std::list::merge member function implements: it splices the list nodes from the source into the target. That way, the source list becomes empty, but resource (re)allocation is avoided.
std::set
You could in fact std::merge into a std::set to get unique values in one go. With generic merge, duplicate values are not filtered, but the result is sorted, so you could apply std::unique to the result. If you expect a lot of duplicates, you might be quicker using a std::set.
std::merge is as efficient as it gets. Which underlying container you use depends on your requirements.
std::vector has the smallest memory overhead of all standard containers, so if your data are large, you should stick with that.
If you use std::vector, you should resize the target vector before merging to avoid reallocations (you should be able to calculate the required size up-front), instead of using a std::back_inserter.