OK, so... I have an array of hashes like this:
[
{ :id => 0, :text => "someText" },
{ :id => 1, :text => "anotherText" },
{ :id => 2, :text => "someText" }
]
What I want is to filter the hashes, removing entries with duplicate :text
values, so that the result is:
[
{ :id => 0, :text => "someText" },
{ :id => 1, :text => "anotherText" }
]
How can I do that?
PS Of course, I can find a way and do it. What I'm asking for is the best (& fastest) Ruby-friendly way, given that I'm not such a Ruby guru. ;-)
Try Array#uniq with a block:
arr.uniq{|h| h[:text]} # Returns a new array by removing duplicate values
=> [{:id=>0, :text=>"someText"}, {:id=>1, :text=>"anotherText"}]
# OR
arr.uniq!{|h| h[:text]} # Removes duplicate elements from self.
=> [{:id=>0, :text=>"someText"}, {:id=>1, :text=>"anotherText"}]
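One caveat worth knowing before chaining: uniq! mutates the receiver and returns nil when there was nothing to remove, unlike uniq, which always returns an array. A minimal sketch:

```ruby
arr = [
  { :id => 0, :text => "someText" },
  { :id => 1, :text => "anotherText" },
  { :id => 2, :text => "someText" }
]

first_pass  = arr.uniq! { |h| h[:text] }  # removes the duplicate, returns arr
second_pass = arr.uniq! { |h| h[:text] }  # nothing left to remove, returns nil
```

So prefer uniq over uniq! anywhere the return value feeds into further method calls.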
There are many different approaches to achieve your goal, but since you are looking for the fastest way, here is a benchmark of both uniq and group_by. This is just a sample; you can benchmark other approaches the same way and pick the one that fits your requirements.
require 'benchmark'
arr = [{ :id => 0, :text => "someText" }, { :id => 1, :text => "anotherText" }, { :id => 2, :text => "someText" }]
Benchmark.bm do |x|
x.report("group_by:") { arr.group_by { |e| e[:text] }.values.map(&:first) }
x.report("uniq:") { arr.uniq{|h| h[:text]} }
end
# output
user system total real
group_by: 0.000000 0.000000 0.000000 ( 0.000039)
uniq: 0.000000 0.000000 0.000000 ( 0.000012)
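Timings on a 3-element array are mostly noise. For a more reliable comparison, run Benchmark.bmbm (which adds a rehearsal pass) on a larger, synthetic array; the sizes below (10,000 entries, 100 distinct texts) are arbitrary assumptions for illustration:

```ruby
require 'benchmark'

# Synthetic data: 10_000 hashes, only 100 distinct :text values.
arr = Array.new(10_000) { |i| { :id => i, :text => "text#{i % 100}" } }

Benchmark.bmbm do |x|
  x.report("group_by:") { arr.group_by { |e| e[:text] }.values.map(&:first) }
  x.report("uniq:")     { arr.uniq { |h| h[:text] } }
end
```

Both expressions keep the first hash seen for each :text, so they produce the same result; only the speed differs.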
While uniq
is a perfect solution for the problem as stated, there is a more flexible approach where you can specify an additional condition on which element to pick out of multiple duplicates:
# ⇓⇓⇓⇓⇓⇓⇓
arr.group_by { |e| e[:text] }.values.map(&:first)
One might put any condition there to select, for example, only elements with an even :id
, or whatever.
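For instance, a sketch of that even-:id idea: from each group of hashes sharing the same :text, prefer an entry with an even :id, falling back to the group's first entry when none qualifies (the fallback rule is my assumption, not part of the question):

```ruby
arr = [
  { :id => 0, :text => "someText" },
  { :id => 1, :text => "anotherText" },
  { :id => 2, :text => "someText" }
]

result = arr.group_by { |e| e[:text] }
            .values
            .map { |group| group.find { |e| e[:id].even? } || group.first }
# => [{:id=>0, :text=>"someText"}, {:id=>1, :text=>"anotherText"}]
```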