Remove duplicates from list of dicts

I presently have the ability to remove duplicates if there is no key in front of the nested dictionary. An example of my list of dicts that works with this function is:

 [{'asndb_prefix': '164.39.xxx.0/17',
  'cidr': '164.39.xxx.0/17',
  'cymru_asn': 'XXX',
  'cymru_country': 'GB',
  'cymru_owner': 'XXX , GB',
  'cymru_prefix': '164.39.xxx.0/17',
  'ips': ['164.39.xxx.xxx'],
  'network_id': '164.39.xxx.xxx/24',},
 {'asndb_prefix': '54.192.xxx.xxx/16',
  'cidr': '54.192.0.0/16',
  'cymru_asn': '16509',
  'cymru_country': 'US',
  'cymru_owner': 'AMAZON-02 - Amazon.com, Inc., US',
  'cymru_prefix': '54.192.144.0/22',
  'ips': ['54.192.xxx.xxx', '54.192.xxx.xxx'],
  'network_id': '54.192.xxx.xxx/24',
  }]

def remove_dict_duplicates(list_of_dicts):
    """
    "" Remove duplicates in dict 
    """
    list_of_dicts = [dict(t) for t in set([tuple(d.items()) for d in list_of_dicts])]
    # remove the {} before and after - not sure why these are placed as 
    # the first and last element 
    return list_of_dicts[1:-1]

However, I would like to be able to remove duplicates based on the key and all values associated within that dictionary. So if there is the same key with different values inside, I would like to not remove it, but if there is a complete copy then remove it.

[{'50.16.xxx.0/24': {'asndb_prefix': '50.16.0.0/16',
   'cidr': '50.16.0.0/14',
   'cymru_asn': 'xxxx',
   'cymru_country': 'US',
   'cymru_owner': 'AMAZON-AES - Amazon.com, Inc., US',
   'cymru_prefix': '50.16.0.0/16',
   'ip': '50.16.221.xxx',
   'network_id': '50.16.xxx.0/24',
   'pyasn_asn': xxxx,
   'whois_asn': 'xxxx'}},
 # This would be removed
 {'50.16.xxx.0/24': {'asndb_prefix': '50.16.0.0/16',
   'cidr': '50.16.0.0/14',
   'cymru_asn': 'xxxxx',
   'cymru_country': 'US',
   'cymru_owner': 'AMAZON-AES - Amazon.com, Inc., US',
   'cymru_prefix': '50.16.0.0/16',
   'ip': '50.16.221.xxx',
   'network_id': '50.16.xxx.0/24',
   'pyasn_asn': xxxx,
   'whois_asn': 'xxxx'}},
 # This would NOT be removed
 {'50.16.xxx.0/24': {'asndb_prefix': '50.999.0.0/16',
   'cidr': '50.999.0.0/14',
   'cymru_asn': 'xxxx',
   'cymru_country': 'US',
   'cymru_owner': 'AMAZON-AES - Amazon.com, Inc., US',
   'cymru_prefix': '50.16.0.0/16',
   'ip': '50.16.221.xxx',
   'network_id': '50.16.xxx.0/24',
   'pyasn_asn': xxxx,
   'whois_asn': 'xxxx'}}]

How do I go about doing this? Thank you.

To remove duplicates from a list of dicts:

list_of_unique_dicts = []
for dict_ in list_of_dicts:
    if dict_ not in list_of_unique_dicts:
        list_of_unique_dicts.append(dict_)
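
The membership test compares dicts with ==, and dict equality compares nested dicts by value, so this handles the nested structure in the question as well. For example, a quick run with a small made-up list:

list_of_dicts = [{'a': 1}, {'b': 2}, {'a': 1}]

list_of_unique_dicts = []
for dict_ in list_of_dicts:
    if dict_ not in list_of_unique_dicts:
        list_of_unique_dicts.append(dict_)

print(list_of_unique_dicts)  # [{'a': 1}, {'b': 2}]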

If the order in the result is not important, you can use a set to remove the duplicates by converting the dicts into frozensets:

def remove_dict_duplicates(list_of_dicts):
    """
    Remove duplicates.
    """
    packed = set((k, frozenset(v.items())) for elem in list_of_dicts
                 for k, v in elem.items())
    return [{k: dict(v)} for k, v in packed]
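
For example, applied to data shaped like the question's (inner dicts shortened here for brevity), the exact copy collapses while the entry with different inner values survives:

sample = [
    {'50.16.xxx.0/24': {'cidr': '50.16.0.0/14', 'asndb_prefix': '50.16.0.0/16'}},
    # exact copy of the first entry -- removed
    {'50.16.xxx.0/24': {'cidr': '50.16.0.0/14', 'asndb_prefix': '50.16.0.0/16'}},
    # same key, different inner values -- kept
    {'50.16.xxx.0/24': {'cidr': '50.999.0.0/14', 'asndb_prefix': '50.999.0.0/16'}},
]

print(remove_dict_duplicates(sample))
# two entries remain; their order is arbitrary because a set is used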

This assumes that all values of the innermost dicts are hashable.
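
If some inner values are not hashable, for example the ips lists in the first snippet of the question, frozenset(v.items()) raises a TypeError. One workaround is to serialize each dict to a canonical JSON string and deduplicate on that. A minimal sketch, assuming the data is JSON-serializable (remove_dict_duplicates_json is a made-up name):

import json

def remove_dict_duplicates_json(list_of_dicts):
    """Deduplicate dicts whose values may be unhashable (e.g. lists)."""
    seen = set()
    result = []
    for dict_ in list_of_dicts:
        # sort_keys=True makes the string independent of key insertion order
        key = json.dumps(dict_, sort_keys=True)
        if key not in seen:
            seen.add(key)
            result.append(dict_)
    return result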

Giving up the order yields potential speedups for large lists. For example, creating a list with 100,000 elements:

inner = {'asndb_prefix': '50.999.0.0/16',
         'cidr': '50.999.0.0/14',
         'cymru_asn': '14618',
         'cymru_country': 'US',
         'cymru_owner': 'AMAZON-AES - Amazon.com, Inc., US',
         'cymru_prefix': '50.16.0.0/16',
         'ip': '50.16.221.xxx',
         'network_id': '50.16.xxx.0/24',
         'pyasn_asn': 14618,
         'whois_asn': '14618'}

large_list = list_of_dicts + [{x: inner} for x in range(int(1e5))]

It takes quite a while checking for duplicates in the result list again and again:

def remove_dupes(list_of_dicts):
    """Source: answer from wim
    """
    list_of_unique_dicts = []
    for dict_ in list_of_dicts:
        if dict_ not in list_of_unique_dicts:
            list_of_unique_dicts.append(dict_)
    return list_of_unique_dicts

%timeit remove_dupes(large_list)
1 loop, best of 3: 2min 55s per loop

My approach, using a set, is considerably faster:

%timeit remove_dict_duplicates(large_list)
1 loop, best of 3: 590 ms per loop
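
If order matters but the input is large, the two ideas can be combined: keep the first occurrence, but track what has been seen in a set so each membership check stays O(1). A sketch under the same assumptions as above (one level of nesting, hashable inner values; remove_dupes_ordered is a made-up name):

def remove_dupes_ordered(list_of_dicts):
    """Keep the first occurrence of each dict, preserving order."""
    seen = set()
    result = []
    for dict_ in list_of_dicts:
        # pack the one-level-nested dict into a hashable key
        key = frozenset((k, frozenset(v.items())) for k, v in dict_.items())
        if key not in seen:
            seen.add(key)
            result.append(dict_)
    return result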
