I have a pandas dataframe with 500k rows, structured like this, where the document column contains strings:
   document_id                                           document
0            0                               Here is our forecast
1            1  Traveling to have a business meeting takes the...
2            2                      test successful. way to go!!!
3            3  Randy, Can you send me a schedule of the salar...
4            4                  Let's shoot for Tuesday at 11:45.
When I de-dupe the dataframe based on the contents of the document column using df.drop_duplicates(subset='document'), I end up with half the number of documents.
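A minimal, runnable sketch of that de-dupe step (using a toy frame in place of my 500k rows):

import pandas as pd

df = pd.DataFrame({'document_id': [0, 1, 2],
                   'document': ['hello', 'world', 'hello']})

# keep only the first occurrence of each unique document string
unique_docs = df.drop_duplicates(subset='document')
print(len(df), len(unique_docs))  # 3 2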
Now that I have my original dataframe and a second dataframe with the unique set of document values, I would like to compare the two to get a list of document_id's that are duplicates.
For example, if the associated document for document_id's 4, 93, and 275 is 'Let's shoot for Tuesday at 11:45.', then how do I get a dataframe with document in one column and a list of the associated duplicate document_id's in another column?
    document_ids                           document
...
4   [4, 93, 275]  Let's shoot for Tuesday at 11:45.
I know that I could use a for loop to compare each document against every other document in the dataframe and save all matches, but I am trying to avoid iterating over 500k lines multiple times. What is the most pythonic way of going about this?
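The for-loop version I'm trying to avoid would look roughly like this sketch (names are illustrative; it does about n²/2 comparisons, which is why it's impractical at 500k rows):

import pandas as pd

# toy stand-in for the 500k-row dataframe
df = pd.DataFrame({'document_id': [0, 1, 2, 3],
                   'document': ['a', 'b', 'a', 'b']})

matches = {}
docs = df['document'].tolist()
ids = df['document_id'].tolist()
for i in range(len(docs)):
    for j in range(i + 1, len(docs)):
        if docs[i] == docs[j]:
            # a set avoids re-adding ids when a document repeats more than twice
            matches.setdefault(docs[i], {ids[i]}).add(ids[j])
print(matches)  # {'a': {0, 2}, 'b': {1, 3}}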
You should be able to get the list of duplicate document_id's directly from your "initial" DataFrame with .duplicated(keep=False). Here's an example:
In [1]: import pandas as pd

In [2]: df = pd.DataFrame({
   ...:     'document_id': range(10),
   ...:     'document': list('abcabcdedb')  # note: 'e' appears only once
   ...: })

In [3]: dupes = df.document.duplicated(keep=False)

In [4]: df.loc[dupes].groupby('document')['document_id'].apply(list).reset_index()
Out[4]:
  document document_id
0        a      [0, 3]
1        b   [1, 4, 9]
2        c      [2, 5]
3        d      [6, 8]
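If you want the list column named document_ids as in your desired output, one way (assuming the df from your question is already loaded) is to tack a rename onto the same chain:

dupes = df['document'].duplicated(keep=False)
result = (
    df.loc[dupes]
      .groupby('document')['document_id']
      .apply(list)
      .reset_index()
      .rename(columns={'document_id': 'document_ids'})
)

This keeps everything vectorized, so it should scale to 500k rows without the pairwise loop.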