List comprehension with nested for loops on sublists within pandas df columns with function

Summary

I need to run a function (the full code below is reproducible/executable: the dataframe, how to use the function, and the function itself - see below) that takes each element per row in col1 (myllc for row 1) and runs get_top_matches against each element of each sublist in the corresponding row of col2.


What the DF looks like:

parent_org_name_list    children_org_name_sublists
0   [myllc,]    [[myalyk, oleksandr, nychyporovych, pp], [myli...
1   [ydea, srl,]    [[yd, confecco, ltda], [yda], [yda, insaat, sa...
2   [hyonix,]   [[hymax, talk, solutions], [hynix, semiconduct...
3   [mjn, enterprises,] [[mjm, interant, inc], [mjn, enterprises], [sh...
4   [ltd, yuriapharm,]  [[ltd, yuriapharm], [yuriypra, law, offic, pc]]

What the code needs to do for each row:

  • Take the element in col1 ([myllc,] for example) and run the get_top_matches function on [myalyk, oleksandr, nychyporovych, pp], then run it on the next sublist ['myliu', 'srl'], and so on for every sublist in the corresponding row of col2 (see the plain-loop sketch below).
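
For reference, here is that iteration written as plain nested loops - a minimal sketch rather than the final solution, assuming df, col1/col2 and get_top_matches are defined exactly as in the executable snippets further down:

# Sketch: for every row, score each word of col1 against every sublist of col2.
all_rows = []
for words, sublists in zip(df['col1'], df['col2']):
    row_results = []
    for word in words:            # e.g. 'myllc,'
        for sublist in sublists:  # e.g. ['myalyk', 'oleksandr', 'nychyporovych', 'pp']
            row_results.append(get_top_matches(word, sublist))
    all_rows.append(row_results)
df['func_scores'] = all_rows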

What the function does:

  • The function takes two arguments, a string and a list, and compares the string to each element in the list, like this:
get_top_matches('myllc', [
                   'myalyk oleksandr nychyporovych pp'
                  ,'myliu srl'
                  ,'myllc'
                  ,'myloc manag IT ag'])

results: 
[('myllc', 1.0),
 ('myloc manag IT ag', 0.77),
 ('myliu srl', 0.75),
 ('myalyk oleksandr nychyporovych pp', 0.65)]

Here's what I've got so far:

  • I need to create a df column with the results shown below, but it needs to contain each of the words in each sublist with its score, in tuple form. I am terrible at list comprehension; it's so confusing.
df['func_scores'] = [
    [[df.agg(lambda x: get_top_matches(u, v), axis=1) for u in x]
        for v in zip(*y)]
    for x, y in zip(df['col1'], df['col2'])
]

results: #it only grabs the first word of the sublists and runs the function 3 times for those same 3 words...
[[0    [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79), 
...1    [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79), 
...2    [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79), 
...3    [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79), 
...4    [(myllc, 0.97), (myloc, 0.88), (myliu, 0.79), 
...dtype: object]]

That's it. Above this are the question, what I've tried so far, an example of the output, and the function; below is the executable code for the df and the function, so you don't have to recreate anything!


Expectation

These are made-up numbers!

(This example: row 1 has 4 sublists, row 2 has 2 sublists. The function runs each word in column 1 against each word of each sublist in column 2 and puts the results in a sublist in a new column.)

[[['myalyk',.97], ['oleksandr',.54], ['nychyporovych',.3], ['pp',0]], [['myliu',.88], ['srl',.43]], [['myllc',1.0]], [['myloc',1.0], ['manag',.45], ['IT',.1], ['ag',0]]], 
[[['ltd',.34], ['yuriapharm',.76]], [['yuriypra',.65], ['law',.54], ['offic',.45], ['pc',.34]]],
...


Executable code snippets: just run these two:

Dataframe

import pandas as pd

data = {'col1':  [['myllc,'],
                 ['ydea', 'srl,'],
                 ['hyonix,'],
                 ['mjn', 'enterprises,'],
                 ['ltd', 'yuriapharm,']]
        ,
        'col2': [[['myalyk', 'oleksandr', 'nychyporovych', 'pp'],
                  ['myliu', 'srl'],
                  ['myllc'],
                  ['myloc', 'manag', 'IT', 'ag']],
                 [['yd', 'confecco', 'ltda'],
                  ['yda'],
                  ['yda', 'insaat', 'sanayi', 'veticaret', 'as'],
                  ['ydea'],
                  ['ydea', 'srl'],
                  ['ydea', 'srl'],
                  ['ydh'],
                  ['ydh', 'japan', 'inc']],
                 [['hymax', 'talk', 'solutions'],
                  ['hynix', 'semiconductor', 'inc'],
                  ['hyonix'],
                  ['hyonix', 'llc'],
                  ['intercan', 'hyumok'],
                  ['kim', 'hyang', 'soon'],
                  ['sk', 'hynix', 'america'],
                  ['smecla2012022843470sam', 'hyang', 'precis', 'corporation'],
                  ['smecpz2017103044085sung', 'hyung', 'precis', 'CO', 'inc']],
                 [['mjm', 'interant', 'inc'],
                  ['mjn', 'enterprises'],
                  ['shanti', 'town', 'mjini', 'clients']],
                 [['ltd', 'yuriapharm'], ['yuriypra', 'law', 'offic', 'pc']]]
        }

df = pd.DataFrame(data, columns=['col1', 'col2'])
df

Functions:

The function at the bottom, get_top_matches, is the only function I am running directly - but it uses all the other functions. All these functions do is generate a score for how close two strings are to each other (character distances and the like):

# Jaro version
import math
import re

def sort_token_alphabetically(word):
    token = re.split('[,. ]', word)
    sorted_token = sorted(token)
    return ' '.join(sorted_token)

def get_jaro_distance(first, second, winkler=True, winkler_ajustment=True,
                      scaling=0.1, sort_tokens=True):
    """
    :param first: word to calculate distance for
    :param second: word to calculate distance with
    :param winkler: same as winkler_ajustment
    :param winkler_ajustment: add an adjustment factor to the Jaro of the distance
    :param scaling: scaling factor for the Winkler adjustment
    :return: Jaro distance adjusted (or not)
    """
    if sort_tokens:
        first = sort_token_alphabetically(first)
        second = sort_token_alphabetically(second)

    if not first or not second:
        raise JaroDistanceException(
            "Cannot calculate distance from NoneType ({0}, {1})".format(
                first.__class__.__name__,
                second.__class__.__name__))

    jaro = _score(first, second)
    cl = min(len(_get_prefix(first, second)), 4)

    if all([winkler, winkler_ajustment]):  # 0.1 as scaling factor
        return round((jaro + (scaling * cl * (1.0 - jaro))) * 100.0) / 100.0

    return jaro

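# Plain Jaro similarity: mean of the two match ratios and the transposition ratio.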
def _score(first, second):
    shorter, longer = first.lower(), second.lower()

    if len(first) > len(second):
        longer, shorter = shorter, longer

    m1 = _get_matching_characters(shorter, longer)
    m2 = _get_matching_characters(longer, shorter)

    if len(m1) == 0 or len(m2) == 0:
        return 0.0

    return (float(len(m1)) / len(shorter) +
            float(len(m2)) / len(longer) +
            float(len(m1) - _transpositions(m1, m2)) / len(m1)) / 3.0

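# Index of the first differing character (or the shorter length if one string is a prefix of the other).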
def _get_diff_index(first, second):
    if first == second:
        pass

    if not first or not second:
        return 0

    max_len = min(len(first), len(second))
    for i in range(0, max_len):
        if not first[i] == second[i]:
            return i

    return max_len

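# Common prefix of the two strings, used for the Winkler adjustment.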
def _get_prefix(first, second):
    if not first or not second:
        return ""

    index = _get_diff_index(first, second)
    if index == -1:
        return first

    elif index == 0:
        return ""

    else:
        return first[0:index]

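# Characters of first that match a character of second within the Jaro matching window.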
def _get_matching_characters(first, second):
    common = []
    limit = math.floor(min(len(first), len(second)) / 2)

    for i, l in enumerate(first):
        left, right = int(max(0, i - limit)), int(
            min(i + limit + 1, len(second)))
        if l in second[left:right]:
            common.append(l)
            second = second[0:second.index(l)] + '*' + second[
                                                       second.index(l) + 1:]

    return ''.join(common)

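# Half the number of matching characters that are out of order (Jaro transpositions).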
def _transpositions(first, second):
    return math.floor(
        len([(f, s) for f, s in zip(first, second) if not f == s]) / 2.0)

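# Score reference against every string in value_list; returns (string, score) pairs, best first.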
def get_top_matches(reference, value_list, max_results=None):
    scores = []
    if not max_results:
        max_results = len(value_list)
    for val in value_list:
#     for val in value_list.split():
        score_sorted = get_jaro_distance(reference, val)
        score_unsorted = get_jaro_distance(reference, val, sort_tokens=False)
        scores.append((val, max(score_sorted, score_unsorted)))
    scores.sort(key=lambda x: x[1], reverse=True)

    return scores[:max_results]

class JaroDistanceException(Exception):
    def __init__(self, message):
        super(Exception, self).__init__(message)
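
A quick sanity check, assuming the imports and functions above are loaded, is to repeat the example call from the top of the question:

# Should rank 'myllc' first with a score of 1.0, as in the example output above.
print(get_top_matches('myllc', [
    'myalyk oleksandr nychyporovych pp',
    'myliu srl',
    'myllc',
    'myloc manag IT ag']))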

Illustrates two methods

  1. List comprehension
  2. Using DataFrame Apply

Code

# Generate DataFrame
df = pd.DataFrame(data, columns=['col1', 'col2'])

# Clean Data (strip out trailing commas on some words)
df['col1'] = df['col1'].map(lambda lst: [x.rstrip(',') for x in lst])

# 1. List comprehension Technique
# zip provides pairs of col1, col2 rows
result = [[get_top_matches(u, [v]) for u in x for w in y for v in w] for x, y in zip(df['col1'], df['col2'])]

# 2. DataFrame Apply Technique
def func(x, y):
    return [get_top_matches(u, [v]) for u in x for w in y for v in w]

df['func_scores'] = df.apply(lambda row: func(row['col1'], row['col2']), axis = 1)

# Verify two methods are equal
print(df['func_scores'].equals(pd.Series(result)))  # True

print(df['func_scores'].to_string(index=False))

Output

[[(myalyk, 0.76)], [(oleksandr, 0.44)], [(nychyporovych, 0.52)], [(pp, 0.0)], [(myliu, 0.81)], [(srl, 0.51)], [(myllc, 1.0)], [(myloc, 0.91)], [(manag, 0.52)], [(IT, 0.0)], [(ag, 0.0)]]
 [[(yd, 0.87)], [(confecco, 0.46)], [(ltda, 0.67)], [(yda, 0.93)], [(yda, 0.93)], [(insaat, 0.47)], [(sanayi, 0.47)], [(veticaret, 0.57)], [(as, 0.0)], [(ydea, 1.0)], [(ydea, 1.0)], [(srl, 0.0)], [(ydea, 1.0)], [(srl, 0.0)], [(ydh, 0.78)], [(ydh, 0.78)], [(japan, 0.48)], [(inc, 0.0)], [(yd, 0.0)], [(confecco, 0.0)], [(ltda, 0.0)], [(yda, 0.0)], [(yda, 0.0)], [(insaat, 0.0)], [(sanayi, 0.55)], [(veticaret, 0.0)], [(as, 0.61)], [(ydea, 0.0)], [(ydea, 0.0)], [(srl, 1.0)], [(ydea, 0.0)], [(srl, 1.0)], [(ydh, 0.0)], [(ydh, 0.0)], [(japan, 0.0)], [(inc, 0.0)]]
                                                                             
[[(hymax, 0.76)], [(talk, 0.0)], [(solutions, 0.52)], [(hynix, 0.96)], [(semiconductor, 0.47)], [(inc, 0.0)], [(hyonix, 1.0)], [(hyonix, 1.0)], [(llc, 0.0)], [(intercan, 0.43)], [(hyumok, 0.73)], [(kim, 0.0)], [(hyang, 0.76)], [(soon, 0.61)], [(sk, 0.0)], [(hynix, 0.96)], [(america, 0.44)], [(smecla2012022843470sam, 0.0)], [(hyang, 0.76)], [(precis, 0.44)], [(corporation, 0.42)], [(smecpz2017103044085sung, 0.0)], [(hyung, 0.76)], [(precis, 0.44)], [(CO, 0.56)], [(inc, 0.0)]]
                                                                                                                                                                                                                                                     
[[(mjm, 0.82)], [(interant, 0.49)], [(inc, 0.56)], [(mjn, 1.0)], [(enterprises, 0.47)], [(shanti, 0.5)], [(town, 0.53)], [(mjini, 0.89)], [(clients, 0.0)], [(mjm, 0.0)], [(interant, 0.54)], [(inc, 0.47)], [(mjn, 0.47)], [(enterprises, 1.0)], [(shanti, 0.59)], [(town, 0.39)], [(mjini, 0.43)], [(clients, 0.65)]]
                                                                                                                                                                                                                                                                                                                                                                        
[[(ltd, 1.0)], [(yuriapharm, 0.0)], [(yuriypra, 0.0)], [(law, 0.6)], [(offic, 0.0)], [(pc, 0.0)], [(ltd, 0.0)], [(yuriapharm, 1.0)], [(yuriypra, 0.89)], [(law, 0.0)], [(offic, 0.43)], [(pc, 0.0)]]

To Get Scores for func_scores

  1. We get the score using get_top_matches(u, [v])[0][1]
  2. This works because the get_top_matches(...) result has the form [(name, value)]
  3. Rebuild the list by looping over get_top_matches(u, [v])[0][1]

Code

# List comprehension Technique
result = [[[(get_top_matches(u, [v])[0][1]) for v in w] for u in x for w in y] for x, y in zip(df['col1'], df['col2'])]

# DataFrame Apply Technique
def func(x, y):
    return [[(get_top_matches(u, [v])[0][1]) for v in w] for u in x for w in y] 

df['func_scores'] = df.apply(lambda row: func(row['col1'], row['col2']), axis = 1)

# Verify two are equal
print(df['func_scores'].equals(pd.Series(result)))  # True

print(df['func_scores'].to_string(index=False))

# Output
[[0.76, 0.44, 0.52, 0.0], [0.81, 0.51], [1.0], [0.91, 0.52, 0.0, 0.0]]
 [[0.87, 0.46, 0.67], [0.93], [0.93, 0.47, 0.47, 0.57, 0.0], [1.0], [1.0, 0.0], [1.0, 0.0], [0.78], [0.78, 0.48, 0.0], [0.0, 0.0, 0.0], [0.0], [0.0, 0.0, 0.55, 0.0, 0.61], [0.0], [0.0, 1.0], [0.0, 1.0], [0.0], [0.0, 0.0, 0.0]]
                                                          
[[0.76, 0.0, 0.52], [0.96, 0.47, 0.0], [1.0], [1.0, 0.0], [0.43, 0.73], [0.0, 0.76, 0.61], [0.0, 0.96, 0.44], [0.0, 0.76, 0.44, 0.42], [0.0, 0.76, 0.44, 0.56, 0.0]]
                                                                                                           
[[0.82, 0.49, 0.56], [1.0, 0.47], [0.5, 0.53, 0.89, 0.0], [0.0, 0.54, 0.47], [0.47, 1.0], [0.59, 0.39, 0.43, 0.65]]
                                                                                                                                                        
[[1.0, 0.0], [0.0, 0.6, 0.0, 0.0], [0.0, 1.0], [0.89, 0.0, 0.43, 0.0]]
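
If the nested tuple form from the Expectation section is wanted (each word kept next to its score, grouped per sublist), a small variation of the second comprehension keeps the word alongside the value; this is a sketch in the same spirit, and the result_pairs / func_score_pairs names are only placeholders:

# Keep (word, score) pairs, grouped by sublist of col2, for each word of col1.
result_pairs = [
    [[(v, get_top_matches(u, [v])[0][1]) for v in w] for u in x for w in y]
    for x, y in zip(df['col1'], df['col2'])
]
df['func_score_pairs'] = result_pairs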
