
cosine-similarity between consecutive pairs using whole articles in JSON file

I want to calculate the cosine similarity between consecutive pairs of articles in a JSON file. I managed to do it so far, but... I just realized that when computing the tf-idf for each article I am not using the terms from all the articles available in the file, only those from each pair. Here is the code I am using, which yields the cosine similarity coefficient for each pair of consecutive articles.

## Loading the packages needed:
import json
import nltk, string
from sklearn.feature_extraction.text import TfidfVectorizer

with open('SDM_2015.json') as f:
    data = [json.loads(line) for line in f]

## Defining our functions to filter the data

# Stemmer for reducing each word to its common root
stemmer = nltk.stem.porter.PorterStemmer()

# Translation map for removing punctuation etc.
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)

## First function that creates the tokens
def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

## Function that, building on the first one, lowercases all words and strips punctuation (using the map defined above)
def normalize(text):
    return stem_tokens(nltk.word_tokenize(text.lower().translate(remove_punctuation_map)))

## Lastly, a vectorizer is created that combines all the previous steps plus stopword removal
vectorizer = TfidfVectorizer(tokenizer=normalize, stop_words='english')

## Pair-by-pair calculation of the cosine similarity

def foo(x, y):
    # Fit the vectorizer on this pair only, then read the off-diagonal
    # entry of the resulting 2x2 similarity matrix
    tfidf = vectorizer.fit_transform([x, y])
    return ((tfidf * tfidf.T).A)[0, 1]

my_funcs = {}
for i in range(len(data) - 1):
    x = data[i]['body']
    y = data[i + 1]['body']
    foo.__name__ = "cosine_sim%d" % i
    my_funcs["cosine_sim%d" % i] = foo
    print(foo(x, y))
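
As a minimal illustration of the issue (with made-up sentences, not my actual data): every call to fit_transform rebuilds the vocabulary and the IDF weights from only the two documents it receives, so each pair is compared in a different vector space.

docs = ["the cat sat on the mat", "the dog ran away", "a bird flew over us"]
pair_vectorizer = TfidfVectorizer()
pair_vectorizer.fit_transform(docs[0:2])
print(sorted(pair_vectorizer.vocabulary_))  # vocabulary built from pair 1 only
pair_vectorizer.fit_transform(docs[1:3])
print(sorted(pair_vectorizer.vocabulary_))  # a different vocabulary for pair 2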

Any idea how to compute the cosine similarity using the full set of terms from all the articles available in the JSON file, rather than only the terms of each pair?

Kind regards,

Andres

I think, based on our discussion above, you need to change the foo function and everything below it. See the code below. Note that I have not actually run it, since I don't have your data and no sample rows were provided.

## Loading the packages needed:
import json
import nltk, string
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

with open('SDM_2015.json') as f:
    data = [json.loads(line) for line in f]

## Defining our functions to filter the data

# Stemmer for reducing each word to its common root
stemmer = nltk.stem.porter.PorterStemmer()

# Translation map for removing punctuation etc.
remove_punctuation_map = dict((ord(char), None) for char in string.punctuation)

## First function that creates the tokens
def stem_tokens(tokens):
    return [stemmer.stem(item) for item in tokens]

## Function that, building on the first one, lowercases all words and strips punctuation (using the map defined above)
def normalize(text):
    return stem_tokens(nltk.word_tokenize(text.lower().translate(remove_punctuation_map)))

## tfidf: fit one vectorizer on the full corpus so every article shares the
## same vocabulary and IDF weights
vectorizer = TfidfVectorizer(tokenizer=normalize, stop_words='english')
tfidf_data = vectorizer.fit_transform([article['body'] for article in data])

# cosine similarities between all pairs of articles
similarity_matrix = cosine_similarity(tfidf_data)
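
To recover the per-pair numbers your original loop printed, the consecutive-pair similarities sit on the first superdiagonal of this matrix (a usage sketch, assuming the fit above succeeded):

for i in range(similarity_matrix.shape[0] - 1):
    # similarity between article i and article i+1, now in a shared term space
    print("cosine_sim%d: %f" % (i, similarity_matrix[i, i + 1]))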

