
Why does Stanford CoreNLP server split named entities into single tokens?

I'm posting data with this command (some copy-pasta from the Stanford site):

wget --post-data 'Barack Obama was President of the United States of America in 2016' 'localhost:9000/?properties={"annotators": "ner", "outputFormat": "json"}' -O out.json

The response looks like this:

{
    "sentences": [{
        "index": 0,
        "tokens": [{
            "index": 1,
            "word": "Barack",
            "originalText": "Barack",
            "lemma": "Barack",
            "characterOffsetBegin": 0,
            "characterOffsetEnd": 6,
            "pos": "NNP",
            "ner": "PERSON",
            "before": "",
            "after": " "
        }, {
            "index": 2,
            "word": "Obama",
            "originalText": "Obama",
            "lemma": "Obama",
            "characterOffsetBegin": 7,
            "characterOffsetEnd": 12,
            "pos": "NNP",
            "ner": "PERSON",
            "before": " ",
            "after": " "
        }, {
            "index": 3,
            "word": "was",
            "originalText": "was",
            "lemma": "be",
            "characterOffsetBegin": 13,
            "characterOffsetEnd": 16,
            "pos": "VBD",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 4,
            "word": "President",
            "originalText": "President",
            "lemma": "President",
            "characterOffsetBegin": 17,
            "characterOffsetEnd": 26,
            "pos": "NNP",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 5,
            "word": "of",
            "originalText": "of",
            "lemma": "of",
            "characterOffsetBegin": 27,
            "characterOffsetEnd": 29,
            "pos": "IN",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 6,
            "word": "the",
            "originalText": "the",
            "lemma": "the",
            "characterOffsetBegin": 30,
            "characterOffsetEnd": 33,
            "pos": "DT",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 7,
            "word": "United",
            "originalText": "United",
            "lemma": "United",
            "characterOffsetBegin": 34,
            "characterOffsetEnd": 40,
            "pos": "NNP",
            "ner": "LOCATION",
            "before": " ",
            "after": " "
        }, {
            "index": 8,
            "word": "States",
            "originalText": "States",
            "lemma": "States",
            "characterOffsetBegin": 41,
            "characterOffsetEnd": 47,
            "pos": "NNPS",
            "ner": "LOCATION",
            "before": " ",
            "after": " "
        }, {
            "index": 9,
            "word": "of",
            "originalText": "of",
            "lemma": "of",
            "characterOffsetBegin": 48,
            "characterOffsetEnd": 50,
            "pos": "IN",
            "ner": "LOCATION",
            "before": " ",
            "after": " "
        }, {
            "index": 10,
            "word": "America",
            "originalText": "America",
            "lemma": "America",
            "characterOffsetBegin": 51,
            "characterOffsetEnd": 58,
            "pos": "NNP",
            "ner": "LOCATION",
            "before": " ",
            "after": " "
        }, {
            "index": 11,
            "word": "in",
            "originalText": "in",
            "lemma": "in",
            "characterOffsetBegin": 59,
            "characterOffsetEnd": 61,
            "pos": "IN",
            "ner": "O",
            "before": " ",
            "after": " "
        }, {
            "index": 12,
            "word": "2016",
            "originalText": "2016",
            "lemma": "2016",
            "characterOffsetBegin": 62,
            "characterOffsetEnd": 66,
            "pos": "CD",
            "ner": "DATE",
            "normalizedNER": "2016",
            "before": " ",
            "after": "",
            "timex": {
                "tid": "t1",
                "type": "DATE",
                "value": "2016"
            }
        }]
    }]
}

Am I doing something wrong? I have Java client code that recognizes at least Barack Obama and United States of America as complete named entities, but the service seems to treat each token separately. Any ideas?
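For reference, the per-token output above can be merged client-side by grouping consecutive tokens that share a non-`O` NER label. A minimal sketch (token list abbreviated to the two fields used, values taken from the JSON response above):

```python
# Tokens as returned by the server, reduced to the fields we need.
tokens = [
    {"word": "Barack", "ner": "PERSON"},
    {"word": "Obama", "ner": "PERSON"},
    {"word": "was", "ner": "O"},
    {"word": "President", "ner": "O"},
    {"word": "of", "ner": "O"},
    {"word": "the", "ner": "O"},
    {"word": "United", "ner": "LOCATION"},
    {"word": "States", "ner": "LOCATION"},
    {"word": "of", "ner": "LOCATION"},
    {"word": "America", "ner": "LOCATION"},
    {"word": "in", "ner": "O"},
    {"word": "2016", "ner": "DATE"},
]

def merge_entities(tokens):
    """Group consecutive tokens that share a non-'O' NER label."""
    entities = []
    current_words, current_label = [], None
    for tok in tokens:
        label = tok["ner"]
        if label == current_label and label != "O":
            # Same entity continues: extend the current span.
            current_words.append(tok["word"])
        else:
            # Entity boundary: flush the previous span if it was labeled.
            if current_label and current_label != "O":
                entities.append((" ".join(current_words), current_label))
            current_words, current_label = [tok["word"]], label
    if current_label and current_label != "O":
        entities.append((" ".join(current_words), current_label))
    return entities

print(merge_entities(tokens))
# [('Barack Obama', 'PERSON'), ('United States of America', 'LOCATION'), ('2016', 'DATE')]
```

Note this heuristic merges any adjacent tokens with the same label, so two distinct same-type entities that happen to touch would be fused; the annotator-based fix on the server side avoids that.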

You should add the entitymentions annotator to your annotator list.
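With that change, the request might look like the following (a sketch assuming the same local server on port 9000; the annotator list spells out the dependencies of `entitymentions` explicitly):

```shell
wget --post-data 'Barack Obama was President of the United States of America in 2016' \
  'localhost:9000/?properties={"annotators": "tokenize,ssplit,pos,lemma,ner,entitymentions", "outputFormat": "json"}' \
  -O out.json
```

The response should then contain an `entitymentions` array in each sentence alongside `tokens`, with multi-token spans such as "Barack Obama" and "United States of America" reported as single mentions.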

