
Tokenize tweet based on Regex

I have the following example text/tweet:

RT @trader $AAPL 2012 is oooopen to ‘Talk’ about patents with GOOG definitely not the treatment #samsung got:-) heh url_that_cannot_be_posted_on_SO

I want to follow the procedure in Table 1 of Li, T., van Dalen, J., & van Rees, P. J. (2017). More than just noise? Examining the information content of stock microblogs on financial markets. Journal of Information Technology. doi:10.1057/s41265-016-0034-2 in order to clean up the tweet.

They clean the tweet in such a way that the end result is:

 {RT|123456} {USER|56789} {TICKER|AAPL} {NUMBER|2012} notooopen nottalk patent {COMPANY|GOOG} notdefinetli treatment {HASH|samsung} {EMOTICON|POS} haha {URL}

I use the following script to tokenize the tweet based on regex:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import re

emoticon_string = r"""
(?:
  [<>]?
  [:;=8]                     # eyes
  [\-o\*\']?                 # optional nose
  [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth      
  |
  [\)\]\(\[dDpP/\:\}\{@\|\\] # mouth
  [\-o\*\']?                 # optional nose
  [:;=8]                     # eyes
  [<>]?
)"""

regex_strings = (
# URL:
r"""http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+"""
,
# Twitter username:
r"""(?:@[\w_]+)"""
,
# Hashtags:
r"""(?:\#+[\w_]+[\w\'_\-]*[\w_]+)"""
,
# Cashtags:
r"""(?:\$+[\w_]+[\w\'_\-]*[\w_]+)"""
,
# Remaining word types:
r"""
(?:[+\-]?\d+[,/.:-]\d+[+\-]?)  # Numbers, including fractions, decimals.
|
(?:[\w_]+)                     # Words without apostrophes or dashes.
|
(?:\.(?:\s*\.){1,})            # Ellipsis dots. 
|
(?:\S)                         # Everything else that isn't whitespace.
"""
)

word_re = re.compile(r"""(%s)""" % "|".join(regex_strings), re.VERBOSE | re.I | re.UNICODE)

emoticon_re = re.compile(regex_strings[1], re.VERBOSE | re.I | re.UNICODE)

######################################################################

class Tokenizer:
    def __init__(self, preserve_case=False):
        self.preserve_case = preserve_case

    def tokenize(self, s):
        try:
            s = str(s)
        except UnicodeDecodeError:  # Python 2 leftover; unreachable on Python 3
            s = str(s).encode('string_escape')
            s = unicode(s)
        # Tokenize:
        words = word_re.findall(s)
        if not self.preserve_case:
            words = map((lambda x: x if emoticon_re.search(x) else x.lower()), words)
        return words

if __name__ == '__main__':
    tok = Tokenizer(preserve_case=False)
    test = ' RT @trader $AAPL 2012 is oooopen to ‘Talk’ about patents with GOOG definitely not the treatment #samsung got:-) heh url_that_cannot_be_posted_on_SO'
    tokenized = tok.tokenize(test)
    print("\n".join(tokenized))

This produces the following output:

rt
@trader
$aapl
2012
is
oooopen 
to
‘
talk
’
about
patents
with
goog
definitely
not
the
treatment
#samsung
got
:-)
heh
url_that_cannot_be_posted_on_SO

How can I adjust this script to get:

rt
{USER|trader}
{CASHTAG|aapl}
{NUMBER|2012}
is
oooopen 
to
‘
talk
’
about
patents
with
goog
definitely
not
the
treatment
{HASHTAG|samsung}
got
{EMOTICON|:-)}
heh
{URL|url_that_cannot_be_posted_on_SO}

Thanks in advance for helping me out big time!

You indeed need to use named capture groups (as mentioned by thebjorn) and use groupdict() to get the name-value pairs upon each match. It does require some post-processing, though (a minimal groupdict() illustration follows the list):

  • All pairs where the value is None must be discarded
  • If self.preserve_case is false, the value can be lowercased right away
  • If the group name is WORD, ELLIPSIS or ELSE, the value is added to words as is
  • If the group name is HASHTAG, CASHTAG, USER or URL, the value is first stripped of leading $, # or @ characters and then added to words as a {<GROUP_NAME>|<VALUE>} item
  • All other matches are added to words as {<GROUP_NAME>|<VALUE>} items.
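
For illustration only, here is a minimal standalone sketch (the pattern and input are made up for the demo) of why the first rule is needed: with alternated named groups, groupdict() returns every group name on every match, and the groups that did not participate map to None:

import re

pat = re.compile(r"(?P<USER>@\w+)|(?P<WORD>\w+)")
for m in pat.finditer("@trader hello"):
    print(m.groupdict())
# {'USER': '@trader', 'WORD': None}
# {'USER': None, 'WORD': 'hello'}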

Note that \w matches the underscore by default, so [\w_] = \w. I also optimized the patterns a bit.
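
A quick standalone check of that claim (illustration only, not part of the answer's code):

import re

# \w already covers [A-Za-z0-9_] (plus Unicode word characters),
# so the underscore inside the original [\w_] classes was redundant.
print(re.findall(r"\w+", "some_word-here"))  # ['some_word', 'here']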

Here is the fixed code snippet:

import re

emoticon_string = r"""
(?P<EMOTICON>
  [<>]?
  [:;=8]                     # eyes
  [-o*']?                    # optional nose
  [][()dDpP/:{}@|\\]         # mouth      
  |
  [][()dDpP/:}{@|\\]         # mouth
  [-o*']?                    # optional nose
  [:;=8]                     # eyes
  [<>]?
)"""

regex_strings = (
# URL:
r"""(?P<URL>https?://(?:[-a-zA-Z0-9_$@.&+!*(),]|%[0-9a-fA-F][0-9a-fA-F])+)"""
,
# Twitter username:
r"""(?P<USER>@\w+)"""
,
# Hashtags:
r"""(?P<HASHTAG>\#+\w+[\w'-]*\w+)"""
,
# Cashtags:
r"""(?P<CASHTAG>\$+\w+[\w'-]*\w+)"""
,
# Remaining word types:
r"""
(?P<NUMBER>[+-]?\d+(?:[,/.:-]\d+[+-]?)?)  # Numbers, including fractions, decimals.
|
(?P<WORD>\w+)                     # Words without apostrophes or dashes.
|
(?P<ELLIPSIS>\.(?:\s*\.)+)            # Ellipsis dots. 
|
(?P<ELSE>\S)                         # Everything else that isn't whitespace.
"""
)

word_re = re.compile(r"""({}|{})""".format(emoticon_string, "|".join(regex_strings)), re.VERBOSE | re.I | re.UNICODE)
#print(word_re.pattern)
# Not used by the new tokenize(), which relies on groupdict() instead:
emoticon_re = re.compile(emoticon_string, re.VERBOSE | re.I | re.UNICODE)

######################################################################

class Tokenizer:
    def __init__(self, preserve_case=False):
        self.preserve_case = preserve_case

    def tokenize(self, s):
        try:
            s = str(s)
        except UnicodeDecodeError:  # Python 2 leftover; unreachable on Python 3
            s = str(s).encode('string_escape')
            s = unicode(s)
        # Tokenize: walk the matches and dispatch on the matched group's name
        words = []
        for x in word_re.finditer(s):
            for key, val in x.groupdict().items():
                if val:
                    if not self.preserve_case:
                        val = val.lower()
                    if key in ['WORD', 'ELLIPSIS', 'ELSE']:
                        words.append(val)
                    elif key in ['HASHTAG', 'CASHTAG', 'USER', 'URL']:  # Add more here if needed
                        # "{{{}|{}}}": doubled braces are literal { } in str.format
                        words.append("{{{}|{}}}".format(key, re.sub(r'^[#@$]+', '', val)))
                    else:
                        words.append("{{{}|{}}}".format(key, val))
        return words

if __name__ == '__main__':
    tok = Tokenizer(preserve_case=False)
    test = ' RT @trader $AAPL 2012 is oooopen to ‘Talk’ about patents with GOOG definitely not the treatment #samsung got:-) heh http://some.site.here.com'
    tokenized = tok.tokenize(test)
    print("\n".join(tokenized))

With test = ' RT @trader $AAPL 2012 is oooopen to ‘Talk’ about patents with GOOG definitely not the treatment #samsung got:-) heh http://some.site.here.com', it outputs:

rt
{USER|trader}
{CASHTAG|aapl}
{NUMBER|2012}
is
oooopen
to
‘
talk
’
about
patents
with
goog
definitely
not
the
treatment
{HASHTAG|samsung}
got
{EMOTICON|:-)}
heh
{URL|http://some.site.here.com}

See the online regex demo.
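
As a possible further step (my own illustration, not part of the answer): the paper's target format also tags the retweet marker as {RT|...}. A hypothetical way to handle it is one more named group routed through the same tagged branch; the payload below is just the matched text, since the exact payload of the paper's {RT|123456} tag is unclear from the question:

import re

# Hypothetical RT group, placed before the generic word alternative so
# 'RT' gets tagged instead of passing through as a plain word.
pattern = re.compile(r"""
(?P<RT>\bRT\b)    # retweet marker
|
(?P<WORD>\w+)     # any other word
""", re.VERBOSE)

tokens = []
for m in pattern.finditer("RT hello"):
    key, val = m.lastgroup, m.group()
    tokens.append("{{{}|{}}}".format(key, val) if key == "RT" else val.lower())

print(tokens)  # ['{RT|RT}', 'hello']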
