
RegEx Tokenizer to split a text into words, digits and punctuation marks

What I want to do is to split a text into its ultimate elements.

For example:

from nltk.tokenize import *
txt = "A sample sentences with digits like 2.119,99 or 2,99 are awesome."
regexp_tokenize(txt, pattern='(?:(?!\d)\w)+|\S+')
['A', 'sample', 'sentences', 'with', 'digits', 'like', '2.119,99', 'or', '2,99', 'are', 'awesome', '.']

You can see it works fine. My problem is: what happens if a digit is at the end of the text?

txt = "Today it's 07.May 2011. Or 2.999."
regexp_tokenize(txt, pattern='(?:(?!\d)\w)+|\S+') 
['Today', 'it', "'s", '07.May', '2011.', 'Or', '2.999.'] 

The result should be: ['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']

What do I have to do to get the result above?
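To make the failure mode concrete, here is a sketch using plain `re.findall` (which is what `regexp_tokenize` uses under the hood) with the question's pattern. Because '2011.' starts with a digit, the word branch `(?:(?!\d)\w)+` cannot match at all, and the fallback `\S+` greedily swallows the trailing period together with the number:

```python
import re

# The question's pattern: runs of word characters not starting with a digit,
# otherwise any run of non-whitespace.
pattern = r'(?:(?!\d)\w)+|\S+'

txt = "Today it's 07.May 2011. Or 2.999."
print(re.findall(pattern, txt))
# → ['Today', 'it', "'s", '07.May', '2011.', 'Or', '2.999.']
```

The negative lookahead `(?!\d)` blocks the first branch for any token beginning with a digit, so every number-like token is handed to `\S+`, punctuation and all.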

I created a pattern that keeps periods and commas only when they occur inside words or numbers. Hope this helps:

txt = "Today it's 07.May 2011. Or 2.999."
regexp_tokenize(txt, pattern=r'\w+([.,]\w+)*|\S+')
['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']
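The same result can be checked with plain `re.findall`; the only change below is making the group non-capturing, since a bare `re.findall` with a capturing group would return just the group contents (NLTK's tokenizer appears to handle that conversion itself):

```python
import re

# \w+ starts the token; (?:[.,]\w+)* extends it only while a '.' or ','
# is immediately followed by more word characters. A sentence-final '.'
# has no word character after it, so it falls through to \S+ as its own token.
pattern = r'\w+(?:[.,]\w+)*|\S+'

txt = "Today it's 07.May 2011. Or 2.999."
print(re.findall(pattern, txt))
# → ['Today', 'it', "'s", '07.May', '2011', '.', 'Or', '2.999', '.']
```

The key design point is that the separator class `[.,]` is only consumed when sandwiched between word characters, so internal punctuation (as in '2.999' or '07.May') stays attached while trailing punctuation splits off.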
