
Unique words dictionary: remove special characters and numbers

I want to build a dictionary of unique words from a book, but unfortunately I have run into a problem.

with open('vechny.txt', encoding='utf-8') as fname:
    text = fname.read()
    # Deduplicate the whitespace-separated tokens.
    lst = list(set(text.split()))
    str1 = ' '.join(lst)
    # Append the unique words to 1000.txt (use a with-block so the file is closed).
    with open('1000.txt', 'a', encoding='utf-8') as out:
        print(str1, file=out)



# Re-read the space-separated word list and write one word per line.
with open('1000.txt', 'r', encoding='utf-8') as in_file:
    lines = in_file.read().split(' ')

with open('file.txt', 'w', encoding='utf-8') as out_file:
    out_file.write('\n'.join(lines))
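For reference, the two passes above can be collapsed into a single helper; this is a minimal sketch (the `dedupe_words` name is my own) that deduplicates the tokens and is then written out one word per line:

```python
def dedupe_words(text):
    """Return the unique whitespace-separated tokens of text, sorted."""
    return sorted(set(text.split()))

# Same overall effect as the two scripts above, in one pass:
# with open('vechny.txt', encoding='utf-8') as f:
#     words = dedupe_words(f.read())
# with open('file.txt', 'w', encoding='utf-8') as f:
#     f.write('\n'.join(words))
```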

This script works well, but I still need to remove special characters (`,`, `.`, `-`, etc.) from the plain text.

For example, for the word `Hay,`, `split()` treats the trailing comma as part of the word, so the duplicate is not removed.

How can I transform this text?

input:
Hay, hello,% lost. 15 čas řad

the output I am searching for:
hay hello lost cas rad

What about this?

import re

str1 = '#@-/abcüšščřžý'
# \b\d*[^\W\d_][^\W_]*\b matches Unicode "words" that contain at least one letter,
# so punctuation and standalone numbers are skipped.
r = re.findall(r'\b\d*[^\W\d_][^\W_]*\b', str1, re.UNICODE)
str2 = ' '.join(r)
print(str2)  # abcüšščřžý
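Applied to the sample text from the question, the same pattern keeps the words and drops the standalone number. A minimal sketch (the `extract_words` wrapper is my own; it does not yet strip diacritics or lowercase):

```python
import re

def extract_words(text):
    # Unicode-aware: each match must contain at least one letter, so '15' is dropped.
    return re.findall(r'\b\d*[^\W\d_][^\W_]*\b', text, re.UNICODE)

extract_words('Hay, hello,% lost. 15 čas řad')
# ['Hay', 'hello', 'lost', 'čas', 'řad']
```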

Try this:

import re

# Replace every run of non-alphanumeric characters with a single space.
print(re.sub('[^A-Za-z0-9]+', ' ', 'Hay, hello,% lost. 15'))  # Hay hello lost 15

Let me know if it is OK!
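One caveat: the ASCII-only class `[^A-Za-z0-9]` also deletes accented letters such as `č` and `ř`. Transliterating first avoids that. A minimal sketch using only the standard library (`unicodedata` instead of the third-party `unidecode`; the `ascii_clean` name is my own):

```python
import re
import unicodedata

def ascii_clean(text):
    # NFKD splits an accented letter into base letter + combining mark,
    # and the ascii/ignore encode then drops the marks (čas -> cas).
    ascii_text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')
    # Replace every remaining run of non-alphanumeric characters with a space.
    return re.sub('[^A-Za-z0-9]+', ' ', ascii_text).strip()

ascii_clean('Hay, hello,% lost. 15 čas řad')  # 'Hay hello lost 15 cas rad'
```

Note that NFKD only handles letters that decompose into base + accent; characters like `ø` or `đ` need `unidecode` for a full transliteration.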

from unidecode import unidecode
import random
import re

# Random number used only to name the output file (do not shadow the random module).
suffix = random.randint(1000, 2000)

# "jmenosouboru" = "file name" (without the .txt extension).
n = input("jmenosouboru:")

with open(n + ".txt", encoding='utf-8') as fname:
    text = fname.read()
    # Keep only Unicode "words" that contain at least one letter.
    r = re.findall(r'\b\d*[^\W\d_][^\W_]*\b', text, re.UNICODE)
    str2 = ' '.join(r)
    # Transliterate accented characters to plain ASCII (čas -> cas).
    uni = unidecode(str2)
    # Deduplicate and put one word per line.
    lst = list(set(uni.split()))
    text1 = '\n'.join(lst)
    # Drop any digits that slipped through.
    text2 = ''.join(filter(lambda x: not x.isdigit(), text1))
    with open(str(suffix) + "-.txt", "a", encoding='utf-8') as out:
        print(text2, file=out)
    print("done")
