
How do I compare characters with combining diacritic marks ɔ̃, ɛ̃ and ɑ̃ to unaccented ones in python (imported from a utf-8 encoded text file)?

Summary: I want to compare ɔ̃, ɛ̃ and ɑ̃ to ɔ, ɛ and a, which are all different, but my text file has ɔ̃, ɛ̃ and ɑ̃ written as ɔ~, ɛ~ and a~.


I wrote a script which moves along the characters in two words simultaneously, comparing them to find the pair of characters which differs. The words are of equal length (except for the diacritic issue, which introduces an extra character), and represent the IPA phonetic pronunciations of two French words only one phoneme apart.

The ultimate goal is to filter a list of anki cards so that only certain pairs of phonemes are included, because other pairs are too easy to recognize. Each pair of words represents an anki note.

For this I need to differentiate the nasal sounds ɔ̃, ɛ̃ and ɑ̃ from other sounds, as they are only really confusable with themselves.

As written, the code treats accented characters as the character plus ~, and so as two characters. Thus if the only difference in a word is between a final accented and non-accented character, the script finds no difference on the last letter; it then finds one word shorter than the other (the other still has the ~ left) and throws an error trying to compare one more character. This is a whole 'problem' by itself, but if I can get the accented characters to read as single units the words will then have the same lengths, and it will disappear.

I do not want to replace the accented characters with non-accented ones, as some people do for comparisons, because they are different sounds.

I have tried 'normalizing' the unicode to a 'combined' form, e.g. unicodedata.normalize('NFKC', line), but it didn't change anything.
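A quick check (an illustrative snippet, not from the original post) shows why normalization cannot help here: Unicode defines no precomposed code points for these vowel-plus-tilde combinations, so NFC/NFKC leaves them as two code points, whereas a letter like ã does have a precomposed form:

```python
import unicodedata

# ɛ followed by U+0303 COMBINING TILDE stays two code points:
# there is no precomposed "Latin small letter open e with tilde".
s = "\u025B\u0303"  # ɛ̃
assert len(s) == 2
assert len(unicodedata.normalize("NFC", s)) == 2
assert len(unicodedata.normalize("NFKC", s)) == 2

# Compare with "a" + combining tilde, where a precomposed character exists:
t = "a\u0303"
assert unicodedata.normalize("NFC", t) == "\u00E3"  # precomposed ã
assert len(unicodedata.normalize("NFC", t)) == 1
```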


Here is some output, including the line at which it throws the error; the printouts show the words and the character of each word that the code is comparing; the number is the index of that character within the word. The final letters are therefore what the script 'thinks' the two characters are, and it sees the same thing for ɛ̃ and ɛ. It also chooses the wrong pair of letters when it reports the differences, and it's important that the pair is right because I compare against a master list of allowable pairs.

0 alyʁ alɔʁ a a # this first word is done well
1 alyʁ alɔʁ l l
2 alyʁ alɔʁ y ɔ # it doesn't continue to compare the ʁ because it found the difference
...
0 ɑ̃bisjø ɑ̃bisjɔ̃ ɑ ɑ
1 ɑ̃bisjø ɑ̃bisjɔ̃ ̃ ̃  # the tildes are compared / treated  separately
2 ɑ̃bisjø ɑ̃bisjɔ̃ b b
3 ɑ̃bisjø ɑ̃bisjɔ̃ i i
4 ɑ̃bisjø ɑ̃bisjɔ̃ s s
5 ɑ̃bisjø ɑ̃bisjɔ̃ j j
6 ɑ̃bisjø ɑ̃bisjɔ̃ ø ɔ # luckily that wasn't where the difference was, this is
...
0 osi ɛ̃si o ɛ # here it should report (o, ɛ̃), not (o, ɛ)
...
0 bɛ̃ bɔ̃ b b
1 bɛ̃ bɔ̃ ɛ ɔ # an error of this type
...
0 bo ba b b
1 bo ba o a # this is working correctly 
...
0 bjɛ bjɛ̃ b b
1 bjɛ bjɛ̃ j j
2 bjɛ bjɛ̃ ɛ ɛ # AND here's the money, it thinks these are the same letter, but it has also run out of characters to compare from the first word, so it throws the error below
Traceback (most recent call last):

  File "C:\Users\tchak\OneDrive\Desktop\French.py", line 42, in <module>
    letter1 = line[0][index]

IndexError: string index out of range

Here is the code:

import unicodedata

def lens(word):
    return len(word)

# open file, and new file to write to
input_file = "./phonetics_input.txt"
output_file = "./phonetics_output.txt"
set1 = ["e", "ɛ", "œ", "ø", "ə"]
set2 = ["ø", "o", "œ", "ɔ", "ə"]
set3 = ["ə", "i", "y"]
set4 = ["u", "y", "ə"]
set5 = ["ɑ̃", "ɔ̃", "ɛ̃", "ə"]
set6 = ["a", "ə"]
vowelsets = [set1, set2, set3, set4, set5, set6]
with open(input_file, encoding="utf8") as ipf, open(output_file, "w", encoding="utf8") as opf:
    # for line in file; 
    vowelpairs= []
    acceptedvowelpairs = []
    input_lines = ipf.readlines()
    print(len(input_lines))
    for line in input_lines:
        #find word ipa transcripts
        line = unicodedata.normalize('NFKC', line)  # result must be assigned, though it still has no effect for these characters
        line = line.split("/")
        line.sort(key = lens)
        line = line[0:2] # the shortest two strings after splitting are the ipa words
        index = 0
        letter1 = line[0][index]
        letter2 = line[1][index]
        print(index, line[0], line[1], letter1, letter2)
            
        linelen = max(len(line[0]), len(line[1]))
        while letter1 == letter2:
            index += 1
            letter1 = line[0][index] # throws the error here, technically, after printing the last characters and incrementing the index one more
            letter2 = line[1][index]
            print(index, line[0], line[1], letter1, letter2)
            
        vowelpairs.append((letter1, letter2))   
        
    for i in vowelpairs:
        for vowelset in vowelsets:
            if set(i).issubset(vowelset):
                acceptedvowelpairs.append(i)
    print(len(vowelpairs))
    print(len(acceptedvowelpairs))

Unicode normalization does not help for the particular character combinations described, because an excerpt from the Unicode database UnicodeData.txt using the simple regex "Latin.*Letter.*with tilde$" gives ÃÑÕãñõĨĩŨũṼṽẼẽỸỹ (no Latin letters Open O, Open E or Alpha). So you need to iterate through both compared strings separately, as follows (most of your code above is omitted for a Minimal, Reproducible Example):

import unicodedata

def lens(word):
    return len(word)

input_lines = ['alyʁ/alɔʁ', 'ɑ̃bisjø/ɑ̃bisjɔ̃ ', 'osi/ɛ̃si', 'bɛ̃ /bɔ̃ ', 'bo/ba', 'bjɛ/bjɛ̃ ']
print(len(input_lines))
for line in input_lines:
    print('')
    #find word ipa transcripts
    line = unicodedata.normalize('NFKC', line.rstrip('\n'))
    line = line.split("/")
    line.sort(key = lens)
    word1, word2 = line[0:2] # the shortest two strings after splitting are the ipa words
    index = i1 = i2 = 0
    while i1 < len(word1) and i2 < len(word2):
        letter1 = word1[i1]
        i1 += 1
        if i1 < len(word1) and unicodedata.category(word1[i1]) == 'Mn':
            letter1 += word1[i1]
            i1 += 1
        letter2 = word2[i2]
        i2 += 1
        if i2 < len(word2) and unicodedata.category(word2[i2]) == 'Mn':
            letter2 += word2[i2]
            i2 += 1
        same = chr(0xA0) if letter1 == letter2 else '#' 
        print(index, same, word1, word2, letter1, letter2)
        index += 1
        #if same != chr(0xA0):
        #    break

Output:

6

0   alyʁ alɔʁ a a
1   alyʁ alɔʁ l l
2 # alyʁ alɔʁ y ɔ
3   alyʁ alɔʁ ʁ ʁ

0   ɑ̃bisjø ɑ̃bisjɔ̃  ɑ̃ ɑ̃
1   ɑ̃bisjø ɑ̃bisjɔ̃  b b
2   ɑ̃bisjø ɑ̃bisjɔ̃  i i
3   ɑ̃bisjø ɑ̃bisjɔ̃  s s
4   ɑ̃bisjø ɑ̃bisjɔ̃  j j
5 # ɑ̃bisjø ɑ̃bisjɔ̃  ø ɔ̃

0 # osi ɛ̃si o ɛ̃
1   osi ɛ̃si s s
2   osi ɛ̃si i i

0   bɛ̃  bɔ̃  b b
1 # bɛ̃  bɔ̃  ɛ̃ ɔ̃
2   bɛ̃  bɔ̃

0   bo ba b b
1 # bo ba o a

0   bjɛ bjɛ̃  b b
1   bjɛ bjɛ̃  j j
2 # bjɛ bjɛ̃  ɛ ɛ̃

Note: a diacritic is tested here as Unicode category Mn; you can test against another condition instead (e.g. from the following list):

  • Mn Nonspacing_Mark: a nonspacing combining mark (zero advance width)
  • Mc Spacing_Mark: a spacing combining mark (positive advance width)
  • Me Enclosing_Mark: an enclosing combining mark
  • M Mark: Mn | Mc | Me
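The category check can be verified directly with `unicodedata.category` (a small illustrative snippet, not part of the answer's code):

```python
import unicodedata

# U+0303 COMBINING TILDE is a nonspacing mark (Mn)
assert unicodedata.category("\u0303") == "Mn"
# Base letters such as ɛ and b are lowercase letters (Ll)
assert unicodedata.category("\u025B") == "Ll"  # ɛ
assert unicodedata.category("b") == "Ll"

# Accepting any mark (Mn, Mc or Me) just means checking the
# first character of the category string:
def is_mark(ch: str) -> bool:
    return unicodedata.category(ch).startswith("M")

assert is_mark("\u0303")
assert not is_mark("\u025B")
```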

I am in the process of solving this by just doing a find and replace on these characters before processing, and a reverse find and replace when I'm done.
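That workaround can be sketched as follows (the placeholder characters are hypothetical; any code points that cannot occur in the input would do). Each two-code-point nasal vowel is mapped to a single private-use character before comparing, and the mapping is reversed afterwards:

```python
# Hypothetical sketch of the find-and-replace workaround: map each
# two-code-point nasal vowel to a single private-use placeholder
# before comparing, and reverse the mapping when done.
NASALS = {
    "\u0254\u0303": "\uE000",  # ɔ̃ -> placeholder 1
    "\u025B\u0303": "\uE001",  # ɛ̃ -> placeholder 2
    "\u0251\u0303": "\uE002",  # ɑ̃ -> placeholder 3
}

def pack(word: str) -> str:
    """Replace each nasal-vowel sequence with its single placeholder."""
    for seq, mark in NASALS.items():
        word = word.replace(seq, mark)
    return word

def unpack(word: str) -> str:
    """Restore the original two-code-point sequences."""
    for seq, mark in NASALS.items():
        word = word.replace(mark, seq)
    return word

w = pack("bj\u025B\u0303")  # "bjɛ̃" now has length 3, matching "bja" etc.
assert len(w) == 3
assert unpack(w) == "bj\u025B\u0303"
```

With both words packed this way, the original index-by-index comparison works unchanged, since every phoneme is exactly one character.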

