Loop in Python does not iterate through every line in a file

I am trying to loop through every line in a text file and perform some actions. Right now I have a text file which contains this:

--- small modified --- #line 1
1,2,3                  #line 2
4,5,6                  #line 3
--- big modified ---   #line 4
7;8;9                  #line 5
10;11;12               #line 6

I am trying to parse lines 2 and 3 into one file, and lines 5 and 6 into another file, but right now only lines 2 and 3 get written to a file, and I don't know why the "elif" branch never runs. I can't solve the logic error and would appreciate it if someone could help me out.

Below is my code:

def convert_json(fileName):
    with open(fileName,'r') as file:
        for line in file:
            if 'modified' and 'small' in line:
                for li in file:
                    fields1 = li.split(',')
                    if len(fields1) >= 3:
                        smallarr.append({
                            "a": fields1[0],
                            "b": fields1[1],
                            "c": fields1[2]
                        })
                        with open('smalljson.txt','w+') as small_file:
                            json.dump(smallarr, small_file)
                    else:
                        pass

            elif 'modified' and 'big' in line:
                for li in file:
                    fields2 = li.split(';')
                    if len(fields2) >= 3:
                        bigarr.append({
                            "w1": fields2[0],
                            "w2": fields2[1],
                            "w3": fields2[2],
                        })
                        with open('big.txt','w+') as big_file:
                            json.dump(bigarr, big_file)
                    else:
                        pass

            else:
                print 'test'

Update: This is my current code. It works, but only for lines 2 and 5; other than a second for-loop, I cannot think of another way to advance through the remaining lines.

def convert_json(fileName):
    with open(fileName,'r') as file:
        for line in file:
            #if 'modified' in line and 'small' in line:
            if 'modified' in line and 'Small' in line:
                fields1 = next(file).split(',')
                if len(fields1) >= 3:
                    smallarr.append({
                        "a": fields1[0],
                        "b": fields1[1],
                        "c": fields1[2]
                    })
                    with open('smalljson.txt','w+') as small_file:
                        json.dump(smallarr, small_file)
                else:
                    pass

            elif 'modified' in line and 'big' in line:
                fields2 = next(file).split(';')
                if len(fields2) >= 3:
                    bigarr.append({
                        "w1": fields2[0],
                        "w2": fields2[1],
                        "w3": fields2[2],
                    })
                    with open('bigwater.txt','w+') as big_file:
                        json.dump(bigarr, big_file)
                else:
                    pass

            else:
                print 'test'

Change

elif 'modified' and 'big' in line:

into

elif 'modified' in line and 'big' in line:
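
The reason the original form misbehaves is operator precedence combined with string truthiness: in binds tighter than and, so 'modified' and 'big' in line parses as 'modified' and ('big' in line), and since 'modified' is a non-empty (always truthy) string, the whole test reduces to just 'big' in line. A quick interactive check (illustrative values) shows this:

line = "--- small modified ---"
print(bool('modified'))                 # True: any non-empty string is truthy
print('modified' and 'small' in line)   # True, but only because 'small' is in line
print('modified' and 'big' in line)     # False; the 'modified' part never matters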

Your parsing logic needs to be changed. Here is what the code could look like; use it as a reference for future improvements.

import json

def file_parser():
    file_section = 0

    smallarr = []
    bigarr = []
    with open('data.txt') as in_file:
        for in_line in in_file:
            in_line = in_line.strip()

            if 'small' in in_line:
                file_section = 1
                continue
            elif 'big' in in_line:
                file_section = 2
                continue

            if file_section == 1:
                fields1 = in_line.split(',')
                if len(fields1) >= 3:
                    smallarr.append({
                        "a": fields1[0],
                        "b": fields1[1],
                        "c": fields1[2]
                    })
            elif file_section == 2:
                fields2 = in_line.split(';')
                if len(fields2) >= 3:
                    bigarr.append({
                        "w1": fields2[0],
                        "w2": fields2[1],
                        "w3": fields2[2],
                    })

    with open('small.txt', 'w+') as small_file:
        json.dump(smallarr, small_file)

    with open('big.txt', 'w+') as big_file:
        json.dump(bigarr, big_file)

Input data:

--- small modified ---
1,2,3
4,5,6
--- big modified ---
7;8;9
10;11;12

small.txt

[{"a": "1", "c": "3", "b": "2"}, {"a": "4", "c": "6", "b": "5"}]

big.txt

[{"w3": "9", "w2": "8", "w1": "7"}, {"w3": "12", "w2": "11", "w1": "10"}]

There are a few issues in your code.

Firstly, you are repeating yourself. The big and small cases don't vary enough to justify the code duplication.

Secondly, while I understand what you're trying to do with next(file), you'd still need to loop that instruction in some way to get the next lines. But wait, you're already doing exactly that with for line in file.

Finally, on each iteration you're reopening the same output file and re-dumping an ever-growing array. This is wasteful I/O. If you're trying to stream from file into bigwater.txt and smalljson.txt without keeping too much in memory, this is the wrong approach anyway, since json.dump can't be used to stream data.
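
As an aside (not part of the original answer): if streaming output really were a requirement, one common workaround is newline-delimited JSON, writing one json.dumps record per line instead of one big array. A minimal sketch, with made-up names and assuming one record per data line:

import json

def stream_records(in_path, out_path, sep, keys):
    # Hypothetical helper: writes one JSON object per line ("JSON Lines")
    # instead of a single array, so nothing accumulates in memory.
    with open(in_path) as src, open(out_path, 'w') as dst:
        for raw in src:
            fields = raw.strip().split(sep)
            if len(fields) >= len(keys):   # header lines and mismatched rows are skipped
                dst.write(json.dumps(dict(zip(keys, fields))) + "\n")

# e.g. stream_records('data.txt', 'small.jsonl', ',', ['a', 'b', 'c'])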

Here's my take at it:

import json

def convert_json(fileName):
    big = []
    small = []
    with open(fileName,'r') as file:
        for line in file:
            line = line.strip()
            if line.startswith("--"):
                if "big" in line:
                    array = big
                    keys = ["w1", "w2", "w3"]
                    sep = ";"
                else:
                    array = small
                    keys = ["a", "b", "c"]
                    sep = ","
                continue

            values = line.split(sep)
            # todo: make sure sizes match
            mapping = dict(zip(keys, values))
            array.append(mapping)

    with open('smalljson.txt','w') as small_file:
        json.dump(small, small_file)
    with open('bigwater.txt','w') as big_file:
        json.dump(big, big_file)
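
For reference (not part of the original answer), calling the rewritten function on the sample data shown earlier would look roughly like this; key order inside each object may vary:

convert_json('data.txt')   # 'data.txt' contains the sample input from above

with open('smalljson.txt') as f:
    print(f.read())   # e.g. [{"a": "1", "b": "2", "c": "3"}, {"a": "4", "b": "5", "c": "6"}]
with open('bigwater.txt') as f:
    print(f.read())   # e.g. [{"w1": "7", "w2": "8", "w3": "9"}, {"w1": "10", "w2": "11", "w3": "12"}]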
