
Reading n lines from file (but not all) in Python

How do I read n lines from a file instead of just one when iterating over it? I have a file with a well-defined structure and I would like to do something like this:

for line1, line2, line3 in file:
    do_something(line1)
    do_something_different(line2)
    do_something_else(line3)

but it doesn't work:

ValueError: too many values to unpack

For now I am doing this:

for line in file:
    do_something(line)
    newline = file.readline()
    do_something_else(newline)
    newline = file.readline()
    do_something_different(newline)
... etc.

which sucks because I am writing endless 'newline = file.readline()' lines which clutter the code. Is there any smart way to do this? (I really want to avoid reading the whole file at once because it is huge.)

Basically, your file is an iterator which yields one line at a time. This turns your problem into: how do you yield several items at a time from an iterator? A solution to that is given in this question. Note that the function islice is in the itertools module, so you will have to import it from there.
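For example, a minimal sketch of that approach (the helper name take and the filename data.txt are just illustrative; the do_something* functions are the placeholders from the question):

from itertools import islice

def take(iterator, n):
    # Return the next n items from the iterator as a list (shorter, or empty, at end of file).
    return list(islice(iterator, n))

with open("data.txt") as f:
    while True:
        chunk = take(f, 3)
        if len(chunk) < 3:
            break  # fewer than three lines left; a trailing partial group is dropped here
        line1, line2, line3 = chunk
        do_something(line1)
        do_something_different(line2)
        do_something_else(line3)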

If it is XML, why not just use lxml?

You could use a helper function like this:

def readnlines(f, n):
    # Read the next n lines from f; at end of file readline() returns '' for the missing lines.
    lines = []
    for _ in range(n):
        lines.append(f.readline())
    return lines

Then you can do something like you want:

while True:
    line1, line2, line3 = readnlines(file, 3)
    if not line1:
        break  # readline() returns '' at end of file, so stop here instead of looping forever
    do_stuff(line1)
    do_stuff(line2)
    do_stuff(line3)

That being said, if you are using xml files, you will probably be happier in the long run if you use a real xml parser...

itertools to the rescue:

import itertools

def grouper(n, iterable, fillvalue=None):
    "grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx"
    args = [iter(iterable)] * n
    # itertools.izip_longest in Python 2; itertools.zip_longest in Python 3
    return itertools.zip_longest(*args, fillvalue=fillvalue)


fobj = open(yourfile, "r")
for line1, line2, line3 in grouper(3, fobj):
    pass

for i in file produces a str, so you can't just do for i, j, k in file and read it in batches of three (try a, b, c = 'bar' and a, b, c = 'too many characters' and look at the values of a, b and c to work out why you get the "too many values to unpack").
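To illustrate, a quick sketch you can try in the interactive interpreter:

a, b, c = 'bar'      # a three-character string unpacks into exactly three names
print(a, b, c)       # prints: b a r

a, b, c = 'too many characters'   # raises ValueError: too many values to unpack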

It's not clear entirely what you mean, but if you're doing the same thing for each line and just want to stop at some point, then do it like this:

for line in file_handle:
    do_something(line)
    if some_condition:
        break  # Don't want to read anything else

(Also, don't use file as a variable name, you're shadowing a builtin.)

If you're doing the same thing, why do you need to process multiple lines per iteration?

for line in file is your friend. It is in general much more efficient than reading the file manually, both in terms of I/O performance and memory.

Do you know something about the length of the lines/format of the data? If so, you could read in the first n bytes (say 80*3) with f.read(240).split("\n")[0:3].
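A rough sketch of that idea, assuming every line is at most 80 bytes (an assumption about your data, not something stated in the question):

with open("data.txt") as f:
    chunk = f.read(240)                     # 80 * 3 bytes, enough to cover three lines
    first_three = chunk.split("\n")[0:3]    # keep only the first three complete lines

Note that this leaves the file position in the middle of a line, so it is mostly useful for fixed-width records.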

If you want to be able to use this data over and over again, one approach might be to do this:

lines = []
for line in file_handle:
    lines.append(line)

This will give you a list of the lines, which you can then access by index. Also, when you say a HUGE file, the size is most likely not a problem, because Python can process thousands of lines very quickly.

why can't you just do:

ctr = 0
for line in file:
    if ctr == 0:
        ....
    elif ctr == 1:
        ....
    ctr = ctr + 1

if you find the if/elif construct ugly you could just create a hash table or list of function pointers and then do:

for line in file:
    function_list[ctr]()

or something similar
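For instance, a minimal sketch of that dispatch idea (the handlers list name is illustrative, the do_something* functions are the question's placeholders, and the counter wraps with a modulo so the pattern repeats every three lines):

handlers = [do_something, do_something_different, do_something_else]

ctr = 0
for line in file_handle:
    handlers[ctr](line)              # call the handler for this position in the group
    ctr = (ctr + 1) % len(handlers)  # cycle 0, 1, 2, 0, 1, 2, ...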

It sounds like you are trying to read from disk in parallel... that is really hard to do. All the solutions given to you are realistic and legitimate. You shouldn't let something put you off just because the code "looks ugly". The most important thing is how efficient/effective it is; then, if the code is messy, you can tidy it up, but don't look for a whole new method of doing something just because you don't like how one way of doing it looks in code.

As for running out of memory, you may want to check out pickle .

It's possible to do it with a clever use of the zip function. It's short, but a bit voodoo-ish for my tastes (hard to see how it works). It cuts off any lines at the end that don't fill a group, which may be good or bad depending on what you're doing. If you need the final lines, itertools.zip_longest (izip_longest in Python 2) might do the trick.

zip(*[iter(inputfile)] * 3)
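To see why this works: the list contains the same iterator three times, so each tuple that zip builds pulls three consecutive lines from the file. A usage sketch (the filename and the do_something* functions are just the placeholders from the question):

with open("data.txt") as inputfile:
    for line1, line2, line3 in zip(*[iter(inputfile)] * 3):
        do_something(line1)
        do_something_different(line2)
        do_something_else(line3)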

Doing it more explicitly and flexibly, this is a modification of Mats Ekberg's solution:

def groupsoflines(f, n):
    while True:
        group = []
        for i in range(n):
            try:
                group.append(next(f))
            except StopIteration:
                if group:
                    tofill = n - len(group)
                    yield group + [None] * tofill
                return
        yield group

for line1, line2, line3 in groupsoflines(inputfile, 3):
    ...

NB: if this runs out of lines halfway through a group, it will fill in the gaps with None, so that you can still unpack it. So, if the number of lines in your file might not be a multiple of three, you'll need to check whether line2 and line3 are None.
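For example, a minimal way to guard against an incomplete final group (again using the question's placeholder functions):

for line1, line2, line3 in groupsoflines(inputfile, 3):
    if line3 is None:
        break   # the file ended partway through this group; handle it however you need
    do_something(line1)
    do_something_different(line2)
    do_something_else(line3)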
