
How to read 10 records at a time from a CSV in Python or PySpark?

I have a CSV file with 100,000 rows and I want to read 10 rows at a time, process each row, save it to its respective file, and then sleep for 5 seconds. I'm trying islice but it only reads the first 10 and stops. I want the program to run until EOF. I'm using Jupyter, Python 2 and PySpark, if that's of any help.

from itertools import islice
from time import sleep

with open("per-vehicle-records-2020-01-31.csv") as f:
    while True:
        next_n_lines = list(islice(f, 10))  # next 10 raw lines
        if not next_n_lines:                # empty list at EOF
            break
        else:
            print(next_n_lines)
            sleep(5)

This does not separate each row; it combines 10 raw lines into a single list, as shown below. (See the sketch after the output for splitting each line into its fields.)

['"cosit","year","month","day","hour","minute","second","millisecond","minuteofday","lane","lanename","straddlelane","straddlelanename","class","classname","length","headway","gap","speed","weight","temperature","duration","validitycode","numberofaxles","axleweights","axlespacings"\n', '"000000000997","2020","1","31","1","30","2","0","90","1","Test1","0","","5","HGV_RIG","11.4","2.88","3.24","70.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","3","0","90","2","Test2","0","","2","CAR","5.2","3.17","2.92","71.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","5","0","90","1","Test1","0","","2","CAR","5.1","2.85","2.51","70.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","6","0","90","2","Test2","0","","2","CAR","5.1","3.0","2.94","69.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","9","0","90","1","Test1","0","","5","HGV_RIG","11.5","3.45","3.74","70.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","10","0","90","2","Test2","0","","2","CAR","5.4","3.32","3.43","71.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","13","0","90","2","Test2","0","","2","CAR","5.3","3.19","3.23","71.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","13","0","90","1","Test1","0","","2","CAR","5.2","3.45","3.21","70.0","0.0","0.0","0","0","0","",""\n', '"000000000997","2020","1","31","1","30","16","0","90","1","Test1","0","","5","HGV_RIG","11.0","2.9","3.13","69.0","0.0","0.0","0","0","0","",""\n']

This should work:

import pandas as pd
import time

path_data = 'per-vehicle-records-2020-01-31.csv'

# chunksize=10 makes read_csv return an iterator that yields 10-row DataFrames
reader = pd.read_csv(path_data, chunksize=10)
for df in reader:      # df is the next chunk of 10 rows
    print(df)
    time.sleep(5)

With chunksize=10, read_csv returns an iterator that yields a 10-row DataFrame on each pass of the loop, and the loop sleeps for 5 seconds between chunks until the file is exhausted.
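To also cover the "save each row to its respective file" part of the question, here is one possible sketch. The grouping column classname and the per-class output files are illustrative assumptions, not something the question specifies.

import pandas as pd
import time

path_data = 'per-vehicle-records-2020-01-31.csv'

for chunk in pd.read_csv(path_data, chunksize=10):
    # Hypothetical routing: append each chunk's rows to one file per classname
    # value (CAR.csv, HGV_RIG.csv, ...). The grouping column is an assumption.
    for name, group in chunk.groupby('classname'):
        group.to_csv(name + '.csv', mode='a', header=False, index=False)
    time.sleep(5)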

islice returns an iterator, so to print each row on its own line you need to iterate over the slice after you assign it (keep the list() call so the emptiness check still works at EOF):

from itertools import islice
from time import sleep

with open("per-vehicle-records-2020-01-31.csv") as f:
    while True:
        next_n_lines = list(islice(f, 10))  # materialise so the EOF check works
        if not next_n_lines:
            break
        else:
            for line in next_n_lines:
                print(line)
            sleep(5)

You can read more here: How to read a file N lines at a time in Python?
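As a side note (the question mentions Python 2, so this may not apply to the asker): on Python 3.12+ the standard library's itertools.batched does the N-lines-at-a-time grouping directly.

from itertools import batched  # Python 3.12+
from time import sleep

with open("per-vehicle-records-2020-01-31.csv") as f:
    for batch in batched(f, 10):   # batch is a tuple of up to 10 raw lines
        for line in batch:
            print(line, end="")    # lines already end with '\n'
        sleep(5)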
