
How do I count the number of lines in an FTP file without downloading it locally, using Python?

So I need to be able to read and count the number of lines in a file on an FTP server, without downloading it to my local machine, using Python.

I know the code for connecting to the server:

ftp = ftplib.FTP('example.com')  # Object ftp set as server address
ftp.login('username', 'password')  # Login info
ftp.retrlines('LIST')  # List file directories
ftp.cwd('/parent folder/another folder/file/')  # Change file directory

I also know the basic code for counting the number of lines (if the file were already downloaded/stored locally):

with open('file') as f:
    count = sum(1 for line in f)
    print(count)

I just need to know how to connect these two pieces of code without downloading the file to my local system.

Any help is appreciated. Thanks.
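
One minimal way to glue the two snippets together, assuming ftplib's line-by-line callback retrlines is acceptable, could look like the sketch below; the host, credentials and file path are placeholders:

from ftplib import FTP

ftp = FTP('example.com')            # placeholder host
ftp.login('username', 'password')   # placeholder credentials

count = 0

def count_line(line):
    # retrlines calls this once for every line of the retrieved file,
    # so the whole transfer is counted without saving it anywhere
    global count
    count += 1

ftp.retrlines('RETR /path/to/file.txt', count_line)  # placeholder path
print(count)
ftp.quit()

Note that the data still travels over the network; only the local copy on disk is avoided.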

As far as I know, FTP does not provide any kind of functionality for reading a file's contents without actually downloading it. However, you could try something like the approach in "Is it possible to read FTP files without writing them using Python?" (you have not specified which Python version you are using):

#!/usr/bin/env python
from ftplib import FTP

def countLines(s):
    # retrbinary hands this callback one block of bytes at a time,
    # so it prints a line count per received block
    print(len(s.split(b'\n')))

ftp = FTP('ftp.kernel.org')
ftp.login()
ftp.retrbinary('RETR /pub/README_ABOUT_BZ2_FILES', countLines)

Please take this code just as a reference.
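
Because the callback above fires once per downloaded block, a large file would produce several partial counts rather than one total. One possible variation is to accumulate the newline count across blocks and print a single total at the end, for example (same host and path as above):

from ftplib import FTP

total = 0

def count_newlines(block):
    # accumulate the newline count over every block retrbinary delivers
    global total
    total += block.count(b'\n')

ftp = FTP('ftp.kernel.org')
ftp.login()
ftp.retrbinary('RETR /pub/README_ABOUT_BZ2_FILES', count_newlines)
ftp.quit()
print(total)

This counts newline characters, so a file whose last line lacks a trailing '\n' would come out one short.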

There is one way: I adapted code I had created for processing CSV files "on the fly". It is implemented with a producer-consumer approach. Applying this pattern lets us assign each task to a thread (or process) and display partial results for huge remote files. You can adapt it to FTP requests.

The download stream is stored in a queue and consumed "on the fly", so no extra HDD space is needed and memory efficiency is good. Tested with Python 3.5.2 (vanilla) on Fedora Core 25 x86_64.

Here is the source for the retrieval (it fetches over HTTP in this example):

from threading import Thread, Event
from queue import Queue, Empty
import urllib.request
import time
import argparse

FILE_URL = 'http://cdiac.ornl.gov/ftp/ndp030/CSV-FILES/nation.1751_2010.csv'


def download_task(url,chunk_queue,event):

    CHUNK = 1*1024
    response = urllib.request.urlopen(url)
    event.clear()

    print('%% - Starting Download  - %%')
    print('%% - ------------------ - %%')
    '''VT100 control codes.'''
    CURSOR_UP_ONE = '\x1b[1A'
    ERASE_LINE = '\x1b[2K'
    while True:
        chunk = response.read(CHUNK)
        if not chunk:
            print('%% - Download completed - %%')
            event.set()
            break
        chunk_queue.put(chunk)

def count_task(chunk_queue, event):
    part = False
    time.sleep(5) #give some time to producer
    M=0
    contador = 0
    '''VT100 control codes.'''
    CURSOR_UP_ONE = '\x1b[1A'
    ERASE_LINE = '\x1b[2K'
    while True:
        try:
            # By default queue.get() blocks when the queue is empty.
            # Here block=False is passed, so an empty queue raises a
            # queue.Empty exception, which is used to show a partial result of the process.
            chunk = chunk_queue.get(block=False)
            for line in chunk.splitlines(True):
                if line.endswith(b'\n'):
                    if part:  # the previous chunk ended mid-line, so prepend the saved fragment
                        line = linepart + line
                        part = False
                    M += 1
                else:
                # a line without '\n' is the last, partial line of the chunk;
                # it is completed in the next iteration, over the next chunk
                    part = True
                    linepart = line
        except Empty:
            # QUEUE EMPTY 
            print(CURSOR_UP_ONE + ERASE_LINE + CURSOR_UP_ONE)
            print(CURSOR_UP_ONE + ERASE_LINE + CURSOR_UP_ONE)
            print('Downloading records ...')
            if M>0:
                print('Partial result:  Lines: %d ' % M)  # note: M includes the CSV header line
            if (event.is_set()): #'THE END: no elements in queue and download finished (even is set)'
                print(CURSOR_UP_ONE + ERASE_LINE+ CURSOR_UP_ONE)
                print(CURSOR_UP_ONE + ERASE_LINE+ CURSOR_UP_ONE)
                print(CURSOR_UP_ONE + ERASE_LINE+ CURSOR_UP_ONE)
                print('The consumer has waited %s times' % str(contador))
                print('RECORDS = ', M)
                break
            contador += 1
            time.sleep(1) #(give some time for loading more records) 

def main():


    chunk_queue = Queue()
    event = Event()
    args = parse_args()
    url = args.url

    p1 = Thread(target=download_task, args=(url,chunk_queue,event,))
    p1.start()
    p2 = Thread(target=count_task, args=(chunk_queue,event,))
    p2.start()
    p1.join()
    p2.join()

# The user of this module can customize one parameter:
#   + URL where the remote file can be found.

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('-u', '--url', default=FILE_URL,
                        help='remote-csv-file URL')
    return parser.parse_args()


if __name__ == '__main__':
    main()

Usage:

$ python ftp-data.py -u <ftp-file>

Example:

python ftp-data-ol.py -u 'http://cdiac.ornl.gov/ftp/ndp030/CSV-FILES/nation.1751_2010.csv' 
The consumer has waited 0 times
RECORDS =  16327

CSV version on GitHub: https://github.com/AALVAREZG/csv-data-onthefly

