Code Optimization on s3 read csv and ingest back to s3 bucket

from collections import defaultdict
from io import StringIO

import pandas as pd

ddict = defaultdict(set)

# Read the two columns via S3 Select, then load them into a dataframe
file_str = query_csv_s3(s3, BUCKET_NAME, filename, sql_exp, use_header)
df = pd.read_csv(StringIO(file_str))
fdf = df.drop_duplicates(subset='cleverTapId', keep='first')
fdf.dropna(inplace=True)
col_one_list = fdf['identity'].tolist()
col_two_list = fdf['cleverTapId'].tolist()
for k, v in zip(col_one_list, col_two_list):
    ddict[k].add(v)

# Classify each identity by length and cardinality, writing one row at a time
for imkey in ddict:
    im_length = len(str(imkey))
    if im_length == 9:
        if len(ddict[imkey]) == 1:
            for value in ddict[imkey]:
                tdict = {imkey: value}
            write_to_csv(FILE_NAME, tdict)
        else:
            ctlist = list(ddict[imkey])
            snp_dict = {imkey: '|'.join(ctlist)}
            write_to_csv(SNAP_FILE_NAME, snp_dict)
    elif im_length > 0:
        if len(ddict[imkey]) == 1:
            for value in ddict[imkey]:
                fdict = {imkey: value}
            write_to_csv(FRAUD_FILE_NAME, fdict)
        else:
            pass
            # mult_ct = list(ddict[imkey])
            # mydict = {imkey: ','.join(mult_ct)}
            # write_to_csv(MY_FILENAME, mydict)
    else:
        pass

Here is write_to_csv:

import csv
import os

def write_to_csv(filename, mdict):
    file_exists = os.path.isfile(filename)
    # Note: the file is opened and closed on every single call
    with open(filename, 'a', newline='') as csvfile:
        headers = ['IM No', 'CT ID']
        writer = csv.DictWriter(
            csvfile,
            delimiter=',',
            lineterminator='\n',
            fieldnames=headers
        )
        if not file_exists:
            writer.writeheader()
        for key in mdict:
            writer.writerow({'IM No': key, 'CT ID': mdict[key]})

I'm reading a CSV file containing 2 columns using S3 Select.

I'm generating a 1 IM : 1 CTID file, plus one-to-many and many-to-many files, and uploading them back to an S3 bucket.

How can I optimize this further? It's currently taking 18 hours to process a 530 MB file read from S3 and upload the results back.

This is essentially a guess, because I can't run your code. The way you write data to your CSV files is extremely inefficient.

I/O operations to SSDs or disks are among the more expensive operations in IT. Right now you open a file for each line you want to append, write that one line, and close the file again. That means for a 530 MB input file you're probably doing millions of these expensive operations.
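To get a feel for the difference, here is a small self-contained benchmark (the file names and the 100,000-row count are arbitrary, not from your code) comparing per-row appends against one buffered write:

import csv
import time

rows = [{"IM No": str(i), "CT ID": str(i)} for i in range(100_000)]
headers = ["IM No", "CT ID"]

# Pattern 1: open and close the file for every single row
start = time.perf_counter()
for row in rows:
    with open("per_row.csv", "a", newline="") as f:
        csv.DictWriter(f, fieldnames=headers).writerow(row)
print("per-row appends:", time.perf_counter() - start)

# Pattern 2: open the file once and write all buffered rows in one go
start = time.perf_counter()
with open("buffered.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=headers)
    writer.writerows(rows)
print("single buffered write:", time.perf_counter() - start)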

If you check the Performance tab in Task Manager, you'll probably see very high disk usage.

It's much more efficient to buffer a few of these (or all if RAM is big enough) in memory and flush them to disk at the end.

Roughly like this:

import csv

FRAUD_FILE_CONTENTS = []

# Computation stuff: instead of calling write_to_csv() per row,
# buffer the row in memory
FRAUD_FILE_CONTENTS.append({"IM No": imkey, "CT ID": value})

# After the loop, open the file once and write everything in one go
with open(FRAUD_FILE_NAME, "w", newline="") as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=["IM No", "CT ID"])
    writer.writeheader()
    writer.writerows(FRAUD_FILE_CONTENTS)
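Applied to the classification loop in your question, that could look roughly like the sketch below. It is untested and assumes the same ddict, file-name constants, and 'IM No'/'CT ID' headers as your original code; each output file is opened and written exactly once:

import csv

buffers = {FILE_NAME: [], SNAP_FILE_NAME: [], FRAUD_FILE_NAME: []}

for imkey, ctids in ddict.items():
    im_length = len(str(imkey))
    if im_length == 9:
        if len(ctids) == 1:
            buffers[FILE_NAME].append({"IM No": imkey, "CT ID": next(iter(ctids))})
        else:
            buffers[SNAP_FILE_NAME].append({"IM No": imkey, "CT ID": "|".join(ctids)})
    elif im_length > 0 and len(ctids) == 1:
        buffers[FRAUD_FILE_NAME].append({"IM No": imkey, "CT ID": next(iter(ctids))})

# One open/write/close per output file instead of one per row
for fname, rows in buffers.items():
    with open(fname, "w", newline="") as csvfile:
        writer = csv.DictWriter(csvfile, fieldnames=["IM No", "CT ID"])
        writer.writeheader()
        writer.writerows(rows)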
