How to split .csv into multiple .csv by columns using R?
I have a huge .CSV file with more than 3000 columns that I need to load into a database. Because the table limit is 1024 columns, I want to split the file into multiple .CSV files of 1024 or fewer columns each.
So far, this is what I have tried with the help of previous questions on this topic:
Python -
import csv
import os

source = 'H:\\Programs\\Exploratory Items\\Outdated\\2018\\csvs\\'
for root, dirs, filenames in os.walk(source):
    for f in filenames:
        fullpath = os.path.join(source, f)
        output_specifications = (
            (fullpath[:-4] + 'pt1.csv', slice(1000)),
            (fullpath[:-4] + 'pt2.csv', slice(1000, 2000)),
            (fullpath[:-4] + 'pt3.csv', slice(2000, 3000)),
            (fullpath[:-4] + 'pt4.csv', slice(3000, 4000)),
            (fullpath[:-4] + 'pt5.csv', slice(4000, 5000)),
        )
        # one writer per output part; 'w' with newline='' replaces the
        # Python 2 'wb' mode
        output_row_writers = [
            (
                csv.writer(open(file_name, 'w', newline=''),
                           quoting=csv.QUOTE_MINIMAL).writerow,
                selector,
            )
            for file_name, selector in output_specifications
        ]
        reader = csv.reader(open(fullpath, newline=''))
        # each input row is sliced and written to every part file
        for row in reader:
            for row_writer, selector in output_row_writers:
                row_writer(row[selector])
The issue with the above Python code is that it takes forever to split and write these files because, as I understand it, it writes row by row. That is not ideal for my case, as I have more than 200 .CSV files with 1000+ rows each.
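One way around the per-row overhead is to let pandas parse and write each column slice in bulk: `usecols` makes it read only the columns needed for one output part. A minimal sketch, assuming pandas is installed; `split_csv_by_columns` and `max_cols` are hypothetical names, not from the original code:

```python
import pandas as pd

def split_csv_by_columns(path, max_cols=1024):
    """Write the columns of one wide CSV into files of at most max_cols each."""
    # read only the header row to count columns without loading the data
    n_cols = len(pd.read_csv(path, nrows=0).columns)
    for part, start in enumerate(range(0, n_cols, max_cols), 1):
        # usecols makes pandas parse just this positional slice of columns
        chunk = pd.read_csv(path,
                            usecols=range(start, min(start + max_cols, n_cols)))
        chunk.to_csv(path[:-4] + 'pt{}.csv'.format(part), index=False)
```

This still reads the input once per part, but each read and write is a single vectorized pass rather than a Python-level loop over rows.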
Trying now -
-cut command (POSIX), but I use Windows, so I will try this on an Ubuntu platform.
Want to try this in R:
I have code that converts all my SPSS files to .csv, and it works efficiently, so I want to extend it at this stage so that it can also split my file by columns into multiple .csvs.
setwd("H:\\Programs\\2018")
library(foreign)
# '\\.sav$' anchors the extension; a bare '.sav' would treat the dot as a
# regex wildcard and match more than intended
files <- list.files(path = '.', pattern = '\\.sav$')
for (f in files) {  # iterate over the SPSS files
  data <- read.spss(f, to.data.frame = TRUE, use.value.labels = FALSE)
  write.csv(data, paste0(strsplit(f, split = '.', fixed = TRUE)[[1]][1], '.csv'))
}
Thank you
References - Python code ref
Late is better than never :) Here is a solution based on the code-generating library convtools:
from convtools import conversion as c
from convtools.contrib.tables import Table

columns_per_file = 1000

for filename in filenames:
    # reading columns
    columns = Table.from_csv(filename, header=False).columns

    # slicing
    for part_number, i in enumerate(
        range(0, len(columns), columns_per_file), 1
    ):
        Table.from_csv(filename, header=False).take(
            # taking only needed ones
            *(col for col in columns[i : i + columns_per_file])
        ).into_csv(
            # streaming to the part
            "{}.pt{}.csv".format(filename.replace(".csv", ""), part_number),
            include_header=False,
        )
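One detail: `filenames` is not defined in the snippet above; it is assumed to be a list of CSV paths. It can be built with the standard library, for example (the folder path here is the one from the question and may need adjusting):

```python
import glob
import os

source = 'H:\\Programs\\2018'  # folder from the question; adjust as needed
# collect every .csv directly inside the folder for the loop above
filenames = glob.glob(os.path.join(source, '*.csv'))
```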