
Write columns of different dataframes/np.arrays to external files - Python

I am stuck on something that should be very easy to do. I have a pandas data frame where each column (10000 in total) represents my x-variable. I have another pandas data frame that is my y-variable, and it consists of a single column. I would like to create external files so that file0 contains [y, x[0]], file1 contains [y, x[1]], and so on. At first I thought of putting everything together in a single file by concatenating the variables:
new = pd.concat([time['#Time'], lc], axis=1)
new.to_csv('simulated_lc.csv', sep=' ', index=False)

but with 10000 columns the resulting data file is not very practical to use afterwards.

I also tried another approach: instead of putting my variables inside a dataframe, I defined them as arrays. So I have the x-variable x[i,j], where each row i is the dataset that I want to write to file i together with the y-variable, which is a one-dimensional array:

for i in range(0, 10000):
    fname = 'lc' + str(i) + '.txt'
    dataset = [x[i], y]
    np.savetxt(fname, dataset)

The only problem is that when I open a file, the data are not written as two separate columns, like this:

0 1
2 3
3 4
...

How can I solve it? Thank you.
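One direct way to fix the np.savetxt approach above (a minimal sketch, assuming x is an array of shape (10000, n) and y is a one-dimensional array of length n): np.savetxt writes each element of the list [x[i], y] as a row, so stacking the two arrays column-wise first produces the two-column layout shown above.

    import numpy as np

    # np.savetxt writes each element of the list as one row, so the arrays
    # must be stacked column-wise to get a two-column file
    for i in range(x.shape[0]):
        fname = 'lc' + str(i) + '.txt'
        np.savetxt(fname, np.column_stack([y, x[i]]))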

How about this:

# join the single y column onto the x columns (suffixes avoid name clashes)
z = x.join(y, lsuffix='L', rsuffix='R')
for i in range(0, 10000):
    fname = 'lc' + str(i) + '.csv'
    # write y (the last column of z) together with the i-th x column
    z.iloc[:, [-1, i]].to_csv(fname, index=False)

Simply use pd.concat in a loop over x's columns, using the double-bracket slicer [[...]]:

for i, col in enumerate(x.columns):
    fname = 'lc' + str(i) + '.txt'

    # [[col]] keeps a DataFrame (not a Series), so the column name is preserved
    dataset = pd.concat([y, x[[col]]], axis=1)
    dataset.to_csv(fname, sep=' ', index=False)
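
As a quick sanity check of the output format, one of the generated files can be read back (a sketch; it assumes the loop above has run and produced 'lc0.txt'):

    import pandas as pd

    # re-read one generated file; sep=' ' matches the writer above
    check = pd.read_csv('lc0.txt', sep=' ')
    print(check.shape)   # expected: (number of rows, 2) -- y plus one x column
    print(check.head())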
