Question: Is there a quick way to convert a 2D NumPy array to a set of Pandas Series? For example, a (100 x 5) ndarray to 5 series with 100 rows each.
Background: I need to create a pandas dataframe using randomly generated data of different types (float, string, etc.). Currently, for floats I create a numpy matrix, and for strings I create an array of strings. I then combine all of these along axis=1 to form a dataframe, but this does not preserve the datatype of each individual column.
To preserve the datatype, I plan to use pandas series. Since creating multiple series of floats will likely be slower than creating a numpy matrix of floats, I was wondering if there was a way to convert a numpy matrix to a set of series.
This question is different from mine in that it asks about converting a numpy matrix into a single series. I require multiple series.
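For concreteness, the column-by-column conversion the question asks about can be sketched directly (a minimal illustration, not taken from the answers below):

```python
import numpy as np
import pandas as pd

# A (100 x 5) float matrix, as described in the question.
mat = np.random.rand(100, 5)

# Slice each column of the 2D array into its own Series.
series_list = [pd.Series(mat[:, i]) for i in range(mat.shape[1])]

# Each element is a float64 Series with 100 rows.
print(len(series_list), series_list[0].dtype)
```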
You can convert the matrix of each data type directly to a dataframe and then concatenate the resulting dataframes.
import numpy as np
import pandas as pd

float_df = pd.DataFrame(np.random.rand(500).reshape((-1, 5)))
# 0 1 2 3 4
#0 0.561765 0.177957 0.279419 0.332973 0.967186
#1 0.761327 0.323747 0.707742 0.555475 0.680662
#.. ... ... ... ... ...
#98 0.741207 0.061200 0.142316 0.381168 0.591554
#99 0.417697 0.723469 0.730677 0.538261 0.281296
#
#[100 rows x 5 columns]
pd.concat([float_df, int_df, ...], axis=1)
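A runnable version of that approach, with a hypothetical string block standing in for `int_df, ...` (the names `str_df`, `s0`, `s1` are illustrative):

```python
import numpy as np
import pandas as pd

# Build one dataframe per dtype, then concatenate along the columns.
float_df = pd.DataFrame(np.random.rand(100, 5))
str_df = pd.DataFrame(np.random.choice(['a', 'b', 'c'], size=(100, 2)),
                      columns=['s0', 's1'])

combined = pd.concat([float_df, str_df], axis=1)

# Each block keeps its own dtype instead of collapsing to object.
print(combined.dtypes)
```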
Making a dataframe from a dict of arrays:
In [571]: df = pd.DataFrame({'a':['one','two','three'], 'b':np.arange(3), 'c':np.ones(3)})
In [572]: df
Out[572]:
a b c
0 one 0 1.0
1 two 1 1.0
2 three 2 1.0
Note the mixed column dtypes:
In [579]: df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 a 3 non-null object
1 b 3 non-null int64
2 c 3 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 200.0+ bytes
If we ask for a numpy array from that, we get a 2D object-dtype array:
In [580]: df.values
Out[580]:
array([['one', 0, 1.0],
['two', 1, 1.0],
['three', 2, 1.0]], dtype=object)
Recreating a dataframe from that array looks the same, but the column dtypes are different:
In [581]: pd.DataFrame(df.values, columns=['a','b','c'])
Out[581]:
a b c
0 one 0 1.0
1 two 1 1.0
2 three 2 1.0
In [582]: _.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 a 3 non-null object
1 b 3 non-null object
2 c 3 non-null object
dtypes: object(3)
memory usage: 200.0+ bytes
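As a side note (not part of the original answer), if you do end up with an all-object round-trip like this, `DataFrame.infer_objects` can usually recover the numeric dtypes:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': ['one', 'two', 'three'],
                   'b': np.arange(3),
                   'c': np.ones(3)})

# Round-tripping through .values collapses every column to object.
lossy = pd.DataFrame(df.values, columns=['a', 'b', 'c'])

# infer_objects re-examines object columns and restores numeric dtypes.
fixed = lossy.infer_objects()
print(fixed.dtypes)
```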
But a structured array does preserve column dtypes:
In [587]: df.to_records(index=False)
Out[587]:
rec.array([('one', 0, 1.), ('two', 1, 1.), ('three', 2, 1.)],
dtype=[('a', 'O'), ('b', '<i8'), ('c', '<f8')])
In [588]: pd.DataFrame(_)
Out[588]:
a b c
0 one 0 1.0
1 two 1 1.0
2 three 2 1.0
In [589]: _.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 a 3 non-null object
1 b 3 non-null int64
2 c 3 non-null float64
dtypes: float64(1), int64(1), object(1)
memory usage: 200.0+ bytes
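Going the other direction also works: build the structured array with NumPy first and hand it to the `DataFrame` constructor (a sketch; the field names are illustrative):

```python
import numpy as np
import pandas as pd

# Allocate a structured array with one field per intended column.
arr = np.zeros(100, dtype=[('f0', 'f8'), ('f1', 'f8'), ('label', 'U5')])
arr['f0'] = np.random.rand(100)
arr['f1'] = np.random.rand(100)
arr['label'] = 'x'

# Each field becomes a column with its own dtype.
df = pd.DataFrame(arr)
print(df.dtypes)
```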