

Create a dataframe in Pyspark using random values from a list

I need to convert this code into its PySpark equivalent. I cannot use pandas to create the dataframe.

This is how I create the dataframe using Pandas:

import numpy as np
import pandas as pd

df = pd.DataFrame()
df['Name'] = np.random.choice(["Alex","James","Michael","Peter","Harry"], size=3)
df['ID'] = np.random.randint(1, 10, 3)
df['Fruit'] = np.random.choice(["Apple","Grapes","Orange","Pear","Kiwi"], size=3)

The dataframe should look like this in PySpark:

df

Name   ID  Fruit
Alex   3   Apple
James  6   Grapes
Harry  5   Pear

I have tried the following for one column:

from pyspark.sql.functions import rand

sdf1 = spark.createDataFrame([(k,) for k in ['Alex', 'James', 'Harry']]).orderBy(rand()).limit(6).show()
names = np.random.choice(["Alex","James","Michael","Peter","Harry"], size=3)
ids = np.random.randint(1, 10, 3)
fruits = np.random.choice(["Apple","Grapes","Orange","Pear","Kiwi"], size=3)
columns = ['Name', 'ID', 'Fruit']

dataframe = spark.createDataFrame(zip(names, ids, fruits), columns)

dataframe.show()

You can first create a pandas dataframe and then convert it into a PySpark dataframe. Or you can zip the three random numpy arrays and create the Spark dataframe like this (the elements are cast to plain Python `str`/`int`, because Spark cannot infer a schema from numpy scalar types):

import numpy as np

names = [str(x) for x in np.random.choice(["Alex", "James", "Michael", "Peter", "Harry"], size=3)]
ids = [int(x) for x in np.random.randint(1, 10, 3)]
fruits = [str(x) for x in np.random.choice(["Apple", "Grapes", "Orange", "Pear", "Kiwi"], size=3)]

df = spark.createDataFrame(list(zip(names, ids, fruits)), ["Name", "ID", "Fruit"])

df.show()

#+-------+---+------+
#|   Name| ID| Fruit|
#+-------+---+------+
#|  Peter|  8|  Pear|
#|Michael|  7|  Kiwi|
#|  Harry|  4|Orange|
#+-------+---+------+
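If numpy is not available either, the same rows can be generated with Python's standard-library `random` module and passed straight to `spark.createDataFrame`. This is a minimal sketch: `make_rows` is a hypothetical helper name, and `spark` is assumed to be an existing `SparkSession`.

```python
import random

def make_rows(n=3, seed=None):
    """Build n (Name, ID, Fruit) tuples using only the standard library."""
    rng = random.Random(seed)  # pass a seed for reproducible output
    names = ["Alex", "James", "Michael", "Peter", "Harry"]
    fruits = ["Apple", "Grapes", "Orange", "Pear", "Kiwi"]
    # randint(1, 9) matches np.random.randint(1, 10), which excludes 10
    return [(rng.choice(names), rng.randint(1, 9), rng.choice(fruits))
            for _ in range(n)]

rows = make_rows(3)
# df = spark.createDataFrame(rows, ["Name", "ID", "Fruit"])
# df.show()
```

Because the tuples already hold native Python types, no per-element casting is needed before calling `createDataFrame`.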

