
How to convert a very large pyspark dataframe into pandas?

I want to convert a very large pyspark dataframe into pandas in order to be able to split it into train/test pandas frames for sklearn's random forest regressor. I am working in Databricks with Spark 3.1.2.

The shape of the dataset is (782019, 4242).

When running the following command, I run out of memory according to the stack trace.

dataset_name = "dataset_path"
dataset = spark.read.table(dataset_name)
dataset_pd = dataset.toPandas()

Spark UI executor summary (screenshot); the corresponding stack trace:

22/01/31 08:06:32 WARN TaskSetManager: Lost task 2.2 in stage 16.0 (TID 85) (X.X.X.X executor 3): java.lang.OutOfMemoryError
    at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at org.apache.spark.util.ByteBufferOutputStream.write(ByteBufferOutputStream.scala:41)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$19(Executor.scala:859)
    at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:859)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:672)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

According to the reply here, this is because of the toPandas implementation: it tries to write the dataset into a single ByteArrayOutputStream, which only works for data below 2GB in size.

Is there another way to convert my dataframe into pandas?

Edit 1: adding the RF regressor training for further context

import mlflow
import sklearn.metrics
from sklearn.ensemble import RandomForestRegressor
from hyperopt import fmin, hp, tpe, SparkTrials

search_space={'n_estimators':hp.uniform('n_estimators',100,1000),
              'max_depth':hp.uniform('max_depth',10,100),
              'min_samples_leaf':hp.uniform('min_samples_leaf',1,50),
              'min_samples_split':hp.uniform('min_samples_split',2,50)}
 
def train_model(params):
  # Enable autologging on each worker
  mlflow.autolog()
  with mlflow.start_run(nested=True):
    est=int(params['n_estimators'])
    md=int(params['max_depth'])
    msl=int(params['min_samples_leaf'])
    mss=int(params['min_samples_split'])
    
    
    model_hp = RandomForestRegressor(n_estimators=est,max_depth=md,min_samples_leaf=msl,min_samples_split=mss)
    model_hp.fit(X_train, y_train)
    pred=model_hp.predict(X_test)
    
    mae=sklearn.metrics.mean_absolute_error(y_test,pred)
    mse=sklearn.metrics.mean_squared_error(y_test,pred)
    rmse=sklearn.metrics.mean_squared_error(y_test,pred, squared=False)
    
    mlflow.log_metric('mae', mae)
    mlflow.log_metric('mse', mse)
    mlflow.log_metric('rmse', rmse)
    return rmse
  

spark_trials = SparkTrials(
  parallelism=8
)

 
with mlflow.start_run(run_name='rf') as run:
  best_params = fmin(
    fn=train_model, 
    space=search_space, 
    algo=tpe.suggest, 
    max_evals=128,
    trials=spark_trials)

Have you tried dask?

import dask.dataframe as dd
data = dd.read_csv(...)   # lazy dask dataframe
df = data.compute()       # materializes a pandas dataframe
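
Since the data in the question lives in a Databricks table rather than a CSV file, a hedged variant of the same idea is to point dask at the Parquet files backing the table (the path below is hypothetical, and a Delta table would need extra handling):

import dask.dataframe as dd

# hypothetical path to the Parquet files backing the table
ddf = dd.read_parquet("/dbfs/path/to/dataset_parquet/")

# keep working lazily with the dask dataframe where possible;
# only call .compute() if the result actually fits into local memory
pdf = ddf.compute()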

Parallel Dask XGBoost model training with xgb.dask.train(): by default, XGBoost trains your model sequentially. This is fine for basic projects, but as the size of your dataset and/or your XGBoost model grows, you will want to consider running XGBoost in distributed mode with Dask to speed up computations and reduce the burden on your local machine.

You will know you have hit your machine's memory limit when you get the following error message:

xgboost.core.XGBoostError: out of memory

XGBoost comes with native Dask integration, which makes it possible to train multiple models in parallel. Running an XGBoost model with the distributed Dask backend only requires two changes to your regular XGBoost code:

substitute dtrain = xgb.DMatrix(X_train, y_train)
with dtrain = xgb.dask.DaskDMatrix(client, X_train, y_train)
substitute xgb.train(params, dtrain, ...)
with xgb.dask.train(client, params, dtrain, ...)

Take a look at the notebook from the blog post credited below if you want to know how to create the data_local subset.

from dask_ml.model_selection import train_test_split
 
# Create the train-test split
X, y = data_local.iloc[:, :-1], data_local["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=2
)
Now you’re all set to train your XGBoost model.

Let’s use the default parameters for this example.

import xgboost as xgb

# `client` is assumed to be an existing dask.distributed Client
# Create the XGBoost DMatrices
dtrain = xgb.dask.DaskDMatrix(client, X_train, y_train)
dtest = xgb.dask.DaskDMatrix(client, X_test, y_test)

# default parameters (an empty dict lets XGBoost use its defaults)
params = {}

# train the model
output = xgb.dask.train(
    client, params, dtrain, num_boost_round=4,
    evals=[(dtrain, 'train')]
)
You can then use your trained model together with your testing split to make predictions.

# make predictions
y_pred = xgb.dask.predict(client, output, dtest)
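
The result of xgb.dask.predict is still a lazy dask collection; a small sketch of materializing it and scoring with plain sklearn, assuming the test labels fit comfortably in local memory:

from sklearn.metrics import mean_squared_error

# pull the lazy dask results down into local numpy/pandas objects
y_pred_local = y_pred.compute()
y_test_local = y_test.compute()

rmse = mean_squared_error(y_test_local, y_pred_local, squared=False)
print(f"RMSE: {rmse:.4f}")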

Credit: https://coiled.io/blog/dask-xgboost-python-example/

Converting such a DataFrame to pandas will fail, because this function requires all the data to be loaded into the driver's memory, which will run out at some point. Probably the best approach here is to switch to PySpark's RandomForestRegressor implementation.
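
A minimal sketch of that approach, assuming the label column is called target (the real column name, the feature types and any preprocessing will differ):

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.evaluation import RegressionEvaluator

dataset = spark.read.table(dataset_name)

# assemble all feature columns into a single vector column
feature_cols = [c for c in dataset.columns if c != "target"]  # "target" is an assumed name
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
assembled = assembler.transform(dataset).select("features", "target")

# train/test split without leaving Spark
train_df, test_df = assembled.randomSplit([0.7, 0.3], seed=42)

rf = RandomForestRegressor(featuresCol="features", labelCol="target", numTrees=100)
model = rf.fit(train_df)

pred = model.transform(test_df)
rmse = RegressionEvaluator(labelCol="target", predictionCol="prediction",
                           metricName="rmse").evaluate(pred)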

If you only need to see how the data behaves in general terms, you could try breaking the DataFrame into smaller chunks, converting them separately and concatenating the results. But this is really not recommended, because it is not a good long-term solution. The first problem is finding the optimal number of chunks to split the data into. Second, depending on the task and the usage pattern, the driver can still run out of memory quite quickly, and then your job will crash again. This is especially true if your dataset is going to grow in the future.
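
For completeness, a rough sketch of what such a chunked conversion could look like (the number of chunks is illustrative, and the concatenated result must still fit into driver memory):

import pandas as pd
from pyspark.sql import functions as F

n_chunks = 20  # illustrative; tune for your driver memory
dataset = spark.read.table(dataset_name)

# tag every row with a chunk id, then pull one chunk at a time to the driver
chunked = dataset.withColumn("_chunk", F.monotonically_increasing_id() % n_chunks)

parts = []
for i in range(n_chunks):
    parts.append(chunked.filter(F.col("_chunk") == i).drop("_chunk").toPandas())

dataset_pd = pd.concat(parts, ignore_index=True)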
