
How to convert a very large pyspark dataframe into pandas?

I want to convert a very large pyspark dataframe into pandas in order to be able to split it into train/test pandas frames for sklearn's random forest regressor. I am working in Databricks with Spark 3.1.2.

The dataset has a shape of (782019, 4242).

When running the following command, I run out of memory according to the stack trace.

dataset_name = "dataset_path"
dataset = spark.read.table(dataset_name)
dataset_pd = dataset.toPandas()

Spark UI executor summary

22/01/31 08:06:32 WARN TaskSetManager: Lost task 2.2 in stage 16.0 (TID 85) (X.X.X.X executor 3): java.lang.OutOfMemoryError
    at java.io.ByteArrayOutputStream.hugeCapacity(ByteArrayOutputStream.java:123)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:117)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at org.apache.spark.util.ByteBufferOutputStream.write(ByteBufferOutputStream.scala:41)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$19(Executor.scala:859)
    at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:859)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at com.databricks.spark.util.ExecutorFrameProfiler$.record(ExecutorFrameProfiler.scala:110)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:672)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

According to the replies here, this happens because of the toPandas implementation: it tries to write the dataset into a ByteArrayOutputStream, which only works for data smaller than 2GB.

Is there another way to convert my dataframe into pandas?

Edit 1: adding the RF regressor training for further context

from hyperopt import SparkTrials, fmin, hp, tpe
from sklearn.ensemble import RandomForestRegressor
import mlflow
import sklearn.metrics

search_space = {'n_estimators': hp.uniform('n_estimators', 100, 1000),
                'max_depth': hp.uniform('max_depth', 10, 100),
                'min_samples_leaf': hp.uniform('min_samples_leaf', 1, 50),
                'min_samples_split': hp.uniform('min_samples_split', 2, 50)}
 
def train_model(params):
  # Enable autologging on each worker
  mlflow.autolog()
  with mlflow.start_run(nested=True):
    est=int(params['n_estimators'])
    md=int(params['max_depth'])
    msl=int(params['min_samples_leaf'])
    mss=int(params['min_samples_split'])
    
    
    model_hp = RandomForestRegressor(n_estimators=est,max_depth=md,min_samples_leaf=msl,min_samples_split=mss)
    model_hp.fit(X_train, y_train)
    pred=model_hp.predict(X_test)
    
    mae = sklearn.metrics.mean_absolute_error(y_test, pred)
    mse = sklearn.metrics.mean_squared_error(y_test, pred)
    rmse = sklearn.metrics.mean_squared_error(y_test, pred, squared=False)
    
    mlflow.log_metric('mae', mae)
    mlflow.log_metric('mse', mse)
    mlflow.log_metric('rmse', rmse)
    return rmse
  

spark_trials = SparkTrials(
  parallelism=8
)

 
with mlflow.start_run(run_name='rf') as run:
  best_params = fmin(
    fn=train_model, 
    space=search_space, 
    algo=tpe.suggest, 
    max_evals=128,
    trials=spark_trials)

Have you tried dask?

import dask.dataframe as dd
data = dd.read_csv(...)  # Dask dataframe
df = data.compute()  # this is a pandas dataframe
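
The read_csv call above is only an illustration; since the dataset in the question is a Spark table rather than a CSV file, one variation (assuming the table's underlying files are stored as Parquet, with a purely hypothetical path) would be to point Dask at those files directly:

import dask.dataframe as dd

# Read the Parquet files backing the table (path is hypothetical)
data = dd.read_parquet("/path/to/dataset_parquet")

# compute() materialises the result as a single pandas dataframe,
# so it still has to fit in local memory
df = data.compute()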

Parallel Dask XGBoost model training with xgb.dask.train(). By default, XGBoost trains your model sequentially. This is fine for basic projects, but as the size of your dataset and/or your XGBoost model grows, you'll want to consider running XGBoost in distributed mode with Dask to speed up computations and reduce the burden on your local machine.

You'll know you've hit your machine's memory limit when you get the following error message:

xgboost.core.XGBoostError: out of memory

XGBoost comes with a native Dask integration that makes it possible to train models in parallel. Running an XGBoost model with the distributed Dask backend only requires two changes to your regular XGBoost code:

substitute dtrain = xgb.DMatrix(X_train, y_train)
with dtrain = xgb.dask.DaskDMatrix(client, X_train, y_train)

substitute xgb.train(params, dtrain, ...)
with xgb.dask.train(client, params, dtrain, ...)

Take a look at the notebook if you want to know how to create the data_local subset.

from dask_ml.model_selection import train_test_split
 
# Create the train-test split
X, y = data_local.iloc[:, :-1], data_local["target"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=True, random_state=2
)
Now you’re all set to train your XGBoost model.

Let’s use the default parameters for this example.

import xgboost as xgb
# Create the XGBoost DMatrices
dtrain = xgb.dask.DaskDMatrix(client, X_train, y_train)
dtest = xgb.dask.DaskDMatrix(client, X_test, y_test)
# default parameters for this example
params = {}
# train the model
output = xgb.dask.train(
    client, params, dtrain, num_boost_round=4,
    evals=[(dtrain, 'train')]
)
You can then use your trained model together with your testing split to make predictions.

# make predictions
y_pred = xgb.dask.predict(client, output, dtest)

Credit: https://coiled.io/blog/dask-xgboost-python-example/

Converting such a DataFrame to pandas will fail, because this function requires all the data to be loaded into the driver's memory, which will run out at some point. Probably the best approach here is to switch to PySpark's RandomForestRegressor implementation.
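
A minimal sketch of that approach, reusing the dataset DataFrame from the question and assuming the label column is called "target" (the column name, split ratios and numTrees below are assumptions, not taken from the question):

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor

# Assemble the feature columns into a single vector column
# ("target" as the label column name is a hypothetical assumption)
feature_cols = [c for c in dataset.columns if c != "target"]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
assembled = assembler.transform(dataset)

# Split into train/test on the Spark side instead of in pandas
train_df, test_df = assembled.randomSplit([0.8, 0.2], seed=42)

rf = RandomForestRegressor(featuresCol="features", labelCol="target", numTrees=100)
model = rf.fit(train_df)
predictions = model.transform(test_df)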

If you only need to see how the data behaves in general, you can try breaking the DataFrame into smaller chunks, converting them separately and concatenating the results. But this is really not recommended, because it is not a good long-term solution. The first problem is finding the optimal number of chunks to break the data into. Second, depending on the task and usage, the driver can still run out of memory quickly and your job will crash again. This is especially true if your dataset is going to grow in the future.
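
As a rough illustration of that chunked conversion (a sketch only: the number of splits is arbitrary, and the concatenated result still has to fit in driver memory):

import pandas as pd

# Split the Spark DataFrame into smaller pieces, convert each piece to pandas,
# and concatenate on the driver (which remains the memory bottleneck)
n_chunks = 10  # arbitrary; choosing a workable value is the first problem mentioned above
splits = dataset.randomSplit([1.0] * n_chunks, seed=42)
dataset_pd = pd.concat([chunk.toPandas() for chunk in splits], ignore_index=True)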
