
PySpark: Exception in thread “dag-scheduler-event-loop” java.lang.OutOfMemoryError: Java heap space

I am trying to convert categorical values to numerical ones using StringIndexer, OneHotEncoder and VectorAssembler so that I can apply K-means clustering in PySpark. Here's my code:

from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.ml.clustering import KMeans

# index each categorical column, then one-hot encode the indexed columns
indexers = [
    StringIndexer(inputCol=c, outputCol="{0}_indexed".format(c))
    for c in columnList
]

encoders = [
    OneHotEncoder(dropLast=False, inputCol=indexer.getOutputCol(),
                  outputCol="{0}_encoded".format(indexer.getOutputCol()))
    for indexer in indexers
]

# assemble all encoded columns into a single feature vector
assembler = VectorAssembler(
    inputCols=[encoder.getOutputCol() for encoder in encoders],
    outputCol="features")

pipeline = Pipeline(stages=indexers + encoders + [assembler])
model = pipeline.fit(df)
transformed = model.transform(df)

kmeans = KMeans().setK(2).setFeaturesCol("features").setPredictionCol("prediction")
kMeansPredictionModel = kmeans.fit(transformed)

predictionResult = kMeansPredictionModel.transform(transformed)
predictionResult.show(5)

I am getting Exception in thread "dag-scheduler-event-loop" java.lang.OutOfMemoryError: Java heap space. How can I allocate more heap space from within the code, or is there a better approach? Is it sensible to simply allocate more space? Can I restrict my program to the number of threads and the amount of heap space that are actually available?
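
Since the question asks how to allocate more heap from the code, here is a minimal sketch of requesting larger driver and executor heaps when the SparkSession is built. The 4g/2g sizes and the app name are placeholder assumptions to tune for your machine, and spark.driver.memory generally only takes effect if it is set before the driver JVM starts, so passing it on the command line (for example spark-submit --driver-memory 4g script.py) or setting it in conf/spark-defaults.conf is often the more reliable route.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("kmeans-categorical")          # placeholder name, not from the post
    .config("spark.driver.memory", "4g")    # driver heap, where the DAG scheduler event loop runs
    .config("spark.executor.memory", "2g")  # heap for each executor
    .getOrCreate()
)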

I ran into the same problem. Increasing the number of processes the user is allowed to run helped. Run, for example:

ulimit -u 4096
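
A note on the workaround above, assuming a typical Linux setup: running ulimit -u with no argument prints the current per-user process limit, and the raised value applies only to the current shell session and the processes it starts. To make it permanent, the limit would usually be set via an nproc entry in /etc/security/limits.conf (or an equivalent mechanism, depending on the distribution).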
