
PySpark java.lang.IllegalArgumentException while saving OneHotEncoder Pipeline

I am trying to save a OneHotEncoder pipeline using the pipeline.save() method, but I am getting the following error:

An error occurred while calling o3844.save.
: org.apache.spark.SparkException: Job aborted.
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:231)
    at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:188)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
............
Caused by: java.lang.IllegalArgumentException: newLimit > capacity: (230 > 189)
    at java.base/java.nio.Buffer.createLimitException(Buffer.java:372)
    at java.base/java.nio.Buffer.limit(Buffer.java:346)
    at java.base/java.nio.ByteBuffer.limit(ByteBuffer.java:1107)
    at java.base/java.nio.MappedByteBuffer.limit(MappedByteBuffer.java:235)
    at java.base/java.nio.MappedByteBuffer.limit(MappedByteBuffer.java:67)
    at org.xerial.snappy.Snappy.compress(Snappy.java:156)
    at org.apache.parquet.hadoop.codec.SnappyCompressor.compress(SnappyCompressor.java:76)
    at org.apache.hadoop.io.compress.CompressorStream.compress(CompressorStream.java:81)
    at org.apache.hadoop.io.compress.CompressorStream.finish(CompressorStream.java:92)
    at org.apache.parquet.hadoop.CodecFactory$HeapBytesCompressor.compress(CodecFactory.java:165)
    at org.apache.parquet.hadoop.ColumnChunkPageWriteStore$ColumnChunkPageWriter.writePage(ColumnChunkPageWriteStore.java:95)
    at org.apache.parquet.column.impl.ColumnWriterV1.writePage(ColumnWriterV1.java:147)
    at org.apache.parquet.column.impl.ColumnWriterV1.flush(ColumnWriterV1.java:235)
    at org.apache.parquet.column.impl.ColumnWriteStoreV1.flush(ColumnWriteStoreV1.java:122)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.flushRowGroupToStore(InternalParquetRecordWriter.java:172)
    at org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:114)
    at org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
    at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:41)
    at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:58)
    at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:75)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:280)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1473)
    at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:286)
    ... 9 more

The output of the pipeline is correct and I have validated that. I thought my data size might be an issue, but I am running it on a small subset (~10 rows x 1 column) and I am still getting this error.
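
For reference, a minimal sketch of the pattern described above (the column names and save path are hypothetical placeholders, not my real data); the transform works, and only the final save() fails:

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import StringIndexer, OneHotEncoder

    spark = SparkSession.builder.appName("ohe-save-repro").getOrCreate()

    # ~10 rows x 1 column, mirroring the small subset described above
    df = spark.createDataFrame([(c,) for c in "aabbccddee"], ["category"])

    # Index the string column, then one-hot encode the index
    indexer = StringIndexer(inputCol="category", outputCol="category_idx")
    encoder = OneHotEncoder(inputCols=["category_idx"], outputCols=["category_vec"])
    pipeline = Pipeline(stages=[indexer, encoder])

    model = pipeline.fit(df)
    model.transform(df).show()   # the transformed output is correct

    # This save() call is where the java.lang.IllegalArgumentException is raised
    model.save("/tmp/ohe_pipeline_model")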

Environment:
Python 3.7, Spark 3.1.1
Scala 2.12.10, OpenJDK 64-Bit Server VM 11.0.11
Java version:
openjdk version "11.0.11" 2021-04-20 LTS
OpenJDK Runtime Environment 18.9 (build 11.0.11+9-LTS)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.11+9-LTS, mixed mode, sharing)

Maybe you have given more or fewer arguments than recommended for a function. IllegalArgumentException usually comes up when there is some problem with the arguments. I'm not sure about it, mate.
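
For what it's worth, the stack trace shows the exception being thrown inside org.xerial.snappy.Snappy.compress while the fitted model's Parquet data is being written, rather than from the arguments passed to save(). A hedged way to check whether the Snappy step is the culprit (reusing the model and spark session from the sketch above; the codec setting is a standard Spark SQL option, the path is a placeholder):

    # Diagnostic sketch: write the model's Parquet data without Snappy compression.
    # If the save succeeds with this setting, the failure is in the Snappy
    # compression step (snappy-java), not in the pipeline or the save() arguments.
    spark.conf.set("spark.sql.parquet.compression.codec", "uncompressed")
    model.save("/tmp/ohe_pipeline_model_uncompressed")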
