
How to use GzipCodec or BZip2Codec for shuffle spill compression with Spark shell

So when I start the Spark shell with -Dspark.io.compression.codec=org.apache.hadoop.io.compress.GzipCodec, I get the exception shown below.

Due to space limitations on our cluster, I want to use a more aggressive compression codec, but how do I use BZip2Codec and avoid this exception? Is it even possible?

java.lang.NoSuchMethodException: org.apache.hadoop.io.compress.BZip2Codec.<init>(org.apache.spark.SparkConf)
    at java.lang.Class.getConstructor0(Class.java:2810)
    at java.lang.Class.getConstructor(Class.java:1718)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:48)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:42)
    at org.apache.spark.broadcast.HttpBroadcast$.initialize(HttpBroadcast.scala:106)
    at org.apache.spark.broadcast.HttpBroadcastFactory.initialize(HttpBroadcast.scala:70)
    at org.apache.spark.broadcast.BroadcastManager.initialize(Broadcast.scala:81)
    at org.apache.spark.broadcast.BroadcastManager.<init>(Broadcast.scala:68)
    at org.apache.spark.SparkEnv$.create(SparkEnv.scala:175)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:141)
    at org.apache.spark.repl.SparkILoop.createSparkContext(SparkILoop.scala:956)
    at $iwC$$iwC.<init>(<console>:8)
    at $iwC.<init>(<console>:14)
    at <init>(<console>:16)
    at .<init>(<console>:20)
    at .<clinit>(<console>)
    at .<init>(<console>:7)
    at .<clinit>(<console>)
    at $print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:772)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1040)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:609)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:640)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:604)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:795)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:840)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:752)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:119)
    at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark$1.apply(SparkILoopInit.scala:118)
    at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:258)
    at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:118)
    at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:55)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$5.apply$mcV$sp(SparkILoop.scala:912)
    at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:140)
    at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:55)
    at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:102)
    at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:55)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:929)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:883)
    at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:883)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:883)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:981)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)

Running hadoop checknative:

14/06/13 17:41:24 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
14/06/13 17:41:24 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop: true /usr/lib/hadoop/lib/native/libhadoop.so.1.0.0
zlib:   true /lib/x86_64-linux-gnu/libz.so.1
snappy: true /usr/lib/hadoop/lib/native/libsnappy.so.1
lz4:    true revision:99
bzip2:  true /lib/x86_64-linux-gnu/libbz2.so.1

After starting the shell, a simple import statement should allow you to use BZip2 compression as and how you want, although I am still unclear on your question.

import org.apache.hadoop.io.compress.BZip2Codec
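
Once imported, the codec class can be passed to saveAsTextFile, which compresses the job's saved output (output compression, as opposed to shuffle compression). A minimal sketch using the import above, assuming an existing RDD named rdd and a placeholder HDFS path, neither of which comes from the original question:

// Write the RDD as BZip2-compressed text files.
// "rdd" and the output path are illustrative placeholders.
rdd.saveAsTextFile("hdfs:///tmp/output-bz2", classOf[BZip2Codec])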

Even when starting the shell, I executed the following command and it gave me no error; check that you're entering it correctly.

./spark-shell -D spark.io.compression.codec=org.apache.hadoop.io.compress.BZip2Codec

This is, of course, in "~/spark/bin/".

Similarly for Gzip:

./spark-shell -D spark.io.compression.codec=org.apache.hadoop.io.compress.GzipCodec
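
Note what the stack trace above suggests: Spark instantiates the class named in spark.io.compression.codec reflectively via a constructor taking a SparkConf, so it must implement org.apache.spark.io.CompressionCodec, which the Hadoop codecs do not. The same property can also be set programmatically before the context is created. A minimal sketch, assuming Spark's built-in Snappy codec as the target:

import org.apache.spark.{SparkConf, SparkContext}

// The codec class must implement org.apache.spark.io.CompressionCodec
// (it needs a constructor taking a SparkConf, per the stack trace above).
val conf = new SparkConf()
  .setAppName("codec-example")
  .set("spark.io.compression.codec", "org.apache.spark.io.SnappyCompressionCodec")
val sc = new SparkContext(conf)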
