
Hive GUDF custom unzip function test case failing with error “B cannot be cast to org.apache.hadoop.io.BytesWritable”


I'm writing a test case for a generic UDF's custom unzip `evaluate` function, which unzips zip files. The jar is used in a Hive query. Here is the code for the test case:

public void testEvaluate() throws HiveException, IOException {
    Unzip unzip = new Unzip();
    File resourcesDirectory = new File("src/test/resources/test.zip");
    byte[] bytes = Files.readAllBytes(resourcesDirectory.toPath());

    ObjectInspector binaryOI = PrimitiveObjectInspectorFactory.writableBinaryObjectInspector;
    ObjectInspector[] arguments = {binaryOI};
    unzip.initialize(arguments);

    GenericUDF.DeferredObject valueObj0 = new GenericUDF.DeferredJavaObject(bytes);
    GenericUDF.DeferredObject[] args = { valueObj0 };

    unzip.evaluate(args);
}

I'm getting the error below:

java.lang.ClassCastException: [B cannot be cast to org.apache.hadoop.io.BytesWritable

at org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableBinaryObjectInspector.getPrimitiveJavaObject(WritableBinaryObjectInspector.java:49)
at Unzip.evaluate(Unzip.java:32)
at UnzipTest.testEvaluate(UnzipTest.java:96)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)

The error occurs on the line that reads the bytes from `DeferredObject[] args`:

byte[] input = elementOI.getPrimitiveJavaObject(arg[0].get());

PS: test.zip contains a single text file (holding a test string) zipped into test.zip.
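Since the internals of `Unzip` aren't shown in the question, here is a Hive-free sketch of the core unzipping step such a UDF typically performs on the `byte[]` it receives, using only the JDK's `java.util.zip`. The class and method names below are illustrative, not the asker's actual code:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipRoundTrip {

    // Extract the bytes of the first entry of a zip archive held in memory --
    // roughly what an unzip UDF would do with the byte[] it pulls from Hive.
    static byte[] unzipFirstEntry(byte[] zipBytes) throws IOException {
        try (ZipInputStream zin = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            ZipEntry entry = zin.getNextEntry();
            if (entry == null) {
                throw new IOException("empty archive");
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = zin.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }

    // Build a one-entry zip in memory so the round trip is self-contained,
    // standing in for the test.zip fixture on disk.
    static byte[] zip(String entryName, byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zout = new ZipOutputStream(bos)) {
            zout.putNextEntry(new ZipEntry(entryName));
            zout.write(data);
            zout.closeEntry();
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] original = "test string".getBytes(StandardCharsets.UTF_8);
        byte[] archived = zip("test.txt", original);
        byte[] restored = unzipFirstEntry(archived);
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```

This runs without any Hive or Hadoop dependency, which makes the unzip logic itself easy to unit-test separately from the `ObjectInspector` plumbing.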

You need to wrap the `byte[]` in a Writable that Hive can work with, in your case `BytesWritable`. As the stack trace shows, `WritableBinaryObjectInspector.getPrimitiveJavaObject` expects a `BytesWritable` object as input, not a raw byte array.

So instead of

GenericUDF.DeferredObject valueObj0 = new GenericUDF.DeferredJavaObject(bytes);

do the following:

GenericUDF.DeferredObject valueObj0 = new GenericUDF.DeferredJavaObject(new BytesWritable(bytes));

Reproducing your case locally with this change, I was able to retrieve the `byte[]` inside the UDF's `evaluate` method successfully.

