
Spark Streaming - Parquet file upload to S3 error

I am completely new to Spark Streaming.
Through a streaming application I am creating Parquet files of roughly 2.5 MB each and storing them in an S3 bucket or a local directory.

The method I use is the following:

data.write.parquet(destination)

where "data" is a DataFrame.
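
For context, here is a minimal, self-contained sketch of that write path using the Spark 1.5-era SQLContext API; the app name, column names, and both destination paths are illustrative assumptions, not taken from the original application:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object ParquetWriteSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("parquet-write").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._

        // Stand-in for the streamed data; in the real app this DataFrame
        // comes from each streaming batch.
        val data = sc.parallelize(Seq((1, "a"), (2, "b"))).toDF("id", "value")

        data.write.parquet("file:///tmp/out")                  // local path: works
        data.write.parquet("s3n://bucket/directory/filename")  // S3 path: fails as shown below

        sc.stop()
      }
    }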

If the destination is a local path, everything works like a charm, but if I send it to S3 using a path like "s3n://bucket/directory/filename", I get the following exception:

    15/12/17 10:47:06 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-3,5,main]
    java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
        at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
        at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
        at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
        at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:416)
        at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:198)
        at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.newBackupFile(NativeS3FileSystem.java:263)
        at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.<init>(NativeS3FileSystem.java:245)
        at org.apache.hadoop.fs.s3native.NativeS3FileSystem.create(NativeS3FileSystem.java:412)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
        at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:176)
        at org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:160)
        at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:289)
        at org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:262)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetRelation.scala:94)
        at org.apache.spark.sql.execution.datasources.parquet.ParquetRelation$$anon$3.newInstance(ParquetRelation.scala:272)
        at org.apache.spark.sql.execution.datasources.DefaultWriterContainer.writeRows(WriterContainer.scala:234)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
        at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1$$anonfun$apply$mcV$sp$3.apply(InsertIntoHadoopFsRelation.scala:150)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Read operations from the bucket work fine. And despite the error, something does get stored in the bucket: the folder structure for the given path is created, but at the end, instead of the actual file, there is a folder-marker entry carrying the file name.

Technical details:

  • S3 Browser
  • Windows 8.1
  • IntelliJ CE 14.1.5
  • Spark Streaming application
  • Spark 1.5 for Hadoop 2.6.0

The problem was in the Hadoop libraries. I had to rebuild winutils (winutils.exe) and the native library (hadoop.dll) with Windows SDK 7, move them to %HADOOP_HOME%\bin, and add %HADOOP_HOME%\bin to the Path variable. The rebuilt binaries can be found under hadoop-2.7.1-src\hadoop-common-project\hadoop-common\target. For winutils I recommend using the Windows-optimized branch: http://svn.apache.org/repos/asf/hadoop/common/branches/branch-trunk-win/
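
As a complement to editing the environment variables by hand, a hedged sketch of pointing Hadoop at the rebuilt binaries from inside the application itself; the C:\hadoop location is an assumption, with the rebuilt winutils.exe and hadoop.dll expected under its bin subdirectory:

    // Assumed install location of the rebuilt binaries; Hadoop resolves
    // bin\winutils.exe relative to hadoop.home.dir. Must run before the
    // SparkContext (and thus Hadoop's NativeIO) initializes.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")
    // Note: hadoop.dll must still be resolvable via java.library.path,
    // so keeping C:\hadoop\bin on the Windows Path remains necessary.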
