
Spark-submit cannot access local file system

Really simple Scala code that fails at the first count() method call.

import java.io.File
import org.apache.spark.{SparkConf, SparkContext}

def main(args: Array[String]) {
    // create Spark context with Spark configuration
    val sc = new SparkContext(new SparkConf().setAppName("Spark File Count"))
    // list the names of all files under C:/data (recursiveListFiles is a helper defined elsewhere)
    val fileList = recursiveListFiles(new File("C:/data")).filter(_.isFile).map(file => file.getName())
    val filesRDD = sc.parallelize(fileList)
    // read a local text file through the file:// scheme
    val linesRDD = sc.textFile("file:///temp/dataset.txt")
    val lines = linesRDD.count()   // fails here when run with spark-submit
    val files = filesRDD.count()
  }

I don't want to set up an HDFS installation for this right now. How do I configure Spark to use the local file system? This works with spark-shell.

To read a file from the local filesystem (a Windows directory), you need to use a pattern like the one below.

val fileRDD = sc.textFile("C:\\Users\\Sandeep\\Documents\\test\\test.txt");

Please see the sample working program below, which reads data from the local file system.

package com.scala.example

import org.apache.spark._

object Test extends Serializable {
  // Run against a local master so no cluster or HDFS is needed
  val conf = new SparkConf().setAppName("read local file")
  conf.set("spark.executor.memory", "100M")
  conf.setMaster("local")

  val sc = new SparkContext(conf)
  val input = "C:\\Users\\Sandeep\\Documents\\test\\test.txt"

  def main(args: Array[String]): Unit = {
    // Read the local file and do a simple word count over comma-separated tokens
    val fileRDD = sc.textFile(input)
    val counts = fileRDD.flatMap(line => line.split(","))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.collect().foreach(println)

    // Stop the Spark context
    sc.stop()
  }
}
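Because the master is set to local inside the code, the same program can be launched with spark-submit without any HDFS setup. A minimal sketch of the command, assuming the jar was built locally with sbt package (the jar name and path are just placeholders):

spark-submit --class com.scala.example.Test target\scala-2.11\read-local-file_2.11-0.1.jar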

val sc = new SparkContext(new SparkConf().setAppName("Spark File Count").setMaster("local[8]"))

might help
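Note that a master set directly on the SparkConf (as above) takes precedence over the --master flag passed to spark-submit, so hard-coding local[8] keeps the job on the local machine and lets it read local files regardless of how it is submitted.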
