
Eclipse Scala IDE code not compiling

I downloaded the Eclipse Scala IDE from scala-ide.org and am trying to compile my first Scala word count program, but I get the error "object not a member of package org" on the following imports:

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

After some research I found that I need to add the jar file spark-assembly-1.0.0-hadoop2.2.0.jar to overcome this issue.

But after a lot of searching I could not locate this jar. Can anyone help?

Scala is not a simple language/environment to learn. It is important to learn how Scala works before moving on to Spark. There is plenty of material available on the web. A good learning path is: SBT > Scala > using Scala for Spark.

The dependency you mentioned can be put in sbt's build.sbt. You can also use Maven, but I recommend learning SBT as a way of learning Scala. Once you have resolved the dependency using SBT, your simple code should work fine. Still, I recommend doing a "hello world" before a "word count" :-)

And to answer your question, in your build.sbt you should add the following library:

libraryDependencies += "org.apache.spark" % "spark-assembly_2.10" % "1.1.1"

This is the Spark assembly 1.1.1 artifact for Scala 2.10 (the _2.10 suffix refers to the Scala version, not Hadoop). If you need a different version, you can find it in the Maven repository:

Maven Repo details for spark/hadoop
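
For reference, a complete minimal build.sbt using that dependency might look like the sketch below. The project name and Scala version are illustrative; match the artifact version to your cluster:

```
// Minimal build.sbt sketch -- name and versions are illustrative
name := "wordcount"

version := "0.1"

// Scala 2.10.x to match the _2.10 artifact suffix
scalaVersion := "2.10.4"

libraryDependencies += "org.apache.spark" % "spark-assembly_2.10" % "1.1.1"
```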

Here's the pure Eclipse solution (I had to download and set up Eclipse just to answer this question):

  1. Get the Scala IDE (it comes with built-in Scala compiler versions 2.10.5 and 2.11.6).
  2. Create a new project: Scala Wizard > Scala Project.
  3. Right-click "src" in the Scala project and select "Scala Object"; give it a name - I used WordCount.
  4. Right-click the project > Configure > Convert to Maven Project.
  5. In the body of the word count object (I named the object WordCount), paste the text from the Apache Spark example, which finally looks like:

```

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf

object WordCount {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("SampleTest")
    val spark = new SparkContext(sparkConf)

    // Split each line into words, pair each word with 1, and sum the counts
    val textFile = spark.textFile("hdfs://...")
    val counts = textFile.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs://...")
  }
}

```

  6. Add the following dependency to your Maven pom.xml:

```

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>1.4.0</version>
</dependency>

```

  7. Right-click "Scala Library Container", select "Latest 2.10 bundle", and click OK.
  8. Once I was done with these steps, there were no error messages in Eclipse's "Problems" view, indicating that the code compiled as expected.
  9. Obviously, this example won't run as-is, since I haven't provided enough info for it to run (see the sketch below), but this was just to answer the question of how to get the code to compile.
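
For completeness, here is a sketch of one way to make the example actually runnable on a local machine. The setMaster call and the file paths are illustrative assumptions, not part of the original steps:

```
import org.apache.spark.{SparkConf, SparkContext}

object WordCountLocal {
  def main(args: Array[String]): Unit = {
    // local[*] runs Spark inside this JVM using all cores - no cluster needed
    val sparkConf = new SparkConf().setAppName("SampleTest").setMaster("local[*]")
    val spark = new SparkContext(sparkConf)

    // "input.txt" and "output" are placeholder paths - substitute your own
    val counts = spark.textFile("input.txt")
      .flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("output")
    spark.stop()
  }
}
```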

Hope this helps.

  • Install the SBT Scala build+dependency tool.
  • Create an empty directory. Name it spark-test or whatever you want to name your project.
  • Put your source code in the sub-directory src/main/scala. If you have Main.scala in package scalatest, it should be src/main/scala/scalatest/Main.scala (a minimal Main.scala sketch follows this list).
  • Make a build.sbt file with the following contents:

    name := """sparktest"""

    version := "1.0-SNAPSHOT"

    scalaVersion := "2.11.7"

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "1.4.0"
    )

  • Configure the SBT Eclipse plugin. Create ~/.sbt/0.13/plugins/plugins.sbt with:

    addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "4.0.0")

  • Generate an Eclipse project with sbt eclipse.

  • Import your project into Eclipse.
  • Run :)
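
Here is a minimal sketch of what the Main.scala mentioned above might contain, assuming you just want a local smoke test (the setMaster call and the sample input are illustrative assumptions):

```
package scalatest

import org.apache.spark.{SparkConf, SparkContext}

object Main {
  def main(args: Array[String]): Unit = {
    // local[*] is an assumption for local testing; drop setMaster when
    // submitting to a real cluster via spark-submit
    val conf = new SparkConf().setAppName("sparktest").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Count words in a small in-memory dataset as a smoke test
    val counts = sc.parallelize(Seq("hello world", "hello spark"))
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.collect().foreach(println)
    sc.stop()
  }
}
```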
