
How to import elasticsearch-hadoop dependency to spark script

I am trying to use the spark-es connector by setting my Build.scala file to:

libraryDependencies ++= Seq(
    "com.datastax.spark" %% "spark-cassandra-connector" % "1.2.1",
    "org.elasticsearch" %% "elasticsearch-hadoop" % "2.2.0"
  )

But I get this error:

[error] (*:update) sbt.ResolveException: unresolved dependency: org.elasticsearch#elasticsearch-hadoop_2.10;2.2.0: not found

I can see that it exists here ...

EDIT:

When I change Build.scala to:

"org.elasticsearch" % "elasticsearch-hadoop" % "2.2.0"

I get the following error:

[error] impossible to get artifacts when data has not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3
java.lang.IllegalStateException: impossible to get artifacts when data has not been loaded. IvyNode = org.scala-lang#scala-library;2.10.3

What is going wrong?

elasticsearch-hadoop is not a Scala dependency, so there are no Scala-version-specific artifacts for it, and it cannot be used with %%. Try

"org.elasticsearch" % "elasticsearch-hadoop" % "2.2.0"

