
How to reference the dependencies (jars) stored in HDFS on Hadoop?

On Hadoop: I want to run a jar that has many dependencies (jars) stored in HDFS; I don't want to put them in the local file system.

How do I do that?

You can put them on the classpath of the client (edge) node, i.e. the machine from which you run the code (the main jar).

Update: You can also explore the -libjars option.

You can also check this very good blog post: http://grepalex.com/2013/02/25/hadoop-libjars/
