
Yarn - make hadoop use more resources

[Screenshot: YARN ResourceManager web UI showing this node running 3 containers, with 6 GB memory and 3 vCores used and 5 vCores available]

As you can see, this node is running 3 containers (using 6 GB of memory and 3 vCores). I would like it to use the rest of the vCores (the 5 vCores in the "VCores Avail" column). I have not done any configuration in yarn-site.xml yet.
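For context, the per-node capacity that YARN is allowed to hand out to containers is controlled by yarn-site.xml. A minimal sketch of the relevant properties, assuming a node with 8 GB and 8 vCores to match the numbers in the screenshot (these also happen to be the Hadoop defaults when nothing is configured); the values are illustrative, not a recommendation:

```xml
<!-- yarn-site.xml: NodeManager capacity (illustrative values for an
     8 GB / 8 vCore node; these match the Hadoop defaults) -->
<configuration>
  <!-- Total memory the NodeManager may allocate to containers -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <!-- Total vCores the NodeManager may allocate to containers -->
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>8</value>
  </property>
</configuration>
```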

Yarn uses only the resources that it needs

What you are looking at seems to be the resources currently used by the running jobs.

YARN knows that it is allowed to use 2 GB more memory and 5 more vCores, but by the nature of the job these simply cannot be utilized.

Hence, this is likely not a problem or something that needs to be fixed, but simply a consequence of running a job of this nature.


Example

When I say a job of this nature, I mean a job that does not need more than 3 containers of 2 GB each.

The simplest example of such a job would be a count over 3 comparatively small files with a default container size of 2 GB: each small file yields a single input split, and hence a single 2 GB map container, so only 3 containers ever run.

If you really wanted to run this kind of job with more parallelism, you would need to resort to workarounds (like setting a tiny maximum container size, or splitting all the files in half). However, I would not recommend this.
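As a rough illustration of what such a workaround might look like (again, not a recommendation): one could cap the container size and shrink the input split size so the same input is spread over more, smaller containers. A minimal sketch with purely illustrative values:

```xml
<!-- yarn-site.xml: cap the largest container the scheduler will grant -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1024</value>
</property>

<!-- mapred-site.xml: request smaller map containers and smaller splits,
     so the input is broken into more map tasks than files -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.input.fileinputformat.split.maxsize</name>
  <!-- 64 MB in bytes, half of a typical 128 MB block/split -->
  <value>67108864</value>
</property>
```

Note that shrinking the split size only helps if the input files are in a splittable format; for tiny files, one file still maps to one task.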
