hadoop-mapreduce-user mailing list archives

From Ravi Prakash <ravihad...@gmail.com>
Subject Re: Some Questions about Node Manager Memory Used
Date Tue, 24 Jan 2017 19:15:17 GMT
Hi Zhuo Chen!

YARN has a few ways of accounting for memory. By default, it guarantees your
(Hive) application a certain amount of memory. Whether the application
actually uses all of that memory, or, as in your case, leaves plenty of
headroom in case it needs to expand later, is entirely up to the application.
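As a concrete illustration of the OS side of the discrepancy: Linux counts page cache separately from application memory, and the kernel reclaims cache on demand, so the "used" column of `free` understates nothing and the cache column is effectively available. A rough sketch using your reported figures (the 64 GB total is an assumption for illustration, since the thread doesn't state the host size):

```shell
# Hypothetical figures: assume a 64 GB host, matching the
# ~13 GB used / ~46 GB cache reported by 'free' in the question.
total=64
used=13          # resident application memory
cache=46         # page cache, reclaimable by the kernel on demand

free_mem=$((total - used - cache))
# Cached pages can be handed back to applications, so the
# effective headroom is free + cache, not just free:
available=$((free_mem + cache))
echo "effective headroom: ${available} GB"
```

So even when the Resource Manager shows memory nearly exhausted (because it counts *allocations*), the OS can legitimately report most physical RAM as cache.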

There's plenty of documentation from several vendors on this. I suggest a
search engine query along the lines of "hadoop YARN memory usage".
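For reference, the figure the Resource Manager reports as "mem used" is allocation counted against the NodeManager's advertised capacity, which is configured in yarn-site.xml (the 48 GB value below is only a hypothetical example, not taken from the thread):

```xml
<!-- yarn-site.xml: memory the NodeManager offers to the scheduler.
     This is a bookkeeping limit; YARN counts container allocations
     against it, regardless of how much memory the container
     processes actually touch. 49152 MB (48 GB) is a hypothetical
     example value. -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>49152</value>
</property>
```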

HTH
Ravi

On Tue, Jan 24, 2017 at 1:04 AM, Zhuo Chen <ccenuo.dev@gmail.com> wrote:

> My Hive job gets stuck when submitted to the cluster. Looking at the Resource
> Manager web UI, I found the [mem used] metric had reached approximately the
> upper limit. But when I log in to the host, the OS shows only 13GB of memory
> in use (per the 'free' command), with about 46GB occupied by cache.
>
> So I wonder why there is such an inconsistency, and how I should understand
> this scenario? Any explanation would be appreciated.
>
