hadoop-mapreduce-user mailing list archives

From: Aaron Eng <a...@mapr.com>
Subject: Re: Physical memory (bytes) snapshot counter question - how to get maximum memory used in reduce task
Date: Thu, 06 Apr 2017 01:37:13 GMT
An important consideration is the difference between the RSS of the JVM process and its used
heap size. Which of those are you looking for? And, just as importantly, what do you plan
to do with that information?

A second important consideration is how long you stay at or around your max RSS/Java
heap. Holding X MB of memory for 100 ms is very different from holding X MB of memory for
100 seconds. Are you looking for that information, and if so, how do you plan to use it?
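To make the distinction concrete, here is a minimal standalone sketch (not from the thread; the class name is mine, and the RSS part is Linux-specific) that samples both numbers for the current JVM:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class MemorySample {
    // Heap in use as the JVM sees it.
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // RSS as the OS sees it: the VmRSS line of /proc/self/status (Linux only).
    // The same file also exposes VmHWM, the process's peak RSS so far.
    static long rssKb() throws IOException {
        try (Stream<String> lines = Files.lines(Paths.get("/proc/self/status"))) {
            return lines.filter(l -> l.startsWith("VmRSS:"))
                        .map(l -> l.replaceAll("\\D", ""))
                        .mapToLong(Long::parseLong)
                        .findFirst()
                        .orElse(-1L);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("used heap (bytes): " + usedHeapBytes());
        System.out.println("RSS (kB): " + rssKb());
        // RSS is typically well above used heap: it also covers JVM code,
        // metaspace, thread stacks, direct buffers, and resident-but-unused pages.
    }
}
```

The gap between the two numbers is exactly why "memory used by the reducer" is ambiguous until you pick one.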

> On Apr 5, 2017, at 6:15 PM, Nico Pappagianis <nico.pappagianis@salesforce.com>
wrote:
> 
> Hi all,
> 
> I've made some memory optimizations to the reduce task and I would like to compare the
> old reducer vs. the new reducer in terms of maximum memory consumption.
> 
> I have a question regarding the description of the following counter:
> 
> PHYSICAL_MEMORY_BYTES | Physical memory (bytes) snapshot | Total physical memory used
> by all tasks including spilled data.
> 
> I'm assuming this means the aggregate of memory used throughout the entire reduce task
> (if viewed at the reduce-task level).
> Please correct me if I'm wrong about this assumption (the description seems pretty straightforward).
> 
> Is there a way to get the maximum (not total) memory used by a reduce task from the
> default counters?
> 
> Thanks!
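On the quoted question: as far as I know, PHYSICAL_MEMORY_BYTES is updated from periodic procfs sampling of the task's process tree, so even the per-task value is a latest snapshot rather than a true peak. A toy simulation (plain Java, no Hadoop dependency; the byte values and the idea of a spike between polls are invented for illustration) of why a sampled snapshot counter can miss short-lived highs:

```java
import java.util.List;

public class SnapshotVsMax {
    // Simulated RSS readings taken at a fixed polling interval (bytes).
    // Suppose a brief ~900 MB spike happened *between* polls: the sampler
    // never saw it, so neither derived number below can report it.
    static final List<Long> SAMPLES = List.of(200L << 20, 350L << 20, 310L << 20);

    // What a snapshot-style counter reports: the most recent sample.
    static long lastSnapshot(List<Long> samples) {
        return samples.get(samples.size() - 1);
    }

    // The best a consumer of the samples can do: max over observed samples.
    static long observedMax(List<Long> samples) {
        return samples.stream().mapToLong(Long::longValue).max().orElse(0L);
    }

    public static void main(String[] args) {
        System.out.println("last snapshot: " + lastSnapshot(SAMPLES));
        System.out.println("observed max : " + observedMax(SAMPLES));
    }
}
```

If per-reducer peaks are what you need, one workaround is to track a high-water mark inside the reducer itself (e.g. sampling the JVM or procfs periodically and writing the max into a custom counter), rather than relying on the default snapshot counter.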

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org

