hadoop-common-user mailing list archives

From Brian Bockelman <bbock...@cse.unl.edu>
Subject Re: Datanode high memory usage
Date Tue, 01 Sep 2009 19:20:22 GMT

On Sep 1, 2009, at 2:02 PM, Stas Oskin wrote:

> Hi.
>
>> What does 'up to 700MB' mean? Is it the JVM's virtual memory?
>> Resident memory? Or the Java heap in use?
>
> 700 MB is what is taken by the overall Java process.
>

Resident, shared, or virtual? Unix memory management is not
straightforward; the worst thing you can do is look at the virtual
memory size of the Java process and assume that's how much RAM it is
using.
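
A quick way to see the difference on Linux is to compare VmSize
(virtual) against VmRSS (resident) in /proc. A minimal sketch, assuming
a Linux /proc filesystem; it reads the JVM's own status file, so point
it at /proc/<datanode-pid>/status instead to inspect a running DN:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class ProcMem {
        public static void main(String[] args) throws IOException {
            // For a JVM, VmSize is usually far larger than VmRSS,
            // because the whole maximum heap is reserved up front.
            for (String line : Files.readAllLines(Paths.get("/proc/self/status"))) {
                if (line.startsWith("VmSize:") || line.startsWith("VmRSS:")) {
                    System.out.println(line);
                }
            }
        }
    }

VmRSS is the number that actually answers "how much RAM is it using".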

IIRC, the JVM reserves enough virtual memory up front to hold its
entire maximum heap (so the virtual size will be large), but the heap
won't necessarily grow that large. The JVM in server mode won't be
aggressive about GC unless it is pressed for memory; for example, if
you give it a 512 MB heap, it may use a good chunk of it before doing
a GC run. On top of the heap, the JVM keeps several other memory areas
for compiled code and the like.
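
To separate the reserved ceiling from actual use, the JVM can report
on itself. A minimal sketch using the standard java.lang.Runtime API
(the class name is just for illustration):

    public class HeapReport {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long max = rt.maxMemory();                // -Xmx ceiling, reserved virtually
            long committed = rt.totalMemory();        // heap committed so far
            long used = committed - rt.freeMemory();  // heap actually occupied
            System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                    max >> 20, committed >> 20, used >> 20);
        }
    }

On a lazily collected server-mode JVM, "used" can sit well below "max"
even when the process's virtual size already reflects the full heap
reservation.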

Brian

>
>>
>> How many blocks do you have? For an idle DN, most of the memory is
>> taken by block info structures. It is not really optimized for this;
>> maybe about 1 KB per block is the upper limit.
>>
>
>
> Here are the details from the NN UI:
>
> 160537 files and directories, 144118 blocks = 304655 total. Heap Size
> is 155.42 MB / 966.69 MB (16%)
>
> Thanks.
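
A rough back-of-the-envelope check, assuming the DN holds roughly the
full block count reported above and taking the quoted ~1 KB per block
as an upper bound:

    144118 blocks x ~1 KB/block ≈ 141 MB

so block info structures alone could plausibly account for a good
fraction of the 700 MB process footprint, with GC slack and the JVM's
non-heap areas making up much of the rest.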

