hadoop-mapreduce-user mailing list archives

From Hemanth Yamijala <yhema...@thoughtworks.com>
Subject Re: Child JVM memory allocation / Usage
Date Mon, 25 Mar 2013 06:31:55 GMT
Hi,

The free memory might be low simply because GC hasn't reclaimed everything it can yet.
Can you just try reading in the data you want to read and see if that works?
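To illustrate the point (this snippet is mine, not from the original thread): Runtime.freeMemory() only reports free space within the heap's *current* footprint, which the JVM grows lazily toward maxMemory(). A rough estimate of how much heap is actually still usable would be:

```java
public class HeapCheck {
    // Approximate heap still available to the task, in bytes.
    // freeMemory() alone undercounts, because the heap hasn't yet
    // expanded to its -Xmx limit and GC may not have run.
    public static long availableBytes() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory(); // bytes currently occupied
        return rt.maxMemory() - used;                   // room left before OOM
    }

    public static void main(String[] args) {
        System.out.println("Approx. available heap: "
                + availableBytes() + " bytes");
    }
}
```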

Thanks
Hemanth


On Mon, Mar 25, 2013 at 10:32 AM, nagarjuna kanamarlapudi <
nagarjuna.kanamarlapudi@gmail.com> wrote:

> io.sort.mb = 256 MB
>
>
> On Monday, March 25, 2013, Harsh J wrote:
>
>> The MapTask may consume some memory of its own as well. What is your
>> io.sort.mb (MR1) or mapreduce.task.io.sort.mb (MR2) set to?
>>
>> On Sun, Mar 24, 2013 at 3:40 PM, nagarjuna kanamarlapudi
>> <nagarjuna.kanamarlapudi@gmail.com> wrote:
>> > Hi,
>> >
>> > I configured my child JVM heap to 2 GB. So, I thought I could really
>> > read 1.5 GB of data and store it in memory (mapper/reducer).
>> >
>> > I wanted to confirm the same and wrote the following piece of code in
>> > the configure method of the mapper.
>> >
>> > @Override
>> > public void configure(JobConf job) {
>> >     System.out.println("FREE MEMORY -- " + Runtime.getRuntime().freeMemory());
>> >     System.out.println("MAX MEMORY --- " + Runtime.getRuntime().maxMemory());
>> > }
>> >
>> >
>> > Surprisingly the output was
>> >
>> >
>> > FREE MEMORY -- 341854864  = 320 MB
>> > MAX MEMORY ---1908932608  = 1.9 GB
>> >
>> >
>> > I am just wondering what processes are taking up that extra 1.6 GB of
>> > the heap which I configured for the child JVM.
>> >
>> >
>> > I'd appreciate help understanding this scenario.
>> >
>> >
>> >
>> > Regards
>> >
>> > Nagarjuna K
>> >
>> >
>> >
>>
>>
>>
>> --
>> Harsh J
>>
>
>
> --
> Sent from iPhone
>
