hadoop-common-user mailing list archives

From stack <st...@duboce.net>
Subject Re: hbase master heap space
Date Fri, 21 Dec 2007 19:57:28 GMT
Hey Billy:

The master itself should use little memory and, though a leak is not out 
of the realm of possibility, it should not have one.

Are you running with the default heap size?  If you are, you might want 
to give it more memory (see 
http://wiki.apache.org/lucene-hadoop/Hbase/FAQ#3 for how).
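For example, something along these lines in conf/hbase-env.sh (a sketch 
only; the FAQ entry above has the exact setting, and this assumes the 
heap is controlled by an HBASE_HEAPSIZE variable that the bin/hbase 
script passes to the JVM as -Xmx):

  # conf/hbase-env.sh
  # Maximum heap for the HBase daemons, in MB (sketch; confirm the
  # variable name against the FAQ entry for your version).
  export HBASE_HEAPSIZE=2000

Then restart the master so the new heap size takes effect.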

If you are uploading everything via the REST server running on the 
master, the problem, as you speculate, could be in the REST servlet 
itself (though, having given it a cursory glance, it doesn't look like 
it should be holding on to anything).  You could try running the REST 
server independent of the master.  Grep for 'Starting the REST Server' 
on this page, http://wiki.apache.org/lucene-hadoop/Hbase/HbaseRest, for 
how (if you are only running one REST instance, your upload might also 
go faster if you run multiple).
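For example, on each node where you want a REST instance, something like 
the following (a sketch only; the exact launcher syntax is what that 
'Starting the REST Server' section documents, and this assumes the 
bin/hbase script accepts a 'rest' command):

  # Start a standalone REST server on this node (sketch; check the wiki
  # page above for the exact command and options in your version).
  ${HBASE_HOME}/bin/hbase rest start &

Then point your upload script at those hosts instead of at the master.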

St.Ack


Billy wrote:
> I forgot to say that after a restart the master only uses about 70 MB 
> of memory.
>
> Billy
>
> "Billy" <sales@pearsonwholesale.com> wrote in 
> message news:fkejpo$u8c$1@ger.gmane.org...
>   
>> I'm not sure about this, but why does the master server use up so 
>> much memory?  I've been running a script that has been inserting 
>> data into a table for a little over 24 hours, and the master crashed 
>> because of java.lang.OutOfMemoryError: Java heap space.
>>
>> So my question is why the master uses up so much memory; at most it 
>> should store the -ROOT- and .META. tables in memory, plus the block 
>> to table mapping.
>>
>> Is it cache or a memory leak?
>>
>> I am using the REST interface, so could that be the reason?
>>
>> According to the highest edit ids on all the region servers, I 
>> inserted about 51,932,760 edits, and the master ran out of memory 
>> with a heap of about 1 GB.
>>
>> The other side to this is that the data I inserted is only taking up 
>> 886.61 MB, and that's with dfs.replication set to 2, so half of that 
>> is only about 440 MB of data compressed at the block level.
>> From what I understand, the master should have lower memory and CPU 
>> usage, and the namenode on Hadoop should be the memory hog, since it 
>> has to keep up with all the data about the blocks.
>>
>>
>>
>>     
>
>
>
>   

