incubator-hama-dev mailing list archives

From "Edward J. Yoon" <edwardy...@apache.org>
Subject Re: java.lang.OutOfMemoryError: Java heap space
Date Fri, 12 Dec 2008 10:02:20 GMT
If I create a scanner on each map to avoid this problem, HBase throws
a lot of UnknownScannerExceptions.
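
Roughly, what each map does is something like this (a minimal sketch
against the 0.19-era client API; the table name, column name, and
method are placeholders, not my real code):

  import java.io.IOException;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Scanner;
  import org.apache.hadoop.hbase.io.RowResult;
  import org.apache.hadoop.hbase.util.Bytes;

  void scanFrom(byte[] startRow) throws IOException {
    HTable table = new HTable(new HBaseConfiguration(), "imagery");
    Scanner scanner = table.getScanner(
        new byte[][] { Bytes.toBytes("column:") }, startRow);
    try {
      RowResult row;
      while ((row = scanner.next()) != null) {
        // Process one row at a time. If a single map call runs longer
        // than hbase.regionserver.lease.period, the region server
        // expires the scanner lease and the next call to next() fails
        // with UnknownScannerException.
      }
    } finally {
      scanner.close();
    }
  }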

On Fri, Dec 12, 2008 at 5:39 PM, Edward J. Yoon <edwardyoon@apache.org> wrote:
> Yes, the RowResult seems too large. But I don't think "increasing the
> child heap" is a good solution.
>
> Let's look at the Bigtable paper.
>
> Each row in the imagery table corresponds to a single
> geographic segment. Rows are named to ensure that
> adjacent geographic segments are stored near each other.
> The table contains a column family to keep track of the
> sources of data for each segment. This column family
> has a large number of columns: essentially one for each
> raw data image.
>
> Yes, we can have a large number of columns in one column family. In
> the above case, I think the layout would be something like this:
>
> row key          column:<n miles>                     image: ...
> =================================================================
> segment(x, y)    column:1 miles  = <segment(x',y')>
>                  column:2 miles  = <segment(x'',y'')>
>                  ...
>
> Then we can search for something within an N-mile radius, right?
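>
> As a rough sketch (against the 0.19-era client API, with made-up row
> and column names), populating such a row could look like:
>
>   import org.apache.hadoop.hbase.client.HTable;
>   import org.apache.hadoop.hbase.io.BatchUpdate;
>   import org.apache.hadoop.hbase.util.Bytes;
>
>   HTable table = new HTable("imagery");
>   // One row per segment; one column per neighbor, keyed by distance.
>   BatchUpdate update = new BatchUpdate(Bytes.toBytes("segment(x,y)"));
>   update.put("column:1 miles", Bytes.toBytes("segment(x',y')"));
>   update.put("column:2 miles", Bytes.toBytes("segment(x'',y'')"));
>   table.commit(update);
>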
> Finally, ... I need another solution.
>
> On Sat, Nov 29, 2008 at 12:35 AM, Thibaut_ <tbritz@blue.lu> wrote:
>>
>> Your application uses too much memory. Try increasing the child heap
>> size for MapReduce tasks. (It's mapred.child.java.opts in the Hadoop
>> configuration file.)
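>>
>> For example, you can also set it per job (a sketch; pick a heap size
>> that fits your machines, and MyJob is just a placeholder class):
>>
>>   import org.apache.hadoop.mapred.JobConf;
>>
>>   // Give each map/reduce child JVM a 512 MB heap.
>>   JobConf conf = new JobConf(MyJob.class);
>>   conf.set("mapred.child.java.opts", "-Xmx512m");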
>>
>> Thibaut
>>
>>
>> Edward J. Yoon wrote:
>>>
>>> While running a MapReduce job, I received the error below. The size
>>> of the RowResult seems too large. What do you think?
>>>
>>> ----
>>> 08/11/27 13:42:49 INFO mapred.JobClient: map 0% reduce 0%
>>> 08/11/27 13:42:55 INFO mapred.JobClient: map 50% reduce 0%
>>> 08/11/27 13:43:09 INFO mapred.JobClient: map 50% reduce 8%
>>> 08/11/27 13:43:13 INFO mapred.JobClient: map 50% reduce 16%
>>> 08/11/27 13:43:15 INFO mapred.JobClient: Task Id : attempt_200811271320_0006_m_000000_0, Status : FAILED
>>> java.lang.OutOfMemoryError: Java heap space
>>>         at java.util.Arrays.copyOf(Arrays.java:2786)
>>>         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>>>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>>>         at org.apache.hadoop.hbase.util.Bytes.writeByteArray(Bytes.java:65)
>>>         at org.apache.hadoop.hbase.io.Cell.write(Cell.java:152)
>>>         at org.apache.hadoop.hbase.io.HbaseMapWritable.write(HbaseMapWritable.java:196)
>>>         at org.apache.hadoop.hbase.io.RowResult.write(RowResult.java:245)
>>>         at org.apache.hadoop.hbase.util.Writables.getBytes(Writables.java:49)
>>>         at org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:134)
>>>
>>> --
>>> Best Regards, Edward J. Yoon @ NHN, corp.
>>> edwardyoon@apache.org
>>> http://blog.udanax.org
>>>
>>>
>>
>> --
>> View this message in context: http://www.nabble.com/java.lang.OutOfMemoryError%3A-Java-heap-space-tp20714065p20736470.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>>
>
>
>
> --
> Best Regards, Edward J. Yoon @ NHN, corp.
> edwardyoon@apache.org
> http://blog.udanax.org
>



-- 
Best Regards, Edward J. Yoon @ NHN, corp.
edwardyoon@apache.org
http://blog.udanax.org
