incubator-hama-dev mailing list archives

From "Edward J. Yoon (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HAMA-115) "java.lang.OutOfMemoryError: Java heap space" while run multiplication job
Date Thu, 27 Nov 2008 06:22:44 GMT

    [ https://issues.apache.org/jira/browse/HAMA-115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12651255#action_12651255 ]

Edward J. Yoon commented on HAMA-115:
-------------------------------------

Also, note that one block is a 2000 × 2000 matrix.
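For scale, a back-of-the-envelope estimate of what a single block costs in memory, assuming each entry is held as an 8-byte double (an assumption for illustration; the actual HBase cell encoding and object overhead will add to this):

```java
public class BlockSizeEstimate {
    public static void main(String[] args) {
        // Assumption: one matrix entry = one 8-byte double.
        long blockDim = 2000;
        long entryBytes = 8;

        // Raw payload of one 2000 x 2000 block.
        long bytesPerBlock = blockDim * blockDim * entryBytes; // 32,000,000 bytes (~30.5 MiB)
        System.out.println("Bytes per 2000x2000 block: " + bytesPerBlock);

        // Block multiplication pairs two such blocks in a single record,
        // so each map call sees at least twice that before any copy overhead.
        long pairBytes = 2 * bytesPerBlock; // 64,000,000 bytes (~61 MiB)
        System.out.println("Bytes for one block pair: " + pairBytes);
    }
}
```

With a default task heap of a few hundred MB, a ~61 MiB record plus serialization copies leaves little headroom, which is consistent with the OOM in the trace below.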

> "java.lang.OutOfMemoryError: Java heap space" while run multiplication job
> --------------------------------------------------------------------------
>
>                 Key: HAMA-115
>                 URL: https://issues.apache.org/jira/browse/HAMA-115
>             Project: Hama
>          Issue Type: Bug
>          Components: implementation
>            Reporter: Edward J. Yoon
>             Fix For: 0.1.0
>
>
> While running BlockingMapRed, I received the error below. This needs to be improved.
> ----
> java.lang.OutOfMemoryError: Java heap space
>         at java.util.Arrays.copyOf(Arrays.java:2786)
>         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at org.apache.hadoop.hbase.util.Bytes.writeByteArray(Bytes.java:65)
>         at org.apache.hadoop.hbase.io.Cell.write(Cell.java:152)
>         at org.apache.hadoop.hbase.io.HbaseMapWritable.write(HbaseMapWritable.java:196)
> ----
> [d8g053:/root/hama-trunk]# bin/hama examples mult 4000 4000
> 08/11/27 13:25:33 INFO hama.AbstractMatrix: Initializing the matrix storage.
> 08/11/27 13:25:37 INFO hama.AbstractMatrix: Create Matrix DenseMatrix_randcsbgf
> 08/11/27 13:25:37 INFO hama.AbstractMatrix: Create the 4000 * 4000 random matrix : DenseMatrix_randcsbgf
> 08/11/27 13:29:40 INFO hama.AbstractMatrix: Initializing the matrix storage.
> 08/11/27 13:29:46 INFO hama.AbstractMatrix: Create Matrix DenseMatrix_randtqolf
> 08/11/27 13:29:46 INFO hama.AbstractMatrix: Create the 4000 * 4000 random matrix : DenseMatrix_randtqolf
> 08/11/27 13:33:50 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> 08/11/27 13:33:50 WARN mapred.JobClient: Use genericOptions for the option -libjars
> 08/11/27 13:33:50 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
> 08/11/27 13:33:50 INFO mapred.JobClient: Running job: job_200811271320_0004
> 08/11/27 13:33:51 INFO mapred.JobClient:  map 0% reduce 0%
> 08/11/27 13:35:10 INFO mapred.JobClient:  map 50% reduce 0%
> 08/11/27 13:35:20 INFO mapred.JobClient:  map 50% reduce 8%
> 08/11/27 13:35:22 INFO mapred.JobClient:  map 50% reduce 16%
> 08/11/27 13:37:07 INFO mapred.JobClient: Task Id : attempt_200811271320_0004_m_000001_0, Status : FAILED
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for taskTracker/jobcache/job_200811271320_0004/attempt_200811271320_0004_m_000001_0/output/file.out
>         at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:313)
>         at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
>         at org.apache.hadoop.mapred.MapOutputFile.getOutputFileForWrite(MapOutputFile.java:61)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.mergeParts(MapTask.java:994)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:702)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:228)
>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
> 08/11/27 13:38:22 INFO mapred.JobClient:  map 100% reduce 16%
> 08/11/27 13:38:29 INFO mapred.JobClient:  map 100% reduce 41%
> 08/11/27 13:38:36 INFO mapred.JobClient:  map 100% reduce 66%
> 08/11/27 13:38:42 INFO mapred.JobClient:  map 100% reduce 74%
> 08/11/27 13:38:45 INFO mapred.JobClient:  map 100% reduce 83%
> 08/11/27 13:38:51 INFO mapred.JobClient:  map 100% reduce 91%
> 08/11/27 13:38:54 INFO mapred.JobClient: Job complete: job_200811271320_0004
> 08/11/27 13:38:54 INFO mapred.JobClient: Counters: 13
> 08/11/27 13:38:54 INFO mapred.JobClient:   File Systems
> 08/11/27 13:38:54 INFO mapred.JobClient:     Local bytes read=1088487520
> 08/11/27 13:38:54 INFO mapred.JobClient:     Local bytes written=1631948820
> 08/11/27 13:38:54 INFO mapred.JobClient:   Job Counters 
> 08/11/27 13:38:54 INFO mapred.JobClient:     Launched reduce tasks=2
> 08/11/27 13:38:54 INFO mapred.JobClient:     Launched map tasks=3
> 08/11/27 13:38:54 INFO mapred.JobClient:   Map-Reduce Framework
> 08/11/27 13:38:54 INFO mapred.JobClient:     Reduce input groups=4
> 08/11/27 13:38:54 INFO mapred.JobClient:     Combine output records=0
> 08/11/27 13:38:54 INFO mapred.JobClient:     Map input records=4000
> 08/11/27 13:38:54 INFO mapred.JobClient:     Reduce output records=0
> 08/11/27 13:38:54 INFO mapred.JobClient:     Map output bytes=539725780
> 08/11/27 13:38:54 INFO mapred.JobClient:     Map input bytes=0
> 08/11/27 13:38:54 INFO mapred.JobClient:     Combine input records=0
> 08/11/27 13:38:54 INFO mapred.JobClient:     Map output records=8000
> 08/11/27 13:38:54 INFO mapred.JobClient:     Reduce input records=8000
> 08/11/27 13:38:56 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> 08/11/27 13:38:56 WARN mapred.JobClient: Use genericOptions for the option -libjars
> 08/11/27 13:38:57 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
> 08/11/27 13:38:57 INFO mapred.JobClient: Running job: job_200811271320_0005
> 08/11/27 13:38:58 INFO mapred.JobClient:  map 0% reduce 0%
> 08/11/27 13:40:17 INFO mapred.JobClient:  map 50% reduce 0%
> 08/11/27 13:40:25 INFO mapred.JobClient:  map 50% reduce 8%
> 08/11/27 13:40:32 INFO mapred.JobClient:  map 50% reduce 16%
> 08/11/27 13:42:12 INFO mapred.JobClient:  map 100% reduce 16%
> 08/11/27 13:42:25 INFO mapred.JobClient:  map 100% reduce 41%
> 08/11/27 13:42:26 INFO mapred.JobClient:  map 100% reduce 66%
> 08/11/27 13:42:32 INFO mapred.JobClient:  map 100% reduce 74%
> 08/11/27 13:42:39 INFO mapred.JobClient:  map 100% reduce 83%
> 08/11/27 13:42:41 INFO mapred.JobClient:  map 100% reduce 91%
> 08/11/27 13:42:44 INFO mapred.JobClient: Job complete: job_200811271320_0005
> 08/11/27 13:42:44 INFO mapred.JobClient: Counters: 13
> 08/11/27 13:42:44 INFO mapred.JobClient:   File Systems
> 08/11/27 13:42:44 INFO mapred.JobClient:     Local bytes read=1088487520
> 08/11/27 13:42:44 INFO mapred.JobClient:     Local bytes written=1631948820
> 08/11/27 13:42:44 INFO mapred.JobClient:   Job Counters 
> 08/11/27 13:42:44 INFO mapred.JobClient:     Launched reduce tasks=2
> 08/11/27 13:42:44 INFO mapred.JobClient:     Launched map tasks=3
> 08/11/27 13:42:44 INFO mapred.JobClient:   Map-Reduce Framework
> 08/11/27 13:42:44 INFO mapred.JobClient:     Reduce input groups=4
> 08/11/27 13:42:44 INFO mapred.JobClient:     Combine output records=0
> 08/11/27 13:42:44 INFO mapred.JobClient:     Map input records=4000
> 08/11/27 13:42:44 INFO mapred.JobClient:     Reduce output records=0
> 08/11/27 13:42:44 INFO mapred.JobClient:     Map output bytes=539725780
> 08/11/27 13:42:44 INFO mapred.JobClient:     Map input bytes=0
> 08/11/27 13:42:44 INFO mapred.JobClient:     Combine input records=0
> 08/11/27 13:42:44 INFO mapred.JobClient:     Map output records=8000
> 08/11/27 13:42:44 INFO mapred.JobClient:     Reduce input records=8000
> 08/11/27 13:42:44 INFO hama.AbstractMatrix: Initializing the matrix storage.
> 08/11/27 13:42:48 INFO hama.AbstractMatrix: Create Matrix DenseMatrix_randxvnpe
> 08/11/27 13:42:48 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> 08/11/27 13:42:48 WARN mapred.JobClient: Use genericOptions for the option -libjars
> 08/11/27 13:42:48 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
> 08/11/27 13:42:48 INFO mapred.JobClient: Running job: job_200811271320_0006
> 08/11/27 13:42:49 INFO mapred.JobClient:  map 0% reduce 0%
> 08/11/27 13:42:55 INFO mapred.JobClient:  map 50% reduce 0%
> 08/11/27 13:43:09 INFO mapred.JobClient:  map 50% reduce 8%
> 08/11/27 13:43:13 INFO mapred.JobClient:  map 50% reduce 16%
> 08/11/27 13:43:15 INFO mapred.JobClient: Task Id : attempt_200811271320_0006_m_000000_0, Status : FAILED
> java.lang.OutOfMemoryError: Java heap space
>         at java.util.Arrays.copyOf(Arrays.java:2786)
>         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at org.apache.hadoop.hbase.util.Bytes.writeByteArray(Bytes.java:65)
>         at org.apache.hadoop.hbase.io.Cell.write(Cell.java:152)
>         at org.apache.hadoop.hbase.io.HbaseMapWritable.write(HbaseMapWritable.java:196)
>         at org.apache.hadoop.hbase.io.RowResult.write(RowResult.java:245)
>         at org.apache.hadoop.hbase.util.Writables.getBytes(Writables.java:49)
>         at org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:134)
>         at org.apache.hama.mapred.BlockInputFormatBase$TableRecordReader.next(BlockInputFormatBase.java:181)
>         at org.apache.hama.mapred.BlockInputFormatBase$TableRecordReader.next(BlockInputFormatBase.java:58)
>         at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:165)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:45)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
> attempt_200811271320_0006_m_000000_0: log4j:WARN No appenders could be found for logger (org.apache.hadoop.mapred.TaskRunner).
> attempt_200811271320_0006_m_000000_0: log4j:WARN Please initialize the log4j system properly.
> 08/11/27 13:43:32 INFO mapred.JobClient: Task Id : attempt_200811271320_0006_m_000000_1, Status : FAILED
> java.lang.OutOfMemoryError: Java heap space
>         at java.util.Arrays.copyOf(Arrays.java:2786)
>         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at org.apache.hadoop.hbase.util.Bytes.writeByteArray(Bytes.java:65)
>         at org.apache.hadoop.hbase.io.Cell.write(Cell.java:152)
>         at org.apache.hadoop.hbase.io.HbaseMapWritable.write(HbaseMapWritable.java:196)
>         at org.apache.hadoop.hbase.io.RowResult.write(RowResult.java:245)
>         at org.apache.hadoop.hbase.util.Writables.getBytes(Writables.java:49)
>         at org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:134)
>         at org.apache.hama.mapred.BlockInputFormatBase$TableRecordReader.next(BlockInputFormatBase.java:181)
>         at org.apache.hama.mapred.BlockInputFormatBase$TableRecordReader.next(BlockInputFormatBase.java:58)
>         at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:165)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:45)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
> 08/11/27 13:43:50 INFO mapred.JobClient: Task Id : attempt_200811271320_0006_m_000000_2, Status : FAILED
> java.lang.OutOfMemoryError: Java heap space
>         at java.util.Arrays.copyOf(Arrays.java:2786)
>         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at org.apache.hadoop.hbase.util.Bytes.writeByteArray(Bytes.java:65)
>         at org.apache.hadoop.hbase.io.Cell.write(Cell.java:152)
>         at org.apache.hadoop.hbase.io.HbaseMapWritable.write(HbaseMapWritable.java:196)
>         at org.apache.hadoop.hbase.io.RowResult.write(RowResult.java:245)
>         at org.apache.hadoop.hbase.util.Writables.getBytes(Writables.java:49)
>         at org.apache.hadoop.hbase.util.Writables.copyWritable(Writables.java:134)
>         at org.apache.hama.mapred.BlockInputFormatBase$TableRecordReader.next(BlockInputFormatBase.java:181)
>         at org.apache.hama.mapred.BlockInputFormatBase$TableRecordReader.next(BlockInputFormatBase.java:58)
>         at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:165)
>         at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:45)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:227)
>         at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2207)
> java.io.IOException: Job failed!
>         at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1113)
>         at org.apache.hama.util.JobManager.execute(JobManager.java:33)
>         at org.apache.hama.DenseMatrix.mult(DenseMatrix.java:335)
>         at org.apache.hama.examples.MatrixMultiplication.main(MatrixMultiplication.java:51)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
>         at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:141)
>         at org.apache.hama.examples.ExampleDriver.main(ExampleDriver.java:30)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>         at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>         at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
> [d8g053:/root/hama-trunk]# 
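The stack traces point at ByteArrayOutputStream.write → Arrays.copyOf, i.e. the block is serialized through an unsized in-memory buffer. A minimal sketch of the mechanism (not the Hama code itself): the backing array roughly doubles each time it overflows, and during each grow step both the old and new arrays are live, so peak heap use briefly approaches twice the payload on top of the payload being copied in.

```java
import java.io.ByteArrayOutputStream;

public class BufferGrowthSketch {
    public static void main(String[] args) {
        // Stand-in payload for a serialized matrix block (1 MiB here;
        // a real 2000x2000 double block would be ~30 MiB).
        byte[] payload = new byte[1 << 20];

        // Default constructor starts with a tiny (32-byte) buffer, so the
        // write below triggers a chain of Arrays.copyOf grow steps -- the
        // exact call pattern seen in the OOM stack trace above.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(payload, 0, payload.length);
        System.out.println("Buffered bytes: " + out.size());

        // Pre-sizing the buffer, e.g. new ByteArrayOutputStream(expectedSize),
        // avoids the intermediate copies entirely.
    }
}
```

As a workaround (my suggestion, not something stated in this thread), raising the per-task heap via mapred.child.java.opts (e.g. -Xmx512m) may let the 4000 × 4000 case complete, but the underlying fix is to bound or pre-size the serialization buffer.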

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

