hadoop-mapreduce-issues mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1561) mapreduce patch tests hung with "java.lang.OutOfMemoryError: Java heap space"
Date Mon, 08 Mar 2010 20:49:27 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12842812#action_12842812 ]

Doug Cutting commented on MAPREDUCE-1561:
-----------------------------------------

Somehow Avro 1.2.0 is still on the CLASSPATH, causing the above error (SpecificData.java:48).

http://svn.apache.org/viewvc/hadoop/avro/tags/release-1.2.0/src/java/org/apache/avro/specific/SpecificData.java?view=annotate#l48
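
The trace below bottoms out in Class.getDeclaredField, so 1.2.0's SpecificData.createSchema() evidently reflects on a field named _SCHEMA that the record classes on this classpath no longer declare. A minimal Java sketch of that failure mode (my illustration, not Avro source; the class names are hypothetical):

    import java.lang.reflect.Field;

    public class SchemaFieldSketch {
        // Hypothetical stand-in for a record class generated against Avro 1.3.0,
        // which does not declare a _SCHEMA field.
        static class GeneratedRecord { }

        public static void main(String[] args) {
            try {
                // Mirrors the reflective lookup at SpecificData.java:48 in the trace.
                Field schema = GeneratedRecord.class.getDeclaredField("_SCHEMA");
                System.out.println("found " + schema);
            } catch (NoSuchFieldException e) {
                // This is what surfaces as AvroRuntimeException in the JobTracker log.
                System.out.println("NoSuchFieldException: " + e.getMessage());
            }
        }
    }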

The log states several times that 1.2.0 (still referenced by common) is evicted by 1.3.0,
but perhaps this eviction is not applied consistently?

org.apache.hadoop#avro;1.2.0 by [org.apache.hadoop#avro;1.3.0] in [common]
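
As a quick way to check which jar actually wins at runtime, something like the following sketch (hypothetical, not from this build; assumes an Avro jar is on the classpath) would print where SpecificData was loaded from:

    public class WhichAvroJar {
        public static void main(String[] args) throws ClassNotFoundException {
            Class<?> c = Class.forName("org.apache.avro.specific.SpecificData");
            // CodeSource location is the jar or directory the class came from,
            // so this distinguishes an avro-1.2.0 jar from an avro-1.3.0 jar.
            System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
        }
    }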

HADOOP-6486, which upgrades common to Avro 1.3.0, is ready to commit, but I wanted to commit
it at the same time as MAPREDUCE-1556, as otherwise things will be broken.  What should I
do?


> mapreduce patch tests hung with "java.lang.OutOfMemoryError: Java heap space"
> -----------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-1561
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1561
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: Giridharan Kesavan
>
> http://hudson.zones.apache.org/hudson/view/Mapreduce/job/Mapreduce-Patch-h9.grid.sp2.yahoo.net/4/console
> Error from the console:
>      [exec]     [junit] 10/03/05 04:08:29 INFO datanode.DataNode: PacketResponder 2 for block blk_-3280111748864197295_19758 terminating
>      [exec]     [junit] 10/03/05 04:08:29 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:46067 is added to blk_-3280111748864197295_19758{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[127.0.0.1:46067|RBW], ReplicaUnderConstruction[127.0.0.1:37626|RBW], ReplicaUnderConstruction[127.0.0.1:48886|RBW]]} size 0
>      [exec]     [junit] 10/03/05 04:08:29 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/hadoop-hudson/mapred/system/job_20100304162726530_3751/job-info is closed by DFSClient_79157028
>      [exec]     [junit] 10/03/05 04:08:29 INFO mapred.JobTracker: Job job_20100304162726530_3751 added successfully for user 'hudson' to queue 'default'
>      [exec]     [junit] 10/03/05 04:08:29 INFO mapred.JobTracker: Initializing job_20100304162726530_3751
>      [exec]     [junit] 10/03/05 04:08:29 INFO mapred.JobInProgress: Initializing job_20100304162726530_3751
>      [exec]     [junit] 10/03/05 04:08:29 INFO mapreduce.Job: Running job: job_20100304162726530_3751
>      [exec]     [junit] 10/03/05 04:08:29 INFO jobhistory.JobHistory: SetupWriter, creating file file:/grid/0/hudson/hudson-slave/workspace/Mapreduce-Patch-h9.grid.sp2.yahoo.net/trunk/build/contrib/raid/test/logs/history/job_20100304162726530_3751_hudson
>      [exec]     [junit] 10/03/05 04:08:29 ERROR mapred.JobTracker: Job initialization failed:
>      [exec]     [junit] org.apache.avro.AvroRuntimeException: java.lang.NoSuchFieldException: _SCHEMA
>      [exec]     [junit] 	at org.apache.avro.specific.SpecificData.createSchema(SpecificData.java:50)
>      [exec]     [junit] 	at org.apache.avro.reflect.ReflectData.getSchema(ReflectData.java:210)
>      [exec]     [junit] 	at org.apache.avro.specific.SpecificDatumWriter.<init>(SpecificDatumWriter.java:28)
>      [exec]     [junit] 	at org.apache.hadoop.mapreduce.jobhistory.EventWriter.<init>(EventWriter.java:47)
>      [exec]     [junit] 	at org.apache.hadoop.mapreduce.jobhistory.JobHistory.setupEventWriter(JobHistory.java:252)
>      [exec]     [junit] 	at org.apache.hadoop.mapred.JobInProgress.logSubmissionToJobHistory(JobInProgress.java:710)
>      [exec]     [junit] 	at org.apache.hadoop.mapred.JobInProgress.initTasks(JobInProgress.java:619)
>      [exec]     [junit] 	at org.apache.hadoop.mapred.JobTracker.initJob(JobTracker.java:3256)
>      [exec]     [junit] 	at org.apache.hadoop.mapred.EagerTaskInitializationListener$InitJob.run(EagerTaskInitializationListener.java:79)
>      [exec]     [junit] 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>      [exec]     [junit] 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>      [exec]     [junit] 	at java.lang.Thread.run(Thread.java:619)
>      [exec]     [junit] Caused by: java.lang.NoSuchFieldException: _SCHEMA
>      [exec]     [junit] 	at java.lang.Class.getDeclaredField(Class.java:1882)
>      [exec]     [junit] 	at org.apache.avro.specific.SpecificData.createSchema(SpecificData.java:48)
>      [exec]     [junit] 	... 11 more
>      [exec]     [junit] 
>      [exec]     [junit] Exception in thread "pool-1-thread-3" java.lang.OutOfMemoryError: Java heap space
>      [exec]     [junit] 	at java.util.Arrays.copyOf(Arrays.java:2786)
>      [exec]     [junit] 	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>      [exec]     [junit] 	at java.io.PrintStream.write(PrintStream.java:430)
>      [exec]     [junit] 	at org.apache.tools.ant.util.TeeOutputStream.write(TeeOutputStream.java:81)
>      [exec]     [junit] 	at java.io.PrintStream.write(PrintStream.java:430)
>      [exec]     [junit] 	at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:202)
>      [exec]     [junit] 	at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:272)
>      [exec]     [junit] 	at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:276)
>      [exec]     [junit] 	at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
>      [exec]     [junit] 	at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
>      [exec]     [junit] 	at org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:58)
>      [exec]     [junit] 	at org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:316)
>      [exec]     [junit] 	at org.apache.log4j.WriterAppender.append(WriterAppender.java:160)
>      [exec]     [junit] 	at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>      [exec]     [junit] 	at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)10/03/05 04:08:36 INFO raid.RaidNode: Triggering Policy Filter RaidTest1 hdfs://localhost:44624/user/test/raidtest
>      [exec]     [junit] 10/03/05 04:08:39 INFO raid.RaidNode: Trigger thread continuing to run...
>      [exec]     [junit] Exception in thread "org.apache.hadoop.raid.RaidNode$TriggerMonitor@5ebac9" 10/03/05 04:08:44 INFO security.Groups: Returning cached groups for 'hudso10/03/05 04:08:47 INFO ipc.Server: IPC Server handler 8 on 44624, call getException in thread "IPC Server handler 8 on 44624" java.lang.OutOfMemoryError: Java heap space10/03/05 04:08:53 INFO mapreduce.Job:  map 0% reduce 0%

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

