Subject: svn commit: r1387449 - in /hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project: ./ conf/ hadoop-mapreduce-client/ hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/ hadoop-mapreduce-clie...
Date: Wed, 19 Sep 2012 04:35:04 -0000
To: mapreduce-commits@hadoop.apache.org
From: todd@apache.org
Reply-To: mapreduce-dev@hadoop.apache.org
X-Mailer: svnmailer-1.0.8-patched
Message-Id: <20120919043505.524642388A66@eris.apache.org>

Author: todd
Date: Wed Sep 19 04:34:55 2012
New Revision: 1387449

URL: http://svn.apache.org/viewvc?rev=1387449&view=rev
Log:
Merge trunk into branch

Modified:
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/CHANGES.txt   (contents, props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/conf/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientServiceDelegate.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultipleLevelCaching.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/UtilsForTests.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/pom.xml
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/c++/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/block_forensics/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/build-contrib.xml   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/build.xml   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/data_join/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/eclipse-plugin/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/index/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/vaidya/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/examples/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/java/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/test/mapred/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc/   (props changed)
    hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/webapps/job/   (props changed)

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project:r1383030-1387448

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/CHANGES.txt?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/CHANGES.txt (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/CHANGES.txt Wed Sep 19 04:34:55 2012
@@ -138,6 +138,12 @@ Release 2.0.3-alpha - Unreleased
 
   BUG FIXES
 
+    MAPREDUCE-4607. Race condition in ReduceTask completion can result in Task
+    being incorrectly failed. (Bikas Saha via tomwhite)
+
+    MAPREDUCE-4646. Fixed MR framework to send diagnostic information correctly
+    to clients in case of failed jobs also. (Jason Lowe via vinodkv)
+
 Release 2.0.2-alpha - 2012-09-07
 
   INCOMPATIBLE CHANGES

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/CHANGES.txt
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/conf/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/conf:r1383030-1387448

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/JobImpl.java Wed Sep 19 04:34:55 2012
@@ -582,17 +582,23 @@ public class JobImpl implements org.apac
       String jobFile = remoteJobConfFile == null ?
"" : remoteJobConfFile.toString(); + StringBuilder diagsb = new StringBuilder(); + for (String s : getDiagnostics()) { + diagsb.append(s).append("\n"); + } + if (getState() == JobState.NEW) { return MRBuilderUtils.newJobReport(jobId, jobName, username, state, appSubmitTime, startTime, finishTime, setupProgress, 0.0f, 0.0f, - cleanupProgress, jobFile, amInfos, isUber); + cleanupProgress, jobFile, amInfos, isUber, diagsb.toString()); } computeProgress(); - return MRBuilderUtils.newJobReport(jobId, jobName, username, state, - appSubmitTime, startTime, finishTime, setupProgress, + JobReport report = MRBuilderUtils.newJobReport(jobId, jobName, username, + state, appSubmitTime, startTime, finishTime, setupProgress, this.mapProgress, this.reduceProgress, - cleanupProgress, jobFile, amInfos, isUber); + cleanupProgress, jobFile, amInfos, isUber, diagsb.toString()); + return report; } finally { readLock.unlock(); } Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java?rev=1387449&r1=1387448&r2=1387449&view=diff ============================================================================== --- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java (original) +++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java Wed Sep 19 04:34:55 2012 @@ -71,6 +71,7 @@ import org.apache.hadoop.mapreduce.v2.ap import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptReport; import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptState; import org.apache.hadoop.mapreduce.v2.api.records.TaskId; +import org.apache.hadoop.mapreduce.v2.api.records.TaskState; import org.apache.hadoop.mapreduce.v2.api.records.TaskType; import org.apache.hadoop.mapreduce.v2.app.AppContext; import org.apache.hadoop.mapreduce.v2.app.TaskAttemptListener; @@ -86,6 +87,7 @@ import org.apache.hadoop.mapreduce.v2.ap import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptKillEvent; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent; +import org.apache.hadoop.mapreduce.v2.app.job.event.TaskEvent; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.TaskAttemptStatus; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType; import org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent; @@ -120,6 +122,7 @@ import org.apache.hadoop.yarn.event.Even import org.apache.hadoop.yarn.factories.RecordFactory; import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider; import org.apache.hadoop.yarn.state.InvalidStateTransitonException; +import org.apache.hadoop.yarn.state.MultipleArcTransition; import org.apache.hadoop.yarn.state.SingleArcTransition; import org.apache.hadoop.yarn.state.StateMachine; import org.apache.hadoop.yarn.state.StateMachineFactory; @@ -128,6 +131,8 @@ import org.apache.hadoop.yarn.util.Build import 
Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java Wed Sep 19 04:34:55 2012
@@ -71,6 +71,7 @@ import org.apache.hadoop.mapreduce.v2.ap
 import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptReport;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskAttemptState;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
+import org.apache.hadoop.mapreduce.v2.api.records.TaskState;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskType;
 import org.apache.hadoop.mapreduce.v2.app.AppContext;
 import org.apache.hadoop.mapreduce.v2.app.TaskAttemptListener;
@@ -86,6 +87,7 @@ import org.apache.hadoop.mapreduce.v2.ap
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptKillEvent;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent;
+import org.apache.hadoop.mapreduce.v2.app.job.event.TaskEvent;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptStatusUpdateEvent.TaskAttemptStatus;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType;
 import org.apache.hadoop.mapreduce.v2.app.job.event.TaskTAttemptEvent;
@@ -120,6 +122,7 @@ import org.apache.hadoop.yarn.event.Even
 import org.apache.hadoop.yarn.factories.RecordFactory;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
 import org.apache.hadoop.yarn.state.InvalidStateTransitonException;
+import org.apache.hadoop.yarn.state.MultipleArcTransition;
 import org.apache.hadoop.yarn.state.SingleArcTransition;
 import org.apache.hadoop.yarn.state.StateMachine;
 import org.apache.hadoop.yarn.state.StateMachineFactory;
@@ -128,6 +131,8 @@ import org.apache.hadoop.yarn.util.Build
 import org.apache.hadoop.yarn.util.ConverterUtils;
 import org.apache.hadoop.yarn.util.RackResolver;
 
+import com.google.common.base.Preconditions;
+
 /**
  * Implementation of TaskAttempt interface.
  */
@@ -404,10 +409,10 @@ public abstract class TaskAttemptImpl im
          TaskAttemptState.FAILED,
          TaskAttemptEventType.TA_TOO_MANY_FETCH_FAILURE,
          new TooManyFetchFailureTransition())
-     .addTransition(
-         TaskAttemptState.SUCCEEDED, TaskAttemptState.KILLED,
-         TaskAttemptEventType.TA_KILL,
-         new KilledAfterSuccessTransition())
+     .addTransition(TaskAttemptState.SUCCEEDED,
+         EnumSet.of(TaskAttemptState.SUCCEEDED, TaskAttemptState.KILLED),
+         TaskAttemptEventType.TA_KILL,
+         new KilledAfterSuccessTransition())
      .addTransition(
          TaskAttemptState.SUCCEEDED, TaskAttemptState.SUCCEEDED,
          TaskAttemptEventType.TA_DIAGNOSTICS_UPDATE,
@@ -1483,6 +1488,9 @@ public abstract class TaskAttemptImpl im
     @SuppressWarnings("unchecked")
     @Override
     public void transition(TaskAttemptImpl taskAttempt, TaskAttemptEvent event) {
+      // too many fetch failures can only happen for map tasks
+      Preconditions
+          .checkArgument(taskAttempt.getID().getTaskId().getTaskType() == TaskType.MAP);
       //add to diagnostic
       taskAttempt.addDiagnosticInfo("Too many fetch failures. Failing the attempt");
       //set the finish time
@@ -1506,15 +1514,30 @@ public abstract class TaskAttemptImpl im
   }
 
   private static class KilledAfterSuccessTransition implements
-      SingleArcTransition<TaskAttemptImpl, TaskAttemptEvent> {
+      MultipleArcTransition<TaskAttemptImpl, TaskAttemptEvent, TaskAttemptState> {
 
     @SuppressWarnings("unchecked")
     @Override
-    public void transition(TaskAttemptImpl taskAttempt,
+    public TaskAttemptState transition(TaskAttemptImpl taskAttempt,
         TaskAttemptEvent event) {
-      TaskAttemptKillEvent msgEvent = (TaskAttemptKillEvent) event;
-      //add to diagnostic
-      taskAttempt.addDiagnosticInfo(msgEvent.getMessage());
+      if (taskAttempt.getID().getTaskId().getTaskType() == TaskType.REDUCE) {
+        // after a reduce task has succeeded, its outputs are safe in HDFS.
+        // logically such a task should not be killed. we only come here when
+        // there is a race condition in the event queue. E.g. some logic sends
+        // a kill request to this attempt when the successful completion event
+        // for this task is already in the event queue. so the kill event will
+        // get executed immediately after the attempt is marked successful and
+        // result in this transition being exercised.
+        // ignore this for reduce tasks
+        LOG.info("Ignoring killed event for successful reduce task attempt "
+            + taskAttempt.getID().toString());
+        return TaskAttemptState.SUCCEEDED;
+      }
+      if (event instanceof TaskAttemptKillEvent) {
+        TaskAttemptKillEvent msgEvent = (TaskAttemptKillEvent) event;
+        //add to diagnostic
+        taskAttempt.addDiagnosticInfo(msgEvent.getMessage());
+      }
 
       // not setting a finish time since it was set on success
       assert (taskAttempt.getFinishTime() != 0);
@@ -1528,6 +1551,7 @@ public abstract class TaskAttemptImpl im
           .getTaskId().getJobId(), tauce));
       taskAttempt.eventHandler.handle(new TaskTAttemptEvent(
           taskAttempt.attemptId, TaskEventType.T_ATTEMPT_KILLED));
+      return TaskAttemptState.KILLED;
     }
   }
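The switch above from SingleArcTransition to MultipleArcTransition is what lets TA_KILL on a SUCCEEDED attempt end in either SUCCEEDED or KILLED. A self-contained sketch of the difference follows, with the two interfaces re-stated locally in reduced form; Attempt, AttemptState, and the String event are toy placeholders, not the real MR types:

    // Local re-statement of the shapes of the two org.apache.hadoop.yarn.state
    // interfaces; the generic names mirror the yarn ones.
    interface SingleArcTransition<OPERAND, EVENT> {
      // post-state is fixed at registration time, so nothing is returned
      void transition(OPERAND operand, EVENT event);
    }

    interface MultipleArcTransition<OPERAND, EVENT, STATE> {
      // post-state is chosen by the hook itself and returned to the state
      // machine, which is what KilledAfterSuccessTransition needs
      STATE transition(OPERAND operand, EVENT event);
    }

    enum AttemptState { SUCCEEDED, KILLED }

    // toy operand standing in for TaskAttemptImpl
    class Attempt {
      final boolean isReduce;
      Attempt(boolean isReduce) { this.isReduce = isReduce; }
    }

    class KillAfterSuccessSketch
        implements MultipleArcTransition<Attempt, String, AttemptState> {
      @Override
      public AttemptState transition(Attempt attempt, String killEvent) {
        if (attempt.isReduce) {
          // a succeeded reduce has committed output; ignore the stale kill
          return AttemptState.SUCCEEDED;
        }
        return AttemptState.KILLED;
      }

      public static void main(String[] args) {
        KillAfterSuccessSketch t = new KillAfterSuccessSketch();
        System.out.println(t.transition(new Attempt(true), "TA_KILL"));  // SUCCEEDED
        System.out.println(t.transition(new Attempt(false), "TA_KILL")); // KILLED
      }
    }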
Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskImpl.java Wed Sep 19 04:34:55 2012
@@ -191,12 +191,12 @@ public abstract class TaskImpl implement
         TaskEventType.T_ADD_SPEC_ATTEMPT))
 
     // Transitions from SUCCEEDED state
-    .addTransition(TaskState.SUCCEEDED, //only possible for map tasks
+    .addTransition(TaskState.SUCCEEDED,
         EnumSet.of(TaskState.SCHEDULED, TaskState.SUCCEEDED, TaskState.FAILED),
-        TaskEventType.T_ATTEMPT_FAILED, new MapRetroactiveFailureTransition())
-    .addTransition(TaskState.SUCCEEDED, //only possible for map tasks
+        TaskEventType.T_ATTEMPT_FAILED, new RetroactiveFailureTransition())
+    .addTransition(TaskState.SUCCEEDED,
         EnumSet.of(TaskState.SCHEDULED, TaskState.SUCCEEDED),
-        TaskEventType.T_ATTEMPT_KILLED, new MapRetroactiveKilledTransition())
+        TaskEventType.T_ATTEMPT_KILLED, new RetroactiveKilledTransition())
 
     // Ignore-able transitions.
     .addTransition(
         TaskState.SUCCEEDED, TaskState.SUCCEEDED,
@@ -897,7 +897,7 @@ public abstract class TaskImpl implement
     }
   }
 
-  private static class MapRetroactiveFailureTransition
+  private static class RetroactiveFailureTransition
       extends AttemptFailedTransition {
 
     @Override
@@ -911,8 +911,8 @@ public abstract class TaskImpl implement
           return TaskState.SUCCEEDED;
         }
       }
-
-      //verify that this occurs only for map task
+
+      // a successful REDUCE task should not be overridden
       //TODO: consider moving it to MapTaskImpl
       if (!TaskType.MAP.equals(task.getType())) {
         LOG.error("Unexpected event for REDUCE task " + event.getType());
@@ -938,42 +938,46 @@ public abstract class TaskImpl implement
     }
   }
 
-  private static class MapRetroactiveKilledTransition implements
+  private static class RetroactiveKilledTransition implements
       MultipleArcTransition<TaskImpl, TaskEvent, TaskState> {
 
     @Override
     public TaskState transition(TaskImpl task, TaskEvent event) {
-      // verify that this occurs only for map task
+      TaskAttemptId attemptId = null;
+      if (event instanceof TaskTAttemptEvent) {
+        TaskTAttemptEvent castEvent = (TaskTAttemptEvent) event;
+        attemptId = castEvent.getTaskAttemptID();
+        if (task.getState() == TaskState.SUCCEEDED &&
+            !attemptId.equals(task.successfulAttempt)) {
+          // don't allow a different task attempt to override a previous
+          // succeeded state
+          return TaskState.SUCCEEDED;
+        }
+      }
+
+      // a successful REDUCE task should not be overridden
       // TODO: consider moving it to MapTaskImpl
       if (!TaskType.MAP.equals(task.getType())) {
         LOG.error("Unexpected event for REDUCE task " + event.getType());
         task.internalError(event.getType());
       }
-      TaskTAttemptEvent attemptEvent = (TaskTAttemptEvent) event;
-      TaskAttemptId attemptId = attemptEvent.getTaskAttemptID();
-      if(task.successfulAttempt == attemptId) {
-        // successful attempt is now killed. reschedule
-        // tell the job about the rescheduling
-        unSucceed(task);
-        task.handleTaskAttemptCompletion(
-            attemptId,
-            TaskAttemptCompletionEventStatus.KILLED);
-        task.eventHandler.handle(new JobMapTaskRescheduledEvent(task.taskId));
-        // typically we are here because this map task was run on a bad node and
-        // we want to reschedule it on a different node.
-        // Depending on whether there are previous failed attempts or not this
-        // can SCHEDULE or RESCHEDULE the container allocate request. If this
-        // SCHEDULE's then the dataLocal hosts of this taskAttempt will be used
-        // from the map splitInfo. So the bad node might be sent as a location
-        // to the RM. But the RM would ignore that just like it would ignore
-        // currently pending container requests affinitized to bad nodes.
-        task.addAndScheduleAttempt();
-        return TaskState.SCHEDULED;
-      } else {
-        // nothing to do
-        return TaskState.SUCCEEDED;
-      }
+      // successful attempt is now killed. reschedule
+      // tell the job about the rescheduling
+      unSucceed(task);
+      task.handleTaskAttemptCompletion(attemptId,
+          TaskAttemptCompletionEventStatus.KILLED);
+      task.eventHandler.handle(new JobMapTaskRescheduledEvent(task.taskId));
+      // typically we are here because this map task was run on a bad node and
+      // we want to reschedule it on a different node.
+      // Depending on whether there are previous failed attempts or not this
+      // can SCHEDULE or RESCHEDULE the container allocate request. If this
+      // SCHEDULE's then the dataLocal hosts of this taskAttempt will be used
+      // from the map splitInfo. So the bad node might be sent as a location
+      // to the RM. But the RM would ignore that just like it would ignore
+      // currently pending container requests affinitized to bad nodes.
+      task.addAndScheduleAttempt();
+      return TaskState.SCHEDULED;
     }
   }
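The TaskImpl table above re-registers the SUCCEEDED-state arcs with an EnumSet of permitted post-states, so a retroactive kill can either reschedule the task or leave it succeeded. A sketch of that registration pattern against the org.apache.hadoop.yarn.state API, assuming hadoop-yarn-common is on the classpath; MyTask, MyState, MyEventType, and MyEvent are made-up placeholder types, not the real TaskImpl table:

    import java.util.EnumSet;
    import org.apache.hadoop.yarn.state.MultipleArcTransition;
    import org.apache.hadoop.yarn.state.StateMachine;
    import org.apache.hadoop.yarn.state.StateMachineFactory;

    public class MultiArcRegistrationSketch {
      enum MyState { SUCCEEDED, SCHEDULED }
      enum MyEventType { T_ATTEMPT_KILLED }
      static class MyEvent {
        final boolean killedAttemptWasTheSuccessfulOne;
        MyEvent(boolean b) { killedAttemptWasTheSuccessfulOne = b; }
      }
      static class MyTask { /* operand; real code keeps task state here */ }

      // built once per class, the way TaskImpl builds its own table
      private static final StateMachineFactory<MyTask, MyState, MyEventType, MyEvent>
          factory =
            new StateMachineFactory<MyTask, MyState, MyEventType, MyEvent>(
                MyState.SUCCEEDED)
              // the EnumSet declares every state the hook may return
              .addTransition(MyState.SUCCEEDED,
                  EnumSet.of(MyState.SCHEDULED, MyState.SUCCEEDED),
                  MyEventType.T_ATTEMPT_KILLED,
                  new MultipleArcTransition<MyTask, MyEvent, MyState>() {
                    @Override
                    public MyState transition(MyTask task, MyEvent event) {
                      // mirrors RetroactiveKilledTransition's decision:
                      // reschedule only if the killed attempt was the one
                      // that made the task SUCCEEDED
                      return event.killedAttemptWasTheSuccessfulOne
                          ? MyState.SCHEDULED : MyState.SUCCEEDED;
                    }
                  })
              .installTopology();

      public static void main(String[] args) {
        StateMachine<MyState, MyEventType, MyEvent> sm = factory.make(new MyTask());
        System.out.println(sm.doTransition(
            MyEventType.T_ATTEMPT_KILLED, new MyEvent(true))); // SCHEDULED
      }
    }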
Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestMRApp.java Wed Sep 19 04:34:55 2012
@@ -180,7 +180,7 @@ public class TestMRApp {
   @Test
   public void testUpdatedNodes() throws Exception {
     int runCount = 0;
-    MRApp app = new MRAppWithHistory(2, 1, false, this.getClass().getName(),
+    MRApp app = new MRAppWithHistory(2, 2, false, this.getClass().getName(),
         true, ++runCount);
     Configuration conf = new Configuration();
     // after half of the map completion, reduce will start
@@ -189,7 +189,7 @@ public class TestMRApp {
     conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, false);
     Job job = app.submit(conf);
     app.waitForState(job, JobState.RUNNING);
-    Assert.assertEquals("Num tasks not correct", 3, job.getTasks().size());
+    Assert.assertEquals("Num tasks not correct", 4, job.getTasks().size());
     Iterator<Task> it = job.getTasks().values().iterator();
     Task mapTask1 = it.next();
     Task mapTask2 = it.next();
@@ -272,18 +272,19 @@ public class TestMRApp {
 
     // rerun
     // in rerun the 1st map will be recovered from previous run
-    app = new MRAppWithHistory(2, 1, false, this.getClass().getName(), false,
+    app = new MRAppWithHistory(2, 2, false, this.getClass().getName(), false,
         ++runCount);
     conf = new Configuration();
     conf.setBoolean(MRJobConfig.MR_AM_JOB_RECOVERY_ENABLE, true);
     conf.setBoolean(MRJobConfig.JOB_UBERTASK_ENABLE, false);
     job = app.submit(conf);
     app.waitForState(job, JobState.RUNNING);
-    Assert.assertEquals("No of tasks not correct", 3, job.getTasks().size());
+    Assert.assertEquals("No of tasks not correct", 4, job.getTasks().size());
     it = job.getTasks().values().iterator();
     mapTask1 = it.next();
     mapTask2 = it.next();
-    Task reduceTask = it.next();
+    Task reduceTask1 = it.next();
+    Task reduceTask2 = it.next();
 
     // map 1 will be recovered, no need to send done
     app.waitForState(mapTask1, TaskState.SUCCEEDED);
@@ -306,19 +307,36 @@ public class TestMRApp {
     Assert.assertEquals("Expecting 1 more completion events for success", 3,
         events.length);
 
-    app.waitForState(reduceTask, TaskState.RUNNING);
-    TaskAttempt task3Attempt = reduceTask.getAttempts().values().iterator()
+    app.waitForState(reduceTask1, TaskState.RUNNING);
+    app.waitForState(reduceTask2, TaskState.RUNNING);
+
+    TaskAttempt task3Attempt = reduceTask1.getAttempts().values().iterator()
         .next();
     app.getContext()
         .getEventHandler()
         .handle(
             new TaskAttemptEvent(task3Attempt.getID(),
                 TaskAttemptEventType.TA_DONE));
-    app.waitForState(reduceTask, TaskState.SUCCEEDED);
+    app.waitForState(reduceTask1, TaskState.SUCCEEDED);
+    app.getContext()
+        .getEventHandler()
+        .handle(
+            new TaskAttemptEvent(task3Attempt.getID(),
+                TaskAttemptEventType.TA_KILL));
+    app.waitForState(reduceTask1, TaskState.SUCCEEDED);
+
+    TaskAttempt task4Attempt = reduceTask2.getAttempts().values().iterator()
+        .next();
+    app.getContext()
+        .getEventHandler()
+        .handle(
+            new TaskAttemptEvent(task4Attempt.getID(),
+                TaskAttemptEventType.TA_DONE));
+    app.waitForState(reduceTask2, TaskState.SUCCEEDED);
 
     events = job.getTaskAttemptCompletionEvents(0, 100);
-    Assert.assertEquals("Expecting 1 more completion events for success", 4,
-        events.length);
+    Assert.assertEquals("Expecting 2 more completion events for reduce success",
+        5, events.length);
 
     // job succeeds
     app.waitForState(job, JobState.SUCCEEDED);

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/TestRMContainerAllocator.java Wed Sep 19 04:34:55 2012
@@ -138,7 +138,7 @@ public class TestRMContainerAllocator {
     Job mockJob = mock(Job.class);
     when(mockJob.getReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0,
-            0, 0, 0, 0, 0, 0, "jobfile", null, false));
+            0, 0, 0, 0, 0, 0, "jobfile", null, false, ""));
     MyContainerAllocator allocator = new MyContainerAllocator(rm, conf,
         appAttemptId, mockJob);
 
@@ -215,7 +215,7 @@ public class TestRMContainerAllocator {
     Job mockJob = mock(Job.class);
     when(mockJob.getReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0,
-            0, 0, 0, 0, 0, 0, "jobfile", null, false));
+            0, 0, 0, 0, 0, 0, "jobfile", null, false, ""));
     MyContainerAllocator allocator = new MyContainerAllocator(rm, conf,
         appAttemptId, mockJob);
 
@@ -281,7 +281,7 @@ public class TestRMContainerAllocator {
     Job mockJob = mock(Job.class);
     when(mockJob.getReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0,
-            0, 0, 0, 0, 0, 0, "jobfile", null, false));
+            0, 0, 0, 0, 0, 0, "jobfile", null, false, ""));
     MyContainerAllocator allocator = new MyContainerAllocator(rm, conf,
         appAttemptId, mockJob);
 
@@ -723,7 +723,7 @@ public class TestRMContainerAllocator {
     Job mockJob = mock(Job.class);
     when(mockJob.getReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0,
-            0, 0, 0, 0, 0, 0, "jobfile", null, false));
+            0, 0, 0, 0, 0, 0, "jobfile", null, false, ""));
     MyContainerAllocator allocator = new MyContainerAllocator(rm, conf,
         appAttemptId, mockJob);
 
@@ -827,7 +827,7 @@ public class TestRMContainerAllocator {
     Job mockJob = mock(Job.class);
     when(mockJob.getReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0,
-            0, 0, 0, 0, 0, 0, "jobfile", null, false));
+            0, 0, 0, 0, 0, 0, "jobfile", null, false, ""));
     MyContainerAllocator allocator = new MyContainerAllocator(rm, conf,
         appAttemptId, mockJob);
 
@@ -993,7 +993,7 @@ public class TestRMContainerAllocator {
     Job mockJob = mock(Job.class);
     when(mockJob.getReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0,
-            0, 0, 0, 0, 0, 0, "jobfile", null, false));
+            0, 0, 0, 0, 0, 0, "jobfile", null, false, ""));
     MyContainerAllocator allocator = new MyContainerAllocator(rm, conf,
         appAttemptId, mockJob);
 
@@ -1445,7 +1445,7 @@ public class TestRMContainerAllocator {
     Job job = mock(Job.class);
     when(job.getReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "job", "user", JobState.RUNNING, 0,
-            0, 0, 0, 0, 0, 0, "jobfile", null, false));
+            0, 0, 0, 0, 0, 0, "jobfile", null, false, ""));
     doReturn(10).when(job).getTotalMaps();
     doReturn(10).when(job).getTotalReduces();
     doReturn(0).when(job).getCompletedMaps();

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestJobImpl.java Wed Sep 19 04:34:55 2012
@@ -45,11 +45,14 @@ import org.apache.hadoop.mapreduce.v2.ap
 import org.apache.hadoop.mapreduce.v2.api.records.JobState;
 import org.apache.hadoop.mapreduce.v2.api.records.TaskId;
 import org.apache.hadoop.mapreduce.v2.app.job.Task;
+import org.apache.hadoop.mapreduce.v2.app.job.event.JobDiagnosticsUpdateEvent;
 import org.apache.hadoop.mapreduce.v2.app.job.event.JobEvent;
+import org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType;
 import org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.InitTransition;
 import org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl.JobNoTasksCompletedTransition;
 import org.apache.hadoop.mapreduce.v2.app.metrics.MRAppMetrics;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.yarn.SystemClock;
 import org.apache.hadoop.yarn.api.records.ApplicationAttemptId;
 import org.apache.hadoop.yarn.event.EventHandler;
 import org.apache.hadoop.yarn.util.Records;
@@ -172,6 +175,8 @@ public class TestJobImpl {
     t.testCheckJobCompleteSuccess();
     t.testCheckJobCompleteSuccessFailed();
     t.testCheckAccess();
+    t.testReportDiagnostics();
+    t.testUberDecision();
   }
 
   @Test
@@ -241,6 +246,41 @@ public class TestJobImpl {
     Assert.assertTrue(job5.checkAccess(ugi1, null));
     Assert.assertTrue(job5.checkAccess(ugi2, null));
   }
+
+  @Test
+  public void testReportDiagnostics() throws Exception {
+    JobID jobID = JobID.forName("job_1234567890000_0001");
+    JobId jobId = TypeConverter.toYarn(jobID);
+    final String diagMsg = "some diagnostic message";
+    final JobDiagnosticsUpdateEvent diagUpdateEvent =
+        new JobDiagnosticsUpdateEvent(jobId, diagMsg);
+    MRAppMetrics mrAppMetrics = MRAppMetrics.create();
+    JobImpl job = new JobImpl(jobId, Records
+        .newRecord(ApplicationAttemptId.class), new Configuration(),
+        mock(EventHandler.class),
+        null, mock(JobTokenSecretManager.class), null,
+        new SystemClock(), null,
+        mrAppMetrics, mock(OutputCommitter.class),
+        true, null, 0, null, null);
+    job.handle(diagUpdateEvent);
+    String diagnostics = job.getReport().getDiagnostics();
+    Assert.assertNotNull(diagnostics);
+    Assert.assertTrue(diagnostics.contains(diagMsg));
+
+    job = new JobImpl(jobId, Records
+        .newRecord(ApplicationAttemptId.class), new Configuration(),
+        mock(EventHandler.class),
+        null, mock(JobTokenSecretManager.class), null,
+        new SystemClock(), null,
+        mrAppMetrics, mock(OutputCommitter.class),
+        true, null, 0, null, null);
+    job.handle(new JobEvent(jobId, JobEventType.JOB_KILL));
+    job.handle(diagUpdateEvent);
+    diagnostics = job.getReport().getDiagnostics();
+    Assert.assertNotNull(diagnostics);
+    Assert.assertTrue(diagnostics.contains(diagMsg));
+  }
+
   @Test
   public void testUberDecision() throws Exception {
Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestTaskImpl.java Wed Sep 19 04:34:55 2012
@@ -84,7 +84,6 @@ public class TestTaskImpl {
   private ApplicationId appId;
   private TaskSplitMetaInfo taskSplitMetaInfo;
   private String[] dataLocations = new String[0];
-  private final TaskType taskType = TaskType.MAP;
   private AppContext appContext;
 
   private int startCount = 0;
@@ -97,6 +96,7 @@ public class TestTaskImpl {
   private class MockTaskImpl extends TaskImpl {
 
     private int taskAttemptCounter = 0;
+    TaskType taskType;
 
     public MockTaskImpl(JobId jobId, int partition,
         EventHandler eventHandler, Path remoteJobConfFile, JobConf conf,
@@ -104,11 +104,12 @@ public class TestTaskImpl {
         Token<JobTokenIdentifier> jobToken,
         Credentials credentials, Clock clock,
         Map<TaskId, TaskInfo> completedTasksFromPreviousRun, int startCount,
-        MRAppMetrics metrics, AppContext appContext) {
+        MRAppMetrics metrics, AppContext appContext, TaskType taskType) {
       super(jobId, taskType , partition, eventHandler,
           remoteJobConfFile, conf, taskAttemptListener, committer,
           jobToken, credentials, clock,
           completedTasksFromPreviousRun, startCount, metrics, appContext);
+      this.taskType = taskType;
     }
 
     @Override
@@ -120,7 +121,7 @@ public class TestTaskImpl {
     protected TaskAttemptImpl createAttempt() {
       MockTaskAttemptImpl attempt = new MockTaskAttemptImpl(getID(),
          ++taskAttemptCounter, eventHandler, taskAttemptListener,
          remoteJobConfFile, partition,
-          conf, committer, jobToken, credentials, clock, appContext);
+          conf, committer, jobToken, credentials, clock, appContext, taskType);
       taskAttempts.add(attempt);
       return attempt;
     }
@@ -142,18 +143,20 @@ public class TestTaskImpl {
     private float progress = 0;
     private TaskAttemptState state = TaskAttemptState.NEW;
     private TaskAttemptId attemptId;
+    private TaskType taskType;
 
     public MockTaskAttemptImpl(TaskId taskId, int id, EventHandler eventHandler,
         TaskAttemptListener taskAttemptListener, Path jobFile, int partition,
         JobConf conf, OutputCommitter committer,
         Token<JobTokenIdentifier> jobToken,
         Credentials credentials, Clock clock,
-        AppContext appContext) {
+        AppContext appContext, TaskType taskType) {
       super(taskId, id, eventHandler, taskAttemptListener, jobFile, partition, conf,
           dataLocations, committer, jobToken, credentials, clock, appContext);
       attemptId = Records.newRecord(TaskAttemptId.class);
       attemptId.setId(id);
       attemptId.setTaskId(taskId);
+      this.taskType = taskType;
     }
 
     public TaskAttemptId getAttemptId() {
@@ -162,7 +165,7 @@ public class TestTaskImpl {
 
     @Override
     protected Task createRemoteTask() {
-      return new MockTask();
+      return new MockTask(taskType);
     }
 
     public float getProgress() {
@@ -185,6 +188,11 @@ public class TestTaskImpl {
 
   private class MockTask extends Task {
 
+    private TaskType taskType;
+    MockTask(TaskType taskType) {
+      this.taskType = taskType;
+    }
+
     @Override
     public void run(JobConf job, TaskUmbilicalProtocol umbilical)
         throws IOException, ClassNotFoundException, InterruptedException {
@@ -193,7 +201,7 @@ public class TestTaskImpl {
 
     @Override
     public boolean isMapTask() {
-      return true;
+      return (taskType == TaskType.MAP);
     }
 
   }
@@ -227,14 +235,15 @@ public class TestTaskImpl {
     taskSplitMetaInfo = mock(TaskSplitMetaInfo.class);
     when(taskSplitMetaInfo.getLocations()).thenReturn(dataLocations);
 
-    taskAttempts = new ArrayList<MockTaskAttemptImpl>();
-
-    mockTask = new MockTaskImpl(jobId, partition, dispatcher.getEventHandler(),
+    taskAttempts = new ArrayList<MockTaskAttemptImpl>();
+  }
+
+  private MockTaskImpl createMockTask(TaskType taskType) {
+    return new MockTaskImpl(jobId, partition, dispatcher.getEventHandler(),
         remoteJobConfFile, conf, taskAttemptListener, committer, jobToken,
         credentials, clock,
         completedTasksFromPreviousRun, startCount,
-        metrics, appContext);
-
+        metrics, appContext, taskType);
   }
 
   @After
@@ -342,6 +351,7 @@ public class TestTaskImpl {
   @Test
   public void testInit() {
     LOG.info("--- START: testInit ---");
+    mockTask = createMockTask(TaskType.MAP);
     assertTaskNewState();
     assert(taskAttempts.size() == 0);
   }
@@ -352,6 +362,7 @@ public class TestTaskImpl {
    */
   public void testScheduleTask() {
     LOG.info("--- START: testScheduleTask ---");
+    mockTask = createMockTask(TaskType.MAP);
     TaskId taskId = getNewTaskID();
     scheduleTaskAttempt(taskId);
   }
@@ -362,6 +373,7 @@ public class TestTaskImpl {
    */
   public void testKillScheduledTask() {
     LOG.info("--- START: testKillScheduledTask ---");
+    mockTask = createMockTask(TaskType.MAP);
     TaskId taskId = getNewTaskID();
     scheduleTaskAttempt(taskId);
     killTask(taskId);
@@ -374,6 +386,7 @@ public class TestTaskImpl {
    */
   public void testKillScheduledTaskAttempt() {
     LOG.info("--- START: testKillScheduledTaskAttempt ---");
+    mockTask = createMockTask(TaskType.MAP);
     TaskId taskId = getNewTaskID();
     scheduleTaskAttempt(taskId);
     killScheduledTaskAttempt(getLastAttempt().getAttemptId());
@@ -386,6 +399,7 @@ public class TestTaskImpl {
    */
   public void testLaunchTaskAttempt() {
     LOG.info("--- START: testLaunchTaskAttempt ---");
+    mockTask = createMockTask(TaskType.MAP);
     TaskId taskId = getNewTaskID();
     scheduleTaskAttempt(taskId);
     launchTaskAttempt(getLastAttempt().getAttemptId());
@@ -398,6 +412,7 @@ public class TestTaskImpl {
    */
   public void testKillRunningTaskAttempt() {
     LOG.info("--- START: testKillRunningTaskAttempt ---");
+    mockTask = createMockTask(TaskType.MAP);
     TaskId taskId = getNewTaskID();
     scheduleTaskAttempt(taskId);
     launchTaskAttempt(getLastAttempt().getAttemptId());
@@ -407,6 +422,7 @@ public class TestTaskImpl {
   @Test
   public void testTaskProgress() {
     LOG.info("--- START: testTaskProgress ---");
+    mockTask = createMockTask(TaskType.MAP);
 
     // launch task
     TaskId taskId = getNewTaskID();
@@ -444,6 +460,7 @@ public class TestTaskImpl {
 
   @Test
   public void testFailureDuringTaskAttemptCommit() {
+    mockTask = createMockTask(TaskType.MAP);
     TaskId taskId = getNewTaskID();
     scheduleTaskAttempt(taskId);
     launchTaskAttempt(getLastAttempt().getAttemptId());
@@ -469,8 +486,7 @@ public class TestTaskImpl {
     assertTaskSucceededState();
   }
 
-  @Test
-  public void testSpeculativeTaskAttemptSucceedsEvenIfFirstFails() {
+  private void runSpeculativeTaskAttemptSucceedsEvenIfFirstFails(TaskEventType failEvent) {
     TaskId taskId = getNewTaskID();
     scheduleTaskAttempt(taskId);
     launchTaskAttempt(getLastAttempt().getAttemptId());
@@ -489,11 +505,34 @@ public class TestTaskImpl {
 
     // Now fail the first task attempt, after the second has succeeded
     mockTask.handle(new TaskTAttemptEvent(taskAttempts.get(0).getAttemptId(),
-        TaskEventType.T_ATTEMPT_FAILED));
+        failEvent));
 
     // The task should still be in the succeeded state
     assertTaskSucceededState();
-
+  }
+
+  @Test
+  public void testMapSpeculativeTaskAttemptSucceedsEvenIfFirstFails() {
+    mockTask = createMockTask(TaskType.MAP);
+    runSpeculativeTaskAttemptSucceedsEvenIfFirstFails(TaskEventType.T_ATTEMPT_FAILED);
+  }
+
+  @Test
+  public void testReduceSpeculativeTaskAttemptSucceedsEvenIfFirstFails() {
+    mockTask = createMockTask(TaskType.REDUCE);
+    runSpeculativeTaskAttemptSucceedsEvenIfFirstFails(TaskEventType.T_ATTEMPT_FAILED);
+  }
+
+  @Test
+  public void testMapSpeculativeTaskAttemptSucceedsEvenIfFirstIsKilled() {
+    mockTask = createMockTask(TaskType.MAP);
+    runSpeculativeTaskAttemptSucceedsEvenIfFirstFails(TaskEventType.T_ATTEMPT_KILLED);
+  }
+
+  @Test
+  public void testReduceSpeculativeTaskAttemptSucceedsEvenIfFirstIsKilled() {
+    mockTask = createMockTask(TaskType.REDUCE);
+    runSpeculativeTaskAttemptSucceedsEvenIfFirstFails(TaskEventType.T_ATTEMPT_KILLED);
   }
 }

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/util/MRBuilderUtils.java Wed Sep 19 04:34:55 2012
@@ -67,7 +67,7 @@ public class MRBuilderUtils {
       String userName, JobState state, long submitTime, long startTime,
       long finishTime, float setupProgress, float mapProgress,
       float reduceProgress, float cleanupProgress, String jobFile,
       List<AMInfo> amInfos,
-      boolean isUber) {
+      boolean isUber, String diagnostics) {
     JobReport report = Records.newRecord(JobReport.class);
     report.setJobId(jobId);
     report.setJobName(jobName);
@@ -83,6 +83,7 @@ public class MRBuilderUtils {
     report.setJobFile(jobFile);
     report.setAMInfos(amInfos);
     report.setIsUber(isUber);
+    report.setDiagnostics(diagnostics);
     return report;
   }
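With the extra parameter above, MRBuilderUtils.newJobReport callers supply diagnostics up front and clients read them straight off the JobReport. A sketch of the updated call shape, assuming the MR v2 API and the YARN record factory are on the classpath; the bare jobId construction and the diagnostic string are illustrative:

    import org.apache.hadoop.mapreduce.v2.api.records.JobId;
    import org.apache.hadoop.mapreduce.v2.api.records.JobReport;
    import org.apache.hadoop.mapreduce.v2.api.records.JobState;
    import org.apache.hadoop.mapreduce.v2.util.MRBuilderUtils;
    import org.apache.hadoop.yarn.util.Records;

    public class JobReportDiagnosticsSketch {
      public static void main(String[] args) {
        JobId jobId = Records.newRecord(JobId.class);
        // the trailing String is the parameter added by this change;
        // callers with nothing to report pass ""
        JobReport report = MRBuilderUtils.newJobReport(jobId, "job", "user",
            JobState.FAILED, 0, 0, 0, 0, 0, 0, 0, "jobfile",
            null /* amInfos, as the tests above do */, false,
            "Task failed: example diagnostic");
        // clients read the aggregated diagnostics straight off the report
        System.out.println(report.getDiagnostics());
      }
    }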
Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/util/ConfigUtil.java Wed Sep 19 04:34:55 2012
@@ -520,5 +520,10 @@ public class ConfigUtil {
       MRJobConfig.MR_AM_SECURITY_SERVICE_AUTHORIZATION_CLIENT
     });
   }
+
+  public static void main(String[] args) {
+    loadResources();
+    Configuration.dumpDeprecatedKeys();
+  }
 }

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml:r1383030-1387448

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientServiceDelegate.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientServiceDelegate.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientServiceDelegate.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestClientServiceDelegate.java Wed Sep 19 04:34:55 2012
@@ -219,7 +219,8 @@ public class TestClientServiceDelegate {
     GetJobReportResponse jobReportResponse1 = mock(GetJobReportResponse.class);
     when(jobReportResponse1.getJobReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "jobName-firstGen", "user",
-            JobState.RUNNING, 0, 0, 0, 0, 0, 0, 0, "anything", null, false));
+            JobState.RUNNING, 0, 0, 0, 0, 0, 0, 0, "anything", null,
+            false, ""));
 
     // First AM returns a report with jobName firstGen and simulates AM shutdown
     // on second invocation.
@@ -231,7 +232,8 @@ public class TestClientServiceDelegate {
     GetJobReportResponse jobReportResponse2 = mock(GetJobReportResponse.class);
     when(jobReportResponse2.getJobReport()).thenReturn(
         MRBuilderUtils.newJobReport(jobId, "jobName-secondGen", "user",
-            JobState.RUNNING, 0, 0, 0, 0, 0, 0, 0, "anything", null, false));
+            JobState.RUNNING, 0, 0, 0, 0, 0, 0, 0, "anything", null,
+            false, ""));
 
     // Second AM generation returns a report with jobName secondGen
     MRClientProtocol secondGenAMProxy = mock(MRClientProtocol.class);

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestFileInputFormat.java Wed Sep 19 04:34:55 2012
@@ -23,6 +23,7 @@ import static org.mockito.Mockito.when;
 
 import java.io.DataOutputStream;
 import java.io.IOException;
+import java.util.concurrent.TimeoutException;
 
 import junit.framework.TestCase;
 
@@ -95,7 +96,7 @@ public class TestFileInputFormat extends
   }
 
   private void createInputs(FileSystem fs, Path inDir, String fileName)
-      throws IOException {
+      throws IOException, TimeoutException, InterruptedException {
     // create a multi-block file on hdfs
     Path path = new Path(inDir, fileName);
     final short replication = 2;
@@ -157,7 +158,7 @@ public class TestFileInputFormat extends
     }
   }
 
-  public void testMultiLevelInput() throws IOException {
+  public void testMultiLevelInput() throws Exception {
     JobConf job = new JobConf(conf);
 
     job.setBoolean("dfs.replication.considerLoad", false);
@@ -291,7 +292,8 @@ public class TestFileInputFormat extends
   }
 
   static void writeFile(Configuration conf, Path name,
-      short replication, int numBlocks) throws IOException {
+      short replication, int numBlocks)
+      throws IOException, TimeoutException, InterruptedException {
     FileSystem fileSys = FileSystem.get(conf);
 
     FSDataOutputStream stm = fileSys.create(name, true,

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultipleLevelCaching.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultipleLevelCaching.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultipleLevelCaching.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/TestMultipleLevelCaching.java Wed Sep 19 04:34:55 2012
@@ -71,13 +71,13 @@ public class TestMultipleLevelCaching ex
     return rack.toString();
   }
 
-  public void testMultiLevelCaching() throws IOException {
+  public void testMultiLevelCaching() throws Exception {
     for (int i = 1 ; i <= MAX_LEVEL; ++i) {
       testCachingAtLevel(i);
     }
   }
 
-  private void testCachingAtLevel(int level) throws IOException {
+  private void testCachingAtLevel(int level) throws Exception {
     String namenode = null;
     MiniDFSCluster dfs = null;
     MiniMRCluster mr = null;

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/UtilsForTests.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/UtilsForTests.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/UtilsForTests.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapred/UtilsForTests.java Wed Sep 19 04:34:55 2012
@@ -31,6 +31,7 @@ import java.util.Enumeration;
 import java.util.Iterator;
 import java.util.List;
 import java.util.Properties;
+import java.util.concurrent.TimeoutException;
 
 import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
@@ -449,11 +450,14 @@ public class UtilsForTests {
   static void signalTasks(MiniDFSCluster dfs, FileSystem fileSys,
                           String mapSignalFile,
                           String reduceSignalFile, int replication)
-      throws IOException {
-    writeFile(dfs.getNameNode(), fileSys.getConf(), new Path(mapSignalFile),
-              (short)replication);
-    writeFile(dfs.getNameNode(), fileSys.getConf(), new Path(reduceSignalFile),
-              (short)replication);
+      throws IOException, TimeoutException {
+    try {
+      writeFile(dfs.getNameNode(), fileSys.getConf(), new Path(mapSignalFile),
+                (short)replication);
+      writeFile(dfs.getNameNode(), fileSys.getConf(),
+                new Path(reduceSignalFile), (short)replication);
+    } catch (InterruptedException ie) {
+      // Ignore
+    }
   }
 
   /**
@@ -462,12 +466,16 @@ public class UtilsForTests {
   static void signalTasks(MiniDFSCluster dfs, FileSystem fileSys,
                           boolean isMap, String mapSignalFile,
                           String reduceSignalFile)
-      throws IOException {
-    // signal the maps to complete
-    writeFile(dfs.getNameNode(), fileSys.getConf(),
-              isMap
-              ? new Path(mapSignalFile)
-              : new Path(reduceSignalFile), (short)1);
+      throws IOException, TimeoutException {
+    try {
+      // signal the maps to complete
+      writeFile(dfs.getNameNode(), fileSys.getConf(),
+                isMap
+                ? new Path(mapSignalFile)
+                : new Path(reduceSignalFile), (short)1);
+    } catch (InterruptedException ie) {
+      // Ignore
+    }
   }
 
   static String getSignalFile(Path dir) {
@@ -483,7 +491,8 @@ public class UtilsForTests {
   }
 
   static void writeFile(NameNode namenode, Configuration conf, Path name,
-      short replication) throws IOException {
+      short replication)
+      throws IOException, TimeoutException, InterruptedException {
     FileSystem fileSys = FileSystem.get(conf);
     SequenceFile.Writer writer =
       SequenceFile.createWriter(fileSys, conf, name,

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/lib/input/TestCombineFileInputFormat.java Wed Sep 19 04:34:55 2012
@@ -23,6 +23,7 @@ import java.net.URI;
 import java.util.List;
 import java.util.ArrayList;
 import java.util.zip.GZIPOutputStream;
+import java.util.concurrent.TimeoutException;
 
 import junit.framework.TestCase;
 
@@ -278,7 +279,7 @@ public class TestCombineFileInputFormat
     assertFalse(rr.nextKeyValue());
   }
 
-  public void testSplitPlacement() throws IOException {
+  public void testSplitPlacement() throws Exception {
     MiniDFSCluster dfs = null;
     FileSystem fileSys = null;
     try {
@@ -678,7 +679,8 @@ public class TestCombineFileInputFormat
   }
 
   static void writeFile(Configuration conf, Path name,
-      short replication, int numBlocks) throws IOException {
+      short replication, int numBlocks)
+      throws IOException, TimeoutException, InterruptedException {
     FileSystem fileSys = FileSystem.get(conf);
 
     FSDataOutputStream stm = fileSys.create(name, true,
@@ -689,7 +691,8 @@ public class TestCombineFileInputFormat
 
   // Creates the gzip file and return the FileStatus
   static FileStatus writeGzipFile(Configuration conf, Path name,
-      short replication, int numBlocks) throws IOException {
+      short replication, int numBlocks)
+      throws IOException, TimeoutException, InterruptedException {
     FileSystem fileSys = FileSystem.get(conf);
 
     GZIPOutputStream out = new GZIPOutputStream(fileSys.create(name, true, conf
@@ -699,7 +702,8 @@ public class TestCombineFileInputFormat
   }
 
   private static void writeDataAndSetReplication(FileSystem fileSys, Path name,
-      OutputStream out, short replication, int numBlocks) throws IOException {
+      OutputStream out, short replication, int numBlocks)
+      throws IOException, TimeoutException, InterruptedException {
     for (int i = 0; i < numBlocks; i++) {
       out.write(databuf);
     }
@@ -707,7 +711,7 @@ public class TestCombineFileInputFormat
     DFSTestUtil.waitReplication(fileSys, name, replication);
   }
 
-  public void testSplitPlacementForCompressedFiles() throws IOException {
+  public void testSplitPlacementForCompressedFiles() throws Exception {
     MiniDFSCluster dfs = null;
     FileSystem fileSys = null;
     try {
@@ -1058,7 +1062,7 @@ public class TestCombineFileInputFormat
   /**
    * Test that CFIF can handle missing blocks.
    */
-  public void testMissingBlocks() throws IOException {
+  public void testMissingBlocks() throws Exception {
     String namenode = null;
     MiniDFSCluster dfs = null;
     FileSystem fileSys = null;

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/hadoop-mapreduce-client/pom.xml Wed Sep 19 04:34:55 2012
@@ -172,6 +172,18 @@
             <effort>Max</effort>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-surefire-plugin</artifactId>
+        <configuration>
+          <properties>
+            <property>
+              <name>listener</name>
+              <value>org.apache.hadoop.test.TimedOutTestsListener</value>
+            </property>
+          </properties>
+        </configuration>
+      </plugin>

Modified: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/pom.xml
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/pom.xml?rev=1387449&r1=1387448&r2=1387449&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/pom.xml (original)
+++ hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/pom.xml Wed Sep 19 04:34:55 2012
@@ -220,6 +220,18 @@
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-surefire-plugin</artifactId>
+        <configuration>
+          <properties>
+            <property>
+              <name>listener</name>
+              <value>org.apache.hadoop.test.TimedOutTestsListener</value>
+            </property>
+          </properties>
+        </configuration>
+      </plugin>

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/c++/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/c++:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/contrib:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/block_forensics/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/block_forensics:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/build-contrib.xml
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/build-contrib.xml:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/build.xml
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/build.xml:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/data_join/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/data_join:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/eclipse-plugin/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/eclipse-plugin:r1383030-1387448
Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/index/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/index:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/contrib/vaidya/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/contrib/vaidya:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/examples/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/examples:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/java/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/java:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/test/mapred/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/test/mapred:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc:r1383030-1387448

Propchange: hadoop/common/branches/HDFS-3077/hadoop-mapreduce-project/src/webapps/job/
------------------------------------------------------------------------------
  Merged /hadoop/common/trunk/hadoop-mapreduce-project/src/webapps/job:r1383030-1387448