Subject: svn commit: r1208644 [1/3] - in /hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project: ./ conf/ hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/ hadoop-mapreduce-client/hadoop-mapreduce...
Date: Wed, 30 Nov 2011 18:27:20 -0000
To: mapreduce-commits@hadoop.apache.org
From: atm@apache.org
Message-Id: <20111130182727.B0BC623888CC@eris.apache.org>

Author: atm
Date: Wed Nov 30 18:27:04 2011
New Revision: 1208644

URL: http://svn.apache.org/viewvc?rev=1208644&view=rev
Log:
Merge trunk into HA branch.

Added:
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DirectoryCollection.java
      - copied unchanged from r1208622, hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DirectoryCollection.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
      - copied unchanged from r1208622, hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LocalDirsHandlerService.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeHealthCheckerService.java
      - copied unchanged from r1208622, hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeHealthCheckerService.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeHealthScriptRunner.java
      - copied unchanged from r1208622, hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeHealthScriptRunner.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeHealthService.java
      - copied unchanged from r1208622, hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeHealthService.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestDiskFailures.java
      - copied unchanged from r1208622, hadoop/common/trunk/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestDiskFailures.java

Removed:
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/NodeHealthCheckerService.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/TestNodeHealthService.java

Modified:
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/.gitignore (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/CHANGES.txt (contents, props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/conf/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/conf/container-executor.cfg
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapred/LocalDistributedCacheManager.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/avro/Events.avpr
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptUnsuccessfulCompletionEvent.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEvents.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryParsing.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/resources/yarn-default.xml
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerExitEvent.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainersLauncher.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ContainerLocalizer.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/LogAggregationService.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/NonAggregatingLogHandler.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/ContainerLogsPage.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/webapp/WebServer.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/main.c
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/DummyContainerManager.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestEventFlow.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutor.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestLinuxContainerExecutorWithMocks.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeStatusUpdater.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManager.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/TestContainer.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestLogAggregationService.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/webapp/TestNMWebServer.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/TestContainerManagerSecurity.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt/ClusterSetup.apt.vm
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/c++/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/contrib/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/contrib/block_forensics/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/contrib/build-contrib.xml (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/contrib/build.xml (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/contrib/data_join/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/contrib/eclipse-plugin/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/contrib/index/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/contrib/vaidya/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/examples/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/java/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/java/org/apache/hadoop/mapred/JobInProgress.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/FileBench.java (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc/ (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/mapred/TestCombineOutputCollector.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEvents.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java (props changed)
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/tools/rumen/TestRumenJobTraces.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/tools/org/apache/hadoop/tools/rumen/MapAttempt20LineHistoryEventEmitter.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/tools/org/apache/hadoop/tools/rumen/ReduceAttempt20LineHistoryEventEmitter.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/tools/org/apache/hadoop/tools/rumen/TaskAttempt20LineEventEmitter.java
    hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/src/webapps/job/ (props changed)

Propchange: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Nov 30 18:27:04 2011
@@ -1,2 +1,2 @@
-/hadoop/common/trunk/hadoop-mapreduce-project:1152502-1208001
+/hadoop/common/trunk/hadoop-mapreduce-project:1152502-1208622
 /hadoop/core/branches/branch-0.19/mapred:713112

Propchange: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/.gitignore
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Nov 30 18:27:04 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/hadoop-mapreduce-project/.gitignore:1161333-1208001
+/hadoop/common/trunk/hadoop-mapreduce-project/.gitignore:1161333-1208622
 /hadoop/core/branches/branch-0.19/mapred/.gitignore:713112
 /hadoop/core/trunk/.gitignore:784664-785643

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/CHANGES.txt?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/CHANGES.txt (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/CHANGES.txt Wed Nov 30 18:27:04 2011
@@ -71,6 +71,8 @@ Release 0.23.1 - Unreleased
 
   INCOMPATIBLE CHANGES
 
   NEW FEATURES
+
+    MAPREDUCE-3121. NodeManager should handle disk-failures (Ravi Gummadi via mahadev)
 
   IMPROVEMENTS
@@ -122,6 +124,9 @@ Release 0.23.1 - Unreleased
     MAPREDUCE-3045. Fixed UI filters to not filter on hidden title-numeric
     sort fields. (Jonathan Eagles via sseth)
 
+    MAPREDUCE-3448. TestCombineOutputCollector javac unchecked warning on mocked
+    generics (Jonathan Eagles via mahadev)
+
   OPTIMIZATIONS
 
   BUG FIXES
@@ -192,6 +197,9 @@ Release 0.23.1 - Unreleased
     MAPREDUCE-3433. Finding counters by legacy group name returns empty
     counters. (tomwhite)
 
+    MAPREDUCE-3450. NM port info no longer available in JobHistory.
+    (Siddharth Seth via mahadev)
+
 Release 0.23.0 - 2011-11-01
 
   INCOMPATIBLE CHANGES

Propchange: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/CHANGES.txt
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Nov 30 18:27:04 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt:1161333-1208001
+/hadoop/common/trunk/hadoop-mapreduce-project/CHANGES.txt:1161333-1208622
 /hadoop/core/branches/branch-0.19/mapred/CHANGES.txt:713112
 /hadoop/mapreduce/branches/HDFS-641/CHANGES.txt:817878-835964

Propchange: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/conf/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Wed Nov 30 18:27:04 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/hadoop-mapreduce-project/conf:1152502-1208001
+/hadoop/common/trunk/hadoop-mapreduce-project/conf:1152502-1208622
 /hadoop/core/branches/branch-0.19/mapred/conf:713112
 /hadoop/core/trunk/conf:784664-785643

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/conf/container-executor.cfg
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/conf/container-executor.cfg?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/conf/container-executor.cfg (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/conf/container-executor.cfg Wed Nov 30 18:27:04 2011
@@ -1,3 +1,3 @@
-yarn.nodemanager.local-dirs=#configured value of yarn.nodemanager.local-dirs. It can be a list of comma separated paths.
-yarn.nodemanager.log-dirs=#configured value of yarn.nodemanager.log-dirs.
 yarn.nodemanager.linux-container-executor.group=#configured value of yarn.nodemanager.linux-container-executor.group
+banned.users=#comma separated list of users who can not run applications
+min.user.id=1000#Prevent other super-users

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TaskAttemptImpl.java Wed Nov 30 18:27:04 2011
@@ -922,8 +922,11 @@ public abstract class TaskAttemptImpl im
         TypeConverter.fromYarn(taskAttempt.attemptId.getTaskId()
             .getTaskType()), attemptState.toString(),
         taskAttempt.finishTime,
-        taskAttempt.containerMgrAddress == null ? "UNKNOWN"
-            : taskAttempt.containerMgrAddress, StringUtils.join(
+        taskAttempt.containerNodeId == null ? "UNKNOWN"
+            : taskAttempt.containerNodeId.getHost(),
+        taskAttempt.containerNodeId == null ? -1
+            : taskAttempt.containerNodeId.getPort(),
+        StringUtils.join(
             LINE_SEPARATOR, taskAttempt.getDiagnostics()), taskAttempt
             .getProgressSplitBlock().burst());
     return tauce;
@@ -1273,6 +1276,7 @@ public abstract class TaskAttemptImpl im
         finishTime,
         this.containerNodeId == null ? "UNKNOWN"
             : this.containerNodeId.getHost(),
+        this.containerNodeId == null ? -1 : this.containerNodeId.getPort(),
         this.nodeRackName == null ? "UNKNOWN" : this.nodeRackName,
         this.reportedStatus.stateString,
         TypeConverter.fromYarn(getCounters()),
@@ -1288,7 +1292,8 @@ public abstract class TaskAttemptImpl im
         this.reportedStatus.sortFinishTime,
         finishTime,
         this.containerNodeId == null ? "UNKNOWN"
-            : this.containerNodeId.getHost(),
+            : this.containerNodeId.getHost(),
+        this.containerNodeId == null ? -1 : this.containerNodeId.getPort(),
         this.nodeRackName == null ? "UNKNOWN" : this.nodeRackName,
         this.reportedStatus.stateString,
         TypeConverter.fromYarn(getCounters()),

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapred/LocalDistributedCacheManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapred/LocalDistributedCacheManager.java?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapred/LocalDistributedCacheManager.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapred/LocalDistributedCacheManager.java Wed Nov 30 18:27:04 2011
@@ -113,9 +113,10 @@ class LocalDistributedCacheManager {
     Map<LocalResource, Future<Path>> resourcesToPaths = Maps.newHashMap();
     ExecutorService exec = Executors.newCachedThreadPool();
+    Path destPath = localDirAllocator.getLocalPathForWrite(".", conf);
     for (LocalResource resource : localResources.values()) {
       Callable<Path> download = new FSDownload(localFSFileContext, ugi, conf,
-          localDirAllocator, resource, new Random());
+          destPath, resource, new Random());
       Future<Path> future = exec.submit(download);
       resourcesToPaths.put(resource, future);
     }

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/avro/Events.avpr
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/avro/Events.avpr?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/avro/Events.avpr (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/avro/Events.avpr Wed Nov 30 18:27:04 2011
@@ -136,6 +136,7 @@
           {"name": "mapFinishTime", "type": "long"},
           {"name": "finishTime", "type": "long"},
           {"name": "hostname", "type": "string"},
+          {"name": "port", "type": "int"},
           {"name": "rackname", "type": "string"},
           {"name": "state", "type": "string"},
           {"name": "counters", "type": "JhCounters"},
@@ -156,6 +157,7 @@
           {"name": "sortFinishTime", "type": "long"},
           {"name": "finishTime", "type": "long"},
           {"name": "hostname", "type": "string"},
+          {"name": "port", "type": "int"},
           {"name": "rackname", "type": "string"},
           {"name": "state", "type": "string"},
           {"name": "counters", "type": "JhCounters"},
@@ -199,6 +201,7 @@
           {"name": "attemptId", "type": "string"},
           {"name": "finishTime", "type": "long"},
           {"name": "hostname", "type": "string"},
+          {"name": "port", "type": "int"},
           {"name": "status", "type": "string"},
           {"name": "error", "type": "string"},
           {"name": "clockSplits", "type": { "type": "array", "items": "int"}},

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java Wed Nov 30 18:27:04 2011
@@ -209,6 +209,7 @@ public class JobHistoryParser {
     attemptInfo.sortFinishTime = event.getSortFinishTime();
     attemptInfo.counters = event.getCounters();
     attemptInfo.hostname = event.getHostname();
+    attemptInfo.port = event.getPort();
     attemptInfo.rackname = event.getRackName();
   }
@@ -222,6 +223,7 @@ public class JobHistoryParser {
     attemptInfo.mapFinishTime = event.getMapFinishTime();
     attemptInfo.counters = event.getCounters();
     attemptInfo.hostname = event.getHostname();
+    attemptInfo.port = event.getPort();
     attemptInfo.rackname = event.getRackname();
   }
@@ -234,6 +236,7 @@ public class JobHistoryParser {
     attemptInfo.error = event.getError();
     attemptInfo.status = event.getTaskStatus();
     attemptInfo.hostname = event.getHostname();
+    attemptInfo.port = event.getPort();
     attemptInfo.shuffleFinishTime = event.getFinishTime();
     attemptInfo.sortFinishTime = event.getFinishTime();
     attemptInfo.mapFinishTime = event.getFinishTime();
@@ -542,6 +545,7 @@ public class JobHistoryParser {
     int httpPort;
     int shufflePort;
     String hostname;
+    int port;
     String rackname;
     ContainerId containerId;
@@ -552,6 +556,7 @@ public class JobHistoryParser {
       startTime = finishTime = shuffleFinishTime = sortFinishTime =
         mapFinishTime = -1;
       error = state = trackerName = hostname = rackname = "";
+      port = -1;
       httpPort = -1;
       shufflePort = -1;
     }
@@ -599,6 +604,8 @@ public class JobHistoryParser {
     public String getTrackerName() { return trackerName; }
     /** @return the host name */
     public String getHostname() { return hostname; }
+    /** @return the port */
+    public int getPort() { return port; }
     /** @return the rack name */
     public String getRackname() { return rackname; }
     /** @return the counters for the attempt */

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java Wed Nov 30 18:27:04 2011
@@ -44,6 +44,7 @@ public class MapAttemptFinishedEvent im
    * @param mapFinishTime Finish time of the map phase
    * @param finishTime Finish time of the attempt
    * @param hostname Name of the host where the map executed
+   * @param port RPC port for the tracker host.
    * @param rackName Name of the rack where the map executed
    * @param state State string for the attempt
    * @param counters Counters for the attempt
@@ -57,9 +58,8 @@ public class MapAttemptFinishedEvent im
    */
   public MapAttemptFinishedEvent
       (TaskAttemptID id, TaskType taskType, String taskStatus,
-      long mapFinishTime, long finishTime, String hostname, String rackName,
-      String state, Counters counters,
-      int[][] allSplits) {
+      long mapFinishTime, long finishTime, String hostname, int port,
+      String rackName, String state, Counters counters, int[][] allSplits) {
     datum.taskid = new Utf8(id.getTaskID().toString());
     datum.attemptId = new Utf8(id.toString());
     datum.taskType = new Utf8(taskType.name());
@@ -67,6 +67,7 @@ public class MapAttemptFinishedEvent im
     datum.mapFinishTime = mapFinishTime;
     datum.finishTime = finishTime;
     datum.hostname = new Utf8(hostname);
+    datum.port = port;
     datum.rackname = new Utf8(rackName);
     datum.state = new Utf8(state);
     datum.counters = EventWriter.toAvro(counters);
@@ -106,7 +107,7 @@ public class MapAttemptFinishedEvent im
       (TaskAttemptID id, TaskType taskType, String taskStatus,
        long mapFinishTime, long finishTime, String hostname,
        String state, Counters counters) {
-    this(id, taskType, taskStatus, mapFinishTime, finishTime, hostname, "",
+    this(id, taskType, taskStatus, mapFinishTime, finishTime, hostname, -1, "",
         state, counters, null);
   }
@@ -136,6 +137,8 @@ public class MapAttemptFinishedEvent im
   public long getFinishTime() { return datum.finishTime; }
   /** Get the host name */
   public String getHostname() { return datum.hostname.toString(); }
+  /** Get the tracker rpc port */
+  public int getPort() { return datum.port; }
   /** Get the rack name */
   public String getRackname() { return datum.rackname.toString(); }
   /** Get the state string */

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java Wed Nov 30 18:27:04 2011 @@ -46,6 +46,7 @@ public class ReduceAttemptFinishedEvent * @param sortFinishTime Finish time of the sort phase * @param finishTime Finish time of the attempt * @param hostname Name of the host where the attempt executed + * @param port RPC port for the tracker host. 
* @param rackName Name of the rack where the attempt executed * @param state State of the attempt * @param counters Counters for the attempt @@ -57,8 +58,8 @@ public class ReduceAttemptFinishedEvent public ReduceAttemptFinishedEvent (TaskAttemptID id, TaskType taskType, String taskStatus, long shuffleFinishTime, long sortFinishTime, long finishTime, - String hostname, String rackName, String state, Counters counters, - int[][] allSplits) { + String hostname, int port, String rackName, String state, + Counters counters, int[][] allSplits) { datum.taskid = new Utf8(id.getTaskID().toString()); datum.attemptId = new Utf8(id.toString()); datum.taskType = new Utf8(taskType.name()); @@ -67,6 +68,7 @@ public class ReduceAttemptFinishedEvent datum.sortFinishTime = sortFinishTime; datum.finishTime = finishTime; datum.hostname = new Utf8(hostname); + datum.port = port; datum.rackname = new Utf8(rackName); datum.state = new Utf8(state); datum.counters = EventWriter.toAvro(counters); @@ -108,7 +110,7 @@ public class ReduceAttemptFinishedEvent String hostname, String state, Counters counters) { this(id, taskType, taskStatus, shuffleFinishTime, sortFinishTime, finishTime, - hostname, "", state, counters, null); + hostname, -1, "", state, counters, null); } ReduceAttemptFinishedEvent() {} @@ -138,6 +140,8 @@ public class ReduceAttemptFinishedEvent public long getFinishTime() { return datum.finishTime; } /** Get the name of the host where the attempt ran */ public String getHostname() { return datum.hostname.toString(); } + /** Get the tracker rpc port */ + public int getPort() { return datum.port; } /** Get the rack name of the node where the attempt ran */ public String getRackName() { return datum.rackname.toString(); } /** Get the state string */ Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptUnsuccessfulCompletionEvent.java URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptUnsuccessfulCompletionEvent.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptUnsuccessfulCompletionEvent.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptUnsuccessfulCompletionEvent.java Wed Nov 30 18:27:04 2011 @@ -46,6 +46,7 @@ public class TaskAttemptUnsuccessfulComp * @param status Status of the attempt * @param finishTime Finish time of the attempt * @param hostname Name of the host where the attempt executed + * @param port RPC port for the tracker * @param error Error string * @param allSplits the "splits", or a pixelated graph of various * measurable worker node state variables against progress.
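The job-history events above now record the tracker's RPC port alongside the hostname, with the deprecated constructors writing -1 so that old-format history files still parse. A minimal standalone sketch (a hypothetical helper, not part of the patch) of how a history consumer might assemble a node-manager address from that pair:

```java
// Sketch: combine the hostname and the new port field into an address,
// treating port == -1 (the default written by the deprecated ctors for
// pre-port history files) as "port unknown".
public class NodeAddress {
    static String containerMgrAddress(String hostname, int port) {
        if (hostname == null || hostname.isEmpty()) {
            return "UNKNOWN";
        }
        // Old history files carry port == -1; fall back to host only.
        return port < 0 ? hostname : hostname + ":" + port;
    }

    public static void main(String[] args) {
        System.out.println(containerMgrAddress("nm-host", 9999)); // nm-host:9999
        System.out.println(containerMgrAddress("nm-host", -1));   // nm-host
    }
}
```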
@@ -55,13 +56,14 @@ public class TaskAttemptUnsuccessfulComp public TaskAttemptUnsuccessfulCompletionEvent (TaskAttemptID id, TaskType taskType, String status, long finishTime, - String hostname, String error, + String hostname, int port, String error, int[][] allSplits) { datum.taskid = new Utf8(id.getTaskID().toString()); datum.taskType = new Utf8(taskType.name()); datum.attemptId = new Utf8(id.toString()); datum.finishTime = finishTime; datum.hostname = new Utf8(hostname); + datum.port = port; datum.error = new Utf8(error); datum.status = new Utf8(status); @@ -97,7 +99,7 @@ public class TaskAttemptUnsuccessfulComp (TaskAttemptID id, TaskType taskType, String status, long finishTime, String hostname, String error) { - this(id, taskType, status, finishTime, hostname, error, null); + this(id, taskType, status, finishTime, hostname, -1, error, null); } TaskAttemptUnsuccessfulCompletionEvent() {} @@ -121,6 +123,8 @@ public class TaskAttemptUnsuccessfulComp public long getFinishTime() { return datum.finishTime; } /** Get the name of the host where the attempt executed */ public String getHostname() { return datum.hostname.toString(); } + /** Get the rpc port for the host where the attempt executed */ + public int getPort() { return datum.port; } /** Get the error string */ public String getError() { return datum.error.toString(); } /** Get the task status */ Propchange: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml ------------------------------------------------------------------------------ --- svn:mergeinfo (original) +++ svn:mergeinfo Wed Nov 30 18:27:04 2011 @@ -1,3 +1,3 @@ -/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml:1166973-1208001 
+/hadoop/common/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml:1166973-1208622 /hadoop/core/branches/branch-0.19/mapred/src/java/mapred-default.xml:713112 /hadoop/core/trunk/src/mapred/mapred-default.xml:776175-785643 Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/main/java/org/apache/hadoop/mapreduce/v2/hs/CompletedTaskAttempt.java Wed Nov 30 18:27:04 2011 @@ -80,12 +80,11 @@ public class CompletedTaskAttempt implem report.setStateString(attemptInfo.getState()); report.setCounters(getCounters()); report.setContainerId(attemptInfo.getContainerId()); - String []hostSplits = attemptInfo.getHostname().split(":"); - if (hostSplits.length != 2) { + if (attemptInfo.getHostname() == null) { report.setNodeManagerHost("UNKNOWN"); } else { - report.setNodeManagerHost(hostSplits[0]); - report.setNodeManagerPort(Integer.parseInt(hostSplits[1])); + report.setNodeManagerHost(attemptInfo.getHostname()); + report.setNodeManagerPort(attemptInfo.getPort()); } report.setNodeManagerHttpPort(attemptInfo.getHttpPort()); } @@ -97,7 +96,7 @@ public class CompletedTaskAttempt implem @Override public String 
getAssignedContainerMgrAddress() { - return attemptInfo.getHostname(); + return attemptInfo.getHostname() + ":" + attemptInfo.getPort(); } @Override Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEvents.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEvents.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEvents.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryEvents.java Wed Nov 30 18:27:04 2011 @@ -165,6 +165,9 @@ public class TestJobHistoryEvents { //Verify the wrong ctor is not being used. Remove after mrv1 is removed. 
ContainerId fakeCid = BuilderUtils.newContainerId(-1, -1, -1, -1); Assert.assertFalse(attempt.getAssignedContainerID().equals(fakeCid)); + //Verify complete containerManagerAddress + Assert.assertEquals(MRApp.NM_HOST + ":" + MRApp.NM_PORT, + attempt.getAssignedContainerMgrAddress()); } static class MRAppWithHistory extends MRApp { Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryParsing.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryParsing.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryParsing.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/TestJobHistoryParsing.java Wed Nov 30 18:27:04 2011 @@ -34,6 +34,7 @@ import org.apache.hadoop.fs.CommonConfig import org.apache.hadoop.fs.FSDataInputStream; import org.apache.hadoop.fs.FileContext; import org.apache.hadoop.fs.Path; +import org.apache.hadoop.mapreduce.MRJobConfig; import org.apache.hadoop.mapreduce.TaskID; import org.apache.hadoop.mapreduce.TypeConverter; import org.apache.hadoop.mapreduce.jobhistory.JobHistoryParser; @@ -64,13 +65,14 @@ public class TestJobHistoryParsing { public static class MyResolver implements DNSToSwitchMapping { @Override public List resolve(List names) { - return Arrays.asList(new String[]{"MyRackName"}); + return Arrays.asList(new String[]{"/MyRackName"}); } } @Test public void testHistoryParsing() throws Exception {
Configuration conf = new Configuration(); + conf.set(MRJobConfig.USER_NAME, System.getProperty("user.name")); long amStartTimeEst = System.currentTimeMillis(); conf.setClass( CommonConfigurationKeysPublic.NET_TOPOLOGY_NODE_SWITCH_MAPPING_IMPL_KEY, @@ -165,10 +167,12 @@ public class TestJobHistoryParsing { Assert.assertNotNull("TaskAttemptInfo not found", taskAttemptInfo); Assert.assertEquals("Incorrect shuffle port for task attempt", taskAttempt.getShufflePort(), taskAttemptInfo.getShufflePort()); + Assert.assertEquals(MRApp.NM_HOST, taskAttemptInfo.getHostname()); + Assert.assertEquals(MRApp.NM_PORT, taskAttemptInfo.getPort()); // Verify rack-name Assert.assertEquals("rack-name is incorrect", taskAttemptInfo - .getRackname(), "MyRackName"); + .getRackname(), "/MyRackName"); } } Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/v2/MiniMRYarnCluster.java Wed Nov 30 18:27:04 2011 @@ -56,7 +56,7 @@ public class MiniMRYarnCluster extends M } public MiniMRYarnCluster(String testName, int noOfNMs) { - super(testName, noOfNMs); + super(testName, noOfNMs, 4, 4); //TODO: add the history server historyServerWrapper = new 
JobHistoryServerWrapper(); addService(historyServerWrapper); Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java Wed Nov 30 18:27:04 2011 @@ -43,7 +43,8 @@ public class TestDistributedShell { public static void setup() throws InterruptedException, IOException { LOG.info("Starting up YARN cluster"); if (yarnCluster == null) { - yarnCluster = new MiniYARNCluster(TestDistributedShell.class.getName()); + yarnCluster = new MiniYARNCluster(TestDistributedShell.class.getName(), + 1, 1, 1); yarnCluster.init(conf); yarnCluster.start(); } Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java?rev=1208644&r1=1208643&r2=1208644&view=diff 
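The YarnConfiguration hunk that follows introduces disk-health-checker settings: a check interval and a minimum fraction of healthy disks (default 0.25) below which the node stops launching containers. A rough standalone sketch of the policy those settings describe (a hypothetical helper, not patch code):

```java
// Sketch of the min-healthy-disks policy: the node keeps launching
// containers only while at least the configured fraction of its
// nm-local-dirs / nm-log-dirs are usable.
public class DiskHealthPolicy {
    static final float DEFAULT_MIN_HEALTHY_DISKS_FRACTION = 0.25f;

    static boolean canLaunchContainers(int healthyDirs, int totalDirs,
                                       float minHealthyFraction) {
        if (totalDirs == 0) {
            return false; // no configured dirs at all
        }
        return (float) healthyDirs / totalDirs >= minHealthyFraction;
    }

    public static void main(String[] args) {
        // 1 of 4 dirs healthy: exactly at the 0.25 default, still healthy.
        System.out.println(canLaunchContainers(1, 4, DEFAULT_MIN_HEALTHY_DISKS_FRACTION)); // true
        // 0 of 4 healthy: node stops launching containers.
        System.out.println(canLaunchContainers(0, 4, DEFAULT_MIN_HEALTHY_DISKS_FRACTION)); // false
    }
}
```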
============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java Wed Nov 30 18:27:04 2011 @@ -351,13 +351,39 @@ public class YarnConfiguration extends C /** Class that calculates containers current resource utilization.*/ public static final String NM_CONTAINER_MON_RESOURCE_CALCULATOR = NM_PREFIX + "container-monitor.resource-calculator.class"; - + + /** + * Enable/Disable disks' health checker. Default is true. + * An expert level configuration property. + */ + public static final String NM_DISK_HEALTH_CHECK_ENABLE = + NM_PREFIX + "disk-health-checker.enable"; + /** Frequency of running disks' health checker.*/ + public static final String NM_DISK_HEALTH_CHECK_INTERVAL_MS = + NM_PREFIX + "disk-health-checker.interval-ms"; + /** By default, disks' health is checked every 2 minutes. */ + public static final long DEFAULT_NM_DISK_HEALTH_CHECK_INTERVAL_MS = + 2 * 60 * 1000; + + /** + * The minimum fraction of number of disks to be healthy for the nodemanager + * to launch new containers. This applies to nm-local-dirs and nm-log-dirs. + */ + public static final String NM_MIN_HEALTHY_DISKS_FRACTION = + NM_PREFIX + "disk-health-checker.min-healthy-disks"; + /** + * By default, at least 25% of disks must be healthy for the node + * to be considered healthy in terms of disks.
+ */ + public static final float DEFAULT_NM_MIN_HEALTHY_DISKS_FRACTION + = 0.25F; + /** Frequency of running node health script.*/ public static final String NM_HEALTH_CHECK_INTERVAL_MS = NM_PREFIX + "health-checker.interval-ms"; public static final long DEFAULT_NM_HEALTH_CHECK_INTERVAL_MS = 10 * 60 * 1000; - - /** Script time out period.*/ + + /** Health check script time out period.*/ public static final String NM_HEALTH_CHECK_SCRIPT_TIMEOUT_MS = NM_PREFIX + "health-checker.script.timeout-ms"; public static final long DEFAULT_NM_HEALTH_CHECK_SCRIPT_TIMEOUT_MS = Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/AggregatedLogFormat.java Wed Nov 30 18:27:04 2011 @@ -31,6 +31,7 @@ import java.io.Writer; import java.security.PrivilegedExceptionAction; import java.util.EnumSet; import java.util.HashMap; +import java.util.List; import java.util.Map; import java.util.Map.Entry; @@ -105,12 +106,12 @@ public class AggregatedLogFormat { public static class LogValue { - private final String[] rootLogDirs; + private final List rootLogDirs; private final ContainerId containerId; // TODO Maybe add a version string here. 
Instead of changing the version of // the entire k-v format - public LogValue(String[] rootLogDirs, ContainerId containerId) { + public LogValue(List rootLogDirs, ContainerId containerId) { this.rootLogDirs = rootLogDirs; this.containerId = containerId; } Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/FSDownload.java Wed Nov 30 18:27:04 2011 @@ -33,7 +33,6 @@ import org.apache.hadoop.fs.FileContext; import org.apache.hadoop.fs.FileStatus; import org.apache.hadoop.fs.FileSystem; import org.apache.hadoop.fs.FileUtil; -import org.apache.hadoop.fs.LocalDirAllocator; import org.apache.hadoop.fs.Options.Rename; import org.apache.hadoop.fs.Path; import org.apache.hadoop.security.UserGroupInformation; @@ -56,7 +55,10 @@ public class FSDownload implements Calla private final UserGroupInformation userUgi; private Configuration conf; private LocalResource resource; - private LocalDirAllocator dirs; + + /** The local FS dir path under which this resource is to be localized to */ + private Path destDirPath; + private static final FsPermission cachePerms = new FsPermission( (short) 0755); static final FsPermission PUBLIC_FILE_PERMS = new FsPermission((short) 0555); @@ -65,10 +67,11 @@ public class FSDownload implements Calla static final FsPermission PUBLIC_DIR_PERMS = new 
FsPermission((short) 0755); static final FsPermission PRIVATE_DIR_PERMS = new FsPermission((short) 0700); + public FSDownload(FileContext files, UserGroupInformation ugi, Configuration conf, - LocalDirAllocator dirs, LocalResource resource, Random rand) { + Path destDirPath, LocalResource resource, Random rand) { this.conf = conf; - this.dirs = dirs; + this.destDirPath = destDirPath; this.files = files; this.userUgi = ugi; this.resource = resource; @@ -136,15 +139,13 @@ public class FSDownload implements Calla } Path tmp; - Path dst = - dirs.getLocalPathForWrite(".", getEstimatedSize(resource), - conf); do { - tmp = new Path(dst, String.valueOf(rand.nextLong())); + tmp = new Path(destDirPath, String.valueOf(rand.nextLong())); } while (files.util().exists(tmp)); - dst = tmp; - files.mkdir(dst, cachePerms, false); - final Path dst_work = new Path(dst + "_tmp"); + destDirPath = tmp; + + files.mkdir(destDirPath, cachePerms, false); + final Path dst_work = new Path(destDirPath + "_tmp"); files.mkdir(dst_work, cachePerms, false); Path dFinal = files.makeQualified(new Path(dst_work, sCopy.getName())); @@ -158,9 +159,9 @@ public class FSDownload implements Calla }); unpack(new File(dTmp.toUri()), new File(dFinal.toUri())); changePermissions(dFinal.getFileSystem(conf), dFinal); - files.rename(dst_work, dst, Rename.OVERWRITE); + files.rename(dst_work, destDirPath, Rename.OVERWRITE); } catch (Exception e) { - try { files.delete(dst, true); } catch (IOException ignore) { } + try { files.delete(destDirPath, true); } catch (IOException ignore) { } throw e; } finally { try { @@ -170,9 +171,8 @@ public class FSDownload implements Calla rand = null; conf = null; resource = null; - dirs = null; } - return files.makeQualified(new Path(dst, sCopy.getName())); + return files.makeQualified(new Path(destDirPath, sCopy.getName())); } /** @@ -221,17 +221,4 @@ public class FSDownload implements Calla } } - private static long getEstimatedSize(LocalResource rsrc) { - if (rsrc.getSize() < 0) 
{ - return -1; - } - switch (rsrc.getType()) { - case ARCHIVE: - return 5 * rsrc.getSize(); - case FILE: - default: - return rsrc.getSize(); - } - } - } Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestFSDownload.java Wed Nov 30 18:27:04 2011 @@ -146,13 +146,14 @@ public class TestFSDownload { vis = LocalResourceVisibility.APPLICATION; break; } - - LocalResource rsrc = createFile(files, new Path(basedir, "" + i), - sizes[i], rand, vis); + Path p = new Path(basedir, "" + i); + LocalResource rsrc = createFile(files, p, sizes[i], rand, vis); rsrcVis.put(rsrc, vis); + Path destPath = dirs.getLocalPathForWrite( + basedir.toString(), sizes[i], conf); FSDownload fsd = new FSDownload(files, UserGroupInformation.getCurrentUser(), conf, - dirs, rsrc, new Random(sharedSeed)); + destPath, rsrc, new Random(sharedSeed)); pending.put(rsrc, exec.submit(fsd)); } @@ -249,13 +250,15 @@ public class TestFSDownload { vis = LocalResourceVisibility.APPLICATION; break; } - - LocalResource rsrc = createJar(files, new Path(basedir, "dir" + i - + ".jar"), vis); + + Path p = new Path(basedir, "dir" + i + ".jar"); + LocalResource rsrc = createJar(files, p, vis); rsrcVis.put(rsrc, vis); + Path destPath = dirs.getLocalPathForWrite( + basedir.toString(), conf); FSDownload fsd = new 
FSDownload(files, UserGroupInformation.getCurrentUser(), conf, - dirs, rsrc, new Random(sharedSeed)); + destPath, rsrc, new Random(sharedSeed)); pending.put(rsrc, exec.submit(fsd)); } Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/resources/yarn-default.xml URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/resources/yarn-default.xml?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/resources/yarn-default.xml (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/resources/yarn-default.xml Wed Nov 30 18:27:04 2011 @@ -389,6 +389,22 @@ + Frequency of running disk health checker code. + yarn.nodemanager.disk-health-checker.interval-ms + 120000 + + + + The minimum fraction of number of disks to be healthy for the + nodemanager to launch new containers. This corresponds to both + yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs, i.e. if fewer + healthy local-dirs (or log-dirs) are available, then + new containers will not be launched on this node. + yarn.nodemanager.disk-health-checker.min-healthy-disks + 0.25 + + + The path to the Linux container executor.
yarn.nodemanager.linux-container-executor.path Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java?rev=1208644&r1=1208643&r2=1208644&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java (original) +++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java Wed Nov 30 18:27:04 2011 @@ -45,6 +45,7 @@ public abstract class ContainerExecutor FsPermission.createImmutable((short) 0700); private Configuration conf; + private ConcurrentMap pidFiles = new ConcurrentHashMap(); @@ -68,7 +69,7 @@ public abstract class ContainerExecutor * @throws IOException */ public abstract void init() throws IOException; - + /** * Prepare the environment for containers in this application to execute. 
* For $x in local.dirs @@ -82,12 +83,14 @@ public abstract class ContainerExecutor * @param appId id of the application * @param nmPrivateContainerTokens path to localized credentials, rsrc by NM * @param nmAddr RPC address to contact NM + * @param localDirs nm-local-dirs + * @param logDirs nm-log-dirs * @throws IOException For most application init failures * @throws InterruptedException If application init thread is halted by NM */ public abstract void startLocalizer(Path nmPrivateContainerTokens, InetSocketAddress nmAddr, String user, String appId, String locId, - List localDirs) + List localDirs, List logDirs) throws IOException, InterruptedException; @@ -100,12 +103,15 @@ public abstract class ContainerExecutor * @param user the user of the container * @param appId the appId of the container * @param containerWorkDir the work dir for the container + * @param localDirs nm-local-dirs to be used for this container + * @param logDirs nm-log-dirs to be used for this container * @return the return status of the launch * @throws IOException */ public abstract int launchContainer(Container container, Path nmPrivateContainerScriptPath, Path nmPrivateTokensPath, - String user, String appId, Path containerWorkDir) throws IOException; + String user, String appId, Path containerWorkDir, List localDirs, + List logDirs) throws IOException; public abstract boolean signalContainer(String user, String pid, Signal signal) @@ -116,7 +122,8 @@ public abstract class ContainerExecutor public enum ExitCode { FORCE_KILLED(137), - TERMINATED(143); + TERMINATED(143), + DISKS_FAILED(-101); private final int code; private ExitCode(int exitCode) { Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java URL: 
http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java Wed Nov 30 18:27:04 2011
@@ -26,6 +26,7 @@ import java.io.File;
 import java.io.IOException;
 import java.io.PrintStream;
 import java.net.InetSocketAddress;
+import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.EnumSet;
 import java.util.List;
@@ -39,7 +40,6 @@ import org.apache.hadoop.fs.permission.F
 import org.apache.hadoop.util.Shell.ExitCodeException;
 import org.apache.hadoop.util.Shell.ShellCommandExecutor;
 import org.apache.hadoop.yarn.api.records.ContainerId;
-import org.apache.hadoop.yarn.conf.YarnConfiguration;
 import org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container;
 import org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerDiagnosticsUpdateEvent;
@@ -77,16 +77,17 @@ public class DefaultContainerExecutor ex
   @Override
   public void startLocalizer(Path nmPrivateContainerTokensPath,
       InetSocketAddress nmAddr, String user, String appId, String locId,
-      List<Path> localDirs) throws IOException, InterruptedException {
+      List<String> localDirs, List<String> logDirs)
+      throws IOException, InterruptedException {
 
     ContainerLocalizer localizer =
-        new ContainerLocalizer(this.lfs, user, appId, locId,
-            localDirs, RecordFactoryProvider.getRecordFactory(getConf()));
+        new ContainerLocalizer(lfs, user, appId, locId, getPaths(localDirs),
+            RecordFactoryProvider.getRecordFactory(getConf()));
 
     createUserLocalDirs(localDirs, user);
     createUserCacheDirs(localDirs, user);
     createAppDirs(localDirs, user, appId);
-    createAppLogDirs(appId);
+    createAppLogDirs(appId, logDirs);
 
     // TODO: Why pick first app dir. The same in LCE why not random?
     Path appStorageDir = getFirstApplicationDir(localDirs, user, appId);
@@ -104,8 +105,8 @@ public class DefaultContainerExecutor ex
   @Override
   public int launchContainer(Container container,
       Path nmPrivateContainerScriptPath, Path nmPrivateTokensPath,
-      String userName, String appId, Path containerWorkDir)
-      throws IOException {
+      String userName, String appId, Path containerWorkDir,
+      List<String> localDirs, List<String> logDirs) throws IOException {
 
     ContainerId containerId = container.getContainerID();
 
@@ -115,10 +116,7 @@ public class DefaultContainerExecutor ex
         ConverterUtils.toString(
             container.getContainerID().getApplicationAttemptId().
                 getApplicationId());
-    String[] sLocalDirs = getConf().getStrings(
-        YarnConfiguration.NM_LOCAL_DIRS,
-        YarnConfiguration.DEFAULT_NM_LOCAL_DIRS);
-    for (String sLocalDir : sLocalDirs) {
+    for (String sLocalDir : localDirs) {
       Path usersdir = new Path(sLocalDir, ContainerLocalizer.USERCACHE);
       Path userdir = new Path(usersdir, userName);
       Path appCacheDir = new Path(userdir, ContainerLocalizer.APPCACHE);
@@ -128,7 +126,7 @@ public class DefaultContainerExecutor ex
     }
 
     // Create the container log-dirs on all disks
-    createContainerLogDirs(appIdStr, containerIdStr);
+    createContainerLogDirs(appIdStr, containerIdStr, logDirs);
 
     // copy launch script to work dir
     Path launchDst =
@@ -299,9 +297,9 @@ public class DefaultContainerExecutor ex
    * $logdir/$user/$appId */
   private static final short LOGDIR_PERM = (short)0710;
 
-  private Path getFirstApplicationDir(List<Path> localDirs, String user,
+  private Path getFirstApplicationDir(List<String> localDirs, String user,
       String appId) {
-    return getApplicationDir(localDirs.get(0), user, appId);
+    return getApplicationDir(new Path(localDirs.get(0)), user, appId);
   }
 
   private Path getApplicationDir(Path base, String user, String appId) {
@@ -328,14 +326,14 @@ public class DefaultContainerExecutor ex
    * <li>$local.dir/usercache/$user</li>
    * </ul>
    */
-  private void createUserLocalDirs(List<Path> localDirs, String user)
+  private void createUserLocalDirs(List<String> localDirs, String user)
       throws IOException {
     boolean userDirStatus = false;
     FsPermission userperms = new FsPermission(USER_PERM);
-    for (Path localDir : localDirs) {
+    for (String localDir : localDirs) {
       // create $local.dir/usercache/$user and its immediate parent
       try {
-        lfs.mkdir(getUserCacheDir(localDir, user), userperms, true);
+        lfs.mkdir(getUserCacheDir(new Path(localDir), user), userperms, true);
       } catch (IOException e) {
         LOG.warn("Unable to create the user directory : " + localDir, e);
         continue;
@@ -357,7 +355,7 @@ public class DefaultContainerExecutor ex
    * <li>$local.dir/usercache/$user/filecache</li>
    * </ul>
    */
-  private void createUserCacheDirs(List<Path> localDirs, String user)
+  private void createUserCacheDirs(List<String> localDirs, String user)
       throws IOException {
     LOG.info("Initializing user " + user);
 
@@ -366,9 +364,10 @@ public class DefaultContainerExecutor ex
     FsPermission appCachePerms = new FsPermission(APPCACHE_PERM);
     FsPermission fileperms = new FsPermission(FILECACHE_PERM);
 
-    for (Path localDir : localDirs) {
+    for (String localDir : localDirs) {
       // create $local.dir/usercache/$user/appcache
-      final Path appDir = getAppcacheDir(localDir, user);
+      Path localDirPath = new Path(localDir);
+      final Path appDir = getAppcacheDir(localDirPath, user);
       try {
         lfs.mkdir(appDir, appCachePerms, true);
         appcacheDirStatus = true;
@@ -376,7 +375,7 @@ public class DefaultContainerExecutor ex
         LOG.warn("Unable to create app cache directory : " + appDir, e);
       }
       // create $local.dir/usercache/$user/filecache
-      final Path distDir = getFileCacheDir(localDir, user);
+      final Path distDir = getFileCacheDir(localDirPath, user);
       try {
         lfs.mkdir(distDir, fileperms, true);
         distributedCacheDirStatus = true;
@@ -403,12 +402,12 @@ public class DefaultContainerExecutor ex
    *
    * @param localDirs
    */
-  private void createAppDirs(List<Path> localDirs, String user, String appId)
+  private void createAppDirs(List<String> localDirs, String user, String appId)
       throws IOException {
     boolean initAppDirStatus = false;
     FsPermission appperms = new FsPermission(APPDIR_PERM);
-    for (Path localDir : localDirs) {
-      Path fullAppDir = getApplicationDir(localDir, user, appId);
+    for (String localDir : localDirs) {
+      Path fullAppDir = getApplicationDir(new Path(localDir), user, appId);
       // create $local.dir/usercache/$user/appcache/$appId
       try {
         lfs.mkdir(fullAppDir, appperms, true);
@@ -427,15 +426,12 @@ public class DefaultContainerExecutor ex
   /**
    * Create application log directories on all disks.
   */
-  private void createAppLogDirs(String appId)
+  private void createAppLogDirs(String appId, List<String> logDirs)
       throws IOException {
-    String[] rootLogDirs =
-        getConf()
-            .getStrings(YarnConfiguration.NM_LOG_DIRS, YarnConfiguration.DEFAULT_NM_LOG_DIRS);
-
+
     boolean appLogDirStatus = false;
     FsPermission appLogDirPerms = new FsPermission(LOGDIR_PERM);
-    for (String rootLogDir : rootLogDirs) {
+    for (String rootLogDir : logDirs) {
       // create $log.dir/$appid
       Path appLogDir = new Path(rootLogDir, appId);
       try {
@@ -455,15 +451,12 @@ public class DefaultContainerExecutor ex
   /**
    * Create application log directories on all disks.
   */
-  private void createContainerLogDirs(String appId, String containerId)
-      throws IOException {
-    String[] rootLogDirs =
-        getConf()
-            .getStrings(YarnConfiguration.NM_LOG_DIRS, YarnConfiguration.DEFAULT_NM_LOG_DIRS);
-
+  private void createContainerLogDirs(String appId, String containerId,
+      List<String> logDirs) throws IOException {
+
     boolean containerLogDirStatus = false;
     FsPermission containerLogDirPerms = new FsPermission(LOGDIR_PERM);
-    for (String rootLogDir : rootLogDirs) {
+    for (String rootLogDir : logDirs) {
      // create $log.dir/$appid/$containerid
      Path appLogDir = new Path(rootLogDir, appId);
      Path containerLogDir = new Path(appLogDir, containerId);
@@ -483,4 +476,15 @@ public class DefaultContainerExecutor ex
           + containerId);
     }
   }
+
+  /**
+   * @return the list of paths of given local directories
+   */
+  private static List<Path> getPaths(List<String> dirs) {
+    List<Path> paths = new ArrayList<Path>(dirs.size());
+    for (int i = 0; i < dirs.size(); i++) {
+      paths.add(new Path(dirs.get(i)));
+    }
+    return paths;
+  }
 }

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java Wed Nov 30 18:27:04 2011
@@ -126,13 +126,18 @@ public class LinuxContainerExecutor exte
   @Override
   public void startLocalizer(Path nmPrivateContainerTokensPath,
       InetSocketAddress nmAddr, String user, String appId, String locId,
-      List<Path> localDirs) throws IOException, InterruptedException {
+      List<String> localDirs, List<String> logDirs)
+      throws IOException, InterruptedException {
+
     List<String> command = new ArrayList<String>(
       Arrays.asList(containerExecutorExe,
                    user,
                    Integer.toString(Commands.INITIALIZE_CONTAINER.getValue()),
                    appId,
-                   nmPrivateContainerTokensPath.toUri().getPath().toString()));
+                   nmPrivateContainerTokensPath.toUri().getPath().toString(),
+                   StringUtils.join(",", localDirs),
+                   StringUtils.join(",", logDirs)));
+
     File jvm =                                  // use same jvm as parent
       new File(new File(System.getProperty("java.home"), "bin"), "java");
     command.add(jvm.toString());
@@ -148,8 +153,8 @@ public class LinuxContainerExecutor exte
     command.add(locId);
     command.add(nmAddr.getHostName());
     command.add(Integer.toString(nmAddr.getPort()));
-    for (Path p : localDirs) {
-      command.add(p.toUri().getPath().toString());
+    for (String dir : localDirs) {
+      command.add(dir);
     }
     String[] commandArray = command.toArray(new String[command.size()]);
     ShellCommandExecutor shExec = new ShellCommandExecutor(commandArray);
@@ -174,7 +179,8 @@ public class LinuxContainerExecutor exte
   @Override
   public int launchContainer(Container container,
       Path nmPrivateCotainerScriptPath, Path nmPrivateTokensPath,
-      String user, String appId, Path containerWorkDir) throws IOException {
+      String user, String appId, Path containerWorkDir,
+      List<String> localDirs, List<String> logDirs) throws IOException {
 
     ContainerId containerId = container.getContainerID();
     String containerIdStr = ConverterUtils.toString(containerId);
@@ -189,8 +195,10 @@ public class LinuxContainerExecutor exte
                     .toString(Commands.LAUNCH_CONTAINER.getValue()), appId,
                     containerIdStr, containerWorkDir.toString(),
                     nmPrivateCotainerScriptPath.toUri().getPath().toString(),
-                    nmPrivateTokensPath.toUri().getPath().toString(), pidFilePath
-                        .toString()));
+                    nmPrivateTokensPath.toUri().getPath().toString(),
+                    pidFilePath.toString(),
+                    StringUtils.join(",", localDirs),
+                    StringUtils.join(",", logDirs)));
     String[] commandArray = command.toArray(new String[command.size()]);
     shExec = new ShellCommandExecutor(commandArray, null, // NM's cwd
         container.getLaunchContext().getEnvironment()); // sanitized env

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeManager.java Wed Nov 30 18:27:04 2011
@@ -25,7 +25,6 @@ import java.util.concurrent.ConcurrentSk
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.NodeHealthCheckerService;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
 import org.apache.hadoop.security.SecurityUtil;
@@ -59,6 +58,8 @@ public class NodeManager extends Composi
   protected final NodeManagerMetrics metrics = NodeManagerMetrics.create();
   protected ContainerTokenSecretManager containerTokenSecretManager;
   private ApplicationACLsManager aclsManager;
+  private NodeHealthCheckerService nodeHealthChecker;
+  private LocalDirsHandlerService dirsHandler;
 
   public NodeManager() {
     super(NodeManager.class.getName());
@@ -78,14 +79,16 @@ public class NodeManager extends Composi
   protected ContainerManagerImpl createContainerManager(Context context,
       ContainerExecutor exec, DeletionService del,
       NodeStatusUpdater nodeStatusUpdater, ContainerTokenSecretManager
-      containerTokenSecretManager, ApplicationACLsManager aclsManager) {
+      containerTokenSecretManager, ApplicationACLsManager aclsManager,
+      LocalDirsHandlerService dirsHandler) {
     return new ContainerManagerImpl(context, exec, del, nodeStatusUpdater,
-        metrics, containerTokenSecretManager, aclsManager);
+        metrics, containerTokenSecretManager, aclsManager, dirsHandler);
   }
 
   protected WebServer createWebServer(Context nmContext,
-      ResourceView resourceView, ApplicationACLsManager aclsManager) {
-    return new WebServer(nmContext, resourceView, aclsManager);
+      ResourceView resourceView, ApplicationACLsManager aclsManager,
+      LocalDirsHandlerService dirsHandler) {
+    return new WebServer(nmContext, resourceView, aclsManager, dirsHandler);
   }
 
   protected void doSecureLogin() throws IOException {
@@ -121,16 +124,12 @@ public class NodeManager extends Composi
     // NodeManager level dispatcher
     AsyncDispatcher dispatcher = new AsyncDispatcher();
 
-    NodeHealthCheckerService healthChecker = null;
-    if (NodeHealthCheckerService.shouldRun(conf)) {
-      healthChecker = new NodeHealthCheckerService();
-      addService(healthChecker);
-    }
+    nodeHealthChecker = new NodeHealthCheckerService();
+    addService(nodeHealthChecker);
+    dirsHandler = nodeHealthChecker.getDiskHandler();
 
-    NodeStatusUpdater nodeStatusUpdater =
-        createNodeStatusUpdater(context, dispatcher, healthChecker,
-          this.containerTokenSecretManager);
-
+    NodeStatusUpdater nodeStatusUpdater = createNodeStatusUpdater(context,
+        dispatcher, nodeHealthChecker, this.containerTokenSecretManager);
     nodeStatusUpdater.register(this);
 
     NodeResourceMonitor nodeResourceMonitor = createNodeResourceMonitor();
@@ -138,11 +137,11 @@ public class NodeManager extends Composi
 
     ContainerManagerImpl containerManager =
         createContainerManager(context, exec, del, nodeStatusUpdater,
-        this.containerTokenSecretManager, this.aclsManager);
+        this.containerTokenSecretManager, this.aclsManager, dirsHandler);
     addService(containerManager);
 
     Service webServer = createWebServer(context, containerManager
-        .getContainersMonitor(), this.aclsManager);
+        .getContainersMonitor(), this.aclsManager, dirsHandler);
     addService(webServer);
 
     dispatcher.register(ContainerManagerEventType.class, containerManager);
@@ -215,7 +214,14 @@ public class NodeManager extends Composi
     }
   }
 
-  
+
+  /**
+   * @return the node health checker
+   */
+  public NodeHealthCheckerService getNodeHealthChecker() {
+    return nodeHealthChecker;
+  }
+
   @Override
   public void stateChanged(Service service) {
     // Shutdown the Nodemanager when the NodeStatusUpdater is stopped.

Modified: hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java?rev=1208644&r1=1208643&r2=1208644&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java (original)
+++ hadoop/common/branches/HDFS-1623/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java Wed Nov 30 18:27:04 2011
@@ -27,7 +27,6 @@ import java.util.Map.Entry;
 import org.apache.avro.AvroRuntimeException;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.NodeHealthCheckerService;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.net.NetUtils;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -222,11 +221,14 @@ public class NodeStatusUpdaterImpl exten
           + numActiveContainers + " containers");
 
     NodeHealthStatus nodeHealthStatus = this.context.getNodeHealthStatus();
-    if (this.healthChecker != null) {
-      this.healthChecker.setHealthStatus(nodeHealthStatus);
+    nodeHealthStatus.setHealthReport(healthChecker.getHealthReport());
+    nodeHealthStatus.setIsNodeHealthy(healthChecker.isHealthy());
+    nodeHealthStatus.setLastHealthReportTime(
+        healthChecker.getLastHealthReportTime());
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Node's health-status : " + nodeHealthStatus.getIsNodeHealthy()
+          + ", " + nodeHealthStatus.getHealthReport());
     }
-    LOG.debug("Node's health-status : " + nodeHealthStatus.getIsNodeHealthy()
-        + ", " + nodeHealthStatus.getHealthReport());
    nodeStatus.setNodeHealthStatus(nodeHealthStatus);

    return nodeStatus;
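The recurring pattern in this merge is that executors no longer re-read NM_LOCAL_DIRS/NM_LOG_DIRS from configuration on every call; the caller passes the directory lists in as List<String>, and DefaultContainerExecutor converts them to Path objects once via the new getPaths helper. A minimal standalone sketch of that conversion, using java.nio.file.Path in place of Hadoop's org.apache.hadoop.fs.Path (an assumption made purely so the snippet runs without Hadoop on the classpath; class and method names here are illustrative, not part of the commit):

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class GetPathsSketch {
    // Mirrors the shape of DefaultContainerExecutor#getPaths: convert the
    // directory strings handed in by the caller into Path objects once,
    // instead of re-reading the directory list from configuration.
    static List<Path> getPaths(List<String> dirs) {
        List<Path> paths = new ArrayList<>(dirs.size());
        for (String dir : dirs) {
            paths.add(Paths.get(dir));
        }
        return paths;
    }

    public static void main(String[] args) {
        List<Path> paths =
            getPaths(List.of("/tmp/nm-local-dir", "/data/nm-local-dir"));
        System.out.println(paths.size()); // prints 2
    }
}
```

Passing the lists as parameters is what lets LocalDirsHandlerService hand each call only the directories currently considered healthy, rather than every configured directory.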