Subject: svn commit: r1158072 [1/3] - in /hadoop/common/branches/HDFS-1623/mapreduce: ./ conf/ src/c++/ src/contrib/ src/contrib/block_forensics/ src/contrib/capacity-scheduler/ src/contrib/data_join/ src/contrib/dynamic-scheduler/ src/contrib/eclipse-plugin/ s...
Date: Tue, 16 Aug 2011 00:37:31 -0000
To: mapreduce-commits@hadoop.apache.org
From: todd@apache.org

Author: todd
Date: Tue Aug 16 00:37:15 2011
New Revision: 1158072

URL: http://svn.apache.org/viewvc?rev=1158072&view=rev
Log:
Merge trunk into HDFS-1623 branch.

Added:
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/CumulativePeriodicStats.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/CumulativePeriodicStats.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/MROutputFiles.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/MROutputFiles.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/PeriodicStatsAccumulator.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/PeriodicStatsAccumulator.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/ProgressSplitsBlock.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/ProgressSplitsBlock.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/StatePeriodicStats.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapred/StatePeriodicStats.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/FileSystemCounter.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/FileSystemCounter.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/FileSystemCounter.properties
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/FileSystemCounter.properties
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/
      - copied from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounter.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounter.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounterGroup.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounterGroup.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounters.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/AbstractCounters.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/CounterGroupBase.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/CounterGroupBase.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/CounterGroupFactory.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/CounterGroupFactory.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/FileSystemCounterGroup.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/FrameworkCounterGroup.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/FrameworkCounterGroup.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/GenericCounter.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/GenericCounter.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/LimitExceededException.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/LimitExceededException.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/Limits.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/Limits.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/package-info.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/counters/package-info.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/AvroArrayUtils.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/AvroArrayUtils.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/util/CountersStrings.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/util/CountersStrings.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/java/org/apache/hadoop/mapreduce/util/ResourceBundles.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestCombineOutputCollector.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestCombineOutputCollector.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestTaskPerformanceSplits.java
      - copied unchanged from r1158071, hadoop/common/trunk/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestTaskPerformanceSplits.java

Modified:
    hadoop/common/branches/HDFS-1623/mapreduce/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/.gitignore   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/CHANGES.txt   (contents, props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/conf/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/conf/capacity-scheduler.xml.template   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/c++/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/block_forensics/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/build-contrib.xml   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/build.xml   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/capacity-scheduler/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/data_join/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/dynamic-scheduler/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/eclipse-plugin/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/fairscheduler/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/index/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRaid.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/streaming/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/vaidya/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/examples/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/examples/org/apache/hadoop/examples/terasort/TeraInputFormat.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/mapred-default.xml
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/ACLsManager.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/Counters.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/IndexCache.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/InterTrackerProtocol.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobInProgress.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobTracker.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/LocalJobRunner.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/MapOutputFile.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/MapTask.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/ReduceTask.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/Task.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/TaskInProgress.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/TaskMemoryManagerThread.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/TaskStatus.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/TaskTracker.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/Counter.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/CounterGroup.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/Counters.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/Job.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/JobCounter.properties
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/MRConfig.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/MRJobConfig.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/TaskCounter.properties
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/EventReader.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/Events.avpr
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/JobHistoryParser.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/MapAttemptFinishedEvent.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/ReduceAttemptFinishedEvent.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskAttemptUnsuccessfulCompletionEvent.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/jobhistory/TaskFinishedEvent.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/protocol/ClientProtocol.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/security/TokenCache.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/server/jobtracker/JTConfig.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/MergeManager.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapreduce/task/reduce/Shuffle.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred-site.xml
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/fs/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/hdfs/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/io/FileBench.java   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/ipc/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestIndexCache.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestJobInProgress.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestMapRed.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestMiniMRDFSSort.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestMiniMRWithDFS.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapred/TestSeveral.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/SleepJob.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/TestCounters.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/jobhistory/TestJobHistoryEvents.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCache.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCacheOldApi.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/security/TestMapredGroupMappingServiceRefresh.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/security/authorize/TestServiceLevelAuthorization.java   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/test/MapredTestDriver.java   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/test/mapred/org/apache/hadoop/tools/rumen/TestRumenJobTraces.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/JobBuilder.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/LoggedTaskAttempt.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/MapAttempt20LineHistoryEventEmitter.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/MapTaskAttemptInfo.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/ReduceAttempt20LineHistoryEventEmitter.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/ReduceTaskAttemptInfo.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/TaskAttempt20LineEventEmitter.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/TaskAttemptInfo.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/tools/org/apache/hadoop/tools/rumen/ZombieJob.java
    hadoop/common/branches/HDFS-1623/mapreduce/src/webapps/job/   (props changed)
    hadoop/common/branches/HDFS-1623/mapreduce/src/webapps/job/jobdetailshistory.jsp

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,2 +1,2 @@
-/hadoop/common/trunk/mapreduce:1152502-1153927
+/hadoop/common/trunk/mapreduce:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred:713112

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/.gitignore
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/.gitignore:1152502-1153927
+/hadoop/common/trunk/mapreduce/.gitignore:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/.gitignore:713112
 /hadoop/core/trunk/.gitignore:784664-785643

Modified: hadoop/common/branches/HDFS-1623/mapreduce/CHANGES.txt
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/CHANGES.txt?rev=1158072&r1=1158071&r2=1158072&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/mapreduce/CHANGES.txt (original)
+++ hadoop/common/branches/HDFS-1623/mapreduce/CHANGES.txt Tue Aug 16 00:37:15 2011
@@ -38,6 +38,9 @@ Trunk (unreleased changes)
 
     MAPREDUCE-2323. Add metrics to the fair scheduler. (todd)
 
+    MAPREDUCE-2037. Capture intermediate progress, CPU and memory usage for
+    tasks. (Dick King via acmurthy)
+
   IMPROVEMENTS
 
     MAPREDUCE-2187. Reporter sends progress during sort/merge. (Anupam Seth via
@@ -212,6 +215,9 @@ Trunk (unreleased changes)
     MAPREDUCE-2705. Permits parallel multiple task launches.
     (Thomas Graves via ddas)
 
+    MAPREDUCE-2489. Jobsplits with random hostnames can make the queue
+    unusable (jeffrey naisbit via mahadev)
+
   OPTIMIZATIONS
 
     MAPREDUCE-2026. Make JobTracker.getJobCounters() and
@@ -221,6 +227,8 @@ Trunk (unreleased changes)
     MAPREDUCE-2740. MultipleOutputs in new API creates needless
     TaskAttemptContexts. (todd)
 
+    MAPREDUCE-901. Efficient framework counters. (llu via acmurthy)
+
   BUG FIXES
 
     MAPREDUCE-2603. Disable High-Ram emulation in system tests.
@@ -381,6 +389,23 @@ Trunk (unreleased changes)
     MAPREDUCE-2760. mapreduce.jobtracker.split.metainfo.maxsize typoed
     in mapred-default.xml. (todd via eli)
 
+    MAPREDUCE-2797. Update mapreduce tests and RAID for HDFS-2239. (szetszwo)
+
+    MAPREDUCE-2805. Update RAID for HDFS-2241. (szetszwo)
+
+    MAPREDUCE-2837. Ported bug fixes from y-merge to prepare for MAPREDUCE-279
+    merge. (acmurthy)
+
+    MAPREDUCE-2541. Fixed a race condition in IndexCache.removeMap. (Binglin
+    Chang via acmurthy)
+
+    MAPREDUCE-2839. Fixed TokenCache to get delegation tokens using both new
+    and old apis. (Siddharth Seth via acmurthy)
+
+    MAPREDUCE-2727. Fix divide-by-zero error in SleepJob for sleepCount equals
+    0. (Jeffrey Naisbitt via acmurthy)
+
 Release 0.22.0 - Unreleased
 
   INCOMPATIBLE CHANGES

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/CHANGES.txt
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/CHANGES.txt:1152502-1153927
+/hadoop/common/trunk/mapreduce/CHANGES.txt:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/CHANGES.txt:713112
 /hadoop/mapreduce/branches/HDFS-641/CHANGES.txt:817878-835964

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/conf/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/conf:1152502-1153927
+/hadoop/common/trunk/mapreduce/conf:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/conf:713112
 /hadoop/core/trunk/conf:784664-785643

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/conf/capacity-scheduler.xml.template
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/conf/capacity-scheduler.xml.template:1152502-1153927
+/hadoop/common/trunk/mapreduce/conf/capacity-scheduler.xml.template:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/conf/capacity-scheduler.xml.template:713112
 /hadoop/core/trunk/conf/capacity-scheduler.xml.template:776175-785643

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/c++/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/c++:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/c++:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/c++:713112
 /hadoop/core/trunk/src/c++:776175-784663

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib:713112
 /hadoop/core/trunk/src/contrib:784664-785643

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/block_forensics/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,2 +1,2 @@
-/hadoop/common/trunk/mapreduce/src/contrib/block_forensics:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/block_forensics:1152502-1158071
 /hadoop/core/branches/branch-0.19/hdfs/src/contrib/block_forensics:713112

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/build-contrib.xml
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/build-contrib.xml:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/build-contrib.xml:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib/build-contrib.xml:713112
 /hadoop/core/trunk/src/contrib/build-contrib.xml:776175-786373

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/build.xml
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/build.xml:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/build.xml:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib/build.xml:713112
 /hadoop/core/trunk/src/contrib/build.xml:776175-786373

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/capacity-scheduler/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/capacity-scheduler:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/capacity-scheduler:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib/capacity-scheduler:713112
 /hadoop/core/trunk/src/contrib/capacity-scheduler:776175-786373

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/data_join/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/data_join:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/data_join:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib/data_join:713112
 /hadoop/core/trunk/src/contrib/data_join:776175-786373

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/dynamic-scheduler/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/dynamic-scheduler:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/dynamic-scheduler:1152502-1158071
 /hadoop/core/branches/branch-0.19/src/contrib/dynamic-scheduler:713112
 /hadoop/core/trunk/src/contrib/dynamic-scheduler:784975-786373

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/eclipse-plugin/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/eclipse-plugin:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/eclipse-plugin:1152502-1158071
 /hadoop/core/branches/branch-0.19/core/src/contrib/eclipse-plugin:713112
 /hadoop/core/trunk/src/contrib/eclipse-plugin:776175-784663

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/fairscheduler/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/fairscheduler:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/fairscheduler:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib/fairscheduler:713112
 /hadoop/core/trunk/src/contrib/fairscheduler:776175-786373

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/index/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/index:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/index:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib/index:713112
 /hadoop/core/trunk/src/contrib/index:776175-786373

Modified:
hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRaid.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRaid.java?rev=1158072&r1=1158071&r2=1158072&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRaid.java (original)
+++ hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockPlacementPolicyRaid.java Tue Aug 16 00:37:15 2011
@@ -543,7 +543,7 @@ public class BlockPlacementPolicyRaid ex
     }
     // remove the prefix
     String src = parity.substring(prefix.length());
-    if (NameNodeRaidUtil.getFileInfo(namesystem.dir, src, true) == null) {
+    if (NameNodeRaidUtil.getFileInfo(namesystem, src, true) == null) {
       return null;
     }
     return src;
@@ -575,7 +575,7 @@ public class BlockPlacementPolicyRaid ex
   private String getParityFile(String parityPrefix, String src)
       throws IOException {
     String parity = parityPrefix + src;
-    if (NameNodeRaidUtil.getFileInfo(namesystem.dir, parity, true) == null) {
+    if (NameNodeRaidUtil.getFileInfo(namesystem, parity, true) == null) {
       return null;
     }
     return parity;

Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java?rev=1158072&r1=1158071&r2=1158072&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java (original)
+++ hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/raid/src/java/org/apache/hadoop/hdfs/server/datanode/RaidBlockSender.java Tue Aug 16 00:37:15 2011
@@ -41,7 +41,7 @@ import org.apache.hadoop.util.StringUtil
 /**
  * Reads a block from the disk and sends it to a recipient.
  */
-public class RaidBlockSender implements java.io.Closeable, FSConstants {
+public class RaidBlockSender implements java.io.Closeable {
   public static final Log LOG = DataNode.LOG;
   static final Log ClientTraceLog = DataNode.ClientTraceLog;
 
@@ -389,7 +389,7 @@ public class RaidBlockSender implements
       streamForSendChunks = baseStream;
 
       // assure a mininum buffer size.
-      maxChunksPerPacket = (Math.max(BUFFER_SIZE,
+      maxChunksPerPacket = (Math.max(FSConstants.IO_FILE_BUFFER_SIZE,
                                      MIN_BUFFER_WITH_TRANSFERTO)
                             + bytesPerChecksum - 1)/bytesPerChecksum;
 
@@ -397,7 +397,7 @@ public class RaidBlockSender implements
       pktSize += checksumSize * maxChunksPerPacket;
     } else {
       maxChunksPerPacket = Math.max(1,
-          (BUFFER_SIZE + bytesPerChecksum - 1)/bytesPerChecksum);
+          (FSConstants.IO_FILE_BUFFER_SIZE + bytesPerChecksum - 1)/bytesPerChecksum);
       pktSize += (bytesPerChecksum + checksumSize) * maxChunksPerPacket;
     }

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/streaming/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/streaming:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/streaming:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib/streaming:713112
 /hadoop/core/trunk/src/contrib/streaming:776175-786373

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/contrib/vaidya/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/contrib/vaidya:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/contrib/vaidya:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/contrib/vaidya:713112
 /hadoop/core/trunk/src/contrib/vaidya:776175-786373

Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/examples/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Tue Aug 16 00:37:15 2011
@@ -1,3 +1,3 @@
-/hadoop/common/trunk/mapreduce/src/examples:1152502-1153927
+/hadoop/common/trunk/mapreduce/src/examples:1152502-1158071
 /hadoop/core/branches/branch-0.19/mapred/src/examples:713112
 /hadoop/core/trunk/src/examples:776175-784663

Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/examples/org/apache/hadoop/examples/terasort/TeraInputFormat.java
URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/examples/org/apache/hadoop/examples/terasort/TeraInputFormat.java?rev=1158072&r1=1158071&r2=1158072&view=diff
==============================================================================
--- hadoop/common/branches/HDFS-1623/mapreduce/src/examples/org/apache/hadoop/examples/terasort/TeraInputFormat.java (original)
+++ hadoop/common/branches/HDFS-1623/mapreduce/src/examples/org/apache/hadoop/examples/terasort/TeraInputFormat.java Tue Aug 16 00:37:15 2011
@@ -61,19 +61,32 @@ public class TeraInputFormat extends Fil
   private static List lastResult = null;
 
   static class TeraFileSplit extends FileSplit {
+    static private String[] ZERO_LOCATIONS = new String[0];
+
     private String[] locations;
-    public TeraFileSplit() {}
+
+    public TeraFileSplit() {
+      locations = ZERO_LOCATIONS;
+    }
     public TeraFileSplit(Path file, long start, long length, String[] hosts) {
       super(file, start, length, hosts);
-      locations = hosts;
+      try {
+        locations = super.getLocations();
+      } catch (IOException e) {
+        locations = ZERO_LOCATIONS;
+      }
     }
+
+    // XXXXXX should this also be null-protected?
protected void setLocations(String[] hosts) { locations = hosts; } + @Override public String[] getLocations() { return locations; } + public String toString() { StringBuffer result = new StringBuffer(); result.append(getPath()); Propchange: hadoop/common/branches/HDFS-1623/mapreduce/src/java/ ------------------------------------------------------------------------------ --- svn:mergeinfo (original) +++ svn:mergeinfo Tue Aug 16 00:37:15 2011 @@ -1,3 +1,3 @@ -/hadoop/common/trunk/mapreduce/src/java:1152502-1153927 +/hadoop/common/trunk/mapreduce/src/java:1152502-1158071 /hadoop/core/branches/branch-0.19/mapred/src/java:713112 /hadoop/core/trunk/src/mapred:776175-785643 Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/mapred-default.xml URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/mapred-default.xml?rev=1158072&r1=1158071&r2=1158072&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/mapred-default.xml (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/mapred-default.xml Tue Aug 16 00:37:15 2011 @@ -33,6 +33,29 @@ + mapreduce.jobtracker.jobhistory.task.numberprogresssplits + 12 + Every task attempt progresses from 0.0 to 1.0 [unless + it fails or is killed]. We record, for each task attempt, certain + statistics over each twelfth of the progress range. You can change + the number of intervals we divide the entire range of progress into + by setting this property. Higher values give more precision to the + recorded data, but costs more memory in the job tracker at runtime. + Each increment in this attribute costs 16 bytes per running task. + + + + + mapreduce.job.userhistorylocation + + User can specify a location to store the history files of + a particular job. If nothing is specified, the logs are stored in + output directory. The files are stored in "_logs/history/" in the directory. 
+ User can stop logging by giving the value "none". + + + + mapreduce.jobtracker.jobhistory.completed.location The completed job history files are stored at this single well Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/ACLsManager.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/ACLsManager.java?rev=1158072&r1=1158071&r2=1158072&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/ACLsManager.java (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/ACLsManager.java Tue Aug 16 00:37:15 2011 @@ -36,7 +36,7 @@ import org.apache.hadoop.security.author * QueueManager for queue operations. */ @InterfaceAudience.Private -class ACLsManager { +public class ACLsManager { static Log LOG = LogFactory.getLog(ACLsManager.class); // MROwner(user who started this mapreduce cluster)'s ugi @@ -49,7 +49,7 @@ class ACLsManager { private final boolean aclsEnabled; - ACLsManager(Configuration conf, JobACLsManager jobACLsManager, + public ACLsManager(Configuration conf, JobACLsManager jobACLsManager, QueueManager queueManager) throws IOException { mrOwner = UserGroupInformation.getCurrentUser(); @@ -68,7 +68,7 @@ class ACLsManager { this.queueManager = queueManager; } - UserGroupInformation getMROwner() { + public UserGroupInformation getMROwner() { return mrOwner; } @@ -76,7 +76,7 @@ class ACLsManager { return adminAcl; } - JobACLsManager getJobACLsManager() { + public JobACLsManager getJobACLsManager() { return jobACLsManager; } @@ -85,7 +85,7 @@ class ACLsManager { * i.e. 
either cluster owner or cluster administrator * @return true, if user is an admin */ - boolean isMRAdmin(UserGroupInformation callerUGI) { + public boolean isMRAdmin(UserGroupInformation callerUGI) { if (adminAcl.isUserAllowed(callerUGI)) { return true; } @@ -111,7 +111,7 @@ class ACLsManager { * @param operation the operation for which authorization is needed * @throws AccessControlException */ - void checkAccess(JobInProgress job, UserGroupInformation callerUGI, + public void checkAccess(JobInProgress job, UserGroupInformation callerUGI, Operation operation) throws AccessControlException { String queue = job.getProfile().getQueueName(); Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/Counters.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/Counters.java?rev=1158072&r1=1158071&r2=1158072&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/Counters.java (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/Counters.java Tue Aug 16 00:37:15 2011 @@ -18,20 +18,9 @@ package org.apache.hadoop.mapred; -import java.io.DataInput; -import java.io.DataOutput; -import java.io.IOException; import java.text.ParseException; -import java.util.ArrayList; -import java.util.Collection; -import java.util.HashMap; -import java.util.IdentityHashMap; -import java.util.Iterator; -import java.util.Map; -import java.util.MissingResourceException; -import java.util.ResourceBundle; -import org.apache.commons.logging.*; +import org.apache.commons.logging.Log; import org.apache.hadoop.classification.InterfaceAudience; import org.apache.hadoop.classification.InterfaceStability; import org.apache.hadoop.io.IntWritable; @@ -40,421 +29,302 @@ import org.apache.hadoop.io.Writable; import org.apache.hadoop.io.WritableUtils; import 
org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter; import org.apache.hadoop.util.StringUtils; +import org.apache.hadoop.mapreduce.FileSystemCounter; +import org.apache.hadoop.mapreduce.counters.AbstractCounterGroup; +import org.apache.hadoop.mapreduce.counters.AbstractCounters; +import org.apache.hadoop.mapreduce.counters.CounterGroupBase; +import org.apache.hadoop.mapreduce.counters.CounterGroupFactory; +import org.apache.hadoop.mapreduce.counters.FrameworkCounterGroup; +import org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup; +import org.apache.hadoop.mapreduce.counters.GenericCounter; +import org.apache.hadoop.mapreduce.counters.Limits; +import static org.apache.hadoop.mapreduce.util.CountersStrings.*; /** * A set of named counters. - * - *
<p><code>Counters</code> represent global counters, defined either by the + * + * <p><code>Counters</code> represent global counters, defined either by the * Map-Reduce framework or applications. Each <code>Counter</code> can be of * any {@link Enum} type.</p> - * + *
Counters are bunched into {@link Group}s, each comprising of - * counters from a particular Enum class. + * counters from a particular Enum class. * @deprecated Use {@link org.apache.hadoop.mapreduce.Counters} instead. */ @Deprecated @InterfaceAudience.Public @InterfaceStability.Stable -public class Counters implements Writable, Iterable { - private static final Log LOG = LogFactory.getLog(Counters.class); - private static final char GROUP_OPEN = '{'; - private static final char GROUP_CLOSE = '}'; - private static final char COUNTER_OPEN = '['; - private static final char COUNTER_CLOSE = ']'; - private static final char UNIT_OPEN = '('; - private static final char UNIT_CLOSE = ')'; - private static char[] charsToEscape = {GROUP_OPEN, GROUP_CLOSE, - COUNTER_OPEN, COUNTER_CLOSE, - UNIT_OPEN, UNIT_CLOSE}; - - //private static Log log = LogFactory.getLog("Counters.class"); - +public class Counters + extends AbstractCounters { + + public Counters() { + super(groupFactory); + } + + public Counters(org.apache.hadoop.mapreduce.Counters newCounters) { + super(newCounters, groupFactory); + } + /** * Downgrade new {@link org.apache.hadoop.mapreduce.Counters} to old Counters * @param newCounters new Counters * @return old Counters instance corresponding to newCounters */ static Counters downgrade(org.apache.hadoop.mapreduce.Counters newCounters) { - Counters oldCounters = new Counters(); - for (org.apache.hadoop.mapreduce.CounterGroup newGroup: newCounters) { - String groupName = newGroup.getName(); - Group oldGroup = oldCounters.getGroup(groupName); - for (org.apache.hadoop.mapreduce.Counter newCounter: newGroup) { - Counter oldCounter = oldGroup.getCounterForName(newCounter.getName()); - oldCounter.setDisplayName(newCounter.getDisplayName()); - oldCounter.increment(newCounter.getValue()); - } - } - return oldCounters; + return new Counters(newCounters); } /** - * A counter record, comprising its name and value. + * A counter record, comprising its name and value. 
*/ - public static class Counter extends org.apache.hadoop.mapreduce.Counter { - - Counter() { - } + public interface Counter extends org.apache.hadoop.mapreduce.Counter { - Counter(String name, String displayName, long value) { - super(name, displayName); - increment(value); - } - - public void setDisplayName(String newName) { - super.setDisplayName(newName); - } - /** * Returns the compact stringified version of the counter in the format * [(actual-name)(display-name)(value)] + * @return the stringified result */ - public synchronized String makeEscapedCompactString() { + String makeEscapedCompactString(); - // First up, obtain the strings that need escaping. This will help us - // determine the buffer length apriori. - String escapedName = escape(getName()); - String escapedDispName = escape(getDisplayName()); - long currentValue = this.getValue(); - int length = escapedName.length() + escapedDispName.length() + 4; - - length += 8; // For the following delimiting characters - StringBuilder builder = new StringBuilder(length); - builder.append(COUNTER_OPEN); - - // Add the counter name - builder.append(UNIT_OPEN); - builder.append(escapedName); - builder.append(UNIT_CLOSE); - - // Add the display name - builder.append(UNIT_OPEN); - builder.append(escapedDispName); - builder.append(UNIT_CLOSE); - - // Add the value - builder.append(UNIT_OPEN); - builder.append(currentValue); - builder.append(UNIT_CLOSE); - - builder.append(COUNTER_CLOSE); - - return builder.toString(); - } - - // Checks for (content) equality of two (basic) counters + /** + * Checks for (content) equality of two (basic) counters + * @param counter to compare + * @return true if content equals + * @deprecated + */ @Deprecated - synchronized boolean contentEquals(Counter c) { - return this.equals(c); - } - + boolean contentEquals(Counter counter); + /** - * What is the current value of this counter? 
- * @return the current value + * @return the value of the counter */ - public synchronized long getCounter() { + long getCounter(); + } + + static class OldCounterImpl extends GenericCounter implements Counter { + + OldCounterImpl() { + } + + OldCounterImpl(String name, String displayName, long value) { + super(name, displayName, value); + } + + @Override + public synchronized String makeEscapedCompactString() { + return toEscapedCompactString(this); + } + + @Override @Deprecated + public boolean contentEquals(Counter counter) { + return equals(counter); + } + + @Override + public long getCounter() { return getValue(); } - } - + /** - * Group of counters, comprising of counters from a particular - * counter {@link Enum} class. + * Group of counters, comprising of counters from a particular + * counter {@link Enum} class. * - *
<p><code>Group</code> handles localization of the class name and the + * <p><code>Group</code> handles localization of the class name and the * counter names.</p>
*/ - public static class Group implements Writable, Iterable { - private String groupName; - private String displayName; - private Map subcounters = new HashMap(); - - // Optional ResourceBundle for localization of group and counter names. - private ResourceBundle bundle = null; - - Group(String groupName) { - try { - bundle = getResourceBundle(groupName); - } - catch (MissingResourceException neverMind) { - } - this.groupName = groupName; - this.displayName = localize("CounterGroupName", groupName); - if (LOG.isDebugEnabled()) { - LOG.debug("Creating group " + groupName + " with " + - (bundle == null ? "nothing" : "bundle")); - } - } - + public static interface Group extends CounterGroupBase { + /** - * Returns the specified resource bundle, or throws an exception. - * @throws MissingResourceException if the bundle isn't found + * @param counterName the name of the counter + * @return the value of the specified counter, or 0 if the counter does + * not exist. */ - private static ResourceBundle getResourceBundle(String enumClassName) { - String bundleName = enumClassName.replace('$','_'); - return ResourceBundle.getBundle(bundleName); - } - + long getCounter(String counterName); + /** - * Returns raw name of the group. This is the name of the enum class - * for this group of counters. + * @return the compact stringified version of the group in the format + * {(actual-name)(display-name)(value)[][][]} where [] are compact strings + * for the counters within. */ - public String getName() { - return groupName; - } - + String makeEscapedCompactString(); + /** - * Returns localized name of the group. This is the same as getName() by - * default, but different if an appropriate ResourceBundle is found. + * Get the counter for the given id and create it if it doesn't exist. 
+ * @param id the numeric id of the counter within the group + * @param name the internal counter name + * @return the counter + * @deprecated use {@link #findCounter(String)} instead */ - public String getDisplayName() { - return displayName; - } - + @Deprecated + Counter getCounter(int id, String name); + /** - * Set the display name + * Get the counter for the given name and create it if it doesn't exist. + * @param name the internal counter name + * @return the counter */ - public void setDisplayName(String displayName) { - this.displayName = displayName; + Counter getCounterForName(String name); + } + + // All the group impls need this for legacy group interface + static long getCounterValue(Group group, String counterName) { + Counter counter = group.findCounter(counterName, false); + if (counter != null) return counter.getValue(); + return 0L; + } + + // Mix the generic group implementation into the Group interface + private static class GenericGroup extends AbstractCounterGroup + implements Group { + + GenericGroup(String name, String displayName, Limits limits) { + super(name, displayName, limits); } - - /** - * Returns the compact stringified version of the group in the format - * {(actual-name)(display-name)(value)[][][]} where [] are compact strings for the - * counters within. - */ + + @Override + public long getCounter(String counterName) { + return getCounterValue(this, counterName); + } + + @Override public String makeEscapedCompactString() { - String[] subcountersArray = new String[subcounters.size()]; + return toEscapedCompactString(this); + } - // First up, obtain the strings that need escaping. This will help us - // determine the buffer length apriori. 
- String escapedName = escape(getName()); - String escapedDispName = escape(getDisplayName()); - int i = 0; - int length = escapedName.length() + escapedDispName.length(); - for (Counter counter : subcounters.values()) { - String escapedStr = counter.makeEscapedCompactString(); - subcountersArray[i++] = escapedStr; - length += escapedStr.length(); - } + @Override + public Counter getCounter(int id, String name) { + return findCounter(name); + } - length += 6; // for all the delimiting characters below - StringBuilder builder = new StringBuilder(length); - builder.append(GROUP_OPEN); // group start - - // Add the group name - builder.append(UNIT_OPEN); - builder.append(escapedName); - builder.append(UNIT_CLOSE); - - // Add the display name - builder.append(UNIT_OPEN); - builder.append(escapedDispName); - builder.append(UNIT_CLOSE); - - // write the value - for(Counter counter: subcounters.values()) { - builder.append(counter.makeEscapedCompactString()); - } - - builder.append(GROUP_CLOSE); // group end - return builder.toString(); + @Override + public Counter getCounterForName(String name) { + return findCounter(name); } @Override - public int hashCode() { - return subcounters.hashCode(); + protected Counter newCounter(String counterName, String displayName, + long value) { + return new OldCounterImpl(counterName, displayName, value); } - /** - * Checks for (content) equality of Groups - */ @Override - public boolean equals(Object obj) { - if (this == obj) { - return true; + protected Counter newCounter() { + return new OldCounterImpl(); + } + } + + // Mix the framework group implementation into the Group interface + private static class FrameworkGroupImpl> + extends FrameworkCounterGroup implements Group { + + // Mix the framework counter implmementation into the Counter interface + class FrameworkCounterImpl extends FrameworkCounter implements Counter { + + FrameworkCounterImpl(T key) { + super(key); } - if (obj == null || obj.getClass() != getClass()) { - return 
false; + + @Override + public String makeEscapedCompactString() { + return toEscapedCompactString(this); } - boolean isEqual = false; - Group g = (Group) obj; - synchronized (this) { - if (size() == g.size()) { - isEqual = true; - for (Map.Entry entry : subcounters.entrySet()) { - String key = entry.getKey(); - Counter c1 = entry.getValue(); - Counter c2 = g.getCounterForName(key); - if (!c1.contentEquals(c2)) { - isEqual = false; - break; - } - } - } + + @Override + public boolean contentEquals(Counter counter) { + return equals(counter); } - return isEqual; - } - - /** - * Returns the value of the specified counter, or 0 if the counter does - * not exist. - */ - public synchronized long getCounter(String counterName) { - for(Counter counter: subcounters.values()) { - if (counter != null && counter.getDisplayName().equals(counterName)) { - return counter.getValue(); - } + + @Override + public long getCounter() { + return getValue(); } - return 0L; - } - - /** - * Get the counter for the given id and create it if it doesn't exist. - * @param id the numeric id of the counter within the group - * @param name the internal counter name - * @return the counter - * @deprecated use {@link #getCounter(String)} instead - */ - @Deprecated - public synchronized Counter getCounter(int id, String name) { - return getCounterForName(name); } - - /** - * Get the counter for the given name and create it if it doesn't exist. - * @param name the internal counter name - * @return the counter - */ - public synchronized Counter getCounterForName(String name) { - Counter result = subcounters.get(name); - if (result == null) { - if (LOG.isDebugEnabled()) { - LOG.debug("Adding " + name); - } - result = new Counter(name, localize(name + ".name", name), 0L); - subcounters.put(name, result); - } - return result; + + FrameworkGroupImpl(Class cls) { + super(cls); } - - /** - * Returns the number of counters in this group. 
- */ - public synchronized int size() { - return subcounters.size(); + + @Override + public long getCounter(String counterName) { + return getCounterValue(this, counterName); } - - /** - * Looks up key in the ResourceBundle and returns the corresponding value. - * If the bundle or the key doesn't exist, returns the default value. - */ - private String localize(String key, String defaultValue) { - String result = defaultValue; - if (bundle != null) { - try { - result = bundle.getString(key); - } - catch (MissingResourceException mre) { - } - } - return result; + + @Override + public String makeEscapedCompactString() { + return toEscapedCompactString(this); } - - public synchronized void write(DataOutput out) throws IOException { - Text.writeString(out, displayName); - WritableUtils.writeVInt(out, subcounters.size()); - for(Counter counter: subcounters.values()) { - counter.write(out); - } + + @Override @Deprecated + public Counter getCounter(int id, String name) { + return findCounter(name); } - - public synchronized void readFields(DataInput in) throws IOException { - displayName = Text.readString(in); - subcounters.clear(); - int size = WritableUtils.readVInt(in); - for(int i=0; i < size; i++) { - Counter counter = new Counter(); - counter.readFields(in); - subcounters.put(counter.getName(), counter); - } + + @Override + public Counter getCounterForName(String name) { + return findCounter(name); } - public synchronized Iterator iterator() { - return new ArrayList(subcounters.values()).iterator(); + @Override + protected Counter newCounter(T key) { + return new FrameworkCounterImpl(key); } } - - // Map from group name (enum class name) to map of int (enum ordinal) to - // counter record (name-value pair). - private Map counters = new HashMap(); - /** - * A cache from enum values to the associated counter. Dramatically speeds up - * typical usage. - */ - private Map cache = new IdentityHashMap(); - - /** - * Returns the names of all counter classes. 
- * @return Set of counter names. - */ - public synchronized Collection getGroupNames() { - return counters.keySet(); - } + // Mix the file system counter group implementation into the Group interface + private static class FSGroupImpl extends FileSystemCounterGroup + implements Group { - public synchronized Iterator iterator() { - return counters.values().iterator(); - } + private class FSCounterImpl extends FSCounter implements Counter { - /** - * Returns the named counter group, or an empty group if there is none - * with the specified name. - */ - public synchronized Group getGroup(String groupName) { - // To provide support for deprecated group names - if (groupName.equals("org.apache.hadoop.mapred.Task$Counter")) { - groupName = "org.apache.hadoop.mapreduce.TaskCounter"; - LOG.warn("Group org.apache.hadoop.mapred.Task$Counter is deprecated." + - " Use org.apache.hadoop.mapreduce.TaskCounter instead"); - } else if (groupName.equals( - "org.apache.hadoop.mapred.JobInProgress$Counter")) { - groupName = "org.apache.hadoop.mapreduce.JobCounter"; - LOG.warn("Group org.apache.hadoop.mapred.JobInProgress$Counter " + - "is deprecated. 
Use " + - "org.apache.hadoop.mapreduce.JobCounter instead"); + FSCounterImpl(String scheme, FileSystemCounter key) { + super(scheme, key); + } + + @Override + public String makeEscapedCompactString() { + return toEscapedCompactString(this); + } + + @Override @Deprecated + public boolean contentEquals(Counter counter) { + throw new UnsupportedOperationException("Not supported yet."); + } + + @Override + public long getCounter() { + return getValue(); + } + + } + + @Override + protected Counter newCounter(String scheme, FileSystemCounter key) { + return new FSCounterImpl(scheme, key); } - Group result = counters.get(groupName); - if (result == null) { - result = new Group(groupName); - counters.put(groupName, result); + + @Override + public long getCounter(String counterName) { + return getCounterValue(this, counterName); } - return result; - } - /** - * Find the counter for the given enum. The same enum will always return the - * same counter. - * @param key the counter key - * @return the matching counter object - */ - public synchronized Counter findCounter(Enum key) { - Counter counter = cache.get(key); - if (counter == null) { - Group group = getGroup(key.getDeclaringClass().getName()); - counter = group.getCounterForName(key.toString()); - cache.put(key, counter); + @Override + public String makeEscapedCompactString() { + return toEscapedCompactString(this); + } + + @Override @Deprecated + public Counter getCounter(int id, String name) { + return findCounter(name); } - return counter; + + @Override + public Counter getCounterForName(String name) { + return findCounter(name); + } + } - /** - * Find a counter given the group and the name. - * @param group the name of the group - * @param name the internal name of the counter - * @return the counter for that name - */ public synchronized Counter findCounter(String group, String name) { if (name.equals("MAP_INPUT_BYTES")) { LOG.warn("Counter name MAP_INPUT_BYTES is deprecated. 
" + @@ -466,15 +336,46 @@ public class Counters implements Writabl } /** + * Provide factory methods for counter group factory implementation. + * See also the GroupFactory in + * {@link org.apache.hadoop.mapreduce.Counters mapreduce.Counters} + */ + static class GroupFactory extends CounterGroupFactory { + + @Override + protected > + FrameworkGroupFactory newFrameworkGroupFactory(final Class cls) { + return new FrameworkGroupFactory() { + @Override public Group newGroup(String name) { + return new FrameworkGroupImpl(cls); // impl in this package + } + }; + } + + @Override + protected Group newGenericGroup(String name, String displayName, + Limits limits) { + return new GenericGroup(name, displayName, limits); + } + + @Override + protected Group newFileSystemGroup() { + return new FSGroupImpl(); + } + } + + private static final GroupFactory groupFactory = new GroupFactory(); + + /** * Find a counter by using strings * @param group the name of the group * @param id the id of the counter within the group (0 to N-1) * @param name the internal name of the counter * @return the counter for that name - * @deprecated + * @deprecated use {@link findCounter(String, String)} instead */ @Deprecated - public synchronized Counter findCounter(String group, int id, String name) { + public Counter findCounter(String group, int id, String name) { return findCounter(group, name); } @@ -484,10 +385,10 @@ public class Counters implements Writabl * @param key identifies a counter * @param amount amount by which counter is to be incremented */ - public synchronized void incrCounter(Enum key, long amount) { + public void incrCounter(Enum key, long amount) { findCounter(key).increment(amount); } - + /** * Increments the specified counter by the specified amount, creating it if * it didn't already exist. 
@@ -495,27 +396,29 @@ public class Counters implements Writabl * @param counter the internal name of the counter * @param amount amount by which counter is to be incremented */ - public synchronized void incrCounter(String group, String counter, long amount) { + public void incrCounter(String group, String counter, long amount) { findCounter(group, counter).increment(amount); } - + /** * Returns current value of the specified counter, or 0 if the counter * does not exist. + * @param key the counter enum to lookup + * @return the counter value or 0 if counter not found */ - public synchronized long getCounter(Enum key) { + public synchronized long getCounter(Enum key) { return findCounter(key).getValue(); } - + /** - * Increments multiple counters by their amounts in another Counters + * Increments multiple counters by their amounts in another Counters * instance. * @param other the other Counters instance */ public synchronized void incrAllCounters(Counters other) { for (Group otherGroup: other) { Group group = getGroup(otherGroup.getName()); - group.displayName = otherGroup.displayName; + group.setDisplayName(otherGroup.getDisplayName()); for (Counter otherCounter : otherGroup) { Counter counter = group.getCounterForName(otherCounter.getName()); counter.setDisplayName(otherCounter.getDisplayName()); @@ -525,7 +428,18 @@ public class Counters implements Writabl } /** + * @return the total number of counters + * @deprecated use {@link #countCounters()} instead + */ + public int size() { + return countCounters(); + } + + /** * Convenience method for computing the sum of two sets of counters. 
+ * @param a the first counters + * @param b the second counters + * @return a new summed counters object */ public static Counters sum(Counters a, Counters b) { Counters counters = new Counters(); @@ -533,55 +447,7 @@ public class Counters implements Writabl counters.incrAllCounters(b); return counters; } - - /** - * Returns the total number of counters, by summing the number of counters - * in each group. - */ - public synchronized int size() { - int result = 0; - for (Group group : this) { - result += group.size(); - } - return result; - } - - /** - * Write the set of groups. - * The external format is: - * #groups (groupName group)* - * - * i.e. the number of groups followed by 0 or more groups, where each - * group is of the form: - * - * groupDisplayName #counters (false | true counter)* - * - * where each counter is of the form: - * - * name (false | true displayName) value - */ - public synchronized void write(DataOutput out) throws IOException { - out.writeInt(counters.size()); - for (Group group: counters.values()) { - Text.writeString(out, group.getName()); - group.write(out); - } - } - - /** - * Read a set of groups. - */ - public synchronized void readFields(DataInput in) throws IOException { - int numClasses = in.readInt(); - counters.clear(); - while (numClasses-- > 0) { - String groupName = Text.readString(in); - Group group = new Group(groupName); - group.readFields(in); - counters.put(groupName, group); - } - } - + /** * Logs the current counter values. * @param log The log to use. @@ -591,212 +457,31 @@ public class Counters implements Writabl for(Group group: this) { log.info(" " + group.getDisplayName()); for (Counter counter: group) { - log.info(" " + counter.getDisplayName() + "=" + + log.info(" " + counter.getDisplayName() + "=" + counter.getCounter()); - } - } - } - - /** - * Return textual representation of the counter values. 
- */ - public synchronized String toString() { - StringBuilder sb = new StringBuilder("Counters: " + size()); - for (Group group: this) { - sb.append("\n\t" + group.getDisplayName()); - for (Counter counter: group) { - sb.append("\n\t\t" + counter.getDisplayName() + "=" + - counter.getCounter()); } } - return sb.toString(); } /** - * Convert a counters object into a single line that is easy to parse. - * @return the string with "name=value" for each counter and separated by "," - */ - public synchronized String makeCompactString() { - StringBuffer buffer = new StringBuffer(); - boolean first = true; - for(Group group: this){ - for(Counter counter: group) { - if (first) { - first = false; - } else { - buffer.append(','); - } - buffer.append(group.getDisplayName()); - buffer.append('.'); - buffer.append(counter.getDisplayName()); - buffer.append(':'); - buffer.append(counter.getCounter()); - } - } - return buffer.toString(); - } - - /** - * Represent the counter in a textual format that can be converted back to + * Represent the counter in a textual format that can be converted back to * its object form * @return the string in the following format - * {(groupname)(group-displayname)[(countername)(displayname)(value)][][]}{}{} + * {(groupName)(group-displayName)[(counterName)(displayName)(value)][]*}* */ - public synchronized String makeEscapedCompactString() { - String[] groupsArray = new String[counters.size()]; - int i = 0; - int length = 0; - - // First up, obtain the escaped string for each group so that we can - // determine the buffer length apriori. 
- for (Group group : this) { - String escapedString = group.makeEscapedCompactString(); - groupsArray[i++] = escapedString; - length += escapedString.length(); - } - - // Now construct the buffer - StringBuilder builder = new StringBuilder(length); - for (String group : groupsArray) { - builder.append(group); - } - return builder.toString(); - } - - // Extracts a block (data enclosed within delimeters) ignoring escape - // sequences. Throws ParseException if an incomplete block is found else - // returns null. - private static String getBlock(String str, char open, char close, - IntWritable index) throws ParseException { - StringBuilder split = new StringBuilder(); - int next = StringUtils.findNext(str, open, StringUtils.ESCAPE_CHAR, - index.get(), split); - split.setLength(0); // clear the buffer - if (next >= 0) { - ++next; // move over '(' - - next = StringUtils.findNext(str, close, StringUtils.ESCAPE_CHAR, - next, split); - if (next >= 0) { - ++next; // move over ')' - index.set(next); - return split.toString(); // found a block - } else { - throw new ParseException("Unexpected end of block", next); - } - } - return null; // found nothing + public String makeEscapedCompactString() { + return toEscapedCompactString(this); } - + /** - * Convert a stringified counter representation into a counter object. Note - * that the counter can be recovered if its stringified using - * {@link #makeEscapedCompactString()}. - * @return a Counter + * Convert a stringified (by {@link #makeEscapedCompactString()} counter + * representation into a counter object. 
+ * @param compactString to parse + * @return a new counters object + * @throws ParseException */ - public static Counters fromEscapedCompactString(String compactString) - throws ParseException { - Counters counters = new Counters(); - IntWritable index = new IntWritable(0); - - // Get the group to work on - String groupString = - getBlock(compactString, GROUP_OPEN, GROUP_CLOSE, index); - - while (groupString != null) { - IntWritable groupIndex = new IntWritable(0); - - // Get the actual name - String groupName = - getBlock(groupString, UNIT_OPEN, UNIT_CLOSE, groupIndex); - groupName = unescape(groupName); - - // Get the display name - String groupDisplayName = - getBlock(groupString, UNIT_OPEN, UNIT_CLOSE, groupIndex); - groupDisplayName = unescape(groupDisplayName); - - // Get the counters - Group group = counters.getGroup(groupName); - group.setDisplayName(groupDisplayName); - - String counterString = - getBlock(groupString, COUNTER_OPEN, COUNTER_CLOSE, groupIndex); - - while (counterString != null) { - IntWritable counterIndex = new IntWritable(0); - - // Get the actual name - String counterName = - getBlock(counterString, UNIT_OPEN, UNIT_CLOSE, counterIndex); - counterName = unescape(counterName); - - // Get the display name - String counterDisplayName = - getBlock(counterString, UNIT_OPEN, UNIT_CLOSE, counterIndex); - counterDisplayName = unescape(counterDisplayName); - - // Get the value - long value = - Long.parseLong(getBlock(counterString, UNIT_OPEN, UNIT_CLOSE, - counterIndex)); - - // Add the counter - Counter counter = group.getCounterForName(counterName); - counter.setDisplayName(counterDisplayName); - counter.increment(value); - - // Get the next counter - counterString = - getBlock(groupString, COUNTER_OPEN, COUNTER_CLOSE, groupIndex); - } - - groupString = getBlock(compactString, GROUP_OPEN, GROUP_CLOSE, index); - } - return counters; - } - - // Escapes all the delimiters for counters i.e {,[,(,),],} - private static String escape(String string) { 
- return StringUtils.escapeString(string, StringUtils.ESCAPE_CHAR, - charsToEscape); - } - - // Unescapes all the delimiters for counters i.e {,[,(,),],} - private static String unescape(String string) { - return StringUtils.unEscapeString(string, StringUtils.ESCAPE_CHAR, - charsToEscape); - } - - @Override - public synchronized int hashCode() { - return counters.hashCode(); - } - - @Override - public boolean equals(Object obj) { - if (this == obj) { - return true; - } - if (obj == null || obj.getClass() != getClass()) { - return false; - } - boolean isEqual = false; - Counters other = (Counters) obj; - synchronized (this) { - if (size() == other.size()) { - isEqual = true; - for (Map.Entry entry : this.counters.entrySet()) { - String key = entry.getKey(); - Group sourceGroup = entry.getValue(); - Group targetGroup = other.getGroup(key); - if (!sourceGroup.equals(targetGroup)) { - isEqual = false; - break; - } - } - } - } - return isEqual; + public static Counters fromEscapedCompactString(String compactString) + throws ParseException { + return parseEscapedCompactString(compactString, new Counters()); } } Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/IndexCache.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/IndexCache.java?rev=1158072&r1=1158071&r2=1158072&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/IndexCache.java (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/IndexCache.java Tue Aug 16 00:37:15 2011 @@ -130,12 +130,19 @@ class IndexCache { } /** - * This method removes the map from the cache. It should be called when - * a map output on this tracker is discarded. 
+ * This method removes the map from the cache if the index information for + * this map is loaded (size > 0). The cache entry is not removed while it is + * still in the loading phase (size = 0); this prevents corruption of + * totalMemoryUsed. It should be called when a map output on this tracker + * is discarded. * @param mapId The taskID of this map. */ public void removeMap(String mapId) { - IndexInformation info = cache.remove(mapId); + IndexInformation info = cache.get(mapId); + if ((info != null) && (info.getSize() == 0)) { + return; + } + info = cache.remove(mapId); if (info != null) { totalMemoryUsed.addAndGet(-info.getSize()); if (!queue.remove(mapId)) { @@ -147,6 +154,19 @@ class IndexCache { } /** + * This method checks whether the cache and totalMemoryUsed are consistent. + * It is only used in unit tests. + * @return true if the cache and totalMemoryUsed are consistent + */ + boolean checkTotalMemoryUsed() { + int totalSize = 0; + for (IndexInformation info : cache.values()) { + totalSize += info.getSize(); + } + return totalSize == totalMemoryUsed.get(); + } + + /** * Bring memory usage below totalMemoryAllowed. */ private synchronized void freeIndexInformation() { Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/InterTrackerProtocol.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/InterTrackerProtocol.java?rev=1158072&r1=1158071&r2=1158072&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/InterTrackerProtocol.java (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/InterTrackerProtocol.java Tue Aug 16 00:37:15 2011 @@ -77,8 +77,10 @@ interface InterTrackerProtocol extends V * Version 29: Adding user name to the serialized Task for use by TT.
* Version 30: Adding available memory and CPU usage information on TT to * TaskTrackerStatus for MAPREDUCE-1218 + * Version 31: Efficient serialization format for Framework counters + * (MAPREDUCE-901) */ - public static final long versionID = 30L; + public static final long versionID = 31L; public final static int TRACKERS_OK = 0; public final static int UNKNOWN_TASKTRACKER = 1; Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java?rev=1158072&r1=1158071&r2=1158072&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobACLsManager.java Tue Aug 16 00:37:15 2011 @@ -29,7 +29,7 @@ import org.apache.hadoop.security.UserGr import org.apache.hadoop.security.authorize.AccessControlList; @InterfaceAudience.Private -class JobACLsManager { +public class JobACLsManager { Configuration conf; @@ -37,7 +37,7 @@ class JobACLsManager { this.conf = conf; } - boolean areACLsEnabled() { + public boolean areACLsEnabled() { return conf.getBoolean(MRConfig.MR_ACLS_ENABLED, false); } @@ -86,7 +86,7 @@ class JobACLsManager { * @param jobACL * @throws AccessControlException */ - boolean checkAccess(UserGroupInformation callerUGI, + public boolean checkAccess(UserGroupInformation callerUGI, JobACL jobOperation, String jobOwner, AccessControlList jobACL) { String user = callerUGI.getShortUserName(); Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobInProgress.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobInProgress.java?rev=1158072&r1=1158071&r2=1158072&view=diff 
============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobInProgress.java (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobInProgress.java Tue Aug 16 00:37:15 2011 @@ -20,6 +20,7 @@ package org.apache.hadoop.mapred; import java.io.File; import java.io.FileOutputStream; import java.io.IOException; +import java.net.UnknownHostException; import java.security.PrivilegedExceptionAction; import java.util.ArrayList; import java.util.Collection; @@ -52,6 +53,7 @@ import org.apache.hadoop.mapreduce.JobCo import org.apache.hadoop.mapreduce.JobSubmissionFiles; import org.apache.hadoop.mapreduce.MRJobConfig; import org.apache.hadoop.mapreduce.TaskType; +import org.apache.hadoop.mapreduce.counters.LimitExceededException; import org.apache.hadoop.mapreduce.jobhistory.JobFinishedEvent; import org.apache.hadoop.mapreduce.jobhistory.JobHistory; import org.apache.hadoop.mapreduce.jobhistory.JobInfoChangeEvent; @@ -622,7 +624,7 @@ public class JobInProgress { * at {@link JobTracker#initJob(JobInProgress)} for more details. 
*/ public synchronized void initTasks() - throws IOException, KillInterruptedException { + throws IOException, KillInterruptedException, UnknownHostException { if (tasksInited.get() || isComplete()) { return; } @@ -653,6 +655,11 @@ public class JobInProgress { checkTaskLimits(); + // Sanity check the locations so we don't create/initialize unnecessary tasks + for (TaskSplitMetaInfo split : taskSplitMetaInfo) { + NetUtils.verifyHostnames(split.getLocations()); + } + jobtracker.getInstrumentation().addWaitingMaps(getJobID(), numMapTasks); jobtracker.getInstrumentation().addWaitingReduces(getJobID(), numReduceTasks); @@ -1285,8 +1292,12 @@ public class JobInProgress { */ private Counters incrementTaskCounters(Counters counters, TaskInProgress[] tips) { - for (TaskInProgress tip : tips) { - counters.incrAllCounters(tip.getCounters()); + try { + for (TaskInProgress tip : tips) { + counters.incrAllCounters(tip.getCounters()); + } + } catch (LimitExceededException e) { + // too many user counters/groups, leaving existing counters intact. 
} return counters; } @@ -2667,25 +2678,29 @@ public class JobInProgress { status.getTaskTracker(), ttStatus.getHttpPort()); jobHistory.logEvent(tse, status.getTaskID().getJobID()); - + TaskAttemptID statusAttemptID = status.getTaskID(); if (status.getIsMap()){ MapAttemptFinishedEvent mfe = new MapAttemptFinishedEvent( - status.getTaskID(), taskType, TaskStatus.State.SUCCEEDED.toString(), + statusAttemptID, taskType, TaskStatus.State.SUCCEEDED.toString(), status.getMapFinishTime(), status.getFinishTime(), trackerHostname, status.getStateString(), - new org.apache.hadoop.mapreduce.Counters(status.getCounters())); + new org.apache.hadoop.mapreduce.Counters(status.getCounters()), + tip.getSplits(statusAttemptID).burst() + ); jobHistory.logEvent(mfe, status.getTaskID().getJobID()); }else{ ReduceAttemptFinishedEvent rfe = new ReduceAttemptFinishedEvent( - status.getTaskID(), taskType, TaskStatus.State.SUCCEEDED.toString(), + statusAttemptID, taskType, TaskStatus.State.SUCCEEDED.toString(), status.getShuffleFinishTime(), status.getSortFinishTime(), status.getFinishTime(), trackerHostname, status.getStateString(), - new org.apache.hadoop.mapreduce.Counters(status.getCounters())); + new org.apache.hadoop.mapreduce.Counters(status.getCounters()), + tip.getSplits(statusAttemptID).burst() + ); jobHistory.logEvent(rfe, status.getTaskID().getJobID()); @@ -2738,6 +2753,9 @@ public class JobInProgress { retireMap(tip); if ((finishedMapTasks + failedMapTIPs) == (numMapTasks)) { this.status.setMapProgress(1.0f); + if (canLaunchJobCleanupTask()) { + checkCountersLimitsOrFail(); + } } } else { runningReduceTasks -= 1; @@ -2750,6 +2768,9 @@ public class JobInProgress { retireReduce(tip); if ((finishedReduceTasks + failedReduceTIPs) == (numReduceTasks)) { this.status.setReduceProgress(1.0f); + if (canLaunchJobCleanupTask()) { + checkCountersLimitsOrFail(); + } } } decrementSpeculativeCount(wasSpeculating, tip); @@ -2759,6 +2780,19 @@ public class JobInProgress { } return true; } + + /* 
+ * Add up the counters and fail the job if they exceed the limits. + * Make sure we do not recalculate the counters after we fail the job. + * Currently this is taken care of by terminateJob() since it does not + * calculate the counters. + */ + private void checkCountersLimitsOrFail() { + Counters counters = getCounters(); + if (counters.limits().violation() != null) { + jobtracker.failJob(this); + } + } private void updateTaskTrackerStats(TaskInProgress tip, TaskTrackerStatus ttStatus, Map trackerStats, DataStatistics overallStats) { @@ -3165,12 +3199,16 @@ public class JobInProgress { taskid, taskType, startTime, taskTrackerName, taskTrackerPort); jobHistory.logEvent(tse, taskid.getJobID()); + + ProgressSplitsBlock splits = tip.getSplits(taskStatus.getTaskID()); - TaskAttemptUnsuccessfulCompletionEvent tue = - new TaskAttemptUnsuccessfulCompletionEvent(taskid, - taskType, taskStatus.getRunState().toString(), - finishTime, - taskTrackerHostName, diagInfo); + TaskAttemptUnsuccessfulCompletionEvent tue = + new TaskAttemptUnsuccessfulCompletionEvent + (taskid, + taskType, taskStatus.getRunState().toString(), + finishTime, + taskTrackerHostName, diagInfo, + splits.burst()); jobHistory.logEvent(tue, taskid.getJobID()); // After this, try to assign tasks with the one after this, so that Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobTracker.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobTracker.java?rev=1158072&r1=1158071&r2=1158072&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobTracker.java (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/JobTracker.java Tue Aug 16 00:37:15 2011 @@ -2778,7 +2778,8 @@ public class JobTracker implements MRCon */ synchronized boolean processHeartbeat(
TaskTrackerStatus trackerStatus, - boolean initialContact) { + boolean initialContact) + throws UnknownHostException { getInstrumentation().heartbeat(); Modified: hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/LocalJobRunner.java URL: http://svn.apache.org/viewvc/hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/LocalJobRunner.java?rev=1158072&r1=1158071&r2=1158072&view=diff ============================================================================== --- hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/LocalJobRunner.java (original) +++ hadoop/common/branches/HDFS-1623/mapreduce/src/java/org/apache/hadoop/mapred/LocalJobRunner.java Tue Aug 16 00:37:15 2011 @@ -240,7 +240,7 @@ public class LocalJobRunner implements C getShortUserName()); TaskRunner.setupChildMapredLocalDirs(map, localConf); - MapOutputFile mapOutput = new MapOutputFile(); + MapOutputFile mapOutput = new MROutputFiles(); mapOutput.setConf(localConf); mapOutputFiles.put(mapId, mapOutput); @@ -404,7 +404,7 @@ public class LocalJobRunner implements C if (!this.isInterrupted()) { TaskAttemptID mapId = mapIds.get(i); Path mapOut = mapOutputFiles.get(mapId).getOutputFile(); - MapOutputFile localOutputFile = new MapOutputFile(); + MapOutputFile localOutputFile = new MROutputFiles(); localOutputFile.setConf(localConf); Path reduceIn = localOutputFile.getInputFileForWrite(mapId.getTaskID(),
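As an aside on the Counters.java hunk above: its javadoc documents the escaped-compact format {(groupName)(group-displayName)[(counterName)(displayName)(value)][]*}*. The following is a minimal standalone Java sketch of what one group holding one counter serializes to in that shape. It is illustrative only, not the Hadoop implementation: the class and method names are invented, and the escape rule is an assumption loosely mirroring what Counters.escape() does through StringUtils.escapeString.

```java
// Illustrative sketch of the escaped-compact counter format documented in
// Counters.java: {(groupName)(group-displayName)[(counterName)(displayName)(value)][]*}*
// Not the Hadoop implementation; names and escape rule are assumptions.
public class CompactSketch {

  // Backslash-escape the counter delimiters {, }, [, ], (, ) inside names
  // (assumed to approximate Counters.escape() via StringUtils).
  static String escape(String s) {
    return s.replaceAll("([\\{\\}\\[\\]\\(\\)])", "\\\\$1");
  }

  // Serialize a single group containing a single counter in the documented shape.
  static String compact(String group, String groupDisp,
                        String counter, String counterDisp, long value) {
    return "{(" + escape(group) + ")(" + escape(groupDisp) + ")"
        + "[(" + escape(counter) + ")(" + escape(counterDisp) + ")("
        + value + ")]}";
  }

  public static void main(String[] args) {
    System.out.println(compact("FileSystemCounters", "FileSystemCounters",
                               "FILE_BYTES_READ", "FILE_BYTES_READ", 2916));
  }
}
```

A real Counters object repeats the [...] segment per counter and the {...} segment per group, which is what the trailing * markers in the javadoc denote; fromEscapedCompactString() walks those nested blocks back out with getBlock().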