Date: Fri, 2 Nov 2012 12:51:22 +0000 (UTC)
From: Apache Jenkins Server
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Subject: Build failed in Jenkins: Hadoop-Hdfs-trunk #1214
X-Jenkins-Job: Hadoop-Hdfs-trunk
X-Jenkins-Result: FAILURE

See Changes:

[jlowe] MAPREDUCE-4729. job history UI not showing all job attempts. Contributed by Vinod Kumar Vavilapalli

[bobby] MAPREDUCE-4746. The MR Application Master does not have a config to set environment variables (Rob Parker via bobby)

------------------------------------------
[...truncated 11285 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.099 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.161 sec
Running org.apache.hadoop.hdfs.util.TestGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.596 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.059 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.6 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.804 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.907 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.037 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.13 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.034 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.08 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.294 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.283 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.288 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.637 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.337 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.782 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.154 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.056 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.941 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.314 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.05 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.791 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.279 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.337 sec

Results :

Tests run: 1610, Failures: 0, Errors: 0, Skipped: 4

[INFO]
[INFO] --- maven-antrun-plugin:1.6:run (native_tests) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
[exec] 2012-11-02 12:51:13,929 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:(319)) - starting cluster with 1 namenodes.
[exec] Formatting using clusterid: testClusterID
[exec] 2012-11-02 12:51:14,188 INFO util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
[exec] 2012-11-02 12:51:14,189 WARN conf.Configuration (Configuration.java:warnOnceIfDeprecated(823)) - hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
[exec] 2012-11-02 12:51:14,190 INFO blockmanagement.DatanodeManager (DatanodeManager.java:(188)) - dfs.block.invalidate.limit=1000
[exec] 2012-11-02 12:51:14,210 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
[exec] 2012-11-02 12:51:14,210 INFO blockmanagement.BlockManager (BlockManager.java:(280)) - defaultReplication = 1
[exec] 2012-11-02 12:51:14,210 INFO blockmanagement.BlockManager (BlockManager.java:(281)) - maxReplication = 512
[exec] 2012-11-02 12:51:14,210 INFO blockmanagement.BlockManager (BlockManager.java:(282)) - minReplication = 1
[exec] 2012-11-02 12:51:14,210 INFO blockmanagement.BlockManager (BlockManager.java:(283)) - maxReplicationStreams = 2
[exec] 2012-11-02 12:51:14,210 INFO blockmanagement.BlockManager (BlockManager.java:(284)) - shouldCheckForEnoughRacks = false
[exec] 2012-11-02 12:51:14,210 INFO blockmanagement.BlockManager (BlockManager.java:(285)) - replicationRecheckInterval = 3000
[exec] 2012-11-02 12:51:14,211 INFO blockmanagement.BlockManager (BlockManager.java:(286)) - encryptDataTransfer = false
[exec] 2012-11-02 12:51:14,211 INFO namenode.FSNamesystem (FSNamesystem.java:(473)) - fsOwner = jenkins (auth:SIMPLE)
[exec] 2012-11-02 12:51:14,211 INFO namenode.FSNamesystem (FSNamesystem.java:(474)) - supergroup = supergroup
[exec] 2012-11-02 12:51:14,211 INFO namenode.FSNamesystem (FSNamesystem.java:(475)) - isPermissionEnabled = true
[exec] 2012-11-02 12:51:14,211 INFO namenode.FSNamesystem (FSNamesystem.java:(489)) - HA Enabled: false
[exec] 2012-11-02 12:51:14,216 INFO namenode.FSNamesystem (FSNamesystem.java:(521)) - Append Enabled: true
[exec] 2012-11-02 12:51:14,423 INFO namenode.NameNode (FSDirectory.java:(143)) - Caching file names occuring more than 10 times
[exec] 2012-11-02 12:51:14,425 INFO namenode.FSNamesystem (FSNamesystem.java:(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[exec] 2012-11-02 12:51:14,425 INFO namenode.FSNamesystem (FSNamesystem.java:(3754)) - dfs.namenode.safemode.min.datanodes = 0
[exec] 2012-11-02 12:51:14,425 INFO namenode.FSNamesystem (FSNamesystem.java:(3755)) - dfs.namenode.safemode.extension = 0
[exec] 2012-11-02 12:51:15,526 INFO common.Storage (NNStorage.java:format(525)) - Storage directory has been successfully formatted.
[exec] 2012-11-02 12:51:15,532 INFO common.Storage (NNStorage.java:format(525)) - Storage directory has been successfully formatted.
[exec] 2012-11-02 12:51:15,543 INFO namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file using no compression
[exec] 2012-11-02 12:51:15,543 INFO namenode.FSImage (FSImageFormat.java:save(494)) - Saving image file using no compression
[exec] 2012-11-02 12:51:15,553 INFO namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
[exec] 2012-11-02 12:51:15,556 INFO namenode.FSImage (FSImageFormat.java:save(521)) - Image file of size 122 saved in 0 seconds.
[exec] 2012-11-02 12:51:15,569 INFO namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(171)) - Going to retain 1 images with txid >= 0
[exec] 2012-11-02 12:51:15,617 WARN impl.MetricsConfig (MetricsConfig.java:loadFirst(123)) - Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
[exec] 2012-11-02 12:51:15,672 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(341)) - Scheduled snapshot period at 10 second(s).
[exec] 2012-11-02 12:51:15,672 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183)) - NameNode metrics system started
[exec] 2012-11-02 12:51:15,685 INFO util.HostsFileReader (HostsFileReader.java:refresh(82)) - Refreshing hosts (include/exclude) list
[exec] 2012-11-02 12:51:15,685 INFO blockmanagement.DatanodeManager (DatanodeManager.java:(188)) - dfs.block.invalidate.limit=1000
[exec] 2012-11-02 12:51:15,699 INFO blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294)) - dfs.block.access.token.enable=false
[exec] 2012-11-02 12:51:15,699 INFO blockmanagement.BlockManager (BlockManager.java:(280)) - defaultReplication = 1
[exec] 2012-11-02 12:51:15,700 INFO blockmanagement.BlockManager (BlockManager.java:(281)) - maxReplication = 512
[exec] 2012-11-02 12:51:15,700 INFO blockmanagement.BlockManager (BlockManager.java:(282)) - minReplication = 1
[exec] 2012-11-02 12:51:15,700 INFO blockmanagement.BlockManager (BlockManager.java:(283)) - maxReplicationStreams = 2
[exec] 2012-11-02 12:51:15,700 INFO blockmanagement.BlockManager (BlockManager.java:(284)) - shouldCheckForEnoughRacks = false
[exec] 2012-11-02 12:51:15,700 INFO blockmanagement.BlockManager (BlockManager.java:(285)) - replicationRecheckInterval = 3000
[exec] 2012-11-02 12:51:15,700 INFO blockmanagement.BlockManager (BlockManager.java:(286)) - encryptDataTransfer = false
[exec] 2012-11-02 12:51:15,700 INFO namenode.FSNamesystem (FSNamesystem.java:(473)) - fsOwner = jenkins (auth:SIMPLE)
[exec] 2012-11-02 12:51:15,700 INFO namenode.FSNamesystem (FSNamesystem.java:(474)) - supergroup = supergroup
[exec] 2012-11-02 12:51:15,700 INFO namenode.FSNamesystem (FSNamesystem.java:(475)) - isPermissionEnabled = true
[exec] 2012-11-02 12:51:15,701 INFO namenode.FSNamesystem (FSNamesystem.java:(489)) - HA Enabled: false
[exec] 2012-11-02 12:51:15,701 INFO namenode.FSNamesystem (FSNamesystem.java:(521)) - Append Enabled: true
[exec] 2012-11-02 12:51:15,701 INFO namenode.NameNode (FSDirectory.java:(143)) - Caching file names occuring more than 10 times
[exec] 2012-11-02 12:51:15,702 INFO namenode.FSNamesystem (FSNamesystem.java:(3753)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
[exec] 2012-11-02 12:51:15,702 INFO namenode.FSNamesystem (FSNamesystem.java:(3754)) - dfs.namenode.safemode.min.datanodes = 0
[exec] 2012-11-02 12:51:15,702 INFO namenode.FSNamesystem (FSNamesystem.java:(3755)) - dfs.namenode.safemode.extension = 0
[exec] 2012-11-02 12:51:15,707 INFO common.Storage (Storage.java:tryLock(662)) - Lock on acquired by nodename 19749@asf005.sp2.ygridcore.net
[exec] 2012-11-02 12:51:15,712 INFO common.Storage (Storage.java:tryLock(662)) - Lock on acquired by nodename 19749@asf005.sp2.ygridcore.net
[exec] 2012-11-02 12:51:15,715 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in
[exec] 2012-11-02 12:51:15,715 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287)) - Recovering unfinalized segments in
[exec] 2012-11-02 12:51:15,716 INFO namenode.FSImage (FSImage.java:loadFSImage(611)) - No edit log streams selected.
[exec] 2012-11-02 12:51:15,718 INFO namenode.FSImage (FSImageFormat.java:load(167)) - Loading image file using no compression
[exec] 2012-11-02 12:51:15,718 INFO namenode.FSImage (FSImageFormat.java:load(170)) - Number of files = 1
[exec] 2012-11-02 12:51:15,719 INFO namenode.FSImage (FSImageFormat.java:loadFilesUnderConstruction(358)) - Number of files under construction = 0
[exec] 2012-11-02 12:51:15,719 INFO namenode.FSImage (FSImageFormat.java:load(192)) - Image file of size 122 loaded in 0 seconds.
[exec] 2012-11-02 12:51:15,719 INFO namenode.FSImage (FSImage.java:loadFSImage(754)) - Loaded image for txid 0 from
[exec] 2012-11-02 12:51:15,723 INFO namenode.FSEditLog (FSEditLog.java:startLogSegment(949)) - Starting log segment at 1
[exec] 2012-11-02 12:51:16,034 INFO namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
[exec] 2012-11-02 12:51:16,035 INFO namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(441)) - Finished loading FSImage in 333 msecs
[exec] 2012-11-02 12:51:16,165 INFO ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 37009
[exec] 2012-11-02 12:51:16,186 INFO namenode.FSNamesystem (FSNamesystem.java:registerMBean(4615)) - Registered FSNamesystemState MBean
[exec] 2012-11-02 12:51:16,200 INFO namenode.FSNamesystem (FSNamesystem.java:getCompleteBlocksTotal(4307)) - Number of blocks under construction: 0
[exec] 2012-11-02 12:51:16,201 INFO namenode.FSNamesystem (FSNamesystem.java:initializeReplQueues(3858)) - initializing replication queues
[exec] 2012-11-02 12:51:16,212 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2205)) - Total number of blocks = 0
[exec] 2012-11-02 12:51:16,212 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2206)) - Number of invalid blocks = 0
[exec] 2012-11-02 12:51:16,213 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2207)) - Number of under-replicated blocks = 0
[exec] 2012-11-02 12:51:16,213 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2208)) - Number of over-replicated blocks = 0
[exec] 2012-11-02 12:51:16,213 INFO blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2210)) - Number of blocks being written = 0
[exec] 2012-11-02 12:51:16,213 INFO hdfs.StateChange (FSNamesystem.java:initializeReplQueues(3863)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 12 msec
[exec] 2012-11-02 12:51:16,213 INFO hdfs.StateChange (FSNamesystem.java:leave(3835)) - STATE* Leaving safe mode after 0 secs
[exec] 2012-11-02 12:51:16,213 INFO hdfs.StateChange (FSNamesystem.java:leave(3845)) - STATE* Network topology has 0 racks and 0 datanodes
[exec] 2012-11-02 12:51:16,213 INFO hdfs.StateChange (FSNamesystem.java:leave(3848)) - STATE* UnderReplicatedBlocks has 0 blocks
[exec] 2012-11-02 12:51:16,266 INFO mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
[exec] 2012-11-02 12:51:16,321 INFO http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
[exec] 2012-11-02 12:51:16,323 INFO http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
[exec] 2012-11-02 12:51:16,323 INFO http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[exec] 2012-11-02 12:51:16,326 INFO http.HttpServer (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
[exec] 2012-11-02 12:51:16,332 INFO http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 45522
[exec] 2012-11-02 12:51:16,332 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
[exec] 2012-11-02 12:51:16,490 INFO mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:45522
[exec] 2012-11-02 12:51:16,491 INFO namenode.NameNode (NameNode.java:setHttpServerAddress(395)) - Web-server up at: localhost:45522
[exec] 2012-11-02 12:51:16,491 INFO ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
[exec] 2012-11-02 12:51:16,491 INFO ipc.Server (Server.java:run(648)) - IPC Server listener on 37009: starting
[exec] 2012-11-02 12:51:16,494 INFO namenode.NameNode (NameNode.java:startCommonServices(492)) - NameNode RPC up at: localhost/127.0.0.1:37009
[exec] 2012-11-02 12:51:16,494 INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(647)) - Starting services required for active state
[exec] 2012-11-02 12:51:16,496 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1145)) - Starting DataNode 0 with dfs.datanode.data.dir: :
[exec] 2012-11-02 12:51:16,513 WARN util.NativeCodeLoader (NativeCodeLoader.java:(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[exec] 2012-11-02 12:51:16,523 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151)) - DataNode metrics system started (again)
[exec] 2012-11-02 12:51:16,524 INFO datanode.DataNode (DataNode.java:(313)) - Configured hostname is 127.0.0.1
[exec] 2012-11-02 12:51:16,529 INFO datanode.DataNode (DataNode.java:initDataXceiver(539)) - Opened streaming server at /127.0.0.1:45299
[exec] 2012-11-02 12:51:16,531 INFO datanode.DataNode (DataXceiverServer.java:(77)) - Balancing bandwith is 1048576 bytes/s
[exec] 2012-11-02 12:51:16,532 INFO http.HttpServer (HttpServer.java:addGlobalFilter(505)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
[exec] 2012-11-02 12:51:16,532 INFO http.HttpServer (HttpServer.java:addFilter(483)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
[exec] 2012-11-02 12:51:16,533 INFO http.HttpServer (HttpServer.java:addFilter(490)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
[exec] 2012-11-02 12:51:16,534 INFO datanode.DataNode (DataNode.java:startInfoServer(365)) - Opened info server at localhost:0
[exec] 2012-11-02 12:51:16,536 INFO datanode.DataNode (WebHdfsFileSystem.java:isEnabled(142)) - dfs.webhdfs.enabled = false
[exec] 2012-11-02 12:51:16,536 INFO http.HttpServer (HttpServer.java:start(663)) - Jetty bound to port 44178
[exec] 2012-11-02 12:51:16,536 INFO mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
[exec] 2012-11-02 12:51:16,676 INFO mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:44178
[exec] 2012-11-02 12:51:16,683 INFO ipc.Server (Server.java:run(524)) - Starting Socket Reader #1 for port 53374
[exec] 2012-11-02 12:51:16,688 INFO datanode.DataNode (DataNode.java:initIpcServer(436)) - Opened IPC server at /127.0.0.1:53374
[exec] 2012-11-02 12:51:16,695 INFO datanode.DataNode (BlockPoolManager.java:refreshNamenodes(148)) - Refresh request received for nameservices: null
[exec] 2012-11-02 12:51:16,698 INFO datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(193)) - Starting BPOfferServices for nameservices:
[exec] 2012-11-02 12:51:16,705 INFO datanode.DataNode (BPServiceActor.java:run(658)) - Block pool (storage id unknown) service to localhost/127.0.0.1:37009 starting to offer service
[exec] 2012-11-02 12:51:16,709 INFO ipc.Server (Server.java:run(817)) - IPC Server Responder: starting
[exec] 2012-11-02 12:51:16,709 INFO ipc.Server (Server.java:run(648)) - IPC Server listener on 53374: starting
[exec] 2012-11-02 12:51:17,124 INFO common.Storage (Storage.java:tryLock(662)) - Lock on acquired by nodename 19749@asf005.sp2.ygridcore.net
[exec] 2012-11-02 12:51:17,125 INFO common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory is not formatted
[exec] 2012-11-02 12:51:17,125 INFO common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
[exec] 2012-11-02 12:51:17,130 INFO common.Storage (Storage.java:tryLock(662)) - Lock on acquired by nodename 19749@asf005.sp2.ygridcore.net
[exec] 2012-11-02 12:51:17,130 INFO common.Storage (DataStorage.java:recoverTransitionRead(162)) - Storage directory is not formatted
[exec] 2012-11-02 12:51:17,130 INFO common.Storage (DataStorage.java:recoverTransitionRead(163)) - Formatting ...
[exec] 2012-11-02 12:51:17,167 INFO common.Storage (Storage.java:lock(626)) - Locking is disabled
[exec] 2012-11-02 12:51:17,167 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory is not formatted.
[exec] 2012-11-02 12:51:17,167 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
[exec] 2012-11-02 12:51:17,167 INFO common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-714465857-67.195.138.27-1351860674434 directory
[exec] 2012-11-02 12:51:17,169 INFO common.Storage (Storage.java:lock(626)) - Locking is disabled
[exec] 2012-11-02 12:51:17,170 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116)) - Storage directory is not formatted.
[exec] 2012-11-02 12:51:17,170 INFO common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117)) - Formatting ...
[exec] 2012-11-02 12:51:17,170 INFO common.Storage (BlockPoolSliceStorage.java:format(171)) - Formatting block pool BP-714465857-67.195.138.27-1351860674434 directory
[exec] 2012-11-02 12:51:17,173 INFO datanode.DataNode (DataNode.java:initStorage(852)) - Setting up storage: nsid=71175640;bpid=BP-714465857-67.195.138.27-1351860674434;lv=-40;nsInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0;bpid=BP-714465857-67.195.138.27-1351860674434
[exec] 2012-11-02 12:51:17,183 INFO impl.FsDatasetImpl (FsDatasetImpl.java:(197)) - Added volume -
[exec] 2012-11-02 12:51:17,183 INFO impl.FsDatasetImpl (FsDatasetImpl.java:(197)) - Added volume -
[exec] 2012-11-02 12:51:17,194 INFO impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(1209)) - Registered FSDatasetState MBean
[exec] 2012-11-02 12:51:17,198 INFO datanode.DirectoryScanner (DirectoryScanner.java:start(243)) - Periodic Directory Tree Verification scan starting at 1351879520198 with interval 21600000
[exec] 2012-11-02 12:51:17,199 INFO impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(1577)) - Adding block pool BP-714465857-67.195.138.27-1351860674434
[exec] 2012-11-02 12:51:17,206 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1833)) - Waiting for cluster to become active
[exec] 2012-11-02 12:51:17,207 INFO datanode.DataNode (BPServiceActor.java:register(618)) - Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 beginning handshake with NN
[exec] 2012-11-02 12:51:17,209 INFO hdfs.StateChange (DatanodeManager.java:registerDatanode(661)) - BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-743789385-67.195.138.27-45299-1351860677132, infoPort=44178, ipcPort=53374, storageInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0) storage DS-743789385-67.195.138.27-45299-1351860677132
[exec] 2012-11-02 12:51:17,212 INFO net.NetworkTopology (NetworkTopology.java:add(388)) - Adding a new node: /default-rack/127.0.0.1:45299
[exec] 2012-11-02 12:51:17,213 INFO datanode.DataNode (BPServiceActor.java:register(631)) - Block pool Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 successfully registered with NN
[exec] 2012-11-02 12:51:17,213 INFO datanode.DataNode (BPServiceActor.java:offerService(499)) - For namenode localhost/127.0.0.1:37009 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
[exec] 2012-11-02 12:51:17,217 INFO datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419)) - Namenode Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009 trying to claim ACTIVE state with txid=1
[exec] 2012-11-02 12:51:17,217 INFO datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431)) - Acknowledging ACTIVE Namenode Block pool BP-714465857-67.195.138.27-1351860674434 (storage id DS-743789385-67.195.138.27-45299-1351860677132) service to localhost/127.0.0.1:37009
[exec] 2012-11-02 12:51:17,221 INFO blockmanagement.BlockManager (BlockManager.java:processReport(1526)) - BLOCK* processReport: Received first block report from 127.0.0.1:45299 after becoming active. Its block contents are no longer considered stale
[exec] 2012-11-02 12:51:17,222 INFO hdfs.StateChange (BlockManager.java:processReport(1539)) - BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-743789385-67.195.138.27-45299-1351860677132, infoPort=44178, ipcPort=53374, storageInfo=lv=-40;cid=testClusterID;nsid=71175640;c=0), blocks: 0, processing time: 1 msecs
[exec] 2012-11-02 12:51:17,223 INFO datanode.DataNode (BPServiceActor.java:blockReport(409)) - BlockReport of 0 blocks took 1 msec to generate and 5 msecs for RPC and NN processing
[exec] 2012-11-02 12:51:17,223 INFO datanode.DataNode (BPServiceActor.java:blockReport(428)) - sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@19ccb73
[exec] 2012-11-02 12:51:17,225 INFO datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:(156)) - Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-714465857-67.195.138.27-1351860674434
[exec] 2012-11-02 12:51:17,229 INFO datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248)) - Added bpid=BP-714465857-67.195.138.27-1351860674434 to blockPoolScannerMap, new size=1
[exec] 2012-11-02 12:51:17,311 INFO hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864)) - Cluster is active
[exec] #
[exec] # A fatal error has been detected by the Java Runtime Environment:
[exec] #
[exec] # SIGSEGV (0xb) at pc=0xf6e97b68, pid=19749, tid=4136773328
[exec] #
[exec] # JRE version: 6.0_26-b03
[exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
[exec] # Problematic frame:
[exec] # V [libjvm.so+0x3efb68] unsigned+0xb8
[exec] #
[exec] # An error report file with more information is saved as:
[exec] #
[exec] #
[exec] # If you would like to submit a bug report, please visit:
[exec] # http://java.sun.com/webapps/bugreport/crash.jsp
[exec] #
[exec] Aborted
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:17:35.336s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:17:36.108s
[INFO] Finished at: Fri Nov 02 12:51:17 UTC 2012
[INFO] Final Memory: 18M/478M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests) on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 134 -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4729
Updating MAPREDUCE-4746
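
For anyone digging into this outside Jenkins: the native_tests step fails with "exec returned: 134", which is 128 + 6 (SIGABRT) and matches the "Aborted" line printed after the JVM's SIGSEGV handler runs, and the crash happens right after a single-NameNode, single-DataNode MiniDFSCluster has come up. Below is a minimal, illustrative Java sketch of that kind of MiniDFSCluster startup, using the public MiniDFSCluster test API. It is not taken from the failing test: the class name MiniClusterSketch and the /smoke-test path are made up, and it assumes the hadoop-hdfs test artifact (which ships MiniDFSCluster) is on the classpath.

    // Minimal sketch: bring up the same shape of mini cluster the log above traces
    // (one NameNode, one DataNode), do a trivial operation, then shut it down.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniClusterSketch {            // hypothetical class name
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)                    // the log starts only "DataNode 0"
            .build();                           // formats NN/DN storage and starts both
        try {
          cluster.waitActive();                 // corresponds to "Cluster is active" above
          FileSystem fs = cluster.getFileSystem();
          fs.mkdirs(new Path("/smoke-test"));   // trivial sanity operation (made-up path)
          System.out.println("NameNode RPC at " + fs.getUri());
        } finally {
          cluster.shutdown();                   // tears the mini cluster down
        }
      }
    }

Running something like this under the same JDK as the slave (1.6.0_26 here) is one way to check whether the crash is specific to the libhdfs native tests or reproducible from plain Java.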