Date: Tue, 21 Dec 2010 04:21:02 +0000 (UTC)
From: Apache Hudson Server
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Message-ID: <2039606479.37051292905263750.JavaMail.hudson@aegis>
In-Reply-To: <1575284256.9571290618113029.JavaMail.hudson@aegis>
References: <1575284256.9571290618113029.JavaMail.hudson@aegis>
Subject: Hadoop-Hdfs-22-branch - Build # 3 - Still Failing
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

See https://hudson.apache.org/hudson/job/Hadoop-Hdfs-22-branch/3/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2908 lines...]
    [junit] Running org.apache.hadoop.fs.permission.TestStickyBit
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 9.246 sec
    [junit] Running org.apache.hadoop.hdfs.TestBlockMissingException
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 16.011 sec
    [junit] Running org.apache.hadoop.hdfs.TestByteRangeInputStream
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.147 sec
    [junit] Running org.apache.hadoop.hdfs.TestClientBlockVerification
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 2.957 sec
    [junit] Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.506 sec
    [junit] Running org.apache.hadoop.hdfs.TestCrcCorruption
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 14.056 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.734 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSClientRetries
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 46.754 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSPermission
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 16.733 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSRemove
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 14.218 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSStartupVersions
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 17.763 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSUpgrade
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 24.731 sec
    [junit] Running org.apache.hadoop.hdfs.TestDFSUtil
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.143 sec
    [junit] Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 245.858 sec
    [junit] Running org.apache.hadoop.hdfs.TestDatanodeConfig
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 2.453 sec
    [junit] Running org.apache.hadoop.hdfs.TestDatanodeDeath
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 113.693 sec
    [junit] Running org.apache.hadoop.hdfs.TestDatanodeRegistration
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.742 sec
    [junit] Running org.apache.hadoop.hdfs.TestDecommission
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 32.233 sec
    [junit] Running org.apache.hadoop.hdfs.TestDeprecatedKeys
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.269 sec
    [junit] Running org.apache.hadoop.hdfs.TestDfsOverAvroRpc
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 3.394 sec
    [junit] Running org.apache.hadoop.hdfs.TestFileAppend4
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 9.422 sec
    [junit] Running org.apache.hadoop.hdfs.TestFileConcurrentReader
    [junit] Tests run: 7, Failures: 0, Errors: 2, Time elapsed: 16.19 sec
    [junit] Test org.apache.hadoop.hdfs.TestFileConcurrentReader FAILED
    [junit] Running org.apache.hadoop.hdfs.TestFileCreation
    [junit] Tests run: 12, Failures: 0, Errors: 0, Time elapsed: 46.712 sec
    [junit] Running org.apache.hadoop.hdfs.TestFileCreationClient
Build timed out. Aborting
/tmp/hudson1283337574971406906.sh: line 2: 12969 Terminated  bash ${WORKSPACE}/nightly/commitBuild.sh
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################

4 tests failed.

REGRESSION:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransfer

Error Message:
Too many open files

Stack Trace:
java.io.IOException: Too many open files
    at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
    at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:68)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:52)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
    at java.nio.channels.Selector.open(Selector.java:209)
    at org.apache.hadoop.ipc.Server$Responder.<init>(Server.java:602)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1501)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:394)
    at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:331)
    at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:291)
    at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:47)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:382)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:416)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:507)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:281)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:263)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1561)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1504)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1471)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:614)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:448)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:176)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:168)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)

FAILED:  TEST-org.apache.hadoop.hdfs.TestFileCreationClient.xml.

Error Message:

Stack Trace:
Test report file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/test/TEST-org.apache.hadoop.hdfs.TestFileCreationClient.xml was length 0
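The "Too many open files" regression above points at file-descriptor exhaustion on the build slave: every MiniDFSCluster restart opens selectors and sockets, and once earlier tests leak enough of them, the epollCreate call in Selector.open fails. As a rough way to confirm which it is, here is a minimal diagnostic sketch, not part of the build output or the Hadoop tree (the class name FdUsageCheck is hypothetical), that logs descriptor usage on a Unix HotSpot JVM:

    import java.lang.management.ManagementFactory;
    import com.sun.management.UnixOperatingSystemMXBean;

    // Hypothetical diagnostic helper: prints how many file descriptors the
    // JVM currently holds versus the process limit, so a leak across
    // MiniDFSCluster restarts shows up as a steadily climbing count.
    public class FdUsageCheck {
        public static void logFdUsage(String where) {
            Object os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
                System.out.printf("%s: open fds %d of max %d%n", where,
                    unix.getOpenFileDescriptorCount(),
                    unix.getMaxFileDescriptorCount());
            }
        }
    }

Calling FdUsageCheck.logFdUsage from the test's setUp, before MiniDFSCluster$Builder.build, would distinguish a limit that is simply too low on the slave (fixable with a larger ulimit -n) from a count that grows with each test, which would indicate a leak in the tests or the cluster teardown.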
FAILED:  org.apache.hadoop.hdfs.TestFileConcurrentReader.testUnfinishedBlockCRCErrorNormalTransferVerySmallWrite

Error Message:
Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/test/data/dfs/name1. The directory is already locked.

Stack Trace:
java.io.IOException: Cannot lock storage /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/test/data/dfs/name1. The directory is already locked.
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:615)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1332)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1350)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1403)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:201)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:435)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:176)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:71)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:168)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.init(TestFileConcurrentReader.java:88)
    at org.apache.hadoop.hdfs.TestFileConcurrentReader.setUp(TestFileConcurrentReader.java:73)

FAILED:  org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore

Error Message:
Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 662125b8ea1a49748cce872485a002e8 but expecting 2f1956eb4bf8cec2a59e88fb7037f3d7

Stack Trace:
java.io.IOException: Image file /grid/0/hudson/hudson-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build/test/data/dfs/secondary/current/fsimage is corrupt with MD5 checksum of 662125b8ea1a49748cce872485a002e8 but expecting 2f1956eb4bf8cec2a59e88fb7037f3d7
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:1062)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.doMerge(SecondaryNameNode.java:678)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$CheckpointStorage.access$500(SecondaryNameNode.java:583)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doMerge(SecondaryNameNode.java:460)
    at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:424)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.__CLR3_0_2dn2tm410tm(TestStorageRestore.java:316)
    at org.apache.hadoop.hdfs.server.namenode.TestStorageRestore.testStorageRestore(TestStorageRestore.java:286)
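The TestStorageRestore failure is a whole-file checksum mismatch: the fsimage the secondary merged hashes to one MD5 while the recorded value expects another, meaning the bytes on disk changed after the expected digest was saved. For reference, a self-contained sketch of the same style of whole-file MD5 validation, assuming nothing about FSImage internals (the helper name Md5File is hypothetical, not Hadoop code):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    // Hypothetical helper: computes the hex MD5 digest of a file by streaming
    // its contents through a MessageDigest, the general shape of the check
    // that fails above.
    public class Md5File {
        public static String md5Hex(String path)
                throws IOException, NoSuchAlgorithmException {
            MessageDigest md = MessageDigest.getInstance("MD5");
            try (FileInputStream in = new FileInputStream(path)) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    md.update(buf, 0, n);  // feed the file to the digest incrementally
                }
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        }
    }

Comparing md5Hex(path) of the secondary's fsimage against the recorded digest reproduces this kind of validation; a mismatch like the one above suggests the image was truncated or partially rewritten, which would be consistent with the lock contention and build timeout seen earlier in this run.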