Return-Path: 
Delivered-To: apmail-hadoop-core-dev-archive@www.apache.org
Received: (qmail 75797 invoked from network); 25 Jan 2008 19:00:05 -0000
Received: from hermes.apache.org (HELO mail.apache.org) (140.211.11.2) by minotaur.apache.org with SMTP; 25 Jan 2008 19:00:05 -0000
Received: (qmail 62035 invoked by uid 500); 25 Jan 2008 18:59:52 -0000
Delivered-To: apmail-hadoop-core-dev-archive@hadoop.apache.org
Received: (qmail 62006 invoked by uid 500); 25 Jan 2008 18:59:51 -0000
Mailing-List: contact core-dev-help@hadoop.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: core-dev@hadoop.apache.org
Delivered-To: mailing list core-dev@hadoop.apache.org
Received: (qmail 61974 invoked by uid 500); 25 Jan 2008 18:59:51 -0000
Delivered-To: apmail-lucene-hadoop-dev@lucene.apache.org
Received: (qmail 61968 invoked by uid 99); 25 Jan 2008 18:59:51 -0000
Received: from nike.apache.org (HELO nike.apache.org) (192.87.106.230) by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 25 Jan 2008 10:59:51 -0800
X-ASF-Spam-Status: No, hits=-100.0 required=10.0 tests=ALL_TRUSTED
X-Spam-Check-By: apache.org
Received: from [140.211.11.4] (HELO brutus.apache.org) (140.211.11.4) by apache.org (qpsmtpd/0.29) with ESMTP; Fri, 25 Jan 2008 18:59:44 +0000
Received: from brutus (localhost [127.0.0.1]) by brutus.apache.org (Postfix) with ESMTP id 2674D714267 for ; Fri, 25 Jan 2008 10:59:36 -0800 (PST)
Message-ID: <20575876.1201287576153.JavaMail.jira@brutus>
Date: Fri, 25 Jan 2008 10:59:36 -0800 (PST)
From: "Robert Chansler (JIRA)" 
To: hadoop-dev@lucene.apache.org
Subject: [jira] Assigned: (HADOOP-2713) Unit test fails on Windows: org.apache.hadoop.dfs.TestDatanodeDeath
In-Reply-To: <16906078.1201285534895.JavaMail.jira@brutus>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-Virus-Checked: Checked by ClamAV on apache.org

    [ 
https://issues.apache.org/jira/browse/HADOOP-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Chansler reassigned HADOOP-2713:
---------------------------------------

    Assignee: Hairong Kuang  (was: Robert Chansler)

> Unit test fails on Windows: org.apache.hadoop.dfs.TestDatanodeDeath
> -------------------------------------------------------------------
>
>                 Key: HADOOP-2713
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2713
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>         Environment: Windows
>            Reporter: Mukund Madhugiri
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.16.0
>
>
> Unit test fails consistently on Windows with a timeout:
> Test: org.apache.hadoop.dfs.TestDatanodeDeath
> Here is a snippet of the console log:
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] 2008-01-25 09:10:47,841 WARN fs.FSNamesystem (PendingReplicationBlocks.java:pendingReplicationCheck(209)) - PendingReplicationMonitor timed out block blk_2509851293741663991
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] 2008-01-25 09:10:52,839 INFO dfs.StateChange (FSNamesystem.java:pendingTransfers(3249)) - BLOCK* NameSystem.pendingTransfer: ask 127.0.0.1:3773 to replicate blk_2509851293741663991 to datanode(s) 127.0.0.1:3767
> [junit] 2008-01-25 09:10:53,526 INFO dfs.DataNode (DataNode.java:transferBlocks(786)) - 127.0.0.1:3773 Starting thread to transfer block blk_2509851293741663991 to 127.0.0.1:3767
> [junit] 2008-01-25 09:10:53,526 INFO dfs.DataNode (DataNode.java:writeBlock(1035)) - Receiving block blk_2509851293741663991 from /127.0.0.1
> [junit] 2008-01-25 09:10:53,526 INFO dfs.DataNode (DataNode.java:writeBlock(1147)) - writeBlock blk_2509851293741663991 received exception java.io.IOException: Block blk_2509851293741663991 has already been started (though not completed), and thus cannot be created.
> [junit] 2008-01-25 09:10:53,526 ERROR dfs.DataNode (DataNode.java:run(948)) - 127.0.0.1:3767:DataXceiver: java.io.IOException: Block blk_2509851293741663991 has already been started (though not completed), and thus cannot be created.
> [junit] 	at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:638)
> [junit] 	at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1949)
> [junit] 	at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1060)
> [junit] 	at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:925)
> [junit] 	at java.lang.Thread.run(Thread.java:595)
> [junit] 2008-01-25 09:10:53,526 WARN dfs.DataNode (DataNode.java:run(2366)) - 127.0.0.1:3773:Failed to transfer blk_2509851293741663991 to 127.0.0.1:3767 got java.net.SocketException: Software caused connection abort: socket write error
> [junit] 	at java.net.SocketOutputStream.socketWrite0(Native Method)
> [junit] 	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
> [junit] 	at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
> [junit] 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
> [junit] 	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
> [junit] 	at java.io.DataOutputStream.flush(DataOutputStream.java:106)
> [junit] 	at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:1621)
> [junit] 	at org.apache.hadoop.dfs.DataNode$DataTransfer.run(DataNode.java:2360)
> [junit] 	at java.lang.Thread.run(Thread.java:595)
> [junit] File simpletest.dat has 3 blocks: The 0 block has only 2 replicas but is expected to have 3 replicas.
> [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
> [junit] Test org.apache.hadoop.dfs.TestDatanodeDeath FAILED (timeout)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.