hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2713) Unit test fails on Windows: org.apache.hadoop.dfs.TestDatanodeDeath
Date Sat, 26 Jan 2008 02:13:35 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12562804#action_12562804 ]

Raghu Angadi commented on HADOOP-2713:
--------------------------------------

Can we have a description of the bug and the fix? It looks like this patch fixes HADOOP-2714 as well.


> Unit test fails on Windows: org.apache.hadoop.dfs.TestDatanodeDeath
> -------------------------------------------------------------------
>
>                 Key: HADOOP-2713
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2713
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>         Environment: Windows
>            Reporter: Mukund Madhugiri
>            Assignee: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.16.0
>
>         Attachments: TestDatanodeDeath.patch, TestDatanodeDeath.patch
>
>
> Unit test fails consistently on Windows with a timeout:
> Test: org.apache.hadoop.dfs.TestDatanodeDeath
> Here is a snippet of the console log:
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] 2008-01-25 09:10:47,841 WARN  fs.FSNamesystem (PendingReplicationBlocks.java:pendingReplicationCheck(209)) - PendingReplicationMonitor timed out block blk_2509851293741663991
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] 2008-01-25 09:10:52,839 INFO  dfs.StateChange (FSNamesystem.java:pendingTransfers(3249)) - BLOCK* NameSystem.pendingTransfer: ask 127.0.0.1:3773 to replicate blk_2509851293741663991 to datanode(s) 127.0.0.1:3767
>     [junit] 2008-01-25 09:10:53,526 INFO  dfs.DataNode (DataNode.java:transferBlocks(786)) - 127.0.0.1:3773 Starting thread to transfer block blk_2509851293741663991 to 127.0.0.1:3767
>     [junit] 2008-01-25 09:10:53,526 INFO  dfs.DataNode (DataNode.java:writeBlock(1035)) - Receiving block blk_2509851293741663991 from /127.0.0.1
>     [junit] 2008-01-25 09:10:53,526 INFO  dfs.DataNode (DataNode.java:writeBlock(1147)) - writeBlock blk_2509851293741663991 received exception java.io.IOException: Block blk_2509851293741663991 has already been started (though not completed), and thus cannot be created.
>     [junit] 2008-01-25 09:10:53,526 ERROR dfs.DataNode (DataNode.java:run(948)) - 127.0.0.1:3767:DataXceiver: java.io.IOException: Block blk_2509851293741663991 has already been started (though not completed), and thus cannot be created.
>     [junit] 	at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:638)
>     [junit] 	at org.apache.hadoop.dfs.DataNode$BlockReceiver.<init>(DataNode.java:1949)
>     [junit] 	at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:1060)
>     [junit] 	at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:925)
>     [junit] 	at java.lang.Thread.run(Thread.java:595)
>     [junit] 2008-01-25 09:10:53,526 WARN  dfs.DataNode (DataNode.java:run(2366)) - 127.0.0.1:3773:Failed to transfer blk_2509851293741663991 to 127.0.0.1:3767 got java.net.SocketException: Software caused connection abort: socket write error
>     [junit] 	at java.net.SocketOutputStream.socketWrite0(Native Method)
>     [junit] 	at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
>     [junit] 	at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
>     [junit] 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
>     [junit] 	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
>     [junit] 	at java.io.DataOutputStream.flush(DataOutputStream.java:106)
>     [junit] 	at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:1621)
>     [junit] 	at org.apache.hadoop.dfs.DataNode$DataTransfer.run(DataNode.java:2360)
>     [junit] 	at java.lang.Thread.run(Thread.java:595)
>     [junit] File simpletest.dat has 3 blocks:  The 0 block has only 2 replicas  but is expected to have 3 replicas.
>     [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
>     [junit] Test org.apache.hadoop.dfs.TestDatanodeDeath FAILED (timeout)
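
The trace shows the sequence behind the timeout: the pending replication of blk_2509851293741663991 times out, the namenode asks 127.0.0.1:3773 to re-replicate the block to 127.0.0.1:3767, and the target datanode rejects the incoming write because the block already has a started but unfinished copy there. As a rough illustration only (a hypothetical InProgressBlockTracker, not the actual FSDataset.writeToBlock code), the kind of guard that produces this exception can be sketched as:

    import java.io.IOException;
    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical sketch, not the real FSDataset: track blocks that have a
    // writer in progress and refuse to start a second writer for the same block.
    class InProgressBlockTracker {
        private final Set<Long> blocksBeingWritten = new HashSet<Long>();

        // Called when a datanode starts receiving a block (blockId is a
        // stand-in for the real Block object).
        synchronized void startWriting(long blockId) throws IOException {
            if (!blocksBeingWritten.add(blockId)) {
                throw new IOException("Block blk_" + blockId
                    + " has already been started (though not completed), "
                    + "and thus cannot be created.");
            }
        }

        // Called when the write completes or is abandoned.
        synchronized void finishWriting(long blockId) {
            blocksBeingWritten.remove(blockId);
        }
    }

Under that reading, each retried transfer arrives while the earlier write is still registered as in progress, hits the same check, and fails, which would match the repeated "only 2 replicas" messages and the eventual test timeout.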

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

