hadoop-hdfs-dev mailing list archives

From: Apache Hudson Server <hud...@hudson.zones.apache.org>
Subject: Build failed in Hudson: Hadoop-Hdfs-trunk #43
Date: Sat, 08 Aug 2009 17:46:44 GMT
See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/43/changes

Changes:

[szetszwo] HDFS-451. Add fault injection tests, Pipeline_Fi_06,07,14,15, for DataTransferProtocol.

[szetszwo] Update hadoop-core-0.21.0-dev.jar and hadoop-core-test-0.21.0-dev.jar.
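
As context for the log below: the Pipeline_Fi tests added in HDFS-451 drive a DataNode write pipeline through injected failures. The stack traces in the log show the shape of the mechanism: an AspectJ advice woven into BlockReceiver.receivePacket calls FiTestUtil$ActionContainer.run, which invokes a configured action such as DataTransferTestUtil$DoosAction that throws a simulated DiskOutOfSpaceException at a chosen pipeline index. The following is a hedged, self-contained sketch of that pattern; the class and method shapes here are illustrative, not the real Hadoop FI classes.

```java
import java.io.IOException;

public class FiSketch {
    /** A fault-injection action keyed by pipeline position (assumed interface). */
    interface Action {
        void run(int index) throws IOException;
    }

    /** Container the woven advice calls on each receivePacket; no-op when unset. */
    static class ActionContainer {
        private Action action;
        void set(Action a) { action = a; }
        void run(int index) throws IOException {
            if (action != null) action.run(index);
        }
    }

    /** Throws once the pipeline position matches, mimicking a DoosAction. */
    static class DoosAction implements Action {
        private final int targetIndex;
        DoosAction(int targetIndex) { this.targetIndex = targetIndex; }
        @Override
        public void run(int index) throws IOException {
            if (index == targetIndex) {
                throw new IOException("FI: simulated DiskOutOfSpaceException, index=" + index);
            }
        }
    }

    public static void main(String[] args) {
        ActionContainer fi = new ActionContainer();
        fi.set(new DoosAction(1));             // fail the second datanode, as in testPipelineFi15
        for (int i = 0; i < 3; i++) {          // datanodes 0, 1, 2 in the pipeline
            try {
                fi.run(i);
                System.out.println("datanode " + i + ": packet received");
            } catch (IOException e) {
                System.out.println("datanode " + i + ": " + e.getMessage());
            }
        }
    }
}
```

In the real tests the container is invoked from an aspect rather than a loop, so production code stays untouched while the test chooses which pipeline position fails.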

------------------------------------------
[...truncated 312630 lines...]
    [junit] 2009-08-08 17:14:19,643 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Formatting ...
    [junit] 2009-08-08 17:14:19,927 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417))
- Registered FSDatasetStatusMBean
    [junit] 2009-08-08 17:14:19,928 INFO  datanode.DataNode (DataNode.java:startDataNode(326))
- Opened info server at 54191
    [junit] 2009-08-08 17:14:19,928 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74))
- Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-08 17:14:19,928 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133))
- scan starts at 1249762895928 with interval 21600000
    [junit] 2009-08-08 17:14:19,930 INFO  http.HttpServer (HttpServer.java:start(425)) - Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener
on 0
    [junit] 2009-08-08 17:14:19,930 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort()
returned 42641 webServer.getConnectors()[0].getLocalPort() returned 42641
    [junit] 2009-08-08 17:14:19,930 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty
bound to port 42641
    [junit] 2009-08-08 17:14:19,930 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-08 17:14:20,005 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:42641
    [junit] 2009-08-08 17:14:20,006 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot
initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-08 17:14:20,007 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58))
- Initializing RPC Metrics with hostName=DataNode, port=54116
    [junit] 2009-08-08 17:14:20,007 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder:
starting
    [junit] 2009-08-08 17:14:20,007 INFO  datanode.DataNode (DataNode.java:startDataNode(404))
- dnRegistration = DatanodeRegistration(vesta.apache.org:54191, storageID=, infoPort=42641,
ipcPort=54116)
    [junit] 2009-08-08 17:14:20,007 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler
0 on 54116: starting
    [junit] 2009-08-08 17:14:20,007 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener
on 54116: starting
    [junit] 2009-08-08 17:14:20,009 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774))
- BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:54191 storage DS-1647170729-67.195.138.9-54191-1249751660008
    [junit] 2009-08-08 17:14:20,009 INFO  net.NetworkTopology (NetworkTopology.java:add(327))
- Adding a new node: /default-rack/127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,054 INFO  datanode.DataNode (DataNode.java:register(571))
- New storage id DS-1647170729-67.195.138.9-54191-1249751660008 is assigned to data-node 127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,054 INFO  datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:54191,
storageID=DS-1647170729-67.195.138.9-54191-1249751660008, infoPort=42641, ipcPort=54116)In
DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}

    [junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6

    [junit] 2009-08-08 17:14:20,055 INFO  datanode.DataNode (DataNode.java:offerService(739))
- using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-08 17:14:20,062 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122))
- Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5
 is not formatted.
    [junit] 2009-08-08 17:14:20,062 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Formatting ...
    [junit] 2009-08-08 17:14:20,092 INFO  datanode.DataNode (DataNode.java:blockReport(974))
- BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-08 17:14:20,092 INFO  datanode.DataNode (DataNode.java:offerService(782))
- Starting Periodic block scanner.
    [junit] 2009-08-08 17:14:20,244 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122))
- Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
 is not formatted.
    [junit] 2009-08-08 17:14:20,245 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Formatting ...
    [junit] 2009-08-08 17:14:20,533 INFO  datanode.DataNode (FSDataset.java:registerMBean(1417))
- Registered FSDatasetStatusMBean
    [junit] 2009-08-08 17:14:20,534 INFO  datanode.DataNode (DataNode.java:startDataNode(326))
- Opened info server at 34975
    [junit] 2009-08-08 17:14:20,534 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74))
- Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-08 17:14:20,535 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133))
- scan starts at 1249761925535 with interval 21600000
    [junit] 2009-08-08 17:14:20,536 INFO  http.HttpServer (HttpServer.java:start(425)) - Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener
on 0
    [junit] 2009-08-08 17:14:20,536 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort()
returned 47612 webServer.getConnectors()[0].getLocalPort() returned 47612
    [junit] 2009-08-08 17:14:20,536 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty
bound to port 47612
    [junit] 2009-08-08 17:14:20,537 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-08 17:14:20,601 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:47612
    [junit] 2009-08-08 17:14:20,602 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot
initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-08 17:14:20,603 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58))
- Initializing RPC Metrics with hostName=DataNode, port=44352
    [junit] 2009-08-08 17:14:20,604 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder:
starting
    [junit] 2009-08-08 17:14:20,604 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler
0 on 44352: starting
    [junit] 2009-08-08 17:14:20,604 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener
on 44352: starting
    [junit] 2009-08-08 17:14:20,605 INFO  datanode.DataNode (DataNode.java:startDataNode(404))
- dnRegistration = DatanodeRegistration(vesta.apache.org:34975, storageID=, infoPort=47612,
ipcPort=44352)
    [junit] 2009-08-08 17:14:20,606 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774))
- BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:34975 storage DS-123548898-67.195.138.9-34975-1249751660605
    [junit] 2009-08-08 17:14:20,606 INFO  net.NetworkTopology (NetworkTopology.java:add(327))
- Adding a new node: /default-rack/127.0.0.1:34975
    [junit] 2009-08-08 17:14:20,647 INFO  datanode.DataNode (DataNode.java:register(571))
- New storage id DS-123548898-67.195.138.9-34975-1249751660605 is assigned to data-node 127.0.0.1:34975
    [junit] 2009-08-08 17:14:20,647 INFO  datanode.DataNode (DataNode.java:run(1258)) - DatanodeRegistration(127.0.0.1:34975,
storageID=DS-123548898-67.195.138.9-34975-1249751660605, infoPort=47612, ipcPort=44352)In
DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}

    [junit] 2009-08-08 17:14:20,648 INFO  datanode.DataNode (DataNode.java:offerService(739))
- using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-08 17:14:20,687 INFO  datanode.DataNode (DataNode.java:blockReport(974))
- BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-08 17:14:20,687 INFO  datanode.DataNode (DataNode.java:offerService(782))
- Starting Periodic block scanner.
    [junit] 2009-08-08 17:14:20,747 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114))
- ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/testPipelineFi15/foo	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-08 17:14:20,749 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1303))
- BLOCK* NameSystem.allocateBlock: /testPipelineFi15/foo. blk_-5657119858028835158_1001
    [junit] 2009-08-08 17:14:20,781 INFO  protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(32))
- FI: addBlock Pipeline[127.0.0.1:54191, 127.0.0.1:42259, 127.0.0.1:34975]
    [junit] 2009-08-08 17:14:20,782 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,783 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
- FI: receiverOpWriteBlock
    [junit] 2009-08-08 17:14:20,783 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-5657119858028835158_1001 src: /127.0.0.1:37959 dest: /127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,784 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:42259
    [junit] 2009-08-08 17:14:20,785 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
- FI: receiverOpWriteBlock
    [junit] 2009-08-08 17:14:20,785 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-5657119858028835158_1001 src: /127.0.0.1:46736 dest: /127.0.0.1:42259
    [junit] 2009-08-08 17:14:20,787 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:34975
    [junit] 2009-08-08 17:14:20,787 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
- FI: receiverOpWriteBlock
    [junit] 2009-08-08 17:14:20,787 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-5657119858028835158_1001 src: /127.0.0.1:58987 dest: /127.0.0.1:34975
    [junit] 2009-08-08 17:14:20,788 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60))
- FI: statusRead SUCCESS, datanode=127.0.0.1:42259
    [junit] 2009-08-08 17:14:20,788 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60))
- FI: statusRead SUCCESS, datanode=127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,790 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,790 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,790 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,790 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,791 INFO  fi.FiTestUtil (DataTransferTestUtil.java:run(158))
- FI: testPipelineFi15, index=1, datanode=127.0.0.1:42259
    [junit] 2009-08-08 17:14:20,791 INFO  datanode.DataNode (BlockReceiver.java:handleMirrorOutError(185))
- DatanodeRegistration(127.0.0.1:42259, storageID=DS-1381817528-67.195.138.9-42259-1249751659354,
infoPort=36286, ipcPort=41586):Exception writing block blk_-5657119858028835158_1001 to mirror
127.0.0.1:34975
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15,
index=1, datanode=127.0.0.1:42259
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-08 17:14:20,792 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(566))
- Exception in receiveBlock for block blk_-5657119858028835158_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException:
FI: testPipelineFi15, index=1, datanode=127.0.0.1:42259
    [junit] 2009-08-08 17:14:20,792 INFO  datanode.DataNode (BlockReceiver.java:run(907))
- PacketResponder blk_-5657119858028835158_1001 1 Exception java.io.InterruptedIOException:
Interruped while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/127.0.0.1:58987
remote=/127.0.0.1:34975]. 59997 millis timeout left.
    [junit] 	at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:349)
    [junit] 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    [junit] 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    [junit] 	at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    [junit] 	at java.io.DataInputStream.readFully(DataInputStream.java:178)
    [junit] 	at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-08 17:14:20,793 INFO  datanode.DataNode (BlockReceiver.java:run(922))
- PacketResponder blk_-5657119858028835158_1001 1 : Thread is interrupted.
    [junit] 2009-08-08 17:14:20,793 INFO  datanode.DataNode (BlockReceiver.java:run(1009))
- PacketResponder 1 for block blk_-5657119858028835158_1001 terminating
    [junit] 2009-08-08 17:14:20,793 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(358))
- writeBlock blk_-5657119858028835158_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException:
FI: testPipelineFi15, index=1, datanode=127.0.0.1:42259
    [junit] 2009-08-08 17:14:20,793 ERROR datanode.DataNode (DataXceiver.java:run(112)) -
DatanodeRegistration(127.0.0.1:42259, storageID=DS-1381817528-67.195.138.9-42259-1249751659354,
infoPort=36286, ipcPort=41586):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: testPipelineFi15,
index=1, datanode=127.0.0.1:42259
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:159)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:47)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:408)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-08 17:14:20,793 INFO  datanode.DataNode (BlockReceiver.java:run(907))
- PacketResponder blk_-5657119858028835158_1001 2 Exception java.io.EOFException
    [junit] 	at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] 	at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:869)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-08 17:14:20,793 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(566))
- Exception in receiveBlock for block blk_-5657119858028835158_1001 java.io.EOFException:
while trying to read 65557 bytes
    [junit] 2009-08-08 17:14:20,794 INFO  datanode.DataNode (BlockReceiver.java:run(1009))
- PacketResponder 2 for block blk_-5657119858028835158_1001 terminating
    [junit] 2009-08-08 17:14:20,794 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(779))
- PacketResponder 0 for block blk_-5657119858028835158_1001 Interrupted.
    [junit] 2009-08-08 17:14:20,794 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843))
- PacketResponder 0 for block blk_-5657119858028835158_1001 terminating
    [junit] 2009-08-08 17:14:20,794 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(358))
- writeBlock blk_-5657119858028835158_1001 received exception java.io.EOFException: while
trying to read 65557 bytes
    [junit] 2009-08-08 17:14:20,794 WARN  hdfs.DFSClient (DFSClient.java:run(2593)) - DFSOutputStream
ResponseProcessor exception  for block blk_-5657119858028835158_1001java.io.IOException: Bad
response ERROR for block blk_-5657119858028835158_1001 from datanode 127.0.0.1:42259
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2573)
    [junit] 
    [junit] 2009-08-08 17:14:20,795 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2622))
- Error Recovery for block blk_-5657119858028835158_1001 bad datanode[1] 127.0.0.1:42259
    [junit] 2009-08-08 17:14:20,795 ERROR datanode.DataNode (DataXceiver.java:run(112)) -
DatanodeRegistration(127.0.0.1:34975, storageID=DS-123548898-67.195.138.9-34975-1249751660605,
infoPort=47612, ipcPort=44352):DataXceiver
    [junit] java.io.EOFException: while trying to read 65557 bytes
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:271)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:315)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:379)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:532)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-08 17:14:20,795 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2666))
- Error Recovery for block blk_-5657119858028835158_1001 in pipeline 127.0.0.1:54191, 127.0.0.1:42259,
127.0.0.1:34975: bad datanode 127.0.0.1:42259
    [junit] 2009-08-08 17:14:20,798 INFO  datanode.DataNode (DataNode.java:logRecoverBlock(1700))
- Client calls recoverBlock(block=blk_-5657119858028835158_1001, targets=[127.0.0.1:54191,
127.0.0.1:34975])
    [junit] 2009-08-08 17:14:20,802 INFO  datanode.DataNode (DataNode.java:updateBlock(1510))
- oldblock=blk_-5657119858028835158_1001(length=1), newblock=blk_-5657119858028835158_1002(length=0),
datanode=127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,803 INFO  datanode.DataNode (DataNode.java:updateBlock(1510))
- oldblock=blk_-5657119858028835158_1001(length=0), newblock=blk_-5657119858028835158_1002(length=0),
datanode=127.0.0.1:34975
    [junit] 2009-08-08 17:14:20,804 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613))
- commitBlockSynchronization(lastblock=blk_-5657119858028835158_1001, newgenerationstamp=1002,
newlength=0, newtargets=[127.0.0.1:54191, 127.0.0.1:34975], closeFile=false, deleteBlock=false)
    [junit] 2009-08-08 17:14:20,804 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677))
- commitBlockSynchronization(blk_-5657119858028835158_1002) successful
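
The recovery sequence logged above (the client drops bad datanode 127.0.0.1:42259, the surviving replicas are updated from generation stamp 1001 to 1002, and the namenode commits the synchronized length) follows arithmetic that can be sketched as below. This is an illustrative model, not the real Hadoop API: the namenode actually issues generation stamps from a global counter (here simplified to old + 1, which happens to match the 1001 to 1002 transition in this log), and the synchronized length is the minimum length the surviving replicas report (lengths 1 and 0 in the updateBlock lines, hence newlength=0).

```java
import java.util.Arrays;

public class RecoverySketch {
    /** Simplified stand-in for the namenode's next generation stamp. */
    static long newGenerationStamp(long oldGs) {
        return oldGs + 1;
    }

    /** Surviving replicas are truncated to the shortest reported length. */
    static long syncedLength(long... replicaLengths) {
        return Arrays.stream(replicaLengths).min().orElse(0L);
    }

    public static void main(String[] args) {
        long gs = newGenerationStamp(1001L);
        long len = syncedLength(1L, 0L);   // lengths reported by the two good replicas
        System.out.println("newgenerationstamp=" + gs + ", newlength=" + len);
    }
}
```

The write then resumes against the new block blk_..._1002 on the two remaining datanodes, which is why the subsequent log lines show "Reopen already-open Block for append" for the _1002 block.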
    [junit] 2009-08-08 17:14:20,805 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,806 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
- FI: receiverOpWriteBlock
    [junit] 2009-08-08 17:14:20,806 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-5657119858028835158_1002 src: /127.0.0.1:37964 dest: /127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,806 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1011))
- Reopen already-open Block for append blk_-5657119858028835158_1002
    [junit] 2009-08-08 17:14:20,807 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:34975
    [junit] 2009-08-08 17:14:20,807 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(70))
- FI: receiverOpWriteBlock
    [junit] 2009-08-08 17:14:20,807 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-5657119858028835158_1002 src: /127.0.0.1:58991 dest: /127.0.0.1:34975
    [junit] 2009-08-08 17:14:20,807 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1011))
- Reopen already-open Block for append blk_-5657119858028835158_1002
    [junit] 2009-08-08 17:14:20,808 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60))
- FI: statusRead SUCCESS, datanode=127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,809 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,809 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,809 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,809 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,809 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(46))
- FI: callReceivePacket
    [junit] 2009-08-08 17:14:20,811 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(819))
- src: /127.0.0.1:58991, dest: /127.0.0.1:34975, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1504818771,
offset: 0, srvID: DS-123548898-67.195.138.9-34975-1249751660605, blockid: blk_-5657119858028835158_1002,
duration: 2370502
    [junit] 2009-08-08 17:14:20,811 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(843))
- PacketResponder 0 for block blk_-5657119858028835158_1002 terminating
    [junit] 2009-08-08 17:14:20,851 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950))
- BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:34975 is added to blk_-5657119858028835158_1002
size 1
    [junit] 2009-08-08 17:14:20,852 INFO  DataNode.clienttrace (BlockReceiver.java:run(945))
- src: /127.0.0.1:37964, dest: /127.0.0.1:54191, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-1504818771,
offset: 0, srvID: DS-1647170729-67.195.138.9-54191-1249751660008, blockid: blk_-5657119858028835158_1002,
duration: 3023379
    [junit] 2009-08-08 17:14:20,852 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950))
- BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:54191 is added to blk_-5657119858028835158_1002
size 1
    [junit] 2009-08-08 17:14:20,852 INFO  datanode.DataNode (BlockReceiver.java:run(1009))
- PacketResponder 1 for block blk_-5657119858028835158_1002 terminating
    [junit] 2009-08-08 17:14:20,854 INFO  hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269))
- DIR* NameSystem.completeFile: file /testPipelineFi15/foo is closed by DFSClient_-1504818771
    [junit] 2009-08-08 17:14:20,863 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114))
- ugi=hudson,hudson	ip=/127.0.0.1	cmd=open	src=/testPipelineFi15/foo	dst=null	perm=null
    [junit] 2009-08-08 17:14:20,865 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(50))
- FI: receiverOp READ_BLOCK, datanode=127.0.0.1:54191
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-08 17:14:20,866 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(417))
- src: /127.0.0.1:54191, dest: /127.0.0.1:37966, bytes: 5, op: HDFS_READ, cliID: DFSClient_-1504818771,
offset: 0, srvID: DS-1647170729-67.195.138.9-54191-1249751660008, blockid: blk_-5657119858028835158_1002,
duration: 234440
    [junit] 2009-08-08 17:14:20,867 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(60))
- FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:54191
    [junit] 2009-08-08 17:14:20,968 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 44352
    [junit] 2009-08-08 17:14:20,968 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
0 on 44352: exiting
    [junit] 2009-08-08 17:14:20,969 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC
Server Responder
    [junit] 2009-08-08 17:14:20,969 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-08 17:14:20,969 WARN  datanode.DataNode (DataXceiverServer.java:run(137))
- DatanodeRegistration(127.0.0.1:34975, storageID=DS-123548898-67.195.138.9-34975-1249751660605,
infoPort=47612, ipcPort=44352):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-08 17:14:20,970 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 44352
    [junit] 2009-08-08 17:14:20,972 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-08 17:14:20,972 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616))
- Exiting DataBlockScanner thread.
    [junit] 2009-08-08 17:14:20,972 INFO  datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:34975,
storageID=DS-123548898-67.195.138.9-34975-1249751660605, infoPort=47612, ipcPort=44352):Finishing
DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}

    [junit] 2009-08-08 17:14:20,972 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 44352
    [junit] 2009-08-08 17:14:20,973 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-08 17:14:21,075 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 54116
    [junit] 2009-08-08 17:14:21,075 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 54116
    [junit] 2009-08-08 17:14:21,075 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC
Server Responder
    [junit] 2009-08-08 17:14:21,076 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
0 on 54116: exiting
    [junit] 2009-08-08 17:14:21,076 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-08 17:14:21,075 WARN  datanode.DataNode (DataXceiverServer.java:run(137))
- DatanodeRegistration(127.0.0.1:54191, storageID=DS-1647170729-67.195.138.9-54191-1249751660008,
infoPort=42641, ipcPort=54116):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
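The WARN and stack trace above are the expected shutdown signature of DataXceiverServer, not a test failure: the accept loop blocks in ServerSocketChannel.accept(), and when shutdown closes the channel from another thread, NIO raises AsynchronousCloseException in the blocked acceptor. A minimal standalone sketch of that mechanism (class and method names here are illustrative, not from the Hadoop source):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousCloseException;
import java.nio.channels.ServerSocketChannel;

public class AsyncCloseDemo {
    // Blocks one thread in accept(), then closes the channel from the
    // caller's thread; returns the name of the exception the acceptor saw.
    public static String interruptBlockedAccept() throws Exception {
        final ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        final String[] result = new String[1];
        Thread acceptor = new Thread(() -> {
            try {
                server.accept(); // blocks; no client ever connects
                result[0] = "accepted";
            } catch (AsynchronousCloseException e) {
                // This is the path DataXceiverServer's log line reflects.
                result[0] = "AsynchronousCloseException";
            } catch (IOException e) {
                result[0] = e.getClass().getSimpleName();
            }
        });
        acceptor.start();
        Thread.sleep(200);   // give the acceptor time to block in accept()
        server.close();      // shutdown path: close the listening channel
        acceptor.join(5000);
        return result[0];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(interruptBlockedAccept());
    }
}
```

This is why the trace appears once per DataNode being shut down in the log and the test run still reports zero failures.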
    [junit] 2009-08-08 17:14:21,077 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616))
- Exiting DataBlockScanner thread.
    [junit] 2009-08-08 17:14:21,077 INFO  datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:54191,
storageID=DS-1647170729-67.195.138.9-54191-1249751660008, infoPort=42641, ipcPort=54116):Finishing
DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}

    [junit] 2009-08-08 17:14:21,077 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 54116
    [junit] 2009-08-08 17:14:21,077 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-08 17:14:21,116 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 41586
    [junit] 2009-08-08 17:14:21,116 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
0 on 41586: exiting
    [junit] 2009-08-08 17:14:21,117 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 41586
    [junit] 2009-08-08 17:14:21,117 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-08 17:14:21,117 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC
Server Responder
    [junit] 2009-08-08 17:14:21,117 WARN  datanode.DataNode (DataXceiverServer.java:run(137))
- DatanodeRegistration(127.0.0.1:42259, storageID=DS-1381817528-67.195.138.9-42259-1249751659354,
infoPort=36286, ipcPort=41586):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-08 17:14:21,119 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-08 17:14:21,120 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616))
- Exiting DataBlockScanner thread.
    [junit] 2009-08-08 17:14:21,120 INFO  datanode.DataNode (DataNode.java:run(1278)) - DatanodeRegistration(127.0.0.1:42259,
storageID=DS-1381817528-67.195.138.9-42259-1249751659354, infoPort=36286, ipcPort=41586):Finishing
DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}

    [junit] 2009-08-08 17:14:21,120 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 41586
    [junit] 2009-08-08 17:14:21,120 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-08 17:14:21,223 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2077))
- ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException:
sleep interrupted
    [junit] 2009-08-08 17:14:21,223 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(884))
- Number of transactions: 5 Total time for transactions(ms): 2 Number of transactions batched
in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 43 37 
    [junit] 2009-08-08 17:14:21,223 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67))
- Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-08 17:14:21,232 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 45914
    [junit] 2009-08-08 17:14:21,232 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
0 on 45914: exiting
    [junit] 2009-08-08 17:14:21,232 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
2 on 45914: exiting
    [junit] 2009-08-08 17:14:21,232 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
1 on 45914: exiting
    [junit] 2009-08-08 17:14:21,232 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
3 on 45914: exiting
    [junit] 2009-08-08 17:14:21,233 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
9 on 45914: exiting
    [junit] 2009-08-08 17:14:21,233 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
4 on 45914: exiting
    [junit] 2009-08-08 17:14:21,233 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
6 on 45914: exiting
    [junit] 2009-08-08 17:14:21,233 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
5 on 45914: exiting
    [junit] 2009-08-08 17:14:21,233 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
8 on 45914: exiting
    [junit] 2009-08-08 17:14:21,233 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
7 on 45914: exiting
    [junit] 2009-08-08 17:14:21,234 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC
Server Responder
    [junit] 2009-08-08 17:14:21,234 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 45914
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 97.775 sec

checkfailure:

BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:725: Tests
failed!

Total time: 80 minutes 54 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...

