hadoop-hdfs-dev mailing list archives

From Apache Hudson Server <hud...@hudson.zones.apache.org>
Subject Build failed in Hudson: Hadoop-Hdfs-trunk #58
Date Sat, 22 Aug 2009 17:27:00 GMT
See http://hudson.zones.apache.org/hudson/job/Hadoop-Hdfs-trunk/58/changes

Changes:

[cdouglas] HDFS-538. Per the contract elucidated in HADOOP-6201, throw
FileNotFoundException from FileSystem::listStatus rather than returning
null. Contributed by Jakob Homan.
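The HDFS-538 change above alters the FileSystem::listStatus contract: listing a nonexistent path now throws FileNotFoundException instead of returning null, so callers replace a null check with exception handling. The sketch below is a self-contained illustration of that caller-side difference using a hypothetical stand-in method, not the real Hadoop FileSystem API.

```java
import java.io.FileNotFoundException;
import java.util.List;
import java.util.Map;

public class ListStatusContract {
    // Stand-in for a filesystem namespace; not the real Hadoop FileSystem.
    private static final Map<String, List<String>> NAMESPACE =
            Map.of("/pipeline_Fi_16", List.of("foo"));

    // Post-HDFS-538 contract: a missing path raises FileNotFoundException
    // rather than yielding null.
    static List<String> listStatus(String path) throws FileNotFoundException {
        List<String> entries = NAMESPACE.get(path);
        if (entries == null) {
            throw new FileNotFoundException("File " + path + " does not exist.");
        }
        return entries;
    }

    public static void main(String[] args) {
        try {
            // Old style: callers had to null-check the result.
            // New style: a missing path is an exceptional condition.
            System.out.println(listStatus("/pipeline_Fi_16"));
            listStatus("/no/such/path");
        } catch (FileNotFoundException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The exception-based contract lets callers distinguish "directory is empty" (an empty array) from "path does not exist" without a null sentinel.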

------------------------------------------
[...truncated 222912 lines...]
    [junit] 2009-08-22 17:25:53,625 INFO  http.HttpServer (HttpServer.java:start(425)) - Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener
on 0
    [junit] 2009-08-22 17:25:53,626 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort()
returned 39115 webServer.getConnectors()[0].getLocalPort() returned 39115
    [junit] 2009-08-22 17:25:53,626 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty
bound to port 39115
    [junit] 2009-08-22 17:25:53,626 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-22 17:25:53,686 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:39115
    [junit] 2009-08-22 17:25:53,686 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot
initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-22 17:25:53,687 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58))
- Initializing RPC Metrics with hostName=DataNode, port=44254
    [junit] 2009-08-22 17:25:53,688 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder:
starting
    [junit] 2009-08-22 17:25:53,688 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler
0 on 44254: starting
    [junit] 2009-08-22 17:25:53,688 INFO  datanode.DataNode (DataNode.java:startDataNode(404))
- dnRegistration = DatanodeRegistration(vesta.apache.org:50408, storageID=, infoPort=39115,
ipcPort=44254)
    [junit] 2009-08-22 17:25:53,688 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener
on 44254: starting
    [junit] 2009-08-22 17:25:53,690 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774))
- BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:50408 storage DS-881213117-67.195.138.9-50408-1250961953689
    [junit] 2009-08-22 17:25:53,690 INFO  net.NetworkTopology (NetworkTopology.java:add(327))
- Adding a new node: /default-rack/127.0.0.1:50408
    [junit] 2009-08-22 17:25:53,746 INFO  datanode.DataNode (DataNode.java:register(571))
- New storage id DS-881213117-67.195.138.9-50408-1250961953689 is assigned to data-node 127.0.0.1:50408
    [junit] 2009-08-22 17:25:53,746 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:50408,
storageID=DS-881213117-67.195.138.9-50408-1250961953689, infoPort=39115, ipcPort=44254)In
DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}

    [junit] Starting DataNode 1 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4

    [junit] 2009-08-22 17:25:53,747 INFO  datanode.DataNode (DataNode.java:offerService(763))
- using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-22 17:25:53,755 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122))
- Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3
 is not formatted.
    [junit] 2009-08-22 17:25:53,755 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Formatting ...
    [junit] 2009-08-22 17:25:53,785 INFO  datanode.DataNode (DataNode.java:blockReport(998))
- BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-08-22 17:25:53,785 INFO  datanode.DataNode (DataNode.java:offerService(806))
- Starting Periodic block scanner.
    [junit] 2009-08-22 17:25:53,942 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122))
- Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4
 is not formatted.
    [junit] 2009-08-22 17:25:53,942 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Formatting ...
    [junit] 2009-08-22 17:25:54,233 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547))
- Registered FSDatasetStatusMBean
    [junit] 2009-08-22 17:25:54,234 INFO  datanode.DataNode (DataNode.java:startDataNode(326))
- Opened info server at 34356
    [junit] 2009-08-22 17:25:54,234 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74))
- Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-22 17:25:54,235 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133))
- scan starts at 1250982862235 with interval 21600000
    [junit] 2009-08-22 17:25:54,236 INFO  http.HttpServer (HttpServer.java:start(425)) - Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener
on 0
    [junit] 2009-08-22 17:25:54,236 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort()
returned 50526 webServer.getConnectors()[0].getLocalPort() returned 50526
    [junit] 2009-08-22 17:25:54,237 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty
bound to port 50526
    [junit] 2009-08-22 17:25:54,237 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-22 17:25:54,297 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:50526
    [junit] 2009-08-22 17:25:54,298 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot
initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-22 17:25:54,299 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58))
- Initializing RPC Metrics with hostName=DataNode, port=60353
    [junit] 2009-08-22 17:25:54,300 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder:
starting
    [junit] 2009-08-22 17:25:54,300 INFO  datanode.DataNode (DataNode.java:startDataNode(404))
- dnRegistration = DatanodeRegistration(vesta.apache.org:34356, storageID=, infoPort=50526,
ipcPort=60353)
    [junit] 2009-08-22 17:25:54,300 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler
0 on 60353: starting
    [junit] 2009-08-22 17:25:54,300 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener
on 60353: starting
    [junit] 2009-08-22 17:25:54,302 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774))
- BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:34356 storage DS-259311802-67.195.138.9-34356-1250961954301
    [junit] 2009-08-22 17:25:54,302 INFO  net.NetworkTopology (NetworkTopology.java:add(327))
- Adding a new node: /default-rack/127.0.0.1:34356
    [junit] 2009-08-22 17:25:54,343 INFO  datanode.DataNode (DataNode.java:register(571))
- New storage id DS-259311802-67.195.138.9-34356-1250961954301 is assigned to data-node 127.0.0.1:34356
    [junit] 2009-08-22 17:25:54,344 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:34356,
storageID=DS-259311802-67.195.138.9-34356-1250961954301, infoPort=50526, ipcPort=60353)In
DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}

    [junit] Starting DataNode 2 with dfs.data.dir: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6

    [junit] 2009-08-22 17:25:54,344 INFO  datanode.DataNode (DataNode.java:offerService(763))
- using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-22 17:25:54,351 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122))
- Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5
 is not formatted.
    [junit] 2009-08-22 17:25:54,352 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Formatting ...
    [junit] 2009-08-22 17:25:54,382 INFO  datanode.DataNode (DataNode.java:blockReport(998))
- BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-22 17:25:54,383 INFO  datanode.DataNode (DataNode.java:offerService(806))
- Starting Periodic block scanner.
    [junit] 2009-08-22 17:25:54,549 INFO  common.Storage (DataStorage.java:recoverTransitionRead(122))
- Storage directory /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6
 is not formatted.
    [junit] 2009-08-22 17:25:54,549 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Formatting ...
    [junit] 2009-08-22 17:25:54,819 INFO  datanode.DataNode (FSDataset.java:registerMBean(1547))
- Registered FSDatasetStatusMBean
    [junit] 2009-08-22 17:25:54,820 INFO  datanode.DataNode (DataNode.java:startDataNode(326))
- Opened info server at 38414
    [junit] 2009-08-22 17:25:54,820 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74))
- Balancing bandwith is 1048576 bytes/s
    [junit] 2009-08-22 17:25:54,820 INFO  datanode.DirectoryScanner (DirectoryScanner.java:<init>(133))
- scan starts at 1250963141820 with interval 21600000
    [junit] 2009-08-22 17:25:54,822 INFO  http.HttpServer (HttpServer.java:start(425)) - Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener
on 0
    [junit] 2009-08-22 17:25:54,822 INFO  http.HttpServer (HttpServer.java:start(430)) - listener.getLocalPort()
returned 41030 webServer.getConnectors()[0].getLocalPort() returned 41030
    [junit] 2009-08-22 17:25:54,822 INFO  http.HttpServer (HttpServer.java:start(463)) - Jetty
bound to port 41030
    [junit] 2009-08-22 17:25:54,822 INFO  mortbay.log (?:invoke(?)) - jetty-6.1.14
    [junit] 2009-08-22 17:25:54,882 INFO  mortbay.log (?:invoke(?)) - Started SelectChannelConnector@localhost:41030
    [junit] 2009-08-22 17:25:54,883 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot
initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-08-22 17:25:54,884 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58))
- Initializing RPC Metrics with hostName=DataNode, port=53124
    [junit] 2009-08-22 17:25:54,957 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener
on 53124: starting
    [junit] 2009-08-22 17:25:54,957 INFO  datanode.DataNode (DataNode.java:startDataNode(404))
- dnRegistration = DatanodeRegistration(vesta.apache.org:38414, storageID=, infoPort=41030,
ipcPort=53124)
    [junit] 2009-08-22 17:25:54,957 INFO  ipc.Server (Server.java:run(474)) - IPC Server Responder:
starting
    [junit] 2009-08-22 17:25:54,958 INFO  ipc.Server (Server.java:run(939)) - IPC Server handler
0 on 53124: starting
    [junit] 2009-08-22 17:25:54,959 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(1774))
- BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:38414 storage DS-2029722745-67.195.138.9-38414-1250961954958
    [junit] 2009-08-22 17:25:54,959 INFO  net.NetworkTopology (NetworkTopology.java:add(327))
- Adding a new node: /default-rack/127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,003 INFO  datanode.DataNode (DataNode.java:register(571))
- New storage id DS-2029722745-67.195.138.9-38414-1250961954958 is assigned to data-node 127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,003 INFO  datanode.DataNode (DataNode.java:run(1285)) - DatanodeRegistration(127.0.0.1:38414,
storageID=DS-2029722745-67.195.138.9-38414-1250961954958, infoPort=41030, ipcPort=53124)In
DataNode.run, data = FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}

    [junit] 2009-08-22 17:25:55,004 INFO  datanode.DataNode (DataNode.java:offerService(763))
- using BLOCKREPORT_INTERVAL of 21600000msec Initial delay: 0msec
    [junit] 2009-08-22 17:25:55,070 INFO  datanode.DataNode (DataNode.java:blockReport(998))
- BlockReport of 0 blocks got processed in 0 msecs
    [junit] 2009-08-22 17:25:55,071 INFO  datanode.DataNode (DataNode.java:offerService(806))
- Starting Periodic block scanner.
    [junit] 2009-08-22 17:25:55,178 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114))
- ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/pipeline_Fi_16/foo	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-08-22 17:25:55,181 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1303))
- BLOCK* NameSystem.allocateBlock: /pipeline_Fi_16/foo. blk_-678847007047035635_1001
    [junit] 2009-08-22 17:25:55,211 INFO  protocol.ClientProtocolAspects (ClientProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_protocol_ClientProtocolAspects$1$7076326d(35))
- FI: addBlock Pipeline[127.0.0.1:50408, 127.0.0.1:38414, 127.0.0.1:34356]
    [junit] 2009-08-22 17:25:55,212 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,212 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
- FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,213 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-678847007047035635_1001 src: /127.0.0.1:56770 dest: /127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,214 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,214 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
- FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,214 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-678847007047035635_1001 src: /127.0.0.1:41695 dest: /127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,215 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,216 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
- FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,216 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-678847007047035635_1001 src: /127.0.0.1:43698 dest: /127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,216 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
- FI: statusRead SUCCESS, datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,217 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
- FI: statusRead SUCCESS, datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,219 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,218 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,219 INFO  fi.FiTestUtil (DataTransferTestUtil.java:run(151))
- FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,219 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,219 WARN  datanode.DataNode (DataNode.java:checkDiskError(702))
- checkDiskError: exception: 
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16,
index=2, datanode=127.0.0.1:34356
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-22 17:25:55,220 INFO  mortbay.log (?:invoke(?)) - Completed FSVolumeSet.checkDirs.
Removed=0volumes. List of current volumes: /home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current

    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(569))
- Exception in receiveBlock for block blk_-678847007047035635_1001 org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException:
FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(782))
- PacketResponder 0 for block blk_-678847007047035635_1001 Interrupted.
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853))
- PacketResponder 0 for block blk_-678847007047035635_1001 terminating
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(358))
- writeBlock blk_-678847007047035635_1001 received exception org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException:
FI: pipeline_Fi_16, index=2, datanode=127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,221 ERROR datanode.DataNode (DataXceiver.java:run(112)) -
DatanodeRegistration(127.0.0.1:34356, storageID=DS-259311802-67.195.138.9-34356-1250961954301,
infoPort=50526, ipcPort=60353):DataXceiver
    [junit] org.apache.hadoop.util.DiskChecker$DiskOutOfSpaceException: FI: pipeline_Fi_16,
index=2, datanode=127.0.0.1:34356
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:152)
    [junit] 	at org.apache.hadoop.fi.DataTransferTestUtil$DoosAction.run(DataTransferTestUtil.java:1)
    [junit] 	at org.apache.hadoop.fi.FiTestUtil$ActionContainer.run(FiTestUtil.java:66)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiverAspects.ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(BlockReceiverAspects.aj:50)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:459)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:535)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.opWriteBlock(DataXceiver.java:339)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.opWriteBlock(DataTransferProtocol.java:324)
    [junit] 	at org.apache.hadoop.hdfs.protocol.DataTransferProtocol$Receiver.processOp(DataTransferProtocol.java:269)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 2009-08-22 17:25:55,221 INFO  datanode.DataNode (BlockReceiver.java:run(917))
- PacketResponder blk_-678847007047035635_1001 1 Exception java.io.EOFException
    [junit] 	at java.io.DataInputStream.readFully(DataInputStream.java:180)
    [junit] 	at java.io.DataInputStream.readLong(DataInputStream.java:399)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:879)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-22 17:25:55,223 INFO  datanode.DataNode (BlockReceiver.java:run(1025))
- PacketResponder 1 for block blk_-678847007047035635_1001 terminating
    [junit] 2009-08-22 17:25:55,265 INFO  datanode.DataNode (BlockReceiver.java:run(1025))
- PacketResponder 2 for block blk_-678847007047035635_1001 terminating
    [junit] 2009-08-22 17:25:55,265 WARN  hdfs.DFSClient (DFSClient.java:run(2601)) - DFSOutputStream
ResponseProcessor exception  for block blk_-678847007047035635_1001java.io.IOException: Bad
response ERROR for block blk_-678847007047035635_1001 from datanode 127.0.0.1:34356
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSClient.java:2581)
    [junit] 
    [junit] 2009-08-22 17:25:55,265 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2630))
- Error Recovery for block blk_-678847007047035635_1001 bad datanode[2] 127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,265 WARN  hdfs.DFSClient (DFSClient.java:processDatanodeError(2674))
- Error Recovery for block blk_-678847007047035635_1001 in pipeline 127.0.0.1:50408, 127.0.0.1:38414,
127.0.0.1:34356: bad datanode 127.0.0.1:34356
    [junit] 2009-08-22 17:25:55,268 INFO  datanode.DataNode (DataNode.java:logRecoverBlock(1727))
- Client calls recoverBlock(block=blk_-678847007047035635_1001, targets=[127.0.0.1:50408,
127.0.0.1:38414])
    [junit] 2009-08-22 17:25:55,272 INFO  datanode.DataNode (DataNode.java:updateBlock(1537))
- oldblock=blk_-678847007047035635_1001(length=1), newblock=blk_-678847007047035635_1002(length=1),
datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,273 INFO  datanode.DataNode (DataNode.java:updateBlock(1537))
- oldblock=blk_-678847007047035635_1001(length=1), newblock=blk_-678847007047035635_1002(length=1),
datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,274 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1613))
- commitBlockSynchronization(lastblock=blk_-678847007047035635_1001, newgenerationstamp=1002,
newlength=1, newtargets=[127.0.0.1:50408, 127.0.0.1:38414], closeFile=false, deleteBlock=false)
    [junit] 2009-08-22 17:25:55,274 INFO  namenode.FSNamesystem (FSNamesystem.java:commitBlockSynchronization(1677))
- commitBlockSynchronization(blk_-678847007047035635_1002) successful
    [junit] 2009-08-22 17:25:55,275 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,275 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
- FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,276 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-678847007047035635_1002 src: /127.0.0.1:56775 dest: /127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,276 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085))
- Reopen already-open Block for append blk_-678847007047035635_1002
    [junit] 2009-08-22 17:25:55,276 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp WRITE_BLOCK, datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,276 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$3$3251489(72))
- FI: receiverOpWriteBlock
    [junit] 2009-08-22 17:25:55,277 INFO  datanode.DataNode (DataXceiver.java:opWriteBlock(222))
- Receiving block blk_-678847007047035635_1002 src: /127.0.0.1:41700 dest: /127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,277 INFO  datanode.DataNode (FSDataset.java:writeToBlock(1085))
- Reopen already-open Block for append blk_-678847007047035635_1002
    [junit] 2009-08-22 17:25:55,277 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
- FI: statusRead SUCCESS, datanode=127.0.0.1:50408
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,279 INFO  datanode.BlockReceiverAspects (BlockReceiverAspects.aj:ajc$before$org_apache_hadoop_hdfs_server_datanode_BlockReceiverAspects$1$4c211928(47))
- FI: callReceivePacket
    [junit] 2009-08-22 17:25:55,280 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(822))
- src: /127.0.0.1:41700, dest: /127.0.0.1:38414, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-2085369977,
offset: 0, srvID: DS-2029722745-67.195.138.9-38414-1250961954958, blockid: blk_-678847007047035635_1002,
duration: 2788947
    [junit] 2009-08-22 17:25:55,281 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950))
- BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:38414 is added to blk_-678847007047035635_1002
size 1
    [junit] 2009-08-22 17:25:55,282 INFO  DataNode.clienttrace (BlockReceiver.java:run(955))
- src: /127.0.0.1:56775, dest: /127.0.0.1:50408, bytes: 1, op: HDFS_WRITE, cliID: DFSClient_-2085369977,
offset: 0, srvID: DS-881213117-67.195.138.9-50408-1250961953689, blockid: blk_-678847007047035635_1002,
duration: 3449887
    [junit] 2009-08-22 17:25:55,281 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(853))
- PacketResponder 0 for block blk_-678847007047035635_1002 terminating
    [junit] 2009-08-22 17:25:55,283 INFO  hdfs.StateChange (BlockManager.java:addStoredBlock(950))
- BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:50408 is added to blk_-678847007047035635_1002
size 1
    [junit] 2009-08-22 17:25:55,283 INFO  datanode.DataNode (BlockReceiver.java:run(1025))
- PacketResponder 1 for block blk_-678847007047035635_1002 terminating
    [junit] 2009-08-22 17:25:55,285 INFO  hdfs.StateChange (FSNamesystem.java:completeFileInternal(1269))
- DIR* NameSystem.completeFile: file /pipeline_Fi_16/foo is closed by DFSClient_-2085369977
    [junit] 2009-08-22 17:25:55,294 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(114))
- ugi=hudson,hudson	ip=/127.0.0.1	cmd=open	src=/pipeline_Fi_16/foo	dst=null	perm=null
    [junit] 2009-08-22 17:25:55,296 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$1$8f59fdd7(51))
- FI: receiverOp READ_BLOCK, datanode=127.0.0.1:38414
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 2
    [junit] 2009-08-22 17:25:55,297 INFO  DataNode.clienttrace (BlockSender.java:sendBlock(417))
- src: /127.0.0.1:38414, dest: /127.0.0.1:41701, bytes: 5, op: HDFS_READ, cliID: DFSClient_-2085369977,
offset: 0, srvID: DS-2029722745-67.195.138.9-38414-1250961954958, blockid: blk_-678847007047035635_1002,
duration: 237979
    [junit] 2009-08-22 17:25:55,299 INFO  datanode.DataTransferProtocolAspects (DataTransferProtocolAspects.aj:ajc$afterReturning$org_apache_hadoop_hdfs_server_datanode_DataTransferProtocolAspects$2$d4f6605f(61))
- FI: statusRead CHECKSUM_OK, datanode=127.0.0.1:38414
    [junit] 2009-08-22 17:25:55,399 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 53124
    [junit] 2009-08-22 17:25:55,400 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
0 on 53124: exiting
    [junit] 2009-08-22 17:25:55,400 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC
Server Responder
    [junit] 2009-08-22 17:25:55,400 WARN  datanode.DataNode (DataXceiverServer.java:run(137))
- DatanodeRegistration(127.0.0.1:38414, storageID=DS-2029722745-67.195.138.9-38414-1250961954958,
infoPort=41030, ipcPort=53124):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-22 17:25:55,400 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-22 17:25:55,400 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 53124
    [junit] 2009-08-22 17:25:55,401 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616))
- Exiting DataBlockScanner thread.
    [junit] 2009-08-22 17:25:55,402 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:38414,
storageID=DS-2029722745-67.195.138.9-38414-1250961954958, infoPort=41030, ipcPort=53124):Finishing
DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data5/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data6/current'}

    [junit] 2009-08-22 17:25:55,402 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 53124
    [junit] 2009-08-22 17:25:55,402 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 2009-08-22 17:25:55,504 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 60353
    [junit] 2009-08-22 17:25:55,504 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
0 on 60353: exiting
    [junit] 2009-08-22 17:25:55,504 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 60353
    [junit] 2009-08-22 17:25:55,505 WARN  datanode.DataNode (DataXceiverServer.java:run(137))
- DatanodeRegistration(127.0.0.1:34356, storageID=DS-259311802-67.195.138.9-34356-1250961954301,
infoPort=50526, ipcPort=60353):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-22 17:25:55,505 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-22 17:25:55,505 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC
Server Responder
    [junit] 2009-08-22 17:25:55,505 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616))
- Exiting DataBlockScanner thread.
    [junit] 2009-08-22 17:25:55,505 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:34356,
storageID=DS-259311802-67.195.138.9-34356-1250961954301, infoPort=50526, ipcPort=60353):Finishing
DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data4/current'}

    [junit] 2009-08-22 17:25:55,506 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 60353
    [junit] 2009-08-22 17:25:55,506 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-08-22 17:25:55,608 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 44254
    [junit] 2009-08-22 17:25:55,608 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
0 on 44254: exiting
    [junit] 2009-08-22 17:25:55,609 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 44254
    [junit] 2009-08-22 17:25:55,609 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-08-22 17:25:55,609 WARN  datanode.DataNode (DataXceiverServer.java:run(137))
- DatanodeRegistration(127.0.0.1:50408, storageID=DS-881213117-67.195.138.9-50408-1250961953689,
infoPort=39115, ipcPort=44254):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-08-22 17:25:55,609 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC
Server Responder
    [junit] 2009-08-22 17:25:55,611 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-22 17:25:55,611 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(616))
- Exiting DataBlockScanner thread.
    [junit] 2009-08-22 17:25:55,612 INFO  datanode.DataNode (DataNode.java:run(1305)) - DatanodeRegistration(127.0.0.1:50408,
storageID=DS-881213117-67.195.138.9-50408-1250961953689, infoPort=39115, ipcPort=44254):Finishing
DataNode in: FSDataset{dirpath='/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build-fi/test/data/dfs/data/data2/current'}

    [junit] 2009-08-22 17:25:55,612 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 44254
    [junit] 2009-08-22 17:25:55,612 INFO  datanode.DataNode (DataNode.java:shutdown(643))
- Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-08-22 17:25:55,736 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67))
- Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-08-22 17:25:55,736 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(884))
- Number of transactions: 5 Total time for transactions(ms): 0Number of transactions batched
in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 36 34 
    [junit] 2009-08-22 17:25:55,736 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2077))
- ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException:
sleep interrupted
    [junit] 2009-08-22 17:25:55,746 INFO  ipc.Server (Server.java:stop(1103)) - Stopping server
on 42930
    [junit] 2009-08-22 17:25:55,746 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
0 on 42930: exiting
    [junit] 2009-08-22 17:25:55,746 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
1 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
2 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
7 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
3 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
6 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
5 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
4 on 42930: exiting
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
8 on 42930: exiting
    [junit] 2009-08-22 17:25:55,748 INFO  ipc.Server (Server.java:run(539)) - Stopping IPC
Server Responder
    [junit] 2009-08-22 17:25:55,747 INFO  ipc.Server (Server.java:run(997)) - IPC Server handler
9 on 42930: exiting
    [junit] 2009-08-22 17:25:55,748 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 42930
    [junit] Tests run: 7, Failures: 0, Errors: 0, Time elapsed: 79.161 sec

checkfailure:

BUILD FAILED
/home/hudson/hudson-slave/workspace/Hadoop-Hdfs-trunk/trunk/build.xml:725: Tests failed!

Total time: 68 minutes 15 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...

