hadoop-common-dev mailing list archives

From Apache Hudson Server <hud...@hudson.zones.apache.org>
Subject Build failed in Hudson: Hadoop-trunk #803
Date Fri, 10 Apr 2009 15:57:35 GMT
See http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/803/changes

Changes:

[szetszwo] Fix CHANGES.txt.

------------------------------------------
[...truncated 350699 lines...]
    [junit] 2009-04-10 16:09:39,473 INFO  datanode.DataNode (DataNode.java:startDataNode(317))
- Opened info server at 56424
    [junit] 2009-04-10 16:09:39,473 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74))
- Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-10 16:09:39,475 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty
bound to port 52708
    [junit] 2009-04-10 16:09:39,475 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-10 16:09:39,544 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:52708
    [junit] 2009-04-10 16:09:39,544 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot
initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-10 16:09:39,546 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58))
- Initializing RPC Metrics with hostName=DataNode, port=49666
    [junit] 2009-04-10 16:09:39,546 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder:
starting
    [junit] 2009-04-10 16:09:39,547 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler
0 on 49666: starting
    [junit] 2009-04-10 16:09:39,547 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener
on 49666: starting
    [junit] 2009-04-10 16:09:39,547 INFO  datanode.DataNode (DataNode.java:startDataNode(396))
- dnRegistration = DatanodeRegistration(vesta.apache.org:56424, storageID=, infoPort=52708,
ipcPort=49666)
    [junit] 2009-04-10 16:09:39,547 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler
2 on 49666: starting
    [junit] 2009-04-10 16:09:39,549 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler
1 on 49666: starting
    [junit] 2009-04-10 16:09:39,549 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2077))
- BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:56424 storage DS-291053678-67.195.138.9-56424-1239379779548
    [junit] 2009-04-10 16:09:39,550 INFO  net.NetworkTopology (NetworkTopology.java:add(328))
- Adding a new node: /default-rack/127.0.0.1:56424
    [junit] 2009-04-10 16:09:39,552 INFO  datanode.DataNode (DataNode.java:register(554))
- New storage id DS-291053678-67.195.138.9-56424-1239379779548 is assigned to data-node 127.0.0.1:56424
    [junit] 2009-04-10 16:09:39,552 INFO  datanode.DataNode (DataNode.java:run(1196)) - DatanodeRegistration(127.0.0.1:56424,
storageID=DS-291053678-67.195.138.9-56424-1239379779548, infoPort=52708, ipcPort=49666)In
DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}

    [junit] Starting DataNode 1 with dfs.data.dir: http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4

    [junit] 2009-04-10 16:09:39,554 INFO  datanode.DataNode (DataNode.java:offerService(696))
- using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-10 16:09:39,562 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3
 is not formatted.
    [junit] 2009-04-10 16:09:39,563 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124))
- Formatting ...
    [junit] 2009-04-10 16:09:39,567 INFO  common.Storage (DataStorage.java:recoverTransitionRead(123))
- Storage directory http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data4
 is not formatted.
    [junit] 2009-04-10 16:09:39,567 INFO  common.Storage (DataStorage.java:recoverTransitionRead(124))
- Formatting ...
    [junit] 2009-04-10 16:09:39,590 INFO  datanode.DataNode (DataNode.java:offerService(778))
- BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-10 16:09:39,591 INFO  datanode.DataNode (DataNode.java:offerService(803))
- Starting Periodic block scanner.
    [junit] 2009-04-10 16:09:39,602 INFO  datanode.DataNode (FSDataset.java:registerMBean(1414))
- Registered FSDatasetStatusMBean
    [junit] 2009-04-10 16:09:39,603 INFO  datanode.DataNode (DataNode.java:startDataNode(317))
- Opened info server at 37771
    [junit] 2009-04-10 16:09:39,604 INFO  datanode.DataNode (DataXceiverServer.java:<init>(74))
- Balancing bandwith is 1048576 bytes/s
    [junit] 2009-04-10 16:09:39,606 INFO  http.HttpServer (HttpServer.java:start(454)) - Jetty
bound to port 50434
    [junit] 2009-04-10 16:09:39,607 INFO  mortbay.log (?:invoke0(?)) - jetty-6.1.14
    [junit] 2009-04-10 16:09:39,675 INFO  mortbay.log (?:invoke0(?)) - Started SelectChannelConnector@localhost:50434
    [junit] 2009-04-10 16:09:39,676 INFO  jvm.JvmMetrics (JvmMetrics.java:init(66)) - Cannot
initialize JVM Metrics with processName=DataNode, sessionId=null - already initialized
    [junit] 2009-04-10 16:09:39,677 INFO  metrics.RpcMetrics (RpcMetrics.java:<init>(58))
- Initializing RPC Metrics with hostName=DataNode, port=52932
    [junit] 2009-04-10 16:09:39,678 INFO  ipc.Server (Server.java:run(471)) - IPC Server Responder:
starting
    [junit] 2009-04-10 16:09:39,679 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler
2 on 52932: starting
    [junit] 2009-04-10 16:09:39,679 INFO  datanode.DataNode (DataNode.java:startDataNode(396))
- dnRegistration = DatanodeRegistration(vesta.apache.org:37771, storageID=, infoPort=50434,
ipcPort=52932)
    [junit] 2009-04-10 16:09:39,678 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler
1 on 52932: starting
    [junit] 2009-04-10 16:09:39,678 INFO  ipc.Server (Server.java:run(934)) - IPC Server handler
0 on 52932: starting
    [junit] 2009-04-10 16:09:39,678 INFO  ipc.Server (Server.java:run(313)) - IPC Server listener
on 52932: starting
    [junit] 2009-04-10 16:09:39,682 INFO  hdfs.StateChange (FSNamesystem.java:registerDatanode(2077))
- BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:37771 storage DS-106845047-67.195.138.9-37771-1239379779680
    [junit] 2009-04-10 16:09:39,682 INFO  net.NetworkTopology (NetworkTopology.java:add(328))
- Adding a new node: /default-rack/127.0.0.1:37771
    [junit] 2009-04-10 16:09:39,685 INFO  datanode.DataNode (DataNode.java:register(554))
- New storage id DS-106845047-67.195.138.9-37771-1239379779680 is assigned to data-node 127.0.0.1:37771
    [junit] 2009-04-10 16:09:39,685 INFO  datanode.DataNode (DataNode.java:run(1196)) - DatanodeRegistration(127.0.0.1:37771,
storageID=DS-106845047-67.195.138.9-37771-1239379779680, infoPort=50434, ipcPort=52932)In
DataNode.run, data = FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}

    [junit] 2009-04-10 16:09:39,690 INFO  datanode.DataNode (DataNode.java:offerService(696))
- using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
    [junit] 2009-04-10 16:09:39,721 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471))
- current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);

    [junit] 2009-04-10 16:09:39,722 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471))
- current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);

    [junit] 2009-04-10 16:09:39,726 INFO  datanode.DataNode (DataNode.java:offerService(778))
- BlockReport of 0 blocks got processed in 1 msecs
    [junit] 2009-04-10 16:09:39,726 INFO  datanode.DataNode (DataNode.java:offerService(803))
- Starting Periodic block scanner.
    [junit] 2009-04-10 16:09:39,740 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471))
- current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);

    [junit] 2009-04-10 16:09:39,741 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(110))
- ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/test	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2009-04-10 16:09:39,743 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1474))
- BLOCK* NameSystem.allocateBlock: /test. blk_-1167952298056785757_1001
    [junit] 2009-04-10 16:09:39,745 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228))
- Receiving block blk_-1167952298056785757_1001 src: /127.0.0.1:37695 dest: /127.0.0.1:37771
    [junit] 2009-04-10 16:09:39,747 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228))
- Receiving block blk_-1167952298056785757_1001 src: /127.0.0.1:56688 dest: /127.0.0.1:56424
    [junit] 2009-04-10 16:09:39,749 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805))
- src: /127.0.0.1:56688, dest: /127.0.0.1:56424, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1200687489,
offset: 0, srvID: DS-291053678-67.195.138.9-56424-1239379779548, blockid: blk_-1167952298056785757_1001
    [junit] 2009-04-10 16:09:39,750 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079))
- BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56424 is added to blk_-1167952298056785757_1001
size 4096
    [junit] 2009-04-10 16:09:39,750 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829))
- PacketResponder 0 for block blk_-1167952298056785757_1001 terminating
    [junit] 2009-04-10 16:09:39,751 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079))
- BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:37771 is added to blk_-1167952298056785757_1001
size 4096
    [junit] 2009-04-10 16:09:39,750 INFO  DataNode.clienttrace (BlockReceiver.java:run(929))
- src: /127.0.0.1:37695, dest: /127.0.0.1:37771, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1200687489,
offset: 0, srvID: DS-106845047-67.195.138.9-37771-1239379779680, blockid: blk_-1167952298056785757_1001
    [junit] 2009-04-10 16:09:39,752 INFO  datanode.DataNode (BlockReceiver.java:run(993))
- PacketResponder 1 for block blk_-1167952298056785757_1001 terminating
    [junit] 2009-04-10 16:09:39,753 INFO  hdfs.StateChange (FSNamesystem.java:allocateBlock(1474))
- BLOCK* NameSystem.allocateBlock: /test. blk_-7986410566441370361_1001
    [junit] 2009-04-10 16:09:39,754 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228))
- Receiving block blk_-7986410566441370361_1001 src: /127.0.0.1:37697 dest: /127.0.0.1:37771
    [junit] 2009-04-10 16:09:39,755 INFO  datanode.DataNode (DataXceiver.java:writeBlock(228))
- Receiving block blk_-7986410566441370361_1001 src: /127.0.0.1:56690 dest: /127.0.0.1:56424
    [junit] 2009-04-10 16:09:39,757 INFO  DataNode.clienttrace (BlockReceiver.java:lastDataNodeRun(805))
- src: /127.0.0.1:56690, dest: /127.0.0.1:56424, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1200687489,
offset: 0, srvID: DS-291053678-67.195.138.9-56424-1239379779548, blockid: blk_-7986410566441370361_1001
    [junit] 2009-04-10 16:09:39,757 INFO  datanode.DataNode (BlockReceiver.java:lastDataNodeRun(829))
- PacketResponder 0 for block blk_-7986410566441370361_1001 terminating
    [junit] 2009-04-10 16:09:39,758 INFO  DataNode.clienttrace (BlockReceiver.java:run(929))
- src: /127.0.0.1:37697, dest: /127.0.0.1:37771, bytes: 4096, op: HDFS_WRITE, cliID: DFSClient_1200687489,
offset: 0, srvID: DS-106845047-67.195.138.9-37771-1239379779680, blockid: blk_-7986410566441370361_1001
    [junit] 2009-04-10 16:09:39,758 INFO  datanode.DataNode (BlockReceiver.java:run(993))
- PacketResponder 1 for block blk_-7986410566441370361_1001 terminating
    [junit] 2009-04-10 16:09:39,759 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079))
- BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:56424 is added to blk_-7986410566441370361_1001
size 4096
    [junit] 2009-04-10 16:09:39,760 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471))
- current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);

    [junit] 2009-04-10 16:09:39,761 INFO  hdfs.StateChange (FSNamesystem.java:addStoredBlock(3079))
- BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:37771 is added to blk_-7986410566441370361_1001
size 4096
    [junit] 2009-04-10 16:09:39,761 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471))
- current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);

    [junit] init: server=localhost;port=;service=DataNode;localVMUrl=null
    [junit] 
    [junit] Domains:
    [junit] 	Domain = JMImplementation
    [junit] 	Domain = com.sun.management
    [junit] 	Domain = hadoop
    [junit] 	Domain = java.lang
    [junit] 	Domain = java.util.logging
    [junit] 
    [junit] MBeanServer default domain = DefaultDomain
    [junit] 
    [junit] MBean count = 26
    [junit] 
    [junit] Query MBeanServer MBeans:
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId1582077997
    [junit] hadoop services: hadoop:service=DataNode,name=DataNodeActivity-UndefinedStorageId2033102362
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2079230896
    [junit] hadoop services: hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId916916940
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort49666
    [junit] hadoop services: hadoop:service=DataNode,name=RpcActivityForPort52932
    [junit] Info: key = bytes_written; val = 0
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 1
    [junit] 2009-04-10 16:09:39,864 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server
on 52932
    [junit] 2009-04-10 16:09:39,865 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
1 on 52932: exiting
    [junit] 2009-04-10 16:09:39,865 INFO  datanode.DataNode (DataNode.java:shutdown(604))
- Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-10 16:09:39,865 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC
Server Responder
    [junit] 2009-04-10 16:09:39,865 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
0 on 52932: exiting
    [junit] 2009-04-10 16:09:39,865 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 52932
    [junit] 2009-04-10 16:09:39,865 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
2 on 52932: exiting
    [junit] 2009-04-10 16:09:39,865 WARN  datanode.DataNode (DataXceiverServer.java:run(137))
- DatanodeRegistration(127.0.0.1:37771, storageID=DS-106845047-67.195.138.9-37771-1239379779680,
infoPort=50434, ipcPort=52932):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-10 16:09:40,727 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(603))
- Exiting DataBlockScanner thread.
    [junit] 2009-04-10 16:09:40,865 INFO  datanode.DataNode (DataNode.java:shutdown(604))
- Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-10 16:09:40,866 INFO  datanode.DataNode (DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:37771,
storageID=DS-106845047-67.195.138.9-37771-1239379779680, infoPort=50434, ipcPort=52932):Finishing
DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data4/current'}

    [junit] 2009-04-10 16:09:40,867 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server
on 52932
    [junit] 2009-04-10 16:09:40,867 INFO  datanode.DataNode (DataNode.java:shutdown(604))
- Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 2009-04-10 16:09:40,968 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server
on 49666
    [junit] 2009-04-10 16:09:40,968 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
0 on 49666: exiting
    [junit] 2009-04-10 16:09:40,969 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 49666
    [junit] 2009-04-10 16:09:40,969 INFO  datanode.DataNode (DataNode.java:shutdown(604))
- Waiting for threadgroup to exit, active threads is 1
    [junit] 2009-04-10 16:09:40,969 WARN  datanode.DataNode (DataXceiverServer.java:run(137))
- DatanodeRegistration(127.0.0.1:56424, storageID=DS-291053678-67.195.138.9-56424-1239379779548,
infoPort=52708, ipcPort=49666):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2009-04-10 16:09:40,969 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
1 on 49666: exiting
    [junit] 2009-04-10 16:09:40,969 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
2 on 49666: exiting
    [junit] 2009-04-10 16:09:40,969 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC
Server Responder
    [junit] 2009-04-10 16:09:41,602 INFO  datanode.DataBlockScanner (DataBlockScanner.java:run(603))
- Exiting DataBlockScanner thread.
    [junit] 2009-04-10 16:09:41,969 INFO  datanode.DataNode (DataNode.java:shutdown(604))
- Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-10 16:09:41,970 INFO  datanode.DataNode (DataNode.java:run(1216)) - DatanodeRegistration(127.0.0.1:56424,
storageID=DS-291053678-67.195.138.9-56424-1239379779548, infoPort=52708, ipcPort=49666):Finishing
DataNode in: FSDataset{dirpath='http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/data/data2/current'}

    [junit] 2009-04-10 16:09:41,970 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server
on 49666
    [junit] 2009-04-10 16:09:41,970 INFO  datanode.DataNode (DataNode.java:shutdown(604))
- Waiting for threadgroup to exit, active threads is 0
    [junit] 2009-04-10 16:09:42,072 WARN  namenode.FSNamesystem (FSNamesystem.java:run(2352))
- ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException:
sleep interrupted
    [junit] 2009-04-10 16:09:42,072 INFO  namenode.FSNamesystem (FSEditLog.java:printStatistics(1082))
- Number of transactions: 3 Total time for transactions(ms): 1Number of transactions batched
in Syncs: 0 Number of syncs: 2 SyncTimes(ms): 17 1 
    [junit] 2009-04-10 16:09:42,072 WARN  namenode.DecommissionManager (DecommissionManager.java:run(67))
- Monitor interrupted: java.lang.InterruptedException: sleep interrupted
    [junit] 2009-04-10 16:09:42,073 INFO  namenode.FSNamesystem (FSEditLog.java:processIOError(471))
- current list of storage dirs:http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build/test/data/dfs/name1(IMAGE_AND_EDITS);/home/hudson/hudson-slave/workspace/Hadoop-trunk/trunk/build/test/data/dfs/name2(IMAGE_AND_EDITS);

    [junit] 2009-04-10 16:09:42,073 INFO  ipc.Server (Server.java:stop(1098)) - Stopping server
on 57775
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
0 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
6 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
1 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
3 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(536)) - Stopping IPC
Server Responder
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
2 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
4 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(352)) - Stopping IPC
Server listener on 57775
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
9 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
5 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
8 on 57775: exiting
    [junit] 2009-04-10 16:09:42,074 INFO  ipc.Server (Server.java:run(992)) - IPC Server handler
7 on 57775: exiting
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 6.052 sec
    [junit] Running org.apache.hadoop.util.TestCyclicIteration
    [junit] 
    [junit] 
    [junit] integers=[]
    [junit] map={}
    [junit] start=-1, iteration=[]
    [junit] 
    [junit] 
    [junit] integers=[0]
    [junit] map={0=0}
    [junit] start=-1, iteration=[0]
    [junit] start=0, iteration=[0]
    [junit] start=1, iteration=[0]
    [junit] 
    [junit] 
    [junit] integers=[0, 2]
    [junit] map={0=0, 2=2}
    [junit] start=-1, iteration=[0, 2]
    [junit] start=0, iteration=[2, 0]
    [junit] start=1, iteration=[2, 0]
    [junit] start=2, iteration=[0, 2]
    [junit] start=3, iteration=[0, 2]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4]
    [junit] map={0=0, 2=2, 4=4}
    [junit] start=-1, iteration=[0, 2, 4]
    [junit] start=0, iteration=[2, 4, 0]
    [junit] start=1, iteration=[2, 4, 0]
    [junit] start=2, iteration=[4, 0, 2]
    [junit] start=3, iteration=[4, 0, 2]
    [junit] start=4, iteration=[0, 2, 4]
    [junit] start=5, iteration=[0, 2, 4]
    [junit] 
    [junit] 
    [junit] integers=[0, 2, 4, 6]
    [junit] map={0=0, 2=2, 4=4, 6=6}
    [junit] start=-1, iteration=[0, 2, 4, 6]
    [junit] start=0, iteration=[2, 4, 6, 0]
    [junit] start=1, iteration=[2, 4, 6, 0]
    [junit] start=2, iteration=[4, 6, 0, 2]
    [junit] start=3, iteration=[4, 6, 0, 2]
    [junit] start=4, iteration=[6, 0, 2, 4]
    [junit] start=5, iteration=[6, 0, 2, 4]
    [junit] start=6, iteration=[0, 2, 4, 6]
    [junit] start=7, iteration=[0, 2, 4, 6]
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.094 sec
    [junit] Running org.apache.hadoop.util.TestGenericsUtil
    [junit] 2009-04-10 16:09:43,048 WARN  conf.Configuration (Configuration.java:<clinit>(176))
- DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated.
Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml,
mapred-default.xml and hdfs-default.xml respectively
    [junit] 2009-04-10 16:09:43,061 WARN  util.GenericOptionsParser (GenericOptionsParser.java:parseGeneralOptions(377))
- options parsing failed: Missing argument for option:jt
    [junit] usage: general options are:
    [junit]  -archives <paths>             comma separated archives to be unarchived
    [junit]                                on the compute machines.
    [junit]  -conf <configuration file>    specify an application configuration file
    [junit]  -D <property=value>           use value for given property
    [junit]  -files <paths>                comma separated files to be copied to the
    [junit]                                map reduce cluster
    [junit]  -fs <local|namenode:port>     specify a namenode
    [junit]  -jt <local|jobtracker:port>   specify a job tracker
    [junit]  -libjars <paths>              comma separated jar files to include in the
    [junit]                                classpath.
    [junit] Tests run: 6, Failures: 0, Errors: 0, Time elapsed: 0.187 sec
    [junit] Running org.apache.hadoop.util.TestIndexedSort
    [junit] sortRandom seed: -7027178055227047295(org.apache.hadoop.util.QuickSort)
    [junit] testSorted seed: 4257467073421555077(org.apache.hadoop.util.QuickSort)
    [junit] testAllEqual setting min/max at 410/374(org.apache.hadoop.util.QuickSort)
    [junit] sortWritable seed: -3890651608684584113(org.apache.hadoop.util.QuickSort)
    [junit] QuickSort degen cmp/swp: 23252/3713(org.apache.hadoop.util.QuickSort)
    [junit] sortRandom seed: 2485120508806040293(org.apache.hadoop.util.HeapSort)
    [junit] testSorted seed: 1242076885210625100(org.apache.hadoop.util.HeapSort)
    [junit] testAllEqual setting min/max at 353/30(org.apache.hadoop.util.HeapSort)
    [junit] sortWritable seed: -5121806697500483383(org.apache.hadoop.util.HeapSort)
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.015 sec
    [junit] Running org.apache.hadoop.util.TestProcfsBasedProcessTree
    [junit] 2009-04-10 16:09:44,899 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(54))
- setsid exited with exit code 0
    [junit] 2009-04-10 16:09:45,404 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(141))
- Root process pid: 6082
    [junit] 2009-04-10 16:09:45,448 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(146))
- ProcessTree: [ 6082 6084 6085 ]
    [junit] 2009-04-10 16:09:51,980 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(159))
- ProcessTree: [ 6099 6082 6097 6086 6101 6084 6088 6095 6093 ]
    [junit] 2009-04-10 16:09:51,991 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(64))
- Shell Command exit with a non-zero exit code. This is expected as we are killing the subprocesses
of the task intentionally. org.apache.hadoop.util.Shell$ExitCodeException: 
    [junit] 2009-04-10 16:09:51,991 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:run(70))
- Exit code: 143
    [junit] 2009-04-10 16:09:51,991 INFO  util.ProcessTree (ProcessTree.java:destroyProcessGroup(160))
- Killing all processes in the process group 6082 with SIGTERM. Exit code 0
    [junit] 2009-04-10 16:09:52,070 INFO  util.TestProcfsBasedProcessTree (TestProcfsBasedProcessTree.java:testProcessTree(173))
- RogueTaskThread successfully joined.
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 7.262 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] 2009-04-10 16:09:52,977 WARN  conf.Configuration (Configuration.java:<clinit>(176))
- DEPRECATED: hadoop-site.xml found in the classpath. Usage of hadoop-site.xml is deprecated.
Instead use core-site.xml, mapred-site.xml and hdfs-site.xml to override properties of core-default.xml,
mapred-default.xml and hdfs-default.xml respectively
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 0.574 sec
    [junit] Running org.apache.hadoop.util.TestShell
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 0.184 sec
    [junit] Running org.apache.hadoop.util.TestStringUtils
    [junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.092 sec

BUILD FAILED
http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/ws/trunk/build.xml:770: Tests failed!

Total time: 162 minutes 11 seconds
Publishing Javadoc
Recording test results
Recording fingerprints
Publishing Clover coverage report...

