hadoop-mapreduce-dev mailing list archives

From Apache Hudson Server <hud...@hudson.apache.org>
Subject Build failed in Hudson: Hadoop-Mapreduce-trunk-Commit #548
Date Fri, 19 Nov 2010 23:35:36 GMT
See <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/548/changes>

Changes:

[cos] MAPREDUCE-2195. New property for local conf directory in system-test-mapreduce.xml file.
Contributed by Konstantin Boudnik.

------------------------------------------
[...truncated 35668 lines...]
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,233 INFO  reduce.ShuffleScheduler
(ShuffleScheduler.java:getMapsForHost(333)) - assigned 1 of 1 to localhost:38741 to fetcher#3
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,520 INFO  reduce.Fetcher
(Fetcher.java:copyFromHost(217)) - for url=38741/mapOutput?job=job_20101119233228357_0002&reduce=0&map=attempt_20101119233228357_0002_m_000000_0
sent hash and receievd reply
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,522 INFO  reduce.Fetcher
(Fetcher.java:copyMapOutput(314)) - fetcher#3 about to shuffle output of map attempt_20101119233228357_0002_m_000000_0
decomp: 107 len: 111 to MEMORY
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,525 INFO  reduce.Fetcher
(Fetcher.java:shuffleToMemory(479)) - Read 107 bytes from map-output for attempt_20101119233228357_0002_m_000000_0
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,525 INFO  reduce.MergeManager
(MergeManager.java:closeInMemoryFile(277)) - closeInMemoryFile -> map-output of size: 107,
inMemoryMapOutputs.size() -> 1
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,526 INFO  reduce.ShuffleScheduler
(ShuffleScheduler.java:freeHost(345)) - localhost:38741 freed by fetcher#3 in 293s
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,529 INFO  reduce.MergeManager
(MergeManager.java:finalMerge(629)) - finalMerge called with 1 in-memory map-outputs and 0
on-disk map-outputs
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,554 INFO  mapred.Merger
(Merger.java:merge(549)) - Merging 1 sorted segments
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,555 INFO  mapred.Merger
(Merger.java:merge(648)) - Down to the last merge-pass, with 1 segments left of total size:
103 bytes
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,564 INFO  reduce.MergeManager
(MergeManager.java:finalMerge(701)) - Merged 1 segments, 107 bytes to disk to satisfy reduce
memory limit
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,565 INFO  reduce.MergeManager
(MergeManager.java:finalMerge(727)) - Merging 1 files, 111 bytes from disk
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,566 INFO  reduce.MergeManager
(MergeManager.java:finalMerge(742)) - Merging 0 segments, 0 bytes from memory into reduce
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,566 INFO  mapred.Merger
(Merger.java:merge(549)) - Merging 1 sorted segments
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,569 INFO  mapred.Merger
(Merger.java:merge(648)) - Down to the last merge-pass, with 1 segments left of total size:
103 bytes
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,591 WARN  conf.Configuration
(Configuration.java:set(582)) - mapred.skip.on is deprecated. Instead, use mapreduce.job.skiprecords
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:31,607 INFO  mapred.Task
(Task.java:done(848)) - Task:attempt_20101119233228357_0002_r_000000_0 is done. And is in
the process of commiting
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:33,650 INFO  mapred.Task
(Task.java:commit(1009)) - Task attempt_20101119233228357_0002_r_000000_0 is allowed to commit
now
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:33,654 INFO  output.FileOutputCommitter
(FileOutputCommitter.java:commitTask(173)) - Saved output of task 'attempt_20101119233228357_0002_r_000000_0'
to <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/data/out>
    [junit] attempt_20101119233228357_0002_r_000000_0: 2010-11-19 23:33:33,682 INFO  mapred.Task
(Task.java:sendDone(968)) - Task 'attempt_20101119233228357_0002_r_000000_0' done.
    [junit] 2010-11-19 23:33:37,144 INFO  mapred.TaskTracker (TaskTracker.java:reportProgress(2656))
- attempt_20101119233228357_0002_m_000001_0 0.0% 
    [junit] 2010-11-19 23:33:37,238 INFO  mapred.TaskTracker (TaskTracker.java:reportProgress(2656))
- attempt_20101119233228357_0002_m_000001_0 0.0% cleanup > map
    [junit] 2010-11-19 23:33:37,240 INFO  mapred.TaskTracker (TaskTracker.java:reportDone(2737))
- Task attempt_20101119233228357_0002_m_000001_0 is done.
    [junit] 2010-11-19 23:33:37,240 INFO  mapred.TaskTracker (TaskTracker.java:reportDone(2738))
- reported output size for attempt_20101119233228357_0002_m_000001_0  was -1
    [junit] 2010-11-19 23:33:37,241 INFO  mapred.TaskTracker (TaskTracker.java:addFreeSlots(2223))
- addFreeSlot : current free slots : 2
    [junit] 2010-11-19 23:33:37,432 WARN  util.ProcessTree (ProcessTree.java:sendSignal(134))
- Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill: No such
process
    [junit] 
    [junit] 2010-11-19 23:33:37,432 INFO  util.ProcessTree (ProcessTree.java:sendSignal(137))
- Sending signal to all members of process group -16255: SIGTERM. Exit code 1
    [junit] 2010-11-19 23:33:37,894 INFO  mapreduce.Job (Job.java:monitorAndPrintJob(1099))
-  map 100% reduce 100%
    [junit] 2010-11-19 23:33:38,879 INFO  mapred.JvmManager (JvmManager.java:runChild(472))
- JVM : jvm_20101119233228357_0002_r_-413626645 exited with exit code 0. Number of tasks it
ran: 1
    [junit] 2010-11-19 23:33:38,986 INFO  mapred.JobInProgress (JobInProgress.java:completedTask(2636))
- Task 'attempt_20101119233228357_0002_m_000001_0' has completed task_20101119233228357_0002_m_000001
successfully.
    [junit] 2010-11-19 23:33:38,988 INFO  mapred.JobInProgress (JobInProgress.java:jobComplete(2837))
- Job job_20101119233228357_0002 has completed successfully.
    [junit] 2010-11-19 23:33:38,988 INFO  mapred.JobInProgress$JobSummary (JobInProgress.java:logJobSummary(3611))
- jobId=job_20101119233228357_0002,submitTime=1290209602696,launchTime=1290209603001,firstMapTaskLaunchTime=1290209606942,firstReduceTaskLaunchTime=1290209609958,firstJobSetupTaskLaunchTime=1290209603923,firstJobCleanupTaskLaunchTime=1290209615978,finishTime=1290209618988,numMaps=1,numSlotsPerMap=1,numReduces=1,numSlotsPerReduce=1,user=hudson,queue=default,status=SUCCEEDED,mapSlotSeconds=3,reduceSlotsSeconds=3,clusterMapCapacity=4,clusterReduceCapacity=4
    [junit] 2010-11-19 23:33:38,994 INFO  jobhistory.JobHistory (JobHistory.java:moveToDoneNow(354))
- Moving <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/logs/history/job_20101119233228357_0002_hudson>
to <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/logs/history/done/job_20101119233228357_0002_hudson>
    [junit] 2010-11-19 23:33:38,995 INFO  mapred.JobTracker (JobTracker.java:removeTaskEntry(1863))
- Removing task 'attempt_20101119233228357_0002_m_000000_0'
    [junit] 2010-11-19 23:33:38,995 INFO  mapred.JobTracker (JobTracker.java:removeTaskEntry(1863))
- Removing task 'attempt_20101119233228357_0002_m_000001_0'
    [junit] 2010-11-19 23:33:38,996 INFO  mapred.JobTracker (JobTracker.java:removeTaskEntry(1863))
- Removing task 'attempt_20101119233228357_0002_m_000002_0'
    [junit] 2010-11-19 23:33:38,996 INFO  mapred.JobTracker (JobTracker.java:removeTaskEntry(1863))
- Removing task 'attempt_20101119233228357_0002_r_000000_0'
    [junit] 2010-11-19 23:33:38,997 INFO  mapred.TaskTracker (TaskTracker.java:purgeJob(1972))
- Received 'KillJobAction' for job: job_20101119233228357_0002
    [junit] 2010-11-19 23:33:39,018 INFO  mapred.IndexCache (IndexCache.java:removeMap(141))
- Map ID attempt_20101119233228357_0002_m_000001_0 not found in cache
    [junit] 2010-11-19 23:33:39,020 INFO  mapred.UserLogCleaner (UserLogCleaner.java:markJobLogsForDeletion(174))
- Adding job_20101119233228357_0002 for user-log deletion with retainTimeStamp:1290296019020
    [junit] 2010-11-19 23:33:39,033 INFO  mapred.TaskTracker (TaskTracker.java:purgeJob(1972))
- Received 'KillJobAction' for job: job_20101119233228357_0002
    [junit] 2010-11-19 23:33:39,033 WARN  mapred.TaskTracker (TaskTracker.java:purgeJob(1979))
- Unknown job job_20101119233228357_0002 being deleted.
    [junit] 2010-11-19 23:33:39,045 INFO  jobhistory.JobHistory (JobHistory.java:moveToDoneNow(354))
- Moving <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/logs/history/job_20101119233228357_0002_conf.xml>
to <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/logs/history/done/job_20101119233228357_0002_conf.xml>
    [junit] 2010-11-19 23:33:39,097 INFO  mapred.JobInProgress (JobInProgress.java:cleanupLocalizedJobConf(3652))
- Deleting localized job conf at <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/logs/job_20101119233228357_0002_conf.xml>
    [junit] 2010-11-19 23:33:39,901 INFO  mapreduce.Job (Job.java:printTaskEvents(1200)) -
Task Id : attempt_20101119233228357_0002_m_000001_0, Status : SUCCEEDED
    [junit] attempt_20101119233228357_0002_m_000001_0: 2010-11-19 23:33:37,004 INFO  jvm.JvmMetrics
(JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=MAP, sessionId=
    [junit] attempt_20101119233228357_0002_m_000001_0: 2010-11-19 23:33:37,007 WARN  conf.Configuration
(Configuration.java:handleDeprecation(313)) - user.name is deprecated. Instead, use mapreduce.job.user.name
    [junit] attempt_20101119233228357_0002_m_000001_0: 2010-11-19 23:33:37,055 INFO  util.ProcessTree
(ProcessTree.java:isSetsidSupported(65)) - setsid exited with exit code 0
    [junit] attempt_20101119233228357_0002_m_000001_0: 2010-11-19 23:33:37,080 INFO  mapred.Task
(Task.java:initialize(523)) -  Using ResourceCalculatorPlugin : org.apache.hadoop.mapreduce.util.LinuxResourceCalculatorPlugin@1589e56
    [junit] attempt_20101119233228357_0002_m_000001_0: 2010-11-19 23:33:37,145 INFO  mapred.Task
(Task.java:runJobCleanupTask(1057)) - Cleaning up job
    [junit] attempt_20101119233228357_0002_m_000001_0: 2010-11-19 23:33:37,145 INFO  mapred.Task
(Task.java:runJobCleanupTask(1069)) - Committing job
    [junit] attempt_20101119233228357_0002_m_000001_0: 2010-11-19 23:33:37,168 INFO  mapred.Task
(Task.java:done(848)) - Task:attempt_20101119233228357_0002_m_000001_0 is done. And is in
the process of commiting
    [junit] attempt_20101119233228357_0002_m_000001_0: 2010-11-19 23:33:37,241 INFO  mapred.Task
(Task.java:sendDone(968)) - Task 'attempt_20101119233228357_0002_m_000001_0' done.
    [junit] 2010-11-19 23:33:39,906 INFO  mapreduce.Job (Job.java:monitorAndPrintJob(1108))
- Job complete: job_20101119233228357_0002
    [junit] 2010-11-19 23:33:39,907 INFO  mapred.AuditLogger (AuditLogger.java:logSuccess(84))
- USER=hudson	IP=127.0.0.1	OPERATION=VIEW_JOB_COUNTERS	TARGET=job_20101119233228357_0002 in
queue default	RESULT=SUCCESS
    [junit] 2010-11-19 23:33:39,911 INFO  mapreduce.Job (Job.java:monitorAndPrintJob(1111))
- Counters: 33
    [junit] 	FileSystemCounters
    [junit] 		FILE_BYTES_READ=518
    [junit] 		FILE_BYTES_WRITTEN=331
    [junit] 	Shuffle Errors
    [junit] 		BAD_ID=0
    [junit] 		CONNECTION=0
    [junit] 		IO_ERROR=0
    [junit] 		WRONG_LENGTH=0
    [junit] 		WRONG_MAP=0
    [junit] 		WRONG_REDUCE=0
    [junit] 	Job Counters 
    [junit] 		Total time spent by all maps waiting after reserving slots (ms)=0
    [junit] 		Total time spent by all reduces waiting after reserving slots (ms)=0
    [junit] 		Rack-local map tasks=1
    [junit] 		SLOTS_MILLIS_MAPS=3876
    [junit] 		SLOTS_MILLIS_REDUCES=3714
    [junit] 		Launched map tasks=1
    [junit] 		Launched reduce tasks=1
    [junit] 	Map-Reduce Framework
    [junit] 		Combine input records=13
    [junit] 		Combine output records=10
    [junit] 		CPU_MILLISECONDS=1250
    [junit] 		Failed Shuffles=0
    [junit] 		GC time elapsed (ms)=21
    [junit] 		Map input records=4
    [junit] 		Map output bytes=112
    [junit] 		Map output records=13
    [junit] 		Merged Map outputs=1
    [junit] 		PHYSICAL_MEMORY_BYTES=101396480
    [junit] 		Reduce input groups=10
    [junit] 		Reduce input records=10
    [junit] 		Reduce output records=10
    [junit] 		Reduce shuffle bytes=111
    [junit] 		Shuffled Maps =1
    [junit] 		Spilled Records=20
    [junit] 		SPLIT_RAW_BYTES=305
    [junit] 		VIRTUAL_MEMORY_BYTES=741572608
    [junit] a	1
    [junit] count	1
    [junit] file	1
    [junit] is	1
    [junit] more	1
    [junit] multi	1
    [junit] of	1
    [junit] test	4
    [junit] this	1
    [junit] word	1
    [junit] 
    [junit] 2010-11-19 23:33:39,918 INFO  util.AsyncDiskService (AsyncDiskService.java:shutdown(111))
- Shutting down all AsyncDiskService threads...
    [junit] 2010-11-19 23:33:39,919 INFO  util.AsyncDiskService (AsyncDiskService.java:awaitTermination(140))
- All AsyncDiskService threads are terminated.
    [junit] 2010-11-19 23:33:39,920 INFO  mapred.TaskTracker (TaskTracker.java:run(865)) -
Shutting down: Map-events fetcher for all reduce tasks on tracker_host0.foo.com:localhost/127.0.0.1:32872
    [junit] 2010-11-19 23:33:39,922 ERROR filecache.TrackerDistributedCacheManager (TrackerDistributedCacheManager.java:run(946))
- Exception in DistributedCache CleanupThread.
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.mapreduce.filecache.TrackerDistributedCacheManager$CleanupThread.run(TrackerDistributedCacheManager.java:943)
    [junit] 2010-11-19 23:33:42,455 INFO  mapred.JvmManager (JvmManager.java:runChild(472))
- JVM : jvm_20101119233228357_0002_m_1033664673 exited with exit code 0. Number of tasks it
ran: 1
    [junit] 2010-11-19 23:33:42,455 INFO  ipc.Server (Server.java:stop(1601)) - Stopping server
on 32872
    [junit] 2010-11-19 23:33:42,456 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 0 on 32872: exiting
    [junit] 2010-11-19 23:33:42,457 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 2 on 32872: exiting
    [junit] 2010-11-19 23:33:42,457 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 1 on 32872: exiting
    [junit] 2010-11-19 23:33:42,456 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 3 on 32872: exiting
    [junit] 2010-11-19 23:33:42,457 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC
Server listener on 32872
    [junit] 2010-11-19 23:33:42,458 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC
Server Responder
    [junit] 2010-11-19 23:33:42,458 INFO  mapred.TaskTracker (TaskTracker.java:shutdown(1256))
- Shutting down StatusHttpServer
    [junit] 2010-11-19 23:34:42,566 ERROR mapred.TaskTracker (TaskTracker.java:offerService(1584))
- Caught exception: java.io.IOException: Call to localhost/127.0.0.1:54783 failed on local
exception: java.nio.channels.ClosedByInterruptException
    [junit] 	at org.apache.hadoop.ipc.Client.wrapException(Client.java:1063)
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:1031)
    [junit] 	at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:198)
    [junit] 	at org.apache.hadoop.mapred.$Proxy1.heartbeat(Unknown Source)
    [junit] 	at org.apache.hadoop.mapred.TaskTracker.transmitHeartBeat(TaskTracker.java:1684)
    [junit] 	at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:1515)
    [junit] 	at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:2420)
    [junit] 	at org.apache.hadoop.mapred.MiniMRCluster$TaskTrackerRunner.run(MiniMRCluster.java:228)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] Caused by: java.nio.channels.ClosedByInterruptException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    [junit] 	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:341)
    [junit] 	at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:60)
    [junit] 	at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    [junit] 	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:151)
    [junit] 	at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:112)
    [junit] 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    [junit] 	at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    [junit] 	at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    [junit] 	at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:739)
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:1009)
    [junit] 	... 7 more
    [junit] 
    [junit] 2010-11-19 23:34:42,571 INFO  util.AsyncDiskService (AsyncDiskService.java:shutdown(111))
- Shutting down all AsyncDiskService threads...
    [junit] 2010-11-19 23:34:42,572 INFO  util.AsyncDiskService (AsyncDiskService.java:awaitTermination(140))
- All AsyncDiskService threads are terminated.
    [junit] 2010-11-19 23:34:42,575 INFO  util.AsyncDiskService (AsyncDiskService.java:shutdown(111))
- Shutting down all AsyncDiskService threads...
    [junit] 2010-11-19 23:34:42,576 INFO  util.AsyncDiskService (AsyncDiskService.java:awaitTermination(140))
- All AsyncDiskService threads are terminated.
    [junit] 2010-11-19 23:34:42,576 ERROR filecache.TrackerDistributedCacheManager (TrackerDistributedCacheManager.java:run(946))
- Exception in DistributedCache CleanupThread.
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.mapreduce.filecache.TrackerDistributedCacheManager$CleanupThread.run(TrackerDistributedCacheManager.java:943)
    [junit] 2010-11-19 23:34:42,576 INFO  mapred.TaskTracker (TaskTracker.java:run(865)) -
Shutting down: Map-events fetcher for all reduce tasks on tracker_host1.foo.com:localhost/127.0.0.1:47640
    [junit] 2010-11-19 23:34:42,577 INFO  ipc.Server (Server.java:stop(1601)) - Stopping server
on 47640
    [junit] 2010-11-19 23:34:42,577 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 0 on 47640: exiting
    [junit] 2010-11-19 23:34:42,577 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 3 on 47640: exiting
    [junit] 2010-11-19 23:34:42,577 INFO  mapred.TaskTracker (TaskTracker.java:shutdown(1256))
- Shutting down StatusHttpServer
    [junit] 2010-11-19 23:34:42,577 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 2 on 47640: exiting
    [junit] 2010-11-19 23:34:42,577 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC
Server Responder
    [junit] 2010-11-19 23:34:42,577 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 1 on 47640: exiting
    [junit] 2010-11-19 23:34:42,577 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC
Server listener on 47640
    [junit] 2010-11-19 23:34:42,580 ERROR mapred.TaskTracker (TaskTracker.java:offerService(1584))
- Caught exception: java.io.IOException: Jetty problem. Jetty didn't bind to a valid port
    [junit] 	at org.apache.hadoop.mapred.TaskTracker.checkJettyPort(TaskTracker.java:1389)
    [junit] 	at org.apache.hadoop.mapred.TaskTracker.offerService(TaskTracker.java:1562)
    [junit] 	at org.apache.hadoop.mapred.TaskTracker.run(TaskTracker.java:2420)
    [junit] 	at org.apache.hadoop.mapred.MiniMRCluster$TaskTrackerRunner.run(MiniMRCluster.java:228)
    [junit] 	at java.lang.Thread.run(Thread.java:619)
    [junit] 
    [junit] 2010-11-19 23:34:42,581 INFO  util.AsyncDiskService (AsyncDiskService.java:shutdown(111))
- Shutting down all AsyncDiskService threads...
    [junit] 2010-11-19 23:34:42,581 INFO  util.AsyncDiskService (AsyncDiskService.java:awaitTermination(140))
- All AsyncDiskService threads are terminated.
    [junit] 2010-11-19 23:34:42,581 INFO  mapred.JobTracker (JobTracker.java:close(1765))
- Stopping infoServer
    [junit] 2010-11-19 23:34:42,683 INFO  mapred.JobTracker (JobTracker.java:close(1773))
- Stopping interTrackerServer
    [junit] 2010-11-19 23:34:42,683 INFO  ipc.Server (Server.java:stop(1601)) - Stopping server
on 54783
    [junit] 2010-11-19 23:34:42,684 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 0 on 54783: exiting
    [junit] 2010-11-19 23:34:42,684 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC
Server listener on 54783
    [junit] 2010-11-19 23:34:42,684 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC
Server Responder
    [junit] 2010-11-19 23:34:42,684 INFO  mapred.JobTracker (JobTracker.java:offerService(1760))
- Stopped interTrackerServer
    [junit] 2010-11-19 23:34:42,684 INFO  mapred.JobTracker (JobTracker.java:stopExpireTrackersThread(1812))
- Stopping expireTrackers
    [junit] 2010-11-19 23:34:42,684 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 1 on 54783: exiting
    [junit] 2010-11-19 23:34:42,684 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 5 on 54783: exiting
    [junit] 2010-11-19 23:34:42,685 INFO  mapred.EagerTaskInitializationListener (EagerTaskInitializationListener.java:terminate(108))
- Stopping Job Init Manager thread
    [junit] 2010-11-19 23:34:42,685 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 4 on 54783: exiting
    [junit] 2010-11-19 23:34:42,685 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 6 on 54783: exiting
    [junit] 2010-11-19 23:34:42,684 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 9 on 54783: exiting
    [junit] 2010-11-19 23:34:42,685 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 3 on 54783: exiting
    [junit] 2010-11-19 23:34:42,685 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 8 on 54783: exiting
    [junit] 2010-11-19 23:34:42,684 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 7 on 54783: exiting
    [junit] 2010-11-19 23:34:42,684 INFO  ipc.Server (Server.java:run(1444)) - IPC Server
handler 2 on 54783: exiting
    [junit] 2010-11-19 23:34:42,685 INFO  mapred.EagerTaskInitializationListener (EagerTaskInitializationListener.java:run(61))
- JobInitManagerThread interrupted.
    [junit] 2010-11-19 23:34:42,686 INFO  mapred.EagerTaskInitializationListener (EagerTaskInitializationListener.java:run(65))
- Shutting down thread pool
    [junit] 2010-11-19 23:34:42,687 INFO  mapred.JobTracker (JobTracker.java:close(1783))
- Stopping expireLaunchingTasks
    [junit] 2010-11-19 23:34:42,687 INFO  jobhistory.JobHistory (JobHistory.java:shutDown(195))
- Interrupting History Cleaner
    [junit] 2010-11-19 23:34:42,687 INFO  jobhistory.JobHistory (JobHistory.java:run(544))
- History Cleaner thread exiting
    [junit] 2010-11-19 23:34:42,688 INFO  mapred.JobTracker (JobTracker.java:close(1806))
- stopped all jobtracker services
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 134.508 sec
    [junit] Running org.apache.hadoop.mapreduce.lib.input.TestFileInputFormat
    [junit] 2010-11-19 23:34:43,767 WARN  conf.Configuration (Configuration.java:set(582))
- fs.default.name is deprecated. Instead, use fs.defaultFS
    [junit] 2010-11-19 23:34:43,976 WARN  conf.Configuration (Configuration.java:set(582))
- fs.default.name is deprecated. Instead, use fs.defaultFS
    [junit] defaultfs.getUri() = s3://abc:xyz@hostname
    [junit] original = file:/foo
    [junit] results = [file:/foo]
    [junit] original = file:/bar
    [junit] results = [file:/bar]
    [junit] Tests run: 2, Failures: 0, Errors: 0, Time elapsed: 1.016 sec
    [junit] Running org.apache.hadoop.mapreduce.lib.output.TestFileOutputCommitter
    [junit] 2010-11-19 23:34:45,451 INFO  output.FileOutputCommitter (FileOutputCommitter.java:commitTask(173))
- Saved output of task 'attempt_200707121733_0001_m_000000_0' to <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/data/output>
    [junit] Tests run: 3, Failures: 0, Errors: 0, Time elapsed: 0.529 sec

checkfailure:

clover.check:

clover.setup:
[clover-setup] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-setup] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-setup] Clover: Open Source License registered to Apache.
[clover-setup] Clover is enabled with initstring '<https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/clover/db/hadoop_coverage.db>'

clover.info:

clover:

generate-clover-reports:
    [mkdir] Created dir: <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/clover/reports>
[clover-report] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-report] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-report] Clover: Open Source License registered to Apache.
[clover-report] Loading coverage database from: '<https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/clover/db/hadoop_coverage.db>'
[clover-report] Writing HTML report to '<https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/clover/reports>'
Fontconfig error: Cannot load default config file
[clover-report] Done. Processed 54 packages in 11970ms (221ms per package).
[clover-report] Clover Version 3.0.2, built on April 13 2010 (build-790)
[clover-report] Loaded from: /homes/hudson/tools/clover/latest/lib/clover.jar
[clover-report] Clover: Open Source License registered to Apache.
[clover-report] Loading coverage database from: '<https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/clover/db/hadoop_coverage.db>'
[clover-report] Writing report to '<https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/build/test/clover/reports/clover.xml>'

BUILD SUCCESSFUL
Total time: 25 minutes 50 seconds
[FINDBUGS] Collecting findbugs analysis files...
Publishing Javadoc
ERROR: No javadoc found in <https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-trunk-Commit/ws/trunk/api>:
'**/*' doesn't match anything: '**' exists but not '**/*'
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
Publishing Clover HTML report...
Publishing Clover XML report...
Publishing Clover coverage results...

