hadoop-hdfs-dev mailing list archives

From Apache Jenkins Server <jenk...@builds.apache.org>
Subject Build failed in Jenkins: Hadoop-Hdfs-trunk #1213
Date Thu, 01 Nov 2012 12:51:34 GMT
See <https://builds.apache.org/job/Hadoop-Hdfs-trunk/1213/changes>

Changes:

[vinodkv] YARN-189. Fixed a deadlock between RM's ApplicationMasterService and the dispatcher. Contributed by Thomas Graves.
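
The deadlock class YARN-189 addresses is the usual inconsistent-lock-ordering shape between a locked service and an event dispatcher. A minimal illustrative sketch follows; the Java class, method, and lock names are hypothetical, not the actual ResourceManager code:

    // Illustrative only -- hypothetical names, not RM code.
    // Thread A (an RPC handler) takes the service lock, then needs the
    // dispatcher lock; thread B (the dispatcher) holds its own lock and
    // calls back into the service. Each waits on the lock the other holds.
    public class LockOrderDeadlock {
      private final Object serviceLock = new Object();
      private final Object dispatcherLock = new Object();

      void rpcHandler() {                 // thread A
        synchronized (serviceLock) {
          synchronized (dispatcherLock) { /* enqueue event */ }
        }
      }

      void dispatcherLoop() {             // thread B
        synchronized (dispatcherLock) {
          synchronized (serviceLock) { /* apply event to service state */ }
        }
      }
    }

The standard fix for this shape is to release the service lock before handing off to the dispatcher, or to impose a single consistent lock order across both threads.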

[bobby] MAPREDUCE-4724. job history web ui applications page should be sorted to display last app first (tgraves via bobby)

[bobby] YARN-166. capacity scheduler doesn't allow capacity < 1.0 (tgraves via bobby)

[bobby] YARN-159. RM web ui applications page should be sorted to display last app first (tgraves via bobby)

[bobby] YARN-165. RM should point tracking URL to RM web page for app when AM fails (jlowe via bobby)

[tgraves] MAPREDUCE-4752. Reduce MR AM memory usage through String Interning (Robert Evans via tgraves)
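
String interning, the technique MAPREDUCE-4752 names, deduplicates repeated strings (counter names, hostnames, and the like) so the AM keeps one canonical copy instead of thousands. A minimal sketch of the general technique using Guava's weak interner (Guava is a Hadoop dependency); this is illustrative, not the patch itself, and the class name below is made up:

    // Sketch of weak-reference string interning; illustrative, not the
    // MAPREDUCE-4752 patch. A weak interner lets canonical strings be
    // garbage-collected once nothing else references them.
    import com.google.common.collect.Interner;
    import com.google.common.collect.Interners;

    public final class WeakStringInterner {
      private static final Interner<String> STRINGS = Interners.newWeakInterner();

      public static String weakIntern(String s) {
        return s == null ? null : STRINGS.intern(s);
      }
    }

Call sites that parse many task reports would route field values through weakIntern(...) so equal strings collapse to a single instance.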

------------------------------------------
[...truncated 11288 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.58 sec
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 sec
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.16 sec
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.32 sec
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.453 sec
Running org.apache.hadoop.hdfs.TestHftpURLTimeouts
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.911 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.27 sec
Running org.apache.hadoop.fs.TestHdfsNativeCodeLoader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.129 sec
Running org.apache.hadoop.fs.TestGlobPaths
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.169 sec
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.068 sec
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.161 sec
Running org.apache.hadoop.fs.TestFcHdfsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.088 sec
Running org.apache.hadoop.fs.TestUrlStreamHandler
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.066 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.692 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.392 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.865 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.273 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.696 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.838 sec
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.003 sec
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.205 sec
Running org.apache.hadoop.fs.TestFcHdfsSymlink
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.14 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.046 sec
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 59, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.368 sec

Results :

Tests run: 1610, Failures: 0, Errors: 0, Skipped: 4

[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (native_tests) @ hadoop-hdfs ---
[INFO] Executing tasks

main:
     [exec] 2012-11-01 12:51:28,657 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(319))
- starting cluster with 1 namenodes.
     [exec] Formatting using clusterid: testClusterID
     [exec] 2012-11-01 12:51:28,913 INFO  util.HostsFileReader (HostsFileReader.java:refresh(82))
- Refreshing hosts (include/exclude) list
     [exec] 2012-11-01 12:51:28,914 WARN  conf.Configuration (Configuration.java:warnOnceIfDeprecated(823))
- hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
     [exec] 2012-11-01 12:51:28,914 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188))
- dfs.block.invalidate.limit=1000
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294))
- dfs.block.access.token.enable=false
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(280))
- defaultReplication         = 1
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(281))
- maxReplication             = 512
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(282))
- minReplication             = 1
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(283))
- maxReplicationStreams      = 2
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(284))
- shouldCheckForEnoughRacks  = false
     [exec] 2012-11-01 12:51:28,935 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(285))
- replicationRecheckInterval = 3000
     [exec] 2012-11-01 12:51:28,936 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(286))
- encryptDataTransfer        = false
     [exec] 2012-11-01 12:51:28,936 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(473))
- fsOwner             = jenkins (auth:SIMPLE)
     [exec] 2012-11-01 12:51:28,936 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(474))
- supergroup          = supergroup
     [exec] 2012-11-01 12:51:28,936 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(475))
- isPermissionEnabled = true
     [exec] 2012-11-01 12:51:28,936 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(489))
- HA Enabled: false
     [exec] 2012-11-01 12:51:28,941 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(521))
- Append Enabled: true
     [exec] 2012-11-01 12:51:29,149 INFO  namenode.NameNode (FSDirectory.java:<init>(143))
- Caching file names occuring more than 10 times
     [exec] 2012-11-01 12:51:29,150 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3753))
- dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-11-01 12:51:29,150 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3754))
- dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-11-01 12:51:29,151 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3755))
- dfs.namenode.safemode.extension     = 0
     [exec] 2012-11-01 12:51:30,210 INFO  common.Storage (NNStorage.java:format(525)) - Storage
directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1>
has been successfully formatted.
     [exec] 2012-11-01 12:51:30,218 INFO  common.Storage (NNStorage.java:format(525)) - Storage
directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2>
has been successfully formatted.
     [exec] 2012-11-01 12:51:30,229 INFO  namenode.FSImage (FSImageFormat.java:save(494))
- Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/fsimage.ckpt_0000000000000000000>
using no compression
     [exec] 2012-11-01 12:51:30,229 INFO  namenode.FSImage (FSImageFormat.java:save(494))
- Saving image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage.ckpt_0000000000000000000>
using no compression
     [exec] 2012-11-01 12:51:30,239 INFO  namenode.FSImage (FSImageFormat.java:save(521))
- Image file of size 122 saved in 0 seconds.
     [exec] 2012-11-01 12:51:30,243 INFO  namenode.FSImage (FSImageFormat.java:save(521))
- Image file of size 122 saved in 0 seconds.
     [exec] 2012-11-01 12:51:30,259 INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(171))
- Going to retain 1 images with txid >= 0
     [exec] 2012-11-01 12:51:30,307 WARN  impl.MetricsConfig (MetricsConfig.java:loadFirst(123))
- Cannot locate configuration: tried hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
     [exec] 2012-11-01 12:51:30,362 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(341))
- Scheduled snapshot period at 10 second(s).
     [exec] 2012-11-01 12:51:30,362 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(183))
- NameNode metrics system started
     [exec] 2012-11-01 12:51:30,375 INFO  util.HostsFileReader (HostsFileReader.java:refresh(82))
- Refreshing hosts (include/exclude) list
     [exec] 2012-11-01 12:51:30,375 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(188))
- dfs.block.invalidate.limit=1000
     [exec] 2012-11-01 12:51:30,389 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(294))
- dfs.block.access.token.enable=false
     [exec] 2012-11-01 12:51:30,389 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(280))
- defaultReplication         = 1
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(281))
- maxReplication             = 512
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(282))
- minReplication             = 1
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(283))
- maxReplicationStreams      = 2
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(284))
- shouldCheckForEnoughRacks  = false
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(285))
- replicationRecheckInterval = 3000
     [exec] 2012-11-01 12:51:30,390 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(286))
- encryptDataTransfer        = false
     [exec] 2012-11-01 12:51:30,390 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(473))
- fsOwner             = jenkins (auth:SIMPLE)
     [exec] 2012-11-01 12:51:30,390 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(474))
- supergroup          = supergroup
     [exec] 2012-11-01 12:51:30,390 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(475))
- isPermissionEnabled = true
     [exec] 2012-11-01 12:51:30,391 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(489))
- HA Enabled: false
     [exec] 2012-11-01 12:51:30,391 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(521))
- Append Enabled: true
     [exec] 2012-11-01 12:51:30,391 INFO  namenode.NameNode (FSDirectory.java:<init>(143))
- Caching file names occuring more than 10 times
     [exec] 2012-11-01 12:51:30,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3753))
- dfs.namenode.safemode.threshold-pct = 0.9990000128746033
     [exec] 2012-11-01 12:51:30,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3754))
- dfs.namenode.safemode.min.datanodes = 0
     [exec] 2012-11-01 12:51:30,392 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(3755))
- dfs.namenode.safemode.extension     = 0
     [exec] 2012-11-01 12:51:30,397 INFO  common.Storage (Storage.java:tryLock(662)) - Lock
on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/in_use.lock>
acquired by nodename 14319@asf005.sp2.ygridcore.net
     [exec] 2012-11-01 12:51:30,400 INFO  common.Storage (Storage.java:tryLock(662)) - Lock
on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/in_use.lock>
acquired by nodename 14319@asf005.sp2.ygridcore.net
     [exec] 2012-11-01 12:51:30,404 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287))
- Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current>
     [exec] 2012-11-01 12:51:30,404 INFO  namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(287))
- Recovering unfinalized segments in <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current>
     [exec] 2012-11-01 12:51:30,405 INFO  namenode.FSImage (FSImage.java:loadFSImage(611))
- No edit log streams selected.
     [exec] 2012-11-01 12:51:30,407 INFO  namenode.FSImage (FSImageFormat.java:load(167))
- Loading image file <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000>
using no compression
     [exec] 2012-11-01 12:51:30,407 INFO  namenode.FSImage (FSImageFormat.java:load(170))
- Number of files = 1
     [exec] 2012-11-01 12:51:30,407 INFO  namenode.FSImage (FSImageFormat.java:loadFilesUnderConstruction(358))
- Number of files under construction = 0
     [exec] 2012-11-01 12:51:30,408 INFO  namenode.FSImage (FSImageFormat.java:load(192))
- Image file of size 122 loaded in 0 seconds.
     [exec] 2012-11-01 12:51:30,408 INFO  namenode.FSImage (FSImage.java:loadFSImage(754))
- Loaded image for txid 0 from <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/fsimage_0000000000000000000>
     [exec] 2012-11-01 12:51:30,412 INFO  namenode.FSEditLog (FSEditLog.java:startLogSegment(949))
- Starting log segment at 1
     [exec] 2012-11-01 12:51:30,632 INFO  namenode.NameCache (NameCache.java:initialized(143))
- initialized with 0 entries 0 lookups
     [exec] 2012-11-01 12:51:30,632 INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(441))
- Finished loading FSImage in 240 msecs
     [exec] 2012-11-01 12:51:30,761 INFO  ipc.Server (Server.java:run(524)) - Starting Socket
Reader #1 for port 40223
     [exec] 2012-11-01 12:51:30,781 INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(4615))
- Registered FSNamesystemState MBean
     [exec] 2012-11-01 12:51:30,796 INFO  namenode.FSNamesystem (FSNamesystem.java:getCompleteBlocksTotal(4307))
- Number of blocks under construction: 0
     [exec] 2012-11-01 12:51:30,796 INFO  namenode.FSNamesystem (FSNamesystem.java:initializeReplQueues(3858))
- initializing replication queues
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2205))
- Total number of blocks            = 0
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2206))
- Number of invalid blocks          = 0
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2207))
- Number of under-replicated blocks = 0
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2208))
- Number of  over-replicated blocks = 0
     [exec] 2012-11-01 12:51:30,808 INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatedBlocks(2210))
- Number of blocks being written    = 0
     [exec] 2012-11-01 12:51:30,808 INFO  hdfs.StateChange (FSNamesystem.java:initializeReplQueues(3863))
- STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks
completed in 12 msec
     [exec] 2012-11-01 12:51:30,808 INFO  hdfs.StateChange (FSNamesystem.java:leave(3835))
- STATE* Leaving safe mode after 0 secs
     [exec] 2012-11-01 12:51:30,809 INFO  hdfs.StateChange (FSNamesystem.java:leave(3845))
- STATE* Network topology has 0 racks and 0 datanodes
     [exec] 2012-11-01 12:51:30,809 INFO  hdfs.StateChange (FSNamesystem.java:leave(3848))
- STATE* UnderReplicatedBlocks has 0 blocks
     [exec] 2012-11-01 12:51:30,861 INFO  mortbay.log (Slf4jLog.java:info(67)) - Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
     [exec] 2012-11-01 12:51:30,916 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(505))
- Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-11-01 12:51:30,918 INFO  http.HttpServer (HttpServer.java:addFilter(483))
- Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context hdfs
     [exec] 2012-11-01 12:51:30,918 INFO  http.HttpServer (HttpServer.java:addFilter(490))
- Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
     [exec] 2012-11-01 12:51:30,921 INFO  http.HttpServer (WebHdfsFileSystem.java:isEnabled(142))
- dfs.webhdfs.enabled = false
     [exec] 2012-11-01 12:51:30,928 INFO  http.HttpServer (HttpServer.java:start(663)) - Jetty
bound to port 59969
     [exec] 2012-11-01 12:51:30,928 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-11-01 12:51:31,086 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:59969
     [exec] 2012-11-01 12:51:31,086 INFO  namenode.NameNode (NameNode.java:setHttpServerAddress(395))
- Web-server up at: localhost:59969
     [exec] 2012-11-01 12:51:31,086 INFO  ipc.Server (Server.java:run(648)) - IPC Server listener
on 40223: starting
     [exec] 2012-11-01 12:51:31,086 INFO  ipc.Server (Server.java:run(817)) - IPC Server Responder:
starting
     [exec] 2012-11-01 12:51:31,089 INFO  namenode.NameNode (NameNode.java:startCommonServices(492))
- NameNode RPC up at: localhost/127.0.0.1:40223
     [exec] 2012-11-01 12:51:31,089 INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(647))
- Starting services required for active state
     [exec] 2012-11-01 12:51:31,091 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:startDataNodes(1145))
- Starting DataNode 0 with dfs.datanode.data.dir: <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1>,file:<https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2>
     [exec] 2012-11-01 12:51:31,108 WARN  util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62))
- Unable to load native-hadoop library for your platform... using builtin-java classes where
applicable
     [exec] 2012-11-01 12:51:31,119 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(151))
- DataNode metrics system started (again)
     [exec] 2012-11-01 12:51:31,119 INFO  datanode.DataNode (DataNode.java:<init>(313))
- Configured hostname is 127.0.0.1
     [exec] 2012-11-01 12:51:31,124 INFO  datanode.DataNode (DataNode.java:initDataXceiver(539))
- Opened streaming server at /127.0.0.1:45280
     [exec] 2012-11-01 12:51:31,126 INFO  datanode.DataNode (DataXceiverServer.java:<init>(77))
- Balancing bandwith is 1048576 bytes/s
     [exec] 2012-11-01 12:51:31,127 INFO  http.HttpServer (HttpServer.java:addGlobalFilter(505))
- Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
     [exec] 2012-11-01 12:51:31,128 INFO  http.HttpServer (HttpServer.java:addFilter(483))
- Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context datanode
     [exec] 2012-11-01 12:51:31,128 INFO  http.HttpServer (HttpServer.java:addFilter(490))
- Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
     [exec] 2012-11-01 12:51:31,129 INFO  datanode.DataNode (DataNode.java:startInfoServer(365))
- Opened info server at localhost:0
     [exec] 2012-11-01 12:51:31,131 INFO  datanode.DataNode (WebHdfsFileSystem.java:isEnabled(142))
- dfs.webhdfs.enabled = false
     [exec] 2012-11-01 12:51:31,131 INFO  http.HttpServer (HttpServer.java:start(663)) - Jetty
bound to port 55286
     [exec] 2012-11-01 12:51:31,131 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
     [exec] 2012-11-01 12:51:31,269 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:55286
     [exec] 2012-11-01 12:51:31,276 INFO  ipc.Server (Server.java:run(524)) - Starting Socket
Reader #1 for port 42421
     [exec] 2012-11-01 12:51:31,280 INFO  datanode.DataNode (DataNode.java:initIpcServer(436))
- Opened IPC server at /127.0.0.1:42421
     [exec] 2012-11-01 12:51:31,287 INFO  datanode.DataNode (BlockPoolManager.java:refreshNamenodes(148))
- Refresh request received for nameservices: null
     [exec] 2012-11-01 12:51:31,289 INFO  datanode.DataNode (BlockPoolManager.java:doRefreshNamenodes(193))
- Starting BPOfferServices for nameservices: <default>
     [exec] 2012-11-01 12:51:31,296 INFO  datanode.DataNode (BPServiceActor.java:run(658))
- Block pool <registering> (storage id unknown) service to localhost/127.0.0.1:40223
starting to offer service
     [exec] 2012-11-01 12:51:31,300 INFO  ipc.Server (Server.java:run(817)) - IPC Server Responder:
starting
     [exec] 2012-11-01 12:51:31,300 INFO  ipc.Server (Server.java:run(648)) - IPC Server listener
on 42421: starting
     [exec] 2012-11-01 12:51:31,726 INFO  common.Storage (Storage.java:tryLock(662)) - Lock
on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock>
acquired by nodename 14319@asf005.sp2.ygridcore.net
     [exec] 2012-11-01 12:51:31,727 INFO  common.Storage (DataStorage.java:recoverTransitionRead(162))
- Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1>
is not formatted
     [exec] 2012-11-01 12:51:31,727 INFO  common.Storage (DataStorage.java:recoverTransitionRead(163))
- Formatting ...
     [exec] 2012-11-01 12:51:31,732 INFO  common.Storage (Storage.java:tryLock(662)) - Lock
on <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock>
acquired by nodename 14319@asf005.sp2.ygridcore.net
     [exec] 2012-11-01 12:51:31,732 INFO  common.Storage (DataStorage.java:recoverTransitionRead(162))
- Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2>
is not formatted
     [exec] 2012-11-01 12:51:31,733 INFO  common.Storage (DataStorage.java:recoverTransitionRead(163))
- Formatting ...
     [exec] 2012-11-01 12:51:31,770 INFO  common.Storage (Storage.java:lock(626)) - Locking
is disabled
     [exec] 2012-11-01 12:51:31,771 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116))
- Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1372242316-67.195.138.27-1351774289159>
is not formatted.
     [exec] 2012-11-01 12:51:31,771 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117))
- Formatting ...
     [exec] 2012-11-01 12:51:31,771 INFO  common.Storage (BlockPoolSliceStorage.java:format(171))
- Formatting block pool BP-1372242316-67.195.138.27-1351774289159 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-1372242316-67.195.138.27-1351774289159/current>
     [exec] 2012-11-01 12:51:31,773 INFO  common.Storage (Storage.java:lock(626)) - Locking
is disabled
     [exec] 2012-11-01 12:51:31,773 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(116))
- Storage directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1372242316-67.195.138.27-1351774289159>
is not formatted.
     [exec] 2012-11-01 12:51:31,773 INFO  common.Storage (BlockPoolSliceStorage.java:recoverTransitionRead(117))
- Formatting ...
     [exec] 2012-11-01 12:51:31,774 INFO  common.Storage (BlockPoolSliceStorage.java:format(171))
- Formatting block pool BP-1372242316-67.195.138.27-1351774289159 directory <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-1372242316-67.195.138.27-1351774289159/current>
     [exec] 2012-11-01 12:51:31,777 INFO  datanode.DataNode (DataNode.java:initStorage(852))
- Setting up storage: nsid=1188264114;bpid=BP-1372242316-67.195.138.27-1351774289159;lv=-40;nsInfo=lv=-40;cid=testClusterID;nsid=1188264114;c=0;bpid=BP-1372242316-67.195.138.27-1351774289159
     [exec] 2012-11-01 12:51:31,791 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197))
- Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>
     [exec] 2012-11-01 12:51:31,791 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:<init>(197))
- Added volume - <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>
     [exec] 2012-11-01 12:51:31,796 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:registerMBean(1209))
- Registered FSDatasetState MBean
     [exec] 2012-11-01 12:51:31,800 INFO  datanode.DirectoryScanner (DirectoryScanner.java:start(243))
- Periodic Directory Tree Verification scan starting at 1351783360800 with interval 21600000
     [exec] 2012-11-01 12:51:31,801 INFO  impl.FsDatasetImpl (FsDatasetImpl.java:addBlockPool(1577))
- Adding block pool BP-1372242316-67.195.138.27-1351774289159
     [exec] 2012-11-01 12:51:31,808 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1833))
- Waiting for cluster to become active
     [exec] 2012-11-01 12:51:31,809 INFO  datanode.DataNode (BPServiceActor.java:register(618))
- Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735)
service to localhost/127.0.0.1:40223 beginning handshake with NN
     [exec] 2012-11-01 12:51:31,811 INFO  hdfs.StateChange (DatanodeManager.java:registerDatanode(661))
- BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1, storageID=DS-956326259-67.195.138.27-45280-1351774291735,
infoPort=55286, ipcPort=42421, storageInfo=lv=-40;cid=testClusterID;nsid=1188264114;c=0) storage
DS-956326259-67.195.138.27-45280-1351774291735
     [exec] 2012-11-01 12:51:31,814 INFO  net.NetworkTopology (NetworkTopology.java:add(388))
- Adding a new node: /default-rack/127.0.0.1:45280
     [exec] 2012-11-01 12:51:31,815 INFO  datanode.DataNode (BPServiceActor.java:register(631))
- Block pool Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735)
service to localhost/127.0.0.1:40223 successfully registered with NN
     [exec] 2012-11-01 12:51:31,815 INFO  datanode.DataNode (BPServiceActor.java:offerService(499))
- For namenode localhost/127.0.0.1:40223 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL
of 21600000msec Initial delay: 0msec; heartBeatInterval=3000
     [exec] 2012-11-01 12:51:31,819 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(419))
- Namenode Block pool BP-1372242316-67.195.138.27-1351774289159 (storage id DS-956326259-67.195.138.27-45280-1351774291735)
service to localhost/127.0.0.1:40223 trying to claim ACTIVE state with txid=1
     [exec] 2012-11-01 12:51:31,819 INFO  datanode.DataNode (BPOfferService.java:updateActorStatesFromHeartbeat(431))
- Acknowledging ACTIVE Namenode Block pool BP-1372242316-67.195.138.27-1351774289159 (storage
id DS-956326259-67.195.138.27-45280-1351774291735) service to localhost/127.0.0.1:40223
     [exec] 2012-11-01 12:51:31,823 INFO  blockmanagement.BlockManager (BlockManager.java:processReport(1526))
- BLOCK* processReport: Received first block report from 127.0.0.1:45280 after becoming active.
Its block contents are no longer considered stale
     [exec] 2012-11-01 12:51:31,824 INFO  hdfs.StateChange (BlockManager.java:processReport(1539))
- BLOCK* processReport: from DatanodeRegistration(127.0.0.1, storageID=DS-956326259-67.195.138.27-45280-1351774291735,
infoPort=55286, ipcPort=42421, storageInfo=lv=-40;cid=testClusterID;nsid=1188264114;c=0),
blocks: 0, processing time: 2 msecs
     [exec] 2012-11-01 12:51:31,825 INFO  datanode.DataNode (BPServiceActor.java:blockReport(409))
- BlockReport of 0 blocks took 1 msec to generate and 5 msecs for RPC and NN processing
     [exec] 2012-11-01 12:51:31,825 INFO  datanode.DataNode (BPServiceActor.java:blockReport(428))
- sent block report, processed command:org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@1277a30
     [exec] 2012-11-01 12:51:31,827 INFO  datanode.BlockPoolSliceScanner (BlockPoolSliceScanner.java:<init>(156))
- Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1372242316-67.195.138.27-1351774289159
     [exec] 2012-11-01 12:51:31,831 INFO  datanode.DataBlockScanner (DataBlockScanner.java:addBlockPool(248))
- Added bpid=BP-1372242316-67.195.138.27-1351774289159 to blockPoolScannerMap, new size=1
     [exec] Aborted
     [exec] 2012-11-01 12:51:31,913 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(1864))
- Cluster is active
     [exec] #
     [exec] # A fatal error has been detected by the Java Runtime Environment:
     [exec] #
     [exec] #  SIGSEGV (0xb) at pc=0xf6ee9b68, pid=14319, tid=4137109200
     [exec] #
     [exec] # JRE version: 6.0_26-b03
     [exec] # Java VM: Java HotSpot(TM) Server VM (20.1-b02 mixed mode linux-x86 )
     [exec] # Problematic frame:
     [exec] # V  [libjvm.so+0x3efb68]  unsigned+0xb8
     [exec] #
     [exec] # An error report file with more information is saved as:
     [exec] # <https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid14319.log>
     [exec] #
     [exec] # If you would like to submit a bug report, please visit:
     [exec] #   http://java.sun.com/webapps/bugreport/crash.jsp
     [exec] #
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:18:17.144s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:18:17.929s
[INFO] Finished at: Thu Nov 01 12:51:32 UTC 2012
[INFO] Final Memory: 26M/491M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (native_tests)
on project hadoop-hdfs: An Ant BuildException has occured: exec returned: 134 -> [Help
1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following
articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
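
The exit status is the one clue worth decoding here: on Linux a child process killed by a signal is reported as 128 + the signal number, so "exec returned: 134" means signal 6 (SIGABRT), consistent with the "[exec] Aborted" line and the JVM aborting after the SIGSEGV above. A trivial worked check (illustrative Java, assuming the Linux convention):

    // Worked check: 134 = 128 + 6, and signal 6 is SIGABRT on Linux.
    public class ExitStatus {
      public static void main(String[] args) {
        int status = 134;
        System.out.println("signal " + (status - 128)); // prints "signal 6"
      }
    }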
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating MAPREDUCE-4752
Updating MAPREDUCE-4724
Updating YARN-165
Updating YARN-166
Updating YARN-189
Updating YARN-159
