hadoop-hdfs-issues mailing list archives

From "Mingliang Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10644) Fix intermittently failing TestDFSUpgradeWithHA#testRollbackWithJournalNodes
Date Tue, 13 Sep 2016 20:45:21 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15488369#comment-15488369 ] 

Mingliang Liu commented on HDFS-10644:
--------------------------------------

Since the stack trace is missing from the Jenkins build and the failure is hard to reproduce locally, I'm attaching the full log here:
h6.Standard Output
{code}
2016-09-13 19:42:00,959 [main] INFO  qjournal.MiniQJMHACluster (MiniQJMHACluster.java:<init>(99)) - Set MiniQJMHACluster basePort to 12296
2016-09-13 19:42:00,959 [main] INFO  qjournal.MiniJournalCluster (MiniJournalCluster.java:<init>(95)) - Starting MiniJournalCluster with 3 journal nodes
2016-09-13 19:42:00,964 [main] INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(115)) - loaded properties from hadoop-metrics2.properties
2016-09-13 19:42:00,969 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 0 second(s).
2016-09-13 19:42:00,970 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - JournalNode metrics system started
2016-09-13 19:42:00,971 [main] INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1516)) - Starting Web-server for journal at: http://localhost:0
2016-09-13 19:42:00,973 [main] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(246)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-09-13 19:42:00,979 [main] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
2016-09-13 19:42:00,980 [main] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-09-13 19:42:00,980 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
2016-09-13 19:42:00,981 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-09-13 19:42:00,981 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-09-13 19:42:00,981 [main] INFO  http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 58874
2016-09-13 19:42:00,982 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2016-09-13 19:42:01,032 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:58874
2016-09-13 19:42:01,033 [main] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 500 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2016-09-13 19:42:01,034 [Socket Reader #1 for port 44916] INFO  ipc.Server (Server.java:run(930)) - Starting Socket Reader #1 for port 44916
2016-09-13 19:42:01,036 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1164)) - IPC Server Responder: starting
2016-09-13 19:42:01,036 [IPC Server listener on 44916] INFO  ipc.Server (Server.java:run(1000)) - IPC Server listener on 44916: starting
2016-09-13 19:42:01,039 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(159)) - JournalNode metrics system started (again)
2016-09-13 19:42:01,040 [main] INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1516)) - Starting Web-server for journal at: http://localhost:0
2016-09-13 19:42:01,044 [main] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(246)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-09-13 19:42:01,044 [main] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
2016-09-13 19:42:01,045 [main] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-09-13 19:42:01,046 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
2016-09-13 19:42:01,046 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-09-13 19:42:01,046 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-09-13 19:42:01,046 [main] INFO  http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 58325
2016-09-13 19:42:01,046 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2016-09-13 19:42:01,086 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:58325
2016-09-13 19:42:01,087 [main] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 500 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2016-09-13 19:42:01,087 [Socket Reader #1 for port 38486] INFO  ipc.Server (Server.java:run(930)) - Starting Socket Reader #1 for port 38486
2016-09-13 19:42:01,088 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1164)) - IPC Server Responder: starting
2016-09-13 19:42:01,088 [IPC Server listener on 38486] INFO  ipc.Server (Server.java:run(1000)) - IPC Server listener on 38486: starting
2016-09-13 19:42:01,092 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(159)) - JournalNode metrics system started (again)
2016-09-13 19:42:01,098 [main] INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1516)) - Starting Web-server for journal at: http://localhost:0
2016-09-13 19:42:01,102 [main] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(246)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-09-13 19:42:01,103 [main] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.journal is not defined
2016-09-13 19:42:01,104 [main] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-09-13 19:42:01,104 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context journal
2016-09-13 19:42:01,106 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-09-13 19:42:01,106 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-09-13 19:42:01,107 [main] INFO  http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 33550
2016-09-13 19:42:01,107 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2016-09-13 19:42:01,143 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:33550
2016-09-13 19:42:01,144 [main] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 500 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2016-09-13 19:42:01,146 [Socket Reader #1 for port 42766] INFO  ipc.Server (Server.java:run(930)) - Starting Socket Reader #1 for port 42766
2016-09-13 19:42:01,152 [IPC Server listener on 42766] INFO  ipc.Server (Server.java:run(1000)) - IPC Server listener on 42766: starting
2016-09-13 19:42:01,153 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1164)) - IPC Server Responder: starting
2016-09-13 19:42:01,209 [IPC Server handler 0 on 38486] INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/waitactive
2016-09-13 19:42:01,209 [IPC Server handler 0 on 42766] INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/waitactive
2016-09-13 19:42:01,210 [IPC Server handler 0 on 38486] WARN  common.Storage (Storage.java:analyzeStorage(542)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/waitactive does not exist
2016-09-13 19:42:01,210 [IPC Server handler 0 on 42766] WARN  common.Storage (Storage.java:analyzeStorage(542)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/waitactive does not exist
2016-09-13 19:42:01,216 [IPC Server handler 2 on 44916] INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/waitactive
2016-09-13 19:42:01,216 [IPC Server handler 2 on 44916] WARN  common.Storage (Storage.java:analyzeStorage(542)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/waitactive does not exist
2016-09-13 19:42:01,254 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:<init>(465)) - starting cluster: numNameNodes=2, numDataNodes=0
Formatting using clusterid: testClusterID
2016-09-13 19:42:01,257 [main] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-09-13 19:42:01,257 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(709)) - KeyProvider: null
2016-09-13 19:42:01,258 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(716)) - fsLock is fair:true
2016-09-13 19:42:01,258 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(732)) - fsOwner             = jenkins (auth:SIMPLE)
2016-09-13 19:42:01,258 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(733)) - supergroup          = supergroup
2016-09-13 19:42:01,258 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(734)) - isPermissionEnabled = true
2016-09-13 19:42:01,258 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(743)) - Determined nameservice ID: ns1
2016-09-13 19:42:01,259 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(745)) - HA Enabled: true
2016-09-13 19:42:01,259 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(250)) - dfs.block.invalidate.limit=1000
2016-09-13 19:42:01,259 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(256)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2016-09-13 19:42:01,259 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(73)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-09-13 19:42:01,260 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(78)) - The block deletion will start around 2016 Sep 13 19:42:01
2016-09-13 19:42:01,263 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2016-09-13 19:42:01,263 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,263 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-09-13 19:42:01,264 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^22 = 4194304 entries
2016-09-13 19:42:01,268 [main] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(457)) - dfs.block.access.token.enable=false
2016-09-13 19:42:01,268 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(150)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-09-13 19:42:01,269 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-09-13 19:42:01,269 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(153)) - dfs.namenode.safemode.extension = 0
2016-09-13 19:42:01,269 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(443)) - defaultReplication         = 0
2016-09-13 19:42:01,269 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(444)) - maxReplication             = 512
2016-09-13 19:42:01,270 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(445)) - minReplication             = 1
2016-09-13 19:42:01,270 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(446)) - maxReplicationStreams      = 2
2016-09-13 19:42:01,270 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(447)) - replicationRecheckInterval = 3000
2016-09-13 19:42:01,270 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(448)) - encryptDataTransfer        = false
2016-09-13 19:42:01,270 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(449)) - maxNumBlocksToLog          = 1000
2016-09-13 19:42:01,271 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2016-09-13 19:42:01,271 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,272 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2016-09-13 19:42:01,272 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^21 = 2097152 entries
2016-09-13 19:42:01,274 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(257)) - ACLs enabled? false
2016-09-13 19:42:01,275 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(261)) - POSIX ACL inheritance enabled? false
2016-09-13 19:42:01,275 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(265)) - XAttrs enabled? true
2016-09-13 19:42:01,275 [main] INFO  namenode.NameNode (FSDirectory.java:<init>(329)) - Caching file names occuring more than 10 times
2016-09-13 19:42:01,275 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2016-09-13 19:42:01,276 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,276 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2016-09-13 19:42:01,276 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^19 = 524288 entries
2016-09-13 19:42:01,277 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2016-09-13 19:42:01,277 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
2016-09-13 19:42:01,278 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2016-09-13 19:42:01,278 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(936)) - Retry cache on namenode is enabled
2016-09-13 19:42:01,278 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(944)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-09-13 19:42:01,278 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2016-09-13 19:42:01,279 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,279 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2016-09-13 19:42:01,279 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^16 = 65536 entries
2016-09-13 19:42:01,291 [IPC Server handler 3 on 38486] INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1
2016-09-13 19:42:01,292 [IPC Server handler 3 on 38486] WARN  common.Storage (Storage.java:analyzeStorage(542)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1 does not exist
2016-09-13 19:42:01,298 [IPC Server handler 3 on 42766] INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1
2016-09-13 19:42:01,298 [IPC Server handler 3 on 42766] WARN  common.Storage (Storage.java:analyzeStorage(542)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1 does not exist
2016-09-13 19:42:01,300 [IPC Server handler 4 on 44916] INFO  server.JournalNode (JournalNode.java:getOrCreateJournal(92)) - Initializing journal in directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1
2016-09-13 19:42:01,301 [IPC Server handler 4 on 44916] WARN  common.Storage (Storage.java:analyzeStorage(542)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1 does not exist
2016-09-13 19:42:01,305 [main] INFO  namenode.FSImage (FSImage.java:format(155)) - Allocated new BlockPoolId: BP-1945458659-172.17.0.3-1473795721305
2016-09-13 19:42:01,308 [main] INFO  common.Storage (NNStorage.java:format(585)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1 has been successfully formatted.
2016-09-13 19:42:01,310 [main] INFO  common.Storage (NNStorage.java:format(585)) - Storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2 has been successfully formatted.
2016-09-13 19:42:01,313 [IPC Server handler 4 on 38486] INFO  server.Journal (Journal.java:format(218)) - Formatting org.apache.hadoop.hdfs.qjournal.server.Journal@c26efa7 with namespace info: lv=-64;cid=testClusterID;nsid=797603622;c=1473795721305;bpid=BP-1945458659-172.17.0.3-1473795721305
2016-09-13 19:42:01,313 [IPC Server handler 4 on 38486] INFO  common.Storage (JNStorage.java:format(184)) - Formatting journal Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1 with nsid: 797603622
2016-09-13 19:42:01,314 [IPC Server handler 4 on 42766] INFO  server.Journal (Journal.java:format(218)) - Formatting org.apache.hadoop.hdfs.qjournal.server.Journal@18b60de3 with namespace info: lv=-64;cid=testClusterID;nsid=797603622;c=1473795721305;bpid=BP-1945458659-172.17.0.3-1473795721305
2016-09-13 19:42:01,314 [IPC Server handler 4 on 42766] INFO  common.Storage (JNStorage.java:format(184)) - Formatting journal Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1 with nsid: 797603622
2016-09-13 19:42:01,316 [IPC Server handler 4 on 38486] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:01,317 [IPC Server handler 4 on 42766] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:01,318 [IPC Server handler 0 on 44916] INFO  server.Journal (Journal.java:format(218)) - Formatting org.apache.hadoop.hdfs.qjournal.server.Journal@1a508719 with namespace info: lv=-64;cid=testClusterID;nsid=797603622;c=1473795721305;bpid=BP-1945458659-172.17.0.3-1473795721305
2016-09-13 19:42:01,319 [IPC Server handler 0 on 44916] INFO  common.Storage (JNStorage.java:format(184)) - Formatting journal Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1 with nsid: 797603622
2016-09-13 19:42:01,321 [IPC Server handler 0 on 44916] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:01,323 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(429)) - Saving image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 using no compression
2016-09-13 19:42:01,323 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(429)) - Saving image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 using no compression
2016-09-13 19:42:01,332 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(432)) - Image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/fsimage.ckpt_0000000000000000000 of size 335 bytes saved in 0 seconds.
2016-09-13 19:42:01,332 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(432)) - Image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/fsimage.ckpt_0000000000000000000 of size 335 bytes saved in 0 seconds.
2016-09-13 19:42:01,334 [main] INFO  namenode.NNStorageRetentionManager (NNStorageRetentionManager.java:getImageTxIdToRetain(203)) - Going to retain 1 images with txid >= 0
2016-09-13 19:42:01,356 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:copyNameDirs(1210)) - Copying namedir from primary node dir file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1 to file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3
2016-09-13 19:42:01,359 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:copyNameDirs(1210)) - Copying namedir from primary node dir file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1 to file:/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-4
2016-09-13 19:42:01,362 [main] INFO  namenode.NameNode (NameNode.java:createNameNode(1556)) - createNameNode []
2016-09-13 19:42:01,363 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(159)) - NameNode metrics system started (again)
2016-09-13 19:42:01,363 [main] INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is hdfs://ns1
2016-09-13 19:42:01,363 [main] INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(470)) - Clients are to use ns1 to access this namenode/service.
2016-09-13 19:42:01,366 [main] INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1516)) - Starting Web-server for hdfs at: http://localhost:12297
2016-09-13 19:42:01,368 [main] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(246)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-09-13 19:42:01,368 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@5df6163a] INFO  util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2016-09-13 19:42:01,369 [main] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2016-09-13 19:42:01,371 [main] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-09-13 19:42:01,372 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-09-13 19:42:01,372 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-09-13 19:42:01,372 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-09-13 19:42:01,373 [main] INFO  http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(94)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-09-13 19:42:01,375 [main] INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(653)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-09-13 19:42:01,376 [main] INFO  http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 12297
2016-09-13 19:42:01,376 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2016-09-13 19:42:01,403 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:12297
2016-09-13 19:42:01,414 [main] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-09-13 19:42:01,415 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(709)) - KeyProvider: null
2016-09-13 19:42:01,415 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(716)) - fsLock is fair:true
2016-09-13 19:42:01,415 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(732)) - fsOwner             = jenkins (auth:SIMPLE)
2016-09-13 19:42:01,415 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(733)) - supergroup          = supergroup
2016-09-13 19:42:01,416 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(734)) - isPermissionEnabled = true
2016-09-13 19:42:01,416 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(743)) - Determined nameservice ID: ns1
2016-09-13 19:42:01,416 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(745)) - HA Enabled: true
2016-09-13 19:42:01,417 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(250)) - dfs.block.invalidate.limit=1000
2016-09-13 19:42:01,417 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(256)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2016-09-13 19:42:01,417 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(73)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-09-13 19:42:01,418 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(78)) - The block deletion will start around 2016 Sep 13 19:42:01
2016-09-13 19:42:01,418 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2016-09-13 19:42:01,418 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,419 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-09-13 19:42:01,419 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^22 = 4194304 entries
2016-09-13 19:42:01,423 [main] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(457)) - dfs.block.access.token.enable=false
2016-09-13 19:42:01,424 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(150)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-09-13 19:42:01,424 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-09-13 19:42:01,424 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(153)) - dfs.namenode.safemode.extension = 0
2016-09-13 19:42:01,425 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(443)) - defaultReplication         = 0
2016-09-13 19:42:01,425 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(444)) - maxReplication             = 512
2016-09-13 19:42:01,425 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(445)) - minReplication             = 1
2016-09-13 19:42:01,425 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(446)) - maxReplicationStreams      = 2
2016-09-13 19:42:01,425 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(447)) - replicationRecheckInterval = 3000
2016-09-13 19:42:01,426 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(448)) - encryptDataTransfer        = false
2016-09-13 19:42:01,426 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(449)) - maxNumBlocksToLog          = 1000
2016-09-13 19:42:01,426 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2016-09-13 19:42:01,427 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,427 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2016-09-13 19:42:01,427 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^21 = 2097152 entries
2016-09-13 19:42:01,430 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(257)) - ACLs enabled? false
2016-09-13 19:42:01,430 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(261)) - POSIX ACL inheritance enabled? false
2016-09-13 19:42:01,430 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(265)) - XAttrs enabled? true
2016-09-13 19:42:01,431 [main] INFO  namenode.NameNode (FSDirectory.java:<init>(329)) - Caching file names occuring more than 10 times
2016-09-13 19:42:01,431 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2016-09-13 19:42:01,431 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,431 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2016-09-13 19:42:01,432 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^19 = 524288 entries
2016-09-13 19:42:01,434 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2016-09-13 19:42:01,435 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
2016-09-13 19:42:01,435 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2016-09-13 19:42:01,436 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(936)) - Retry cache on namenode is enabled
2016-09-13 19:42:01,436 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(944)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-09-13 19:42:01,436 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2016-09-13 19:42:01,436 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,437 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2016-09-13 19:42:01,437 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^16 = 65536 entries
2016-09-13 19:42:01,439 [main] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:01,440 [main] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:01,455 [main] INFO  namenode.FSImage (FSImage.java:loadFSImage(666)) - No edit log streams selected.
2016-09-13 19:42:01,456 [main] INFO  namenode.FSImage (FSImage.java:loadFSImageFile(731)) - Planning to load image: FSImageFile(file=/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2016-09-13 19:42:01,456 [main] INFO  namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(265)) - Loading 1 INodes.
2016-09-13 19:42:01,457 [main] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(185)) - Loaded FSImage in 0 seconds.
2016-09-13 19:42:01,457 [main] INFO  namenode.FSImage (FSImage.java:loadFSImage(897)) - Loaded image for txid 0 from /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/fsimage_0000000000000000000
2016-09-13 19:42:01,457 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1043)) - Need to save fs image? false (staleImage=false, haEnabled=true, isRollingUpgrade=false)
2016-09-13 19:42:01,458 [main] INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2016-09-13 19:42:01,458 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(681)) - Finished loading FSImage in 20 msecs
2016-09-13 19:42:01,458 [main] INFO  namenode.NameNode (NameNodeRpcServer.java:<init>(427)) - RPC server is binding to localhost:12296
2016-09-13 19:42:01,458 [main] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2016-09-13 19:42:01,459 [Socket Reader #1 for port 12296] INFO  ipc.Server (Server.java:run(930)) - Starting Socket Reader #1 for port 12296
2016-09-13 19:42:01,460 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(4685)) - Registered FSNamesystemState MBean
2016-09-13 19:42:01,480 [main] INFO  namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(126)) - Number of blocks under construction: 0
2016-09-13 19:42:01,481 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1242)) - No unit for dfs.namenode.decommission.interval(3) assuming SECONDS
2016-09-13 19:42:01,502 [main] WARN  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:leaveSafeMode(366)) - forceExit used when normal exist would suffice. Treating force exit as normal safe mode exit.
2016-09-13 19:42:01,502 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(383)) - STATE* Leaving safe mode after 0 secs
2016-09-13 19:42:01,599 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(389)) - STATE* Network topology has 0 racks and 0 datanodes
2016-09-13 19:42:01,599 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(391)) - STATE* UnderReplicatedBlocks has 0 blocks
2016-09-13 19:42:01,607 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1164)) - IPC Server Responder: starting
2016-09-13 19:42:01,607 [IPC Server listener on 12296] INFO  ipc.Server (Server.java:run(1000)) - IPC Server listener on 12296: starting
2016-09-13 19:42:01,610 [main] INFO  namenode.NameNode (NameNode.java:startCommonServices(809)) - NameNode RPC up at: localhost/127.0.0.1:12296
2016-09-13 19:42:01,610 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:startStandbyServices(1309)) - Starting services required for standby state
2016-09-13 19:42:01,612 [main] INFO  ha.EditLogTailer (EditLogTailer.java:<init>(169)) - Will roll logs on active node every 120 seconds.
2016-09-13 19:42:01,617 [main] INFO  ha.StandbyCheckpointer (StandbyCheckpointer.java:start(137)) - Starting standby checkpoint thread...
Checkpointing active NN to possible NNs: [http://127.0.0.1:12299]
Serving checkpoints at http://localhost:12297
2016-09-13 19:42:01,618 [main] INFO  namenode.NameNode (NameNode.java:createNameNode(1556)) - createNameNode []
2016-09-13 19:42:01,618 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(159)) - NameNode metrics system started (again)
2016-09-13 19:42:01,619 [main] INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is hdfs://ns1
2016-09-13 19:42:01,619 [main] INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(470)) - Clients are to use ns1 to access this namenode/service.
2016-09-13 19:42:01,621 [main] INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1516)) - Starting Web-server for hdfs at: http://localhost:12299
2016-09-13 19:42:01,623 [main] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(246)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-09-13 19:42:01,623 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@65eb3597] INFO  util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2016-09-13 19:42:01,623 [main] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2016-09-13 19:42:01,624 [main] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-09-13 19:42:01,624 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-09-13 19:42:01,625 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-09-13 19:42:01,625 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-09-13 19:42:01,626 [main] INFO  http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(94)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-09-13 19:42:01,626 [main] INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(653)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-09-13 19:42:01,626 [main] INFO  http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 12299
2016-09-13 19:42:01,626 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2016-09-13 19:42:01,647 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:12299
2016-09-13 19:42:01,649 [main] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-09-13 19:42:01,649 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(709)) - KeyProvider: null
2016-09-13 19:42:01,649 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(716)) - fsLock is fair:true
2016-09-13 19:42:01,650 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(732)) - fsOwner             = jenkins (auth:SIMPLE)
2016-09-13 19:42:01,650 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(733)) - supergroup          = supergroup
2016-09-13 19:42:01,650 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(734)) - isPermissionEnabled = true
2016-09-13 19:42:01,650 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(743)) - Determined nameservice ID: ns1
2016-09-13 19:42:01,650 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(745)) - HA Enabled: true
2016-09-13 19:42:01,651 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(250)) - dfs.block.invalidate.limit=1000
2016-09-13 19:42:01,651 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(256)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2016-09-13 19:42:01,651 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(73)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-09-13 19:42:01,652 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(78)) - The block deletion will start around 2016 Sep 13 19:42:01
2016-09-13 19:42:01,652 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2016-09-13 19:42:01,652 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,652 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-09-13 19:42:01,652 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^22 = 4194304 entries
2016-09-13 19:42:01,657 [main] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(457)) - dfs.block.access.token.enable=false
2016-09-13 19:42:01,657 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(150)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-09-13 19:42:01,657 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-09-13 19:42:01,657 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(153)) - dfs.namenode.safemode.extension = 0
2016-09-13 19:42:01,658 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(443)) - defaultReplication         = 0
2016-09-13 19:42:01,658 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(444)) - maxReplication             = 512
2016-09-13 19:42:01,658 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(445)) - minReplication             = 1
2016-09-13 19:42:01,658 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(446)) - maxReplicationStreams      = 2
2016-09-13 19:42:01,658 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(447)) - replicationRecheckInterval = 3000
2016-09-13 19:42:01,658 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(448)) - encryptDataTransfer        = false
2016-09-13 19:42:01,659 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(449)) - maxNumBlocksToLog          = 1000
2016-09-13 19:42:01,659 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2016-09-13 19:42:01,659 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,659 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2016-09-13 19:42:01,660 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^21 = 2097152 entries
2016-09-13 19:42:01,662 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(257)) - ACLs enabled? false
2016-09-13 19:42:01,662 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(261)) - POSIX ACL inheritance enabled? false
2016-09-13 19:42:01,663 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(265)) - XAttrs enabled? true
2016-09-13 19:42:01,663 [main] INFO  namenode.NameNode (FSDirectory.java:<init>(329)) - Caching file names occuring more than 10 times
2016-09-13 19:42:01,663 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2016-09-13 19:42:01,663 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,664 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2016-09-13 19:42:01,664 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^19 = 524288 entries
2016-09-13 19:42:01,665 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2016-09-13 19:42:01,665 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
2016-09-13 19:42:01,665 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2016-09-13 19:42:01,665 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(936)) - Retry cache on namenode is enabled
2016-09-13 19:42:01,665 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(944)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-09-13 19:42:01,665 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2016-09-13 19:42:01,666 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:01,666 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2016-09-13 19:42:01,666 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^16 = 65536 entries
2016-09-13 19:42:01,668 [main] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:01,669 [main] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-4/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:01,678 [main] INFO  namenode.FSImage (FSImage.java:loadFSImage(666)) - No edit log streams selected.
2016-09-13 19:42:01,678 [main] INFO  namenode.FSImage (FSImage.java:loadFSImageFile(731)) - Planning to load image: FSImageFile(file=/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2016-09-13 19:42:01,679 [main] INFO  namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(265)) - Loading 1 INodes.
2016-09-13 19:42:01,680 [main] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(185)) - Loaded FSImage in 0 seconds.
2016-09-13 19:42:01,680 [main] INFO  namenode.FSImage (FSImage.java:loadFSImage(897)) - Loaded image for txid 0 from /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3/current/fsimage_0000000000000000000
2016-09-13 19:42:01,680 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1043)) - Need to save fs image? false (staleImage=false, haEnabled=true, isRollingUpgrade=false)
2016-09-13 19:42:01,681 [main] INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2016-09-13 19:42:01,681 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(681)) - Finished loading FSImage in 14 msecs
2016-09-13 19:42:01,681 [main] INFO  namenode.NameNode (NameNodeRpcServer.java:<init>(427)) - RPC server is binding to localhost:12298
2016-09-13 19:42:01,682 [main] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2016-09-13 19:42:01,690 [Socket Reader #1 for port 12298] INFO  ipc.Server (Server.java:run(930)) - Starting Socket Reader #1 for port 12298
2016-09-13 19:42:01,692 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(4685)) - Registered FSNamesystemState MBean
2016-09-13 19:42:01,703 [main] INFO  namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(126)) - Number of blocks under construction: 0
2016-09-13 19:42:01,704 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1242)) - No unit for dfs.namenode.decommission.interval(3) assuming SECONDS
2016-09-13 19:42:01,707 [main] WARN  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:leaveSafeMode(366)) - forceExit used when normal exist would suffice. Treating force exit as normal safe mode exit.
2016-09-13 19:42:01,707 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(383)) - STATE* Leaving safe mode after 0 secs
2016-09-13 19:42:01,707 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(389)) - STATE* Network topology has 0 racks and 0 datanodes
2016-09-13 19:42:01,708 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(391)) - STATE* UnderReplicatedBlocks has 0 blocks
2016-09-13 19:42:01,711 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1164)) - IPC Server Responder: starting
2016-09-13 19:42:01,711 [IPC Server listener on 12298] INFO  ipc.Server (Server.java:run(1000)) - IPC Server listener on 12298: starting
2016-09-13 19:42:01,713 [main] INFO  namenode.NameNode (NameNode.java:startCommonServices(809)) - NameNode RPC up at: localhost/127.0.0.1:12298
2016-09-13 19:42:01,713 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:startStandbyServices(1309)) - Starting services required for standby state
2016-09-13 19:42:01,714 [main] INFO  ha.EditLogTailer (EditLogTailer.java:<init>(169)) - Will roll l
...[truncated 80653 chars]...
5)) - Loaded FSImage in 0 seconds.
2016-09-13 19:42:02,647 [main] INFO  namenode.FSImage (FSImage.java:loadFSImage(897)) - Loaded image for txid 0 from /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/fsimage_0000000000000000000
2016-09-13 19:42:02,647 [main] INFO  namenode.FSImage (FSImage.java:loadEdits(835)) - Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@106c988 expecting start txid #1
2016-09-13 19:42:02,648 [main] INFO  namenode.FSImage (FSEditLogLoader.java:loadFSEdits(148)) - Start loading edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000003, /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/edits_0000000000000000001-0000000000000000003, http://localhost:33550/getJournal?jid=ns1&segmentTxId=1&storageInfo=-64%3A797603622%3A1473795721305%3AtestClusterID, http://localhost:58874/getJournal?jid=ns1&segmentTxId=1&storageInfo=-64%3A797603622%3A1473795721305%3AtestClusterID
2016-09-13 19:42:02,648 [main] INFO  namenode.RedundantEditLogInputStream (RedundantEditLogInputStream.java:nextOp(177)) - Fast-forwarding stream '/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000003' to transaction ID 1
2016-09-13 19:42:02,648 [main] INFO  namenode.FSImage (FSEditLogLoader.java:loadFSEdits(151)) - Edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/edits_0000000000000000001-0000000000000000003, /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/edits_0000000000000000001-0000000000000000003, http://localhost:33550/getJournal?jid=ns1&segmentTxId=1&storageInfo=-64%3A797603622%3A1473795721305%3AtestClusterID, http://localhost:58874/getJournal?jid=ns1&segmentTxId=1&storageInfo=-64%3A797603622%3A1473795721305%3AtestClusterID of size 116 edits # 3 loaded in 0 seconds
2016-09-13 19:42:02,649 [main] INFO  namenode.FSImage (FSImage.java:doUpgrade(402)) - Starting upgrade of local storage directories.
   old LV = -64; old CTime = 1473795721305.
   new LV = -64; new CTime = 1473795722649
2016-09-13 19:42:02,649 [main] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doPreUpgrade(117)) - Starting upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1
2016-09-13 19:42:02,650 [main] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doPreUpgrade(117)) - Starting upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2
2016-09-13 19:42:02,658 [IPC Server handler 0 on 44916] INFO  namenode.FileJournalManager (FileJournalManager.java:doPreUpgrade(616)) - Starting upgrade of edits directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1
2016-09-13 19:42:02,659 [IPC Server handler 0 on 44916] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doPreUpgrade(117)) - Starting upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1
2016-09-13 19:42:02,659 [IPC Server handler 4 on 42766] INFO  namenode.FileJournalManager (FileJournalManager.java:doPreUpgrade(616)) - Starting upgrade of edits directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1
2016-09-13 19:42:02,659 [IPC Server handler 4 on 42766] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doPreUpgrade(117)) - Starting upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1
2016-09-13 19:42:02,659 [IPC Server handler 4 on 38486] INFO  namenode.FileJournalManager (FileJournalManager.java:doPreUpgrade(616)) - Starting upgrade of edits directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1
2016-09-13 19:42:02,660 [IPC Server handler 4 on 38486] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doPreUpgrade(117)) - Starting upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1
2016-09-13 19:42:02,662 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(429)) - Saving image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/fsimage.ckpt_0000000000000000003 using no compression
2016-09-13 19:42:02,662 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(429)) - Saving image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/fsimage.ckpt_0000000000000000003 using no compression
2016-09-13 19:42:02,669 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(432)) - Image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/fsimage.ckpt_0000000000000000003 of size 410 bytes saved in 0 seconds.
2016-09-13 19:42:02,669 [FSImageSaver for /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1 of type IMAGE_AND_EDITS] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:save(432)) - Image file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/fsimage.ckpt_0000000000000000003 of size 410 bytes saved in 0 seconds.
2016-09-13 19:42:02,671 [main] INFO  namenode.FSImageTransactionalStorageInspector (FSImageTransactionalStorageInspector.java:inspectDirectory(78)) - No version file in /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1
2016-09-13 19:42:02,671 [main] INFO  namenode.FSImageTransactionalStorageInspector (FSImageTransactionalStorageInspector.java:inspectDirectory(78)) - No version file in /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2
2016-09-13 19:42:02,674 [main] INFO  namenode.FSImageTransactionalStorageInspector (FSImageTransactionalStorageInspector.java:inspectDirectory(78)) - No version file in /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1
2016-09-13 19:42:02,675 [main] INFO  namenode.FSImageTransactionalStorageInspector (FSImageTransactionalStorageInspector.java:inspectDirectory(78)) - No version file in /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2
2016-09-13 19:42:02,675 [IPC Server handler 1 on 42766] INFO  server.Journal (Journal.java:doUpgrade(1030)) - Starting upgrade of edits directory: .
   old LV = -64; old CTime = 1473795721305.
   new LV = -64; new CTime = 1473795722649
2016-09-13 19:42:02,676 [IPC Server handler 1 on 42766] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doUpgrade(184)) - Performing upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1
2016-09-13 19:42:02,676 [IPC Server handler 1 on 38486] INFO  server.Journal (Journal.java:doUpgrade(1030)) - Starting upgrade of edits directory: .
   old LV = -64; old CTime = 1473795721305.
   new LV = -64; new CTime = 1473795722649
2016-09-13 19:42:02,676 [IPC Server handler 1 on 38486] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doUpgrade(184)) - Performing upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1
2016-09-13 19:42:02,676 [IPC Server handler 1 on 44916] INFO  server.Journal (Journal.java:doUpgrade(1030)) - Starting upgrade of edits directory: .
   old LV = -64; old CTime = 1473795721305.
   new LV = -64; new CTime = 1473795722649
2016-09-13 19:42:02,676 [IPC Server handler 1 on 44916] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doUpgrade(184)) - Performing upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1
2016-09-13 19:42:02,680 [main] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doUpgrade(184)) - Performing upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1
2016-09-13 19:42:02,681 [main] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doUpgrade(184)) - Performing upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2
2016-09-13 19:42:02,683 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1043)) - Need to save fs image? false (staleImage=false, haEnabled=true, isRollingUpgrade=false)
2016-09-13 19:42:02,685 [main] INFO  namenode.FSEditLog (FSEditLog.java:startLogSegment(1273)) - Starting log segment at 4
2016-09-13 19:42:02,687 [IPC Server handler 3 on 38486] INFO  server.Journal (Journal.java:startLogSegment(560)) - Updating lastWriterEpoch from 2 to 3 for client /127.0.0.1
2016-09-13 19:42:02,687 [IPC Server handler 4 on 44916] INFO  server.Journal (Journal.java:startLogSegment(560)) - Updating lastWriterEpoch from 2 to 3 for client /127.0.0.1
2016-09-13 19:42:02,688 [IPC Server handler 2 on 42766] INFO  server.Journal (Journal.java:startLogSegment(560)) - Updating lastWriterEpoch from 2 to 3 for client /127.0.0.1
2016-09-13 19:42:02,698 [main] INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2016-09-13 19:42:02,698 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(681)) - Finished loading FSImage in 87 msecs
2016-09-13 19:42:02,698 [main] INFO  namenode.NameNode (NameNodeRpcServer.java:<init>(427)) - RPC server is binding to localhost:12296
2016-09-13 19:42:02,699 [main] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2016-09-13 19:42:02,699 [Socket Reader #1 for port 12296] INFO  ipc.Server (Server.java:run(930)) - Starting Socket Reader #1 for port 12296
2016-09-13 19:42:02,701 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(4685)) - Registered FSNamesystemState MBean
2016-09-13 19:42:02,711 [main] INFO  namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(126)) - Number of blocks under construction: 0
2016-09-13 19:42:02,711 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1242)) - No unit for dfs.namenode.decommission.interval(3) assuming SECONDS
2016-09-13 19:42:02,712 [main] WARN  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:leaveSafeMode(366)) - forceExit used when normal exist would suffice. Treating force exit as normal safe mode exit.
2016-09-13 19:42:02,712 [main] INFO  blockmanagement.BlockManager (BlockManager.java:initializeReplQueues(4440)) - initializing replication queues
2016-09-13 19:42:02,713 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(383)) - STATE* Leaving safe mode after 0 secs
2016-09-13 19:42:02,713 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(389)) - STATE* Network topology has 0 racks and 0 datanodes
2016-09-13 19:42:02,713 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(391)) - STATE* UnderReplicatedBlocks has 0 blocks
2016-09-13 19:42:02,718 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3187)) - Total number of blocks            = 0
2016-09-13 19:42:02,718 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3188)) - Number of invalid blocks          = 0
2016-09-13 19:42:02,718 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3189)) - Number of under-replicated blocks = 0
2016-09-13 19:42:02,719 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3190)) - Number of  over-replicated blocks = 0
2016-09-13 19:42:02,719 [Reconstruction Queue Initializer] INFO  blockmanagement.BlockManager (BlockManager.java:processMisReplicatesAsync(3192)) - Number of blocks being written    = 0
2016-09-13 19:42:02,719 [Reconstruction Queue Initializer] INFO  hdfs.StateChange (BlockManager.java:processMisReplicatesAsync(3194)) - STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 6 msec
2016-09-13 19:42:02,719 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1164)) - IPC Server Responder: starting
2016-09-13 19:42:02,720 [IPC Server listener on 12296] INFO  ipc.Server (Server.java:run(1000)) - IPC Server listener on 12296: starting
2016-09-13 19:42:02,721 [main] INFO  namenode.NameNode (NameNode.java:startCommonServices(809)) - NameNode RPC up at: localhost/127.0.0.1:12296
2016-09-13 19:42:02,721 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1152)) - Starting services required for active state
2016-09-13 19:42:02,722 [main] INFO  namenode.FSDirectory (FSDirectory.java:updateCountForQuota(688)) - Initializing quota with 4 thread(s)
2016-09-13 19:42:02,722 [main] INFO  namenode.FSDirectory (FSDirectory.java:updateCountForQuota(697)) - Quota initialization completed in 0 milliseconds
name space=2
storage space=0
storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2016-09-13 19:42:02,725 [CacheReplicationMonitor(1988885478)] INFO  blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(161)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2016-09-13 19:42:02,744 [IPC Server handler 0 on 12296] INFO  FSNamesystem.audit (FSNamesystem.java:logAuditMessage(7126)) - allowed=true	ugi=jenkins (auth:SIMPLE)	ip=/127.0.0.1	cmd=mkdirs	src=/foo2	dst=null	perm=jenkins:supergroup:rwxr-xr-x	proto=rpc
2016-09-13 19:42:02,746 [main] INFO  ha.BootstrapStandby (BootstrapStandby.java:parseConfAndFindOtherNN(411)) - Found nn: nn0, ipc: localhost/127.0.0.1:12296
=====================================================
About to bootstrap Standby ID nn1 from:
           Nameservice ID: ns1
        Other Namenode ID: nn0
  Other NN's HTTP address: http://localhost:12297
  Other NN's IPC  address: localhost/127.0.0.1:12296
             Namespace ID: 797603622
            Block pool ID: BP-1945458659-172.17.0.3-1473795721305
               Cluster ID: testClusterID
           Layout version: -64
       isUpgradeFinalized: false
=====================================================
2016-09-13 19:42:02,752 [main] INFO  ha.BootstrapStandby (BootstrapStandby.java:doRun(212)) - The active NameNode is in Upgrade. Prepare the upgrade for the standby NameNode as well.
2016-09-13 19:42:02,752 [main] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:02,753 [main] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-4/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:02,759 [main] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-09-13 19:42:02,776 [main] INFO  namenode.TransferFsImage (TransferFsImage.java:getFileClient(423)) - Opening connection to http://localhost:12297/imagetransfer?getimage=1&txid=3&storageInfo=-64:797603622:1473795722649:testClusterID&bootstrapstandby=true
2016-09-13 19:42:02,782 [main] INFO  namenode.TransferFsImage (TransferFsImage.java:receiveFile(601)) - Combined time for fsimage download and fsync to all disks took 0.00s. The fsimage download took 0.00s at 0.00 KB/s. Synchronous (fsync) write to disk of /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3/current/fsimage.ckpt_0000000000000000003 took 0.00s. Synchronous (fsync) write to disk of /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-4/current/fsimage.ckpt_0000000000000000003 took 0.00s.
2016-09-13 19:42:02,782 [main] INFO  namenode.TransferFsImage (TransferFsImage.java:downloadImageToStorage(145)) - Downloaded file fsimage.ckpt_0000000000000000003 size 410 bytes.
2016-09-13 19:42:02,786 [main] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doUpgrade(184)) - Performing upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3
2016-09-13 19:42:02,787 [main] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doUpgrade(184)) - Performing upgrade of storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-4
2016-09-13 19:42:02,787 [main] INFO  namenode.NameNode (NameNode.java:createNameNode(1556)) - createNameNode []
2016-09-13 19:42:02,788 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:init(159)) - NameNode metrics system started (again)
2016-09-13 19:42:02,788 [main] INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is hdfs://localhost:12298
2016-09-13 19:42:02,788 [main] INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(470)) - Clients are to use localhost:12298 to access this namenode/service.
2016-09-13 19:42:02,791 [org.apache.hadoop.util.JvmPauseMonitor$Monitor@19a5b637] INFO  util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2016-09-13 19:42:02,791 [main] INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1516)) - Starting Web-server for hdfs at: http://localhost:12299
2016-09-13 19:42:02,793 [main] INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(246)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2016-09-13 19:42:02,794 [main] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2016-09-13 19:42:02,794 [main] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-09-13 19:42:02,795 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2016-09-13 19:42:02,795 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-09-13 19:42:02,795 [main] INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-09-13 19:42:02,796 [main] INFO  http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(94)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2016-09-13 19:42:02,796 [main] INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(653)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-09-13 19:42:02,796 [main] INFO  http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 12299
2016-09-13 19:42:02,796 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2016-09-13 19:42:02,818 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Started SelectChannelConnector@localhost:12299
2016-09-13 19:42:02,823 [main] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-09-13 19:42:02,823 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(709)) - KeyProvider: null
2016-09-13 19:42:02,823 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(716)) - fsLock is fair:true
2016-09-13 19:42:02,823 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(732)) - fsOwner             = jenkins (auth:SIMPLE)
2016-09-13 19:42:02,824 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(733)) - supergroup          = supergroup
2016-09-13 19:42:02,824 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(734)) - isPermissionEnabled = true
2016-09-13 19:42:02,824 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(743)) - Determined nameservice ID: ns1
2016-09-13 19:42:02,824 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(745)) - HA Enabled: true
2016-09-13 19:42:02,825 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(250)) - dfs.block.invalidate.limit=1000
2016-09-13 19:42:02,825 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(256)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2016-09-13 19:42:02,825 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(73)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-09-13 19:42:02,825 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(78)) - The block deletion will start around 2016 Sep 13 19:42:02
2016-09-13 19:42:02,826 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2016-09-13 19:42:02,826 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:02,826 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-09-13 19:42:02,826 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^22 = 4194304 entries
2016-09-13 19:42:02,831 [main] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(457)) - dfs.block.access.token.enable=false
2016-09-13 19:42:02,831 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(150)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-09-13 19:42:02,831 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-09-13 19:42:02,832 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(153)) - dfs.namenode.safemode.extension = 0
2016-09-13 19:42:02,832 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(443)) - defaultReplication         = 0
2016-09-13 19:42:02,832 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(444)) - maxReplication             = 512
2016-09-13 19:42:02,832 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(445)) - minReplication             = 1
2016-09-13 19:42:02,832 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(446)) - maxReplicationStreams      = 2
2016-09-13 19:42:02,832 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(447)) - replicationRecheckInterval = 3000
2016-09-13 19:42:02,833 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(448)) - encryptDataTransfer        = false
2016-09-13 19:42:02,833 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(449)) - maxNumBlocksToLog          = 1000
2016-09-13 19:42:02,833 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2016-09-13 19:42:02,833 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:02,833 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2016-09-13 19:42:02,834 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^21 = 2097152 entries
2016-09-13 19:42:02,836 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(257)) - ACLs enabled? false
2016-09-13 19:42:02,836 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(261)) - POSIX ACL inheritance enabled? false
2016-09-13 19:42:02,836 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(265)) - XAttrs enabled? true
2016-09-13 19:42:02,836 [main] INFO  namenode.NameNode (FSDirectory.java:<init>(329)) - Caching file names occuring more than 10 times
2016-09-13 19:42:02,837 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2016-09-13 19:42:02,837 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:02,837 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2016-09-13 19:42:02,837 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^19 = 524288 entries
2016-09-13 19:42:02,839 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2016-09-13 19:42:02,839 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
2016-09-13 19:42:02,839 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2016-09-13 19:42:02,839 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(936)) - Retry cache on namenode is enabled
2016-09-13 19:42:02,839 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(944)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-09-13 19:42:02,840 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2016-09-13 19:42:02,840 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:02,840 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2016-09-13 19:42:02,840 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^16 = 65536 entries
2016-09-13 19:42:02,841 [main] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:02,842 [main] INFO  common.Storage (Storage.java:tryLock(789)) - Lock on /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-4/in_use.lock acquired by nodename 5403@514ff61187c2
2016-09-13 19:42:02,855 [main] INFO  namenode.FSImage (FSImage.java:loadFSImage(666)) - No edit log streams selected.
2016-09-13 19:42:02,855 [main] INFO  namenode.FSImage (FSImage.java:loadFSImageFile(731)) - Planning to load image: FSImageFile(file=/testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3/current/fsimage_0000000000000000003, cpktTxId=0000000000000000003)
2016-09-13 19:42:02,856 [main] INFO  namenode.FSImageFormatPBINode (FSImageFormatPBINode.java:loadINodeSection(265)) - Loading 2 INodes.
2016-09-13 19:42:02,857 [main] INFO  namenode.FSImageFormatProtobuf (FSImageFormatProtobuf.java:load(185)) - Loaded FSImage in 0 seconds.
2016-09-13 19:42:02,857 [main] INFO  namenode.FSImage (FSImage.java:loadFSImage(897)) - Loaded image for txid 3 from /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-3/current/fsimage_0000000000000000003
2016-09-13 19:42:02,857 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFSImage(1043)) - Need to save fs image? false (staleImage=false, haEnabled=true, isRollingUpgrade=false)
2016-09-13 19:42:02,857 [main] INFO  namenode.NameCache (NameCache.java:initialized(143)) - initialized with 0 entries 0 lookups
2016-09-13 19:42:02,858 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(681)) - Finished loading FSImage in 17 msecs
2016-09-13 19:42:02,858 [main] INFO  namenode.NameNode (NameNodeRpcServer.java:<init>(427)) - RPC server is binding to localhost:12298
2016-09-13 19:42:02,858 [main] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(75)) - Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2016-09-13 19:42:02,859 [Socket Reader #1 for port 12298] INFO  ipc.Server (Server.java:run(930)) - Starting Socket Reader #1 for port 12298
2016-09-13 19:42:02,861 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:registerMBean(4685)) - Registered FSNamesystemState MBean
2016-09-13 19:42:02,880 [main] INFO  namenode.LeaseManager (LeaseManager.java:getNumUnderConstructionBlocks(126)) - Number of blocks under construction: 0
2016-09-13 19:42:02,881 [main] INFO  Configuration.deprecation (Configuration.java:logDeprecation(1242)) - No unit for dfs.namenode.decommission.interval(3) assuming SECONDS
2016-09-13 19:42:02,882 [main] WARN  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:leaveSafeMode(366)) - forceExit used when normal exist would suffice. Treating force exit as normal safe mode exit.
2016-09-13 19:42:02,882 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(383)) - STATE* Leaving safe mode after 0 secs
2016-09-13 19:42:02,882 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(389)) - STATE* Network topology has 0 racks and 0 datanodes
2016-09-13 19:42:02,882 [main] INFO  hdfs.StateChange (BlockManagerSafeMode.java:leaveSafeMode(391)) - STATE* UnderReplicatedBlocks has 0 blocks
2016-09-13 19:42:02,884 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1164)) - IPC Server Responder: starting
2016-09-13 19:42:02,884 [IPC Server listener on 12298] INFO  ipc.Server (Server.java:run(1000)) - IPC Server listener on 12298: starting
2016-09-13 19:42:02,886 [main] INFO  namenode.NameNode (NameNode.java:startCommonServices(809)) - NameNode RPC up at: localhost/127.0.0.1:12298
2016-09-13 19:42:02,887 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:startStandbyServices(1309)) - Starting services required for standby state
2016-09-13 19:42:02,887 [main] INFO  ha.EditLogTailer (EditLogTailer.java:<init>(169)) - Will roll logs on active node every 120 seconds.
2016-09-13 19:42:02,888 [main] INFO  ha.StandbyCheckpointer (StandbyCheckpointer.java:start(137)) - Starting standby checkpoint thread...
Checkpointing active NN to possible NNs: [http://localhost:12297]
Serving checkpoints at http://localhost:12299
2016-09-13 19:42:02,888 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:restartNameNode(2042)) - Restarted the namenode
2016-09-13 19:42:02,897 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitActive(2543)) - Cluster is active
2016-09-13 19:42:02,899 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1907)) - Shutting down the Mini HDFS Cluster
2016-09-13 19:42:02,899 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:stopAndJoinNameNode(1981)) - Shutting down the namenode
2016-09-13 19:42:02,899 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1259)) - Stopping services started for active state
2016-09-13 19:42:02,899 [main] INFO  namenode.FSEditLog (FSEditLog.java:endCurrentLogSegment(1318)) - Ending log segment 4
2016-09-13 19:42:02,900 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$NameNodeEditLogRoller@58a7dc4] INFO  namenode.FSNamesystem (FSNamesystem.java:run(3935)) - NameNodeEditLogRoller was interrupted, exiting
2016-09-13 19:42:02,900 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$LazyPersistFileScrubber@61b60600] INFO  namenode.FSNamesystem (FSNamesystem.java:run(4022)) - LazyPersistFileScrubber was interrupted, exiting
2016-09-13 19:42:02,902 [main] INFO  namenode.FSEditLog (FSEditLog.java:printStatistics(721)) - Number of transactions: 3 Total time for transactions(ms): 0 Number of transactions batched in Syncs: 3 Number of syncs: 3 SyncTimes(ms): 6 1 1 
2016-09-13 19:42:02,904 [IPC Server handler 0 on 38486] INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1/current/edits_inprogress_0000000000000000004 -> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1/current/edits_0000000000000000004-0000000000000000006
2016-09-13 19:42:02,905 [IPC Server handler 0 on 42766] INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1/current/edits_inprogress_0000000000000000004 -> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1/current/edits_0000000000000000004-0000000000000000006
2016-09-13 19:42:02,906 [main] INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/edits_inprogress_0000000000000000004 -> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1/current/edits_0000000000000000004-0000000000000000006
2016-09-13 19:42:02,906 [main] INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/edits_inprogress_0000000000000000004 -> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2/current/edits_0000000000000000004-0000000000000000006
2016-09-13 19:42:02,907 [IPC Server handler 2 on 44916] INFO  namenode.FileJournalManager (FileJournalManager.java:finalizeLogSegment(143)) - Finalizing edits file /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1/current/edits_inprogress_0000000000000000004 -> /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1/current/edits_0000000000000000004-0000000000000000006
2016-09-13 19:42:02,907 [CacheReplicationMonitor(1988885478)] INFO  blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(170)) - Shutting down CacheReplicationMonitor
2016-09-13 19:42:02,913 [main] INFO  ipc.Server (Server.java:stop(2895)) - Stopping server on 12296
2016-09-13 19:42:02,915 [IPC Server listener on 12296] INFO  ipc.Server (Server.java:run(1032)) - Stopping IPC Server listener on 12296
2016-09-13 19:42:02,915 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1169)) - Stopping IPC Server Responder
2016-09-13 19:42:02,915 [ReplicationMonitor] INFO  blockmanagement.BlockManager (BlockManager.java:run(4208)) - Stopping ReplicationMonitor.
2016-09-13 19:42:02,919 [StorageInfoMonitor] INFO  blockmanagement.BlockManager (BlockManager.java:run(4243)) - Stopping thread.
2016-09-13 19:42:02,929 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1259)) - Stopping services started for active state
2016-09-13 19:42:02,929 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1352)) - Stopping services started for standby state
2016-09-13 19:42:02,930 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:12297
2016-09-13 19:42:03,034 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:stopAndJoinNameNode(1981)) - Shutting down the namenode
2016-09-13 19:42:03,034 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1352)) - Stopping services started for standby state
2016-09-13 19:42:03,035 [Edit log tailer] WARN  ha.EditLogTailer (EditLogTailer.java:doWork(447)) - Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:445)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397)
	at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:476)
	at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:393)
2016-09-13 19:42:03,035 [main] INFO  ipc.Server (Server.java:stop(2895)) - Stopping server on 12298
2016-09-13 19:42:03,036 [IPC Server listener on 12298] INFO  ipc.Server (Server.java:run(1032)) - Stopping IPC Server listener on 12298
2016-09-13 19:42:03,036 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1169)) - Stopping IPC Server Responder
2016-09-13 19:42:03,036 [StorageInfoMonitor] INFO  blockmanagement.BlockManager (BlockManager.java:run(4243)) - Stopping thread.
2016-09-13 19:42:03,036 [ReplicationMonitor] INFO  blockmanagement.BlockManager (BlockManager.java:run(4208)) - Stopping ReplicationMonitor.
2016-09-13 19:42:03,044 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:stopActiveServices(1259)) - Stopping services started for active state
2016-09-13 19:42:03,045 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:stopStandbyServices(1352)) - Stopping services started for standby state
2016-09-13 19:42:03,055 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:12299
2016-09-13 19:42:03,073 [main] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-09-13 19:42:03,074 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(709)) - KeyProvider: null
2016-09-13 19:42:03,074 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(716)) - fsLock is fair:true
2016-09-13 19:42:03,074 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(732)) - fsOwner             = jenkins (auth:SIMPLE)
2016-09-13 19:42:03,074 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(733)) - supergroup          = supergroup
2016-09-13 19:42:03,074 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(734)) - isPermissionEnabled = true
2016-09-13 19:42:03,075 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(743)) - Determined nameservice ID: ns1
2016-09-13 19:42:03,075 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(745)) - HA Enabled: true
2016-09-13 19:42:03,075 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(250)) - dfs.block.invalidate.limit=1000
2016-09-13 19:42:03,075 [main] INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(256)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2016-09-13 19:42:03,076 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(73)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2016-09-13 19:42:03,076 [main] INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(78)) - The block deletion will start around 2016 Sep 13 19:42:03
2016-09-13 19:42:03,076 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2016-09-13 19:42:03,076 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:03,076 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1.8 GB = 36.4 MB
2016-09-13 19:42:03,077 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^22 = 4194304 entries
2016-09-13 19:42:03,081 [main] INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(457)) - dfs.block.access.token.enable=false
2016-09-13 19:42:03,082 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(150)) - dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2016-09-13 19:42:03,082 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(151)) - dfs.namenode.safemode.min.datanodes = 0
2016-09-13 19:42:03,082 [main] INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(153)) - dfs.namenode.safemode.extension = 0
2016-09-13 19:42:03,082 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(443)) - defaultReplication         = 0
2016-09-13 19:42:03,082 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(444)) - maxReplication             = 512
2016-09-13 19:42:03,082 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(445)) - minReplication             = 1
2016-09-13 19:42:03,082 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(446)) - maxReplicationStreams      = 2
2016-09-13 19:42:03,083 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(447)) - replicationRecheckInterval = 3000
2016-09-13 19:42:03,083 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(448)) - encryptDataTransfer        = false
2016-09-13 19:42:03,083 [main] INFO  blockmanagement.BlockManager (BlockManager.java:<init>(449)) - maxNumBlocksToLog          = 1000
2016-09-13 19:42:03,083 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2016-09-13 19:42:03,083 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:03,084 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1.8 GB = 18.2 MB
2016-09-13 19:42:03,084 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^21 = 2097152 entries
2016-09-13 19:42:03,086 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(257)) - ACLs enabled? false
2016-09-13 19:42:03,086 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(261)) - POSIX ACL inheritance enabled? false
2016-09-13 19:42:03,086 [main] INFO  namenode.FSDirectory (FSDirectory.java:<init>(265)) - XAttrs enabled? true
2016-09-13 19:42:03,087 [main] INFO  namenode.NameNode (FSDirectory.java:<init>(329)) - Caching file names occuring more than 10 times
2016-09-13 19:42:03,087 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2016-09-13 19:42:03,087 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:03,087 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1.8 GB = 4.6 MB
2016-09-13 19:42:03,087 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^19 = 524288 entries
2016-09-13 19:42:03,088 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(65)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2016-09-13 19:42:03,088 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(67)) - NNTop conf: dfs.namenode.top.num.users = 10
2016-09-13 19:42:03,088 [main] INFO  metrics.TopMetrics (TopMetrics.java:logConf(69)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2016-09-13 19:42:03,088 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(936)) - Retry cache on namenode is enabled
2016-09-13 19:42:03,089 [main] INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(944)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2016-09-13 19:42:03,089 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2016-09-13 19:42:03,089 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2016-09-13 19:42:03,089 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1.8 GB = 559.3 KB
2016-09-13 19:42:03,089 [main] INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^16 = 65536 entries
2016-09-13 19:42:03,092 [main] INFO  namenode.FSEditLog (FSEditLog.java:newInstance(222)) - Edit logging is async:false
2016-09-13 19:42:03,092 [main] INFO  namenode.FSImage (FSImage.java:doRollback(465)) - Can perform rollback for Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1
2016-09-13 19:42:03,092 [main] INFO  namenode.FSImage (FSImage.java:doRollback(465)) - Can perform rollback for Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2
2016-09-13 19:42:03,106 [main] INFO  namenode.FSImage (FSImage.java:doRollback(475)) - Can perform rollback for shared edit log.
2016-09-13 19:42:03,107 [main] INFO  namenode.FSImage (FSImage.java:doRollback(488)) - Rolling back storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1.
   new LV = -64; new CTime = 1473795721305
2016-09-13 19:42:03,107 [main] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doRollBack(234)) - Rollback of /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-1 is complete.
2016-09-13 19:42:03,107 [main] INFO  namenode.FSImage (FSImage.java:doRollback(488)) - Rolling back storage directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2.
   new LV = -64; new CTime = 1473795721305
2016-09-13 19:42:03,109 [main] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doRollBack(234)) - Rollback of /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/name-0-2 is complete.
2016-09-13 19:42:03,112 [IPC Server handler 3 on 44916] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doRollBack(234)) - Rollback of /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1 is complete.
2016-09-13 19:42:03,112 [IPC Server handler 3 on 42766] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doRollBack(234)) - Rollback of /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1 is complete.
2016-09-13 19:42:03,112 [IPC Server handler 2 on 38486] INFO  namenode.NNUpgradeUtil (NNUpgradeUtil.java:doRollBack(234)) - Rollback of /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1 is complete.
2016-09-13 19:42:03,115 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:shutdown(1907)) - Shutting down the Mini HDFS Cluster
2016-09-13 19:42:03,115 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:stopAndJoinNameNode(1981)) - Shutting down the namenode
2016-09-13 19:42:03,115 [main] INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:stopAndJoinNameNode(1981)) - Shutting down the namenode
2016-09-13 19:42:03,116 [main] INFO  ipc.Server (Server.java:stop(2895)) - Stopping server on 44916
2016-09-13 19:42:03,117 [IPC Server listener on 44916] INFO  ipc.Server (Server.java:run(1032)) - Stopping IPC Server listener on 44916
2016-09-13 19:42:03,117 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1169)) - Stopping IPC Server Responder
2016-09-13 19:42:03,118 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
2016-09-13 19:42:03,218 [main] INFO  common.Storage (JNStorage.java:close(248)) - Closing journal storage for Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/waitactive
2016-09-13 19:42:03,218 [main] INFO  common.Storage (JNStorage.java:close(248)) - Closing journal storage for Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-0/ns1
2016-09-13 19:42:03,219 [main] INFO  ipc.Server (Server.java:stop(2895)) - Stopping server on 38486
2016-09-13 19:42:03,222 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1169)) - Stopping IPC Server Responder
2016-09-13 19:42:03,223 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
2016-09-13 19:42:03,225 [IPC Server listener on 38486] INFO  ipc.Server (Server.java:run(1032)) - Stopping IPC Server listener on 38486
2016-09-13 19:42:03,324 [main] INFO  common.Storage (JNStorage.java:close(248)) - Closing journal storage for Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/waitactive
2016-09-13 19:42:03,324 [main] INFO  common.Storage (JNStorage.java:close(248)) - Closing journal storage for Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-1/ns1
2016-09-13 19:42:03,324 [main] INFO  ipc.Server (Server.java:stop(2895)) - Stopping server on 42766
2016-09-13 19:42:03,325 [IPC Server listener on 42766] INFO  ipc.Server (Server.java:run(1032)) - Stopping IPC Server listener on 42766
2016-09-13 19:42:03,325 [IPC Server Responder] INFO  ipc.Server (Server.java:run(1169)) - Stopping IPC Server Responder
2016-09-13 19:42:03,326 [main] INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
2016-09-13 19:42:03,426 [main] INFO  common.Storage (JNStorage.java:close(248)) - Closing journal storage for Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/waitactive
2016-09-13 19:42:03,427 [main] INFO  common.Storage (JNStorage.java:close(248)) - Closing journal storage for Storage Directory /testptch/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/1/dfs/journalnode-2/ns1
2016-09-13 19:42:03,427 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2016-09-13 19:42:03,429 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2016-09-13 19:42:03,429 [main] INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - NameNode metrics system shutdown complete.
{code}
h6.Standard Error
{code}
Data exists in QJM to [127.0.0.1:44916, 127.0.0.1:38486, 127.0.0.1:42766]. Formatting anyway.
"rollBack" will remove the current state of the file system,
returning you to the state prior to initiating your recent.
upgrade. This action is permanent and cannot be undone. If you
are performing a rollback in an HA environment, you should be
certain that no NameNode process is running on any host.
{code}
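As an aside on hardening intermittent failures of this kind: one common pattern (the idea behind {{GenericTestUtils.waitFor}} in Hadoop's test utilities) is to poll a condition until it holds or a deadline passes, instead of asserting it once. The sketch below is a minimal, self-contained illustration of that pattern only; it is not part of this test or any proposed patch, and the class and timing values are hypothetical.

```java
import java.util.function.BooleanSupplier;

public class WaitForSketch {

    /**
     * Polls {@code check} every {@code checkEveryMillis} until it returns
     * true or {@code waitForMillis} elapses. Returns whether the condition
     * was eventually observed as true.
     */
    static boolean waitFor(BooleanSupplier check, long checkEveryMillis,
                           long waitForMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + waitForMillis;
        while (System.currentTimeMillis() < deadline) {
            if (check.getAsBoolean()) {
                return true;
            }
            Thread.sleep(checkEveryMillis);
        }
        // One final check at the deadline, so a slow scheduler does not
        // cause a spurious timeout.
        return check.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a condition that only becomes true after a short delay,
        // e.g. a NameNode finishing an asynchronous shutdown or rollback.
        long readyAt = System.currentTimeMillis() + 50;
        boolean ok = waitFor(() -> System.currentTimeMillis() >= readyAt,
                             10, 1000);
        System.out.println(ok ? "condition met" : "timed out");
    }
}
```

Replacing a one-shot {{assertTrue}} with such a bounded poll trades a fixed race window for an explicit timeout, which is usually the difference between an intermittent failure and a deterministic one.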

> Fix intermittently failing TestDFSUpgradeWithHA#testRollbackWithJournalNodes
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-10644
>                 URL: https://issues.apache.org/jira/browse/HDFS-10644
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 3.0.0-alpha2
>            Reporter: Mingliang Liu
>
> See [example stack trace | https://builds.apache.org/job/PreCommit-HADOOP-Build/9996/testReport/org.apache.hadoop.hdfs.server.namenode.ha/TestDFSUpgradeWithHA/testRollbackWithJournalNodes/] along with the standard output/error.
> h6. Stacktrace
> {code}
> java.lang.AssertionError: null
> 	at org.junit.Assert.fail(Assert.java:86)
> 	at org.junit.Assert.assertTrue(Assert.java:41)
> 	at org.junit.Assert.assertTrue(Assert.java:52)
> 	at org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA.testRollbackWithJournalNodes(TestDFSUpgradeWithHA.java:687)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

