ambari-dev mailing list archives

From "Theodore Omtzigt (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (AMBARI-13462) NameNode fails to start due to unexpected version of storage directory
Date Sun, 18 Oct 2015 17:30:05 GMT

[ https://issues.apache.org/jira/browse/AMBARI-13462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14962523#comment-14962523 ]

Theodore Omtzigt commented on AMBARI-13462:
-------------------------------------------

I can't find a button to attach the log file, so here are the contents of the NameNode log:

2015-10-18 13:15:14,621 INFO  namenode.NameNode (StringUtils.java:startupShutdownMessage(633))
- STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = dl001.data-lake.net/192.168.1.201
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/hdp/current/hadoop-client/conf:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/htrace-core-3.0.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop-2.
6.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/curator-framework-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/hadoop-annotations-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.6.0/share/ha
doop/common/lib/curator-client-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/hadoop-auth-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.0/share/hadoop/common/hadoop-nfs-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0-tests.jar:/opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.0/shar
e/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdfs-2.6.0-tests.jar:/usr/hdp/2.3.2.0-2950/hadoop-yarn/share/hadoop/yarn/*:/usr/hdp/2.3.2.0-2950/hadoop-mapreduce/share/hadoop/mapreduce/*::::
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1;
compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_40
************************************************************/
2015-10-18 13:15:14,632 INFO  namenode.NameNode (SignalLogger.java:register(91)) - registered
UNIX signal handlers for [TERM, HUP, INT]
2015-10-18 13:15:14,635 INFO  namenode.NameNode (NameNode.java:createNameNode(1367)) - createNameNode
[]
2015-10-18 13:15:14,937 INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(111)) - loaded
properties from hadoop-metrics2.properties
2015-10-18 13:15:15,090 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(63))
- Initializing Timeline metrics sink.
2015-10-18 13:15:15,091 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(81))
- Identified hostname = dl001.data-lake.net, serviceName = namenode
2015-10-18 13:15:15,118 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(93))
- Collector Uri: http://dl003.data-lake.net:6188/ws/v1/timeline/metrics
2015-10-18 13:15:15,127 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(195))
- Sink timeline started
2015-10-18 13:15:15,204 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376))
- Scheduled snapshot period at 60 second(s).
2015-10-18 13:15:15,204 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) -
NameNode metrics system started
2015-10-18 13:15:15,205 INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(349))
- fs.defaultFS is hdfs://dl001.data-lake.net:8020
2015-10-18 13:15:15,205 INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(369))
- Clients are to use dl001.data-lake.net:8020 to access this namenode/service.
2015-10-18 13:15:15,282 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1694))
- Starting Web-server for hdfs at: http://dl001.data-lake.net:50070
2015-10-18 13:15:15,321 INFO  mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log)
via org.mortbay.log.Slf4jLog
2015-10-18 13:15:15,325 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80))
- Http request log for http.requests.namenode is not defined
2015-10-18 13:15:15,334 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(699)) - Added
global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-10-18 13:15:15,340 INFO  http.HttpServer2 (HttpServer2.java:addFilter(677)) - Added filter
static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context hdfs
2015-10-18 13:15:15,340 INFO  http.HttpServer2 (HttpServer2.java:addFilter(684)) - Added filter
static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs
2015-10-18 13:15:15,340 INFO  http.HttpServer2 (HttpServer2.java:addFilter(684)) - Added filter
static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
2015-10-18 13:15:15,363 INFO  http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(86)) -
Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-10-18 13:15:15,365 INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(603))
- addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
2015-10-18 13:15:15,378 INFO  http.HttpServer2 (HttpServer2.java:openListeners(887)) - Jetty
bound to port 50070
2015-10-18 13:15:15,379 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2015-10-18 13:15:15,587 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@dl001.data-lake.net:50070
2015-10-18 13:15:15,620 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-10-18 13:15:15,621 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-10-18 13:15:15,621 WARN  namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(705))
- Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss
due to lack of redundant storage directories!
2015-10-18 13:15:15,622 WARN  namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(710))
- Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of
data loss due to lack of redundant storage directories!
2015-10-18 13:15:15,627 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-10-18 13:15:15,627 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode
should be specified as a URI in configuration files. Please update hdfs configuration.
2015-10-18 13:15:15,633 WARN  common.Storage (NNStorage.java:setRestoreFailedStorage(210))
- set restore failed storage to true
2015-10-18 13:15:15,657 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(774))
- No KeyProvider found.
2015-10-18 13:15:15,663 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(782))
- Enabling async auditlog
2015-10-18 13:15:15,665 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(786))
- fsLock is fair:false
2015-10-18 13:15:15,695 INFO  blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(79))
- Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is
less than dfs.namenode.heartbeat.recheck-interval
2015-10-18 13:15:15,702 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(232))
- dfs.block.invalidate.limit=1000
2015-10-18 13:15:15,702 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(238))
- dfs.namenode.datanode.registration.ip-hostname-check=true
2015-10-18 13:15:15,705 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(71))
- dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2015-10-18 13:15:15,705 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76))
- The block deletion will start around 2015 Oct 18 14:15:15
2015-10-18 13:15:15,707 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing
capacity for map BlocksMap
2015-10-18 13:15:15,707 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type
      = 64-bit
2015-10-18 13:15:15,709 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0%
max memory 2.0 GB = 40.4 MB
2015-10-18 13:15:15,709 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity
     = 2^22 = 4194304 entries
2015-10-18 13:15:15,720 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(364))
- dfs.block.access.token.enable=true
2015-10-18 13:15:15,720 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(384))
- dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s),
dfs.encrypt.data.transfer.algorithm=null
2015-10-18 13:15:15,855 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(349))
- defaultReplication         = 3
2015-10-18 13:15:15,856 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(350))
- maxReplication             = 50
2015-10-18 13:15:15,856 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(351))
- minReplication             = 1
2015-10-18 13:15:15,856 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(352))
- maxReplicationStreams      = 2
2015-10-18 13:15:15,856 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(353))
- shouldCheckForEnoughRacks  = true
2015-10-18 13:15:15,856 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(354))
- replicationRecheckInterval = 3000
2015-10-18 13:15:15,856 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(355))
- encryptDataTransfer        = false
2015-10-18 13:15:15,856 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(356))
- maxNumBlocksToLog          = 1000
2015-10-18 13:15:15,861 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(809))
- fsOwner             = hdfs (auth:SIMPLE)
2015-10-18 13:15:15,862 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(810))
- supergroup          = hdfs
2015-10-18 13:15:15,862 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(811))
- isPermissionEnabled = true
2015-10-18 13:15:15,862 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(822))
- HA Enabled: false
2015-10-18 13:15:15,864 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(859))
- Append Enabled: true
2015-10-18 13:15:15,894 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing
capacity for map INodeMap
2015-10-18 13:15:15,894 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type
      = 64-bit
2015-10-18 13:15:15,895 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 1.0%
max memory 2.0 GB = 20.2 MB
2015-10-18 13:15:15,895 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity
     = 2^21 = 2097152 entries
2015-10-18 13:15:15,898 INFO  namenode.NameNode (FSDirectory.java:<init>(234)) - Caching
file names occuring more than 10 times
2015-10-18 13:15:15,906 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing
capacity for map cachedBlocks
2015-10-18 13:15:15,907 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type
      = 64-bit
2015-10-18 13:15:15,907 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.25%
max memory 2.0 GB = 5.1 MB
2015-10-18 13:15:15,907 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity
     = 2^19 = 524288 entries
2015-10-18 13:15:15,909 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5663))
- dfs.namenode.safemode.threshold-pct = 1.0
2015-10-18 13:15:15,909 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5664))
- dfs.namenode.safemode.min.datanodes = 0
2015-10-18 13:15:15,909 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(5665))
- dfs.namenode.safemode.extension     = 30000
2015-10-18 13:15:15,910 INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(957))
- Retry cache on namenode is enabled
2015-10-18 13:15:15,910 INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(965))
- Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-10-18 13:15:15,912 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing
capacity for map NameNodeRetryCache
2015-10-18 13:15:15,912 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type
      = 64-bit
2015-10-18 13:15:15,913 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746%
max memory 2.0 GB = 621.3 KB
2015-10-18 13:15:15,913 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity
     = 2^16 = 65536 entries
2015-10-18 13:15:15,916 INFO  namenode.NNConf (NNConf.java:<init>(62)) - ACLs enabled?
false
2015-10-18 13:15:15,916 INFO  namenode.NNConf (NNConf.java:<init>(66)) - XAttrs enabled?
true
2015-10-18 13:15:15,917 INFO  namenode.NNConf (NNConf.java:<init>(74)) - Maximum size
of an xattr: 16384
2015-10-18 13:15:15,925 INFO  common.Storage (Storage.java:tryLock(715)) - Lock on /hadoop/hdfs/namenode/in_use.lock
acquired by nodename 19255@dl001.data-lake.net
2015-10-18 13:15:15,950 WARN  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(741))
- Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage
directory /hadoop/hdfs/namenode. Reported: -63. Expecting = -60.
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:610)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:639)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:325)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2015-10-18 13:15:15,953 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@dl001.data-lake.net:50070
2015-10-18 13:15:16,054 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) -
Stopping NameNode metrics system...
2015-10-18 13:15:16,055 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(135))
- timeline thread interrupted.
2015-10-18 13:15:16,055 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) -
NameNode metrics system stopped.
2015-10-18 13:15:16,056 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(605))
- NameNode metrics system shutdown complete.
2015-10-18 13:15:16,056 FATAL namenode.NameNode (NameNode.java:main(1509)) - Failed to start
namenode.
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage
directory /hadoop/hdfs/namenode. Reported: -63. Expecting = -60.
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:178)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:610)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:639)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:325)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2015-10-18 13:15:16,059 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with
status 1
2015-10-18 13:15:16,061 INFO  namenode.NameNode (StringUtils.java:run(659)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dl001.data-lake.net/192.168.1.201
************************************************************/
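(Editor's note, not part of the original report.) The "Reported: -63. Expecting = -60" in the IncorrectVersionException above comes from the layoutVersion field of the VERSION file inside the storage directory, here /hadoop/hdfs/namenode/current/VERSION. The sketch below uses a hypothetical sample file under /tmp to show how to read that field; the namespaceID value is made up for illustration, while layoutVersion=-63 matches the value the log reports.

```shell
# Create a sample VERSION file like the one the NameNode reads from
# its storage directory (fields other than layoutVersion are illustrative):
cat > /tmp/VERSION <<'EOF'
namespaceID=123456789
cTime=0
storageType=NAME_NODE
layoutVersion=-63
EOF

# Extract the layout version the same way one would inspect it on the
# failing node (run against /hadoop/hdfs/namenode/current/VERSION there):
layout=$(grep '^layoutVersion=' /tmp/VERSION | cut -d= -f2)
echo "$layout"
```

A mismatch like this usually means the metadata on disk was written by a newer HDFS layout than the binaries now starting, so checking this field on the node is a quick way to confirm which side is out of date.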
2015-10-18 13:20:31,413 INFO  namenode.NameNode (StringUtils.java:startupShutdownMessage(633))
- STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = dl001.data-lake.net/192.168.1.201
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/hdp/current/hadoop-client/conf:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/hamcrest-core-1.3.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-el-1.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/zookeeper-3.4.6.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/htrace-core-3.0.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/httpclient-4.2.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/activation-1.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jsch-0.1.42.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jersey-json-1.9.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jersey-core-1.9.jar:/opt/hadoop-2.
6.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-digester-1.8.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/asm-3.2.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/httpcore-4.2.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/curator-framework-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/hadoop-annotations-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-io-2.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/avro-1.7.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/junit-4.11.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/opt/hadoop-2.6.0/share/ha
doop/common/lib/curator-client-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/hadoop-auth-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-net-3.1.jar:/opt/hadoop-2.6.0/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.0/share/hadoop/common/hadoop-nfs-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0-tests.jar:/opt/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/asm-3.2.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/opt/hadoop-2.6.0/shar
e/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar:/opt/hadoop-2.6.0/share/hadoop/hdfs/hadoop-hdfs-2.6.0-tests.jar:/usr/hdp/2.3.2.0-2950/hadoop-yarn/share/hadoop/yarn/*:/usr/hdp/2.3.2.0-2950/hadoop-mapreduce/share/hadoop/mapreduce/*::::
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_40
************************************************************/
2015-10-18 13:20:31,423 INFO  namenode.NameNode (SignalLogger.java:register(91)) - registered UNIX signal handlers for [TERM, HUP, INT]
2015-10-18 13:20:31,425 INFO  namenode.NameNode (NameNode.java:createNameNode(1367)) - createNameNode []
2015-10-18 13:20:31,720 INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(111)) - loaded properties from hadoop-metrics2.properties
2015-10-18 13:20:31,871 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(63)) - Initializing Timeline metrics sink.
2015-10-18 13:20:31,872 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(81)) - Identified hostname = dl001.data-lake.net, serviceName = namenode
2015-10-18 13:20:31,897 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(93)) - Collector Uri: http://dl003.data-lake.net:6188/ws/v1/timeline/metrics
2015-10-18 13:20:31,906 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(195)) - Sink timeline started
2015-10-18 13:20:31,980 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) - Scheduled snapshot period at 60 second(s).
2015-10-18 13:20:31,980 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - NameNode metrics system started
2015-10-18 13:20:31,981 INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(349)) - fs.defaultFS is hdfs://dl001.data-lake.net:8020
2015-10-18 13:20:31,982 INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(369)) - Clients are to use dl001.data-lake.net:8020 to access this namenode/service.
2015-10-18 13:20:32,054 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1694)) - Starting Web-server for hdfs at: http://dl001.data-lake.net:50070
2015-10-18 13:20:32,094 INFO  mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-10-18 13:20:32,098 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2015-10-18 13:20:32,107 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(699)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-10-18 13:20:32,112 INFO  http.HttpServer2 (HttpServer2.java:addFilter(677)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-10-18 13:20:32,112 INFO  http.HttpServer2 (HttpServer2.java:addFilter(684)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-10-18 13:20:32,112 INFO  http.HttpServer2 (HttpServer2.java:addFilter(684)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-10-18 13:20:32,135 INFO  http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(86)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-10-18 13:20:32,137 INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(603)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-10-18 13:20:32,151 INFO  http.HttpServer2 (HttpServer2.java:start(830)) - HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: dl001.data-lake.net:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:437)
        at sun.nio.ch.Net.bind(Net.java:429)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
        ... 8 more
2015-10-18 13:20:32,153 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping NameNode metrics system...
2015-10-18 13:20:32,154 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(135)) - timeline thread interrupted.
2015-10-18 13:20:32,154 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - NameNode metrics system stopped.
2015-10-18 13:20:32,154 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(605)) - NameNode metrics system shutdown complete.
2015-10-18 13:20:32,154 FATAL namenode.NameNode (NameNode.java:main(1509)) - Failed to start namenode.
java.net.BindException: Port in use: dl001.data-lake.net:50070
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:891)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:827)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:703)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:590)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:437)
        at sun.nio.ch.Net.bind(Net.java:429)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:886)
        ... 8 more
2015-10-18 13:20:32,156 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - Exiting with status 1
2015-10-18 13:20:32,158 INFO  namenode.NameNode (StringUtils.java:run(659)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at dl001.data-lake.net/192.168.1.201
************************************************************/
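Note that the failure in this log is a BindException on the web UI port, which means something was already listening on 50070 when the NameNode came up. A quick way to confirm that from the affected host (a minimal sketch; the host name and port are taken from the log above, and the probe only tells you the port is occupied, not by what):

```python
import socket

def port_in_use(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something already accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on a successful connect, an errno otherwise
        return s.connect_ex((host, port)) == 0

# Check the NameNode web UI port from the log above before retrying the start:
# port_in_use("dl001.data-lake.net", 50070)
```

If this returns True before a start attempt, a stale or duplicate NameNode (or another daemon) is still holding the port.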


> NameNode fails to start due to unexpected version of storage directory
> ----------------------------------------------------------------------
>
>                 Key: AMBARI-13462
>                 URL: https://issues.apache.org/jira/browse/AMBARI-13462
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.1.2
>         Environment: Ubuntu 14.04 LTS fresh install
>            Reporter: Theodore Omtzigt
>
> NameNode service does not start on fresh install through Wizard.
> 2015-10-18 10:18:54,427 FATAL namenode.NameNode (NameNode.java:main(1509)) - Failed to start namenode.
> org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /hadoop/hdfs/namenode. Reported: -63. Expecting = -60.
> Removing the contents of /hadoop/hdfs/namenode and restarting the NameNode through the Ambari server gives a repeatable way to trigger this bug.
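The layoutVersion the NameNode is complaining about lives in the storage directory's VERSION file (a Java-properties-style file, typically under /hadoop/hdfs/namenode/current/VERSION). A minimal sketch of pulling that value out to compare against what the software expects; the sample contents below are illustrative, not taken from this cluster:

```python
def read_layout_version(version_text: str) -> int:
    """Parse the layoutVersion entry from the contents of an HDFS VERSION file."""
    for line in version_text.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "layoutVersion":
            return int(value.strip())
    raise ValueError("layoutVersion not found in VERSION file")

# Illustrative contents only; read the real file from
# /hadoop/hdfs/namenode/current/VERSION on the affected node.
sample = "namespaceID=1073741824\nclusterID=CID-example\nlayoutVersion=-63\n"
# read_layout_version(sample) -> -63, versus the -60 the running NameNode expects
```

A reported -63 against an expected -60 suggests the on-disk metadata was written by a newer HDFS than the binaries actually starting up, which fits a stack-version mixup during install.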



