hadoop-hdfs-issues mailing list archives

From "Stephen Chu (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-3813) Log error message if security and WebHDFS are enabled but principal/keytab are not configured
Date Thu, 04 Oct 2012 15:51:47 GMT

     [ https://issues.apache.org/jira/browse/HDFS-3813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Stephen Chu updated HDFS-3813:
------------------------------

    Attachment: error_output

I manually tested this. Here is the output of starting the NameNode when security and WebHDFS
are enabled but the principal is not set:

{code}
[schu@cs-10-20-90-154 ~]$ hdfs namenode
12/10/04 08:46:59 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = cs-10-20-90-154.cloud.cloudera.com/10.20.90.154
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.0.0-SNAPSHOT
STARTUP_MSG:   classpath = /home/schu/hadoop-3.0.0-SNAPSHOT/etc/hadoop:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jersey-server-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-digester-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-codec-1.4.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/xmlenc-0.52.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/protobuf-java-2.4.0a.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jsp-api-2.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/paranamer-2.3.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jline-0.9.94.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.6.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jetty-6.1.26.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/snappy-java-1.0.3.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-io-2.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jsch-0.1.42.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/log4j-1.2.17.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-el-1.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/
lib/hadoop-auth-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jersey-core-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/avro-1.5.3.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-lang-2.5.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jersey-json-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-cli-1.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/zookeeper-3.4.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/asm-3.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/slf4j-api-1.6.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/guava-11.0.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/kfs-0.3.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/activation-1.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/servlet-api-2.5.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-math-2.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/netty-3.2.4.Final.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/commons-net-3.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/hadoop-annotations-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jackson-core-asl-
1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/lib/jettison-1.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-SNAPSHOT-test-sources.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-SNAPSHOT-sources.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-SNAPSHOT-tests.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/contrib/capacity-scheduler/*.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jersey-server-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/protobuf-java-2.4.0a.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/commons-daemon-1.0.3.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jersey-core-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/asm-3.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/guava-11.0.2.jar:/hom
e/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/hadoop-hdfs-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/jersey-server-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/guice-3.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/junit-4.8.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/protobuf-java-2.4.0a.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/javax.inject-1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/jersey-guice-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/snappy-java-1.0.3.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/jersey-core-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/avro-1.5.3.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/asm-3.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/netty-3.2.4.Final.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/lib/hadoop-annotations-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop
/yarn/lib/jackson-core-asl-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-client-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-server-tests-3.0.0-SNAPSHOT-tests.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-server-tests-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-site-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-api-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-server-common-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/jersey-server-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/junit-4.8.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/protobuf-java-2.4.0a.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/jersey-guice-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/snappy-ja
va-1.0.3.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/jersey-core-1.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/avro-1.5.3.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/netty-3.2.4.Final.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/hadoop-annotations-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.0.0-SNAPSHOT-tests.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.0.0-SNAPSHOT.jar
STARTUP_MSG:   build = git://cs-10-20-90-154.cloud.cloudera.com/home/schu/hadoop1/hadoop-common-project/hadoop-common
-r 0c35a7fa79421e0afb2cd8fbc072cd514415cb63; compiled by 'schu' on Wed Oct  3 20:52:11 PDT
2012
************************************************************/
12/10/04 08:46:59 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
12/10/04 08:46:59 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
12/10/04 08:46:59 INFO impl.MetricsSystemImpl: NameNode metrics system started
12/10/04 08:47:00 INFO security.UserGroupInformation: Login successful for user hdfs/cs-10-20-90-154.cloud.cloudera.com@CLOUD.CLOUDERA.COM
using keytab file /home/schu/hadoop-3.0.0-SNAPSHOT/etc/hadoop/hdfs.keytab
12/10/04 08:47:00 WARN common.Util: Path /dfs/nn/ should be specified as a URI in configuration
files. Please update hdfs configuration.
12/10/04 08:47:00 WARN common.Util: Path /dfs/nn/ should be specified as a URI in configuration
files. Please update hdfs configuration.
12/10/04 08:47:00 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir)
configured. Beware of dataloss due to lack of redundant storage directories!
12/10/04 08:47:00 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir)
configured. Beware of dataloss due to lack of redundant storage directories!
12/10/04 08:47:00 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
12/10/04 08:47:00 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
12/10/04 08:47:00 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true
12/10/04 08:47:00 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600
min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
12/10/04 08:47:00 INFO blockmanagement.BlockManager: defaultReplication         = 1
12/10/04 08:47:00 INFO blockmanagement.BlockManager: maxReplication             = 512
12/10/04 08:47:00 INFO blockmanagement.BlockManager: minReplication             = 1
12/10/04 08:47:00 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
12/10/04 08:47:00 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
12/10/04 08:47:00 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
12/10/04 08:47:00 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
12/10/04 08:47:00 INFO namenode.FSNamesystem: fsOwner             = hdfs/cs-10-20-90-154.cloud.cloudera.com@CLOUD.CLOUDERA.COM
(auth:KERBEROS)
12/10/04 08:47:00 INFO namenode.FSNamesystem: supergroup          = supergroup
12/10/04 08:47:00 INFO namenode.FSNamesystem: isPermissionEnabled = true
12/10/04 08:47:00 INFO namenode.FSNamesystem: HA Enabled: false
12/10/04 08:47:00 INFO namenode.FSNamesystem: Append Enabled: true
12/10/04 08:47:00 INFO namenode.NameNode: Caching file names occuring more than 10 times 
12/10/04 08:47:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
12/10/04 08:47:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
12/10/04 08:47:00 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 0
12/10/04 08:47:00 INFO common.Storage: Lock on /dfs/nn/in_use.lock acquired by nodename 13781@cs-10-20-90-154.cloud.cloudera.com
12/10/04 08:47:00 INFO namenode.FileJournalManager: Recovering unfinalized segments in /dfs/nn/current
12/10/04 08:47:00 INFO namenode.FSImage: Loading image file /dfs/nn/current/fsimage_0000000000000000058
using no compression
12/10/04 08:47:00 INFO namenode.FSImage: Number of files = 1
12/10/04 08:47:00 INFO namenode.FSImage: Number of files under construction = 0
12/10/04 08:47:00 INFO namenode.FSImage: Image file of size 663 loaded in 0 seconds.
12/10/04 08:47:00 INFO namenode.FSImage: Loaded image for txid 58 from /dfs/nn/current/fsimage_0000000000000000058
12/10/04 08:47:00 INFO namenode.FSImage: Reading org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@25e222e
expecting start txid #59
12/10/04 08:47:00 INFO namenode.EditLogInputStream: Fast-forwarding stream '/dfs/nn/current/edits_0000000000000000059-0000000000000000062'
to transaction ID 59
12/10/04 08:47:00 INFO namenode.FSImage: Edits file /dfs/nn/current/edits_0000000000000000059-0000000000000000062
of size 90 edits # 4 loaded in 0 seconds.
12/10/04 08:47:00 INFO namenode.FSEditLog: Starting log segment at 63
12/10/04 08:47:00 INFO namenode.NameCache: initialized with 0 entries 0 lookups
12/10/04 08:47:00 INFO namenode.FSNamesystem: Finished loading FSImage in 503 msecs
12/10/04 08:47:01 INFO ipc.Server: Starting Socket Reader #1 for port 8020
12/10/04 08:47:01 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
12/10/04 08:47:01 WARN common.Util: Path /dfs/nn/ should be specified as a URI in configuration
files. Please update hdfs configuration.
12/10/04 08:47:01 INFO namenode.FSNamesystem: Number of blocks under construction: 0
12/10/04 08:47:01 INFO namenode.FSNamesystem: initializing replication queues
12/10/04 08:47:01 INFO blockmanagement.BlockManager: Total number of blocks            = 0
12/10/04 08:47:01 INFO blockmanagement.BlockManager: Number of invalid blocks          = 0
12/10/04 08:47:01 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
12/10/04 08:47:01 INFO blockmanagement.BlockManager: Number of  over-replicated blocks = 0
12/10/04 08:47:01 INFO blockmanagement.BlockManager: Number of blocks being written    = 0
12/10/04 08:47:01 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for
invalid, over- and under-replicated blocks completed in 11 msec
12/10/04 08:47:01 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
12/10/04 08:47:01 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
12/10/04 08:47:01 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
12/10/04 08:47:01 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current
master key for generating delegation tokens
12/10/04 08:47:01 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation
token remover thread, tokenRemoverScanInterval=60 min(s)
12/10/04 08:47:01 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current
master key for generating delegation tokens
12/10/04 08:47:01 INFO block.BlockTokenSecretManager: Updating block keys
12/10/04 08:47:01 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log)
via org.mortbay.log.Slf4jLog
12/10/04 08:47:01 INFO http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/10/04 08:47:01 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context hdfs
12/10/04 08:47:01 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context static
12/10/04 08:47:01 INFO http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
to context logs
12/10/04 08:47:01 INFO http.HttpServer: dfs.webhdfs.enabled = true
12/10/04 08:47:01 ERROR http.HttpServer: 'dfs.web.authentication.kerberos.principal.key' configuration
not set
12/10/04 08:47:01 INFO http.HttpServer: Added filter 'SPNEGO' (class=org.apache.hadoop.hdfs.web.AuthFilter)
12/10/04 08:47:01 INFO http.HttpServer: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources,
pathSpec=/webhdfs/v1/*
12/10/04 08:47:01 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to getDelegationToken
12/10/04 08:47:01 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to renewDelegationToken
12/10/04 08:47:01 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to cancelDelegationToken
12/10/04 08:47:01 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to fsck
12/10/04 08:47:01 INFO http.HttpServer: Adding Kerberos (SPNEGO) filter to getimage
12/10/04 08:47:01 INFO http.HttpServer: Jetty bound to port 50070
12/10/04 08:47:01 INFO mortbay.log: jetty-6.1.26
12/10/04 08:47:01 INFO server.KerberosAuthenticationHandler: Login using keytab /home/schu/hadoop-3.0.0-SNAPSHOT/etc/hadoop/hdfs.keytab,
for principal HTTP/cs-10-20-90-154.cloud.cloudera.com@CLOUD.CLOUDERA.COM
12/10/04 08:47:01 INFO server.KerberosAuthenticationHandler: Initialized, principal [HTTP/cs-10-20-90-154.cloud.cloudera.com@CLOUD.CLOUDERA.COM]
from keytab [/home/schu/hadoop-3.0.0-SNAPSHOT/etc/hadoop/hdfs.keytab]
12/10/04 08:47:01 WARN server.AuthenticationFilter: 'signature.secret' configuration not set,
using a random value as secret
12/10/04 08:47:01 WARN mortbay.log: failed SPNEGO: javax.servlet.ServletException: javax.servlet.ServletException:
Principal not defined in configuration
12/10/04 08:47:01 WARN mortbay.log: Failed startup of context org.mortbay.jetty.webapp.WebAppContext@acaf083{/,file:/home/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/hdfs/webapps/hdfs}
javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:185)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:146)
	at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
	at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
	at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
	at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
	at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
	at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
	at org.mortbay.jetty.Server.doStart(Server.java:224)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:664)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:152)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:548)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:481)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:444)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1137)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1203)
Caused by: javax.servlet.ServletException: Principal not defined in configuration
	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:146)
	... 24 more
12/10/04 08:47:01 WARN mortbay.log: Nested in javax.servlet.ServletException: javax.servlet.ServletException:
Principal not defined in configuration:
javax.servlet.ServletException: Principal not defined in configuration
	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:146)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:146)
	at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
	at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
	at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
	at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
	at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
	at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
	at org.mortbay.jetty.Server.doStart(Server.java:224)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:664)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:152)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:548)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:481)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:444)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1137)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1203)
12/10/04 08:47:01 INFO mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
12/10/04 08:47:01 INFO mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
12/10/04 08:47:01 INFO namenode.FSNamesystem: Stopping services started for active state
12/10/04 08:47:01 ERROR delegation.AbstractDelegationTokenSecretManager: InterruptedExcpetion
recieved for ExpiredTokenRemover thread java.lang.InterruptedException: sleep interrupted
12/10/04 08:47:01 INFO namenode.FSEditLog: Ending log segment 63
12/10/04 08:47:01 INFO namenode.FSEditLog: Number of transactions: 4 Total time for transactions(ms):
0Number of transactions batched in Syncs: 0 Number of syncs: 5 SyncTimes(ms): 212 
12/10/04 08:47:01 INFO namenode.FileJournalManager: Finalizing edits file /dfs/nn/current/edits_inprogress_0000000000000000063
-> /dfs/nn/current/edits_0000000000000000063-0000000000000000066
12/10/04 08:47:01 WARN blockmanagement.BlockManager: ReplicationMonitor thread received InterruptedException.
java.lang.InterruptedException: sleep interrupted
	at java.lang.Thread.sleep(Native Method)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor.run(BlockManager.java:2996)
	at java.lang.Thread.run(Thread.java:662)
12/10/04 08:47:01 WARN blockmanagement.DecommissionManager: Monitor interrupted: java.lang.InterruptedException:
sleep interrupted
12/10/04 08:47:01 INFO namenode.FSNamesystem: Stopping services started for active state
12/10/04 08:47:01 INFO namenode.FSNamesystem: Stopping services started for standby state
12/10/04 08:47:01 INFO ipc.Server: Stopping server on 8020
12/10/04 08:47:01 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
12/10/04 08:47:01 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
12/10/04 08:47:01 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
12/10/04 08:47:01 FATAL namenode.NameNode: Exception in namenode join
java.io.IOException: Unable to initialize WebAppContext
	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:686)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:152)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:548)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:481)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:444)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1137)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1203)
Caused by: javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined
in configuration
	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:185)
	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:146)
	at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
	at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
	at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
	at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
	at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
	at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
	at org.mortbay.jetty.Server.doStart(Server.java:224)
	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:664)
	... 8 more
Caused by: javax.servlet.ServletException: Principal not defined in configuration
	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:146)
	... 24 more
12/10/04 08:47:01 INFO util.ExitUtil: Exiting with status 1
12/10/04 08:47:01 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at cs-10-20-90-154.cloud.cloudera.com/10.20.90.154
************************************************************/
[schu@cs-10-20-90-154 ~]$ 
{code}
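
For reference, a minimal hdfs-site.xml fragment that supplies the WebHDFS SPNEGO settings missing in the run above might look like the following. The realm, host, and keytab path are placeholders, not values from this cluster:

{code:xml}
<!-- Enable WebHDFS -->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
<!-- SPNEGO principal used by the WebHDFS AuthFilter (placeholder realm) -->
<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<!-- Keytab containing the HTTP principal (placeholder path) -->
<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/etc/hadoop/conf/hdfs.keytab</value>
</property>
{code}

With these two authentication properties set, the AuthFilter should initialize instead of failing with "Principal not defined in configuration".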
                
> Log error message if security and WebHDFS are enabled but principal/keytab are not configured
> ---------------------------------------------------------------------------------------------
>
>                 Key: HDFS-3813
>                 URL: https://issues.apache.org/jira/browse/HDFS-3813
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: security, webhdfs
>    Affects Versions: 2.0.0-alpha
>            Reporter: Stephen Chu
>            Assignee: Stephen Chu
>              Labels: newbie
>             Fix For: 3.0.0
>
>         Attachments: error_output, HDFS-3813.patch
>
>
> I configured a secure HDFS cluster, but failed to start the NameNode because I had enabled
WebHDFS without specifying _dfs.web.authentication.kerberos.principal_ in hdfs-site.xml.
> In the NN logs, I saw:
> {noformat}
> 2012-05-28 17:50:13,021 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Login using keytab /etc/hdfs.keytab, for principal HTTP/c1225.hal.cloudera.com@HAL.CLOUDERA.COM
> 2012-05-28 17:50:13,030 INFO org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler: Initialized, principal [HTTP/c1225.hal.cloudera.com@HAL.CLOUDERA.COM] from keytab [/etc/hdfs.keytab]
> 2012-05-28 17:50:13,031 WARN org.apache.hadoop.security.authentication.server.AuthenticationFilter: 'signature.secret' configuration not set, using a random value as secret
> 2012-05-28 17:50:13,032 WARN org.mortbay.log: failed SPNEGO: javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
> 2012-05-28 17:50:13,033 WARN org.mortbay.log: Failed startup of context org.mortbay.jetty.webapp.WebAppContext@21453d72{/,file:/usr/lib/hadoop-hdfs/webapps/hdfs}
> javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration
> 	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:185)
> 	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:146)
> 	at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
> 	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> 	at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
> 	at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
> 	at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
> 	at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
> 	at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
> 	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> 	at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> 	at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
> 	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> 	at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
> 	at org.mortbay.jetty.Server.doStart(Server.java:224)
> 	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> 	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:617)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:173)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:529)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:471)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1193)
> Caused by: javax.servlet.ServletException: Principal not defined in configuration
> 	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:146)
> 	... 24 more
> 2012-05-28 17:50:13,034 WARN org.mortbay.log: Nested in javax.servlet.ServletException: javax.servlet.ServletException: Principal not defined in configuration:
> javax.servlet.ServletException: Principal not defined in configuration
> 	at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.init(KerberosAuthenticationHandler.java:146)
> 	at org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:146)
> 	at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
> 	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> 	at org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
> 	at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
> 	at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
> 	at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
> 	at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
> 	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> 	at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> 	at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
> 	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> 	at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
> 	at org.mortbay.jetty.Server.doStart(Server.java:224)
> 	at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> 	at org.apache.hadoop.http.HttpServer.start(HttpServer.java:617)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:173)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:529)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:471)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1134)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1193)
> 2012-05-28 17:50:13,041 INFO org.mortbay.log: Started SelectChannelConnector@c1225.hal.cloudera.com:50070
> 2012-05-28 17:50:13,041 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: c1225.hal.cloudera.com:50070
> 2012-05-28 17:50:13,042 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2012-05-28 17:50:13,042 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 17020: starting
> 2012-05-28 17:50:13,045 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode up at: c1225.hal.cloudera.com/172.29.98.216:17020
> 2012-05-28 17:50:13,045 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for standby state
> 2012-05-28 17:50:13,048 INFO org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Will roll logs on active node at c1226.hal.cloudera.com/172.29.98.217:17020 every 120 seconds.
> 2012-05-28 17:50:13,058 INFO org.apache.hadoop.hdfs.server.namenode.ha.StandbyCheckpointer: Starting standby checkpoint thread...
> Checkpointing active NN at c1226.hal.cloudera.com:50070
> Serving checkpoints at c1225.hal.cloudera.com/172.29.98.216:50070
> {noformat}
> I couldn't figure out what I had misconfigured, but ATM found that I was missing _dfs.web.authentication.kerberos.principal_.
> Logging an error if this property is not configured when WebHDFS and security are enabled would be useful for future users running into the same problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
