From: Ravindra Kumar Naik <ravin.iitb@gmail.com>
To: user@hadoop.apache.org, Anand Murali <anand_vihar@yahoo.com>
Date: Tue, 28 Apr 2015 12:00:35 +0530
Subject: Re: Name node starting intermittently

Hi,

Could you please post your hdfs-site.xml?

Regards,
Ravindra

On Tue, Apr 28, 2015 at 11:53 AM, Anand Murali <anand_vihar@yahoo.com> wrote:

> Ravindra:
>
> I am trying to use Hadoop out of the box. Please suggest a remedy to fix
> this; I shall be thankful. I am a beginner with Hadoop.
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593 / 43526162 (voicemail)
>
>
> On Tuesday, April 28, 2015 11:51 AM, Ravindra Kumar Naik
> <ravin.iitb@gmail.com> wrote:
>
> Hi,
>
> Using the /tmp/ directory for HDFS storage is not a good idea:
> the /tmp/ directory is wiped out on reboot.
>
> Regards,
> Ravindra
>
>
> On Tue, Apr 28, 2015 at 11:31 AM, Anand Murali <anand_vihar@yahoo.com>
> wrote:
>
> Dear All:
>
> I am running Hadoop 2.6 on Ubuntu 15.04 desktop in pseudo-distributed
> mode. Yesterday it started up and shut down normally a couple of times;
> this morning it does not. Find below a section of the log file. I shall
> be thankful if somebody can advise.
>
> STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git
> -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on
> 2014-11-13T21:10Z
> STARTUP_MSG:   java = 1.7.0_75
> ************************************************************/
> 2015-04-28 11:21:48,167 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
> 2015-04-28 11:21:48,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
> 2015-04-28 11:21:48,574 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
> 2015-04-28 11:21:48,805 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
> 2015-04-28 11:21:48,805 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
> 2015-04-28 11:21:48,806 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
> 2015-04-28 11:21:48,806 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
> 2015-04-28 11:21:49,193 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
> 2015-04-28 11:21:49,287 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
> 2015-04-28 11:21:49,291 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
> 2015-04-28 11:21:49,303 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
> 2015-04-28 11:21:49,305 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
> 2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
> 2015-04-28 11:21:49,306 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
> 2015-04-28 11:21:49,394 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
> 2015-04-28 11:21:49,397 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
> 2015-04-28 11:21:49,448 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
> 2015-04-28 11:21:49,448 INFO org.mortbay.log: jetty-6.1.26
> 2015-04-28 11:21:49,906 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
> 2015-04-28 11:21:50,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
> 2015-04-28 11:21:50,021 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
> 2015-04-28 11:21:50,070 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
> 2015-04-28 11:21:50,138 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
> 2015-04-28 11:21:50,215 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
> 2015-04-28 11:21:50,216 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
> 2015-04-28 11:21:50,243 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
> 2015-04-28 11:21:50,244 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Apr 28 11:21:50
> 2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
> 2015-04-28 11:21:50,246 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: 2.0% max memory 889 MB = 17.8 MB
> 2015-04-28 11:21:50,252 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
> 2015-04-28 11:21:50,274 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
> 2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = anand_vihar (auth:SIMPLE)
> 2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
> 2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
> 2015-04-28 11:21:50,278 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
> 2015-04-28 11:21:50,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: 1.0% max memory 889 MB = 8.9 MB
> 2015-04-28 11:21:50,776 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
> 2015-04-28 11:21:50,777 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: 0.25% max memory 889 MB = 2.2 MB
> 2015-04-28 11:21:50,827 INFO org.apache.hadoop.util.GSet: capacity      = 2^18 = 262144 entries
> 2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
> 2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
> 2015-04-28 11:21:50,828 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
> 2015-04-28 11:21:50,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
> 2015-04-28 11:21:50,829 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
> 2015-04-28 11:21:50,831 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
> 2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
> 2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
> 2015-04-28 11:21:50,833 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
> 2015-04-28 11:21:50,834 WARN org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-anand_vihar/dfs/name does not exist
> 2015-04-28 11:21:50,835 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,869 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
> 2015-04-28 11:21:50,969 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
> 2015-04-28 11:21:50,970 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
> 2015-04-28 11:21:50,970 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
> 2015-04-28 11:21:50,970 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-anand_vihar/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:313)
>     at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1020)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:739)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:536)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:595)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
> 2015-04-28 11:21:50,972 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
> 2015-04-28 11:21:50,973 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at Latitude-E5540/127.0.1.1
> ************************************************************/
>
> Regards,
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593 / 43526162 (voicemail)
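
For reference, the root cause is visible in the log above: with nothing set
in hdfs-site.xml, dfs.namenode.name.dir defaults to
file://${hadoop.tmp.dir}/dfs/name, and hadoop.tmp.dir defaults to
/tmp/hadoop-${user.name}, so the NameNode metadata under
/tmp/hadoop-anand_vihar/dfs/name vanishes whenever /tmp is cleaned, e.g. on
reboot. A minimal hdfs-site.xml that keeps the storage directories on
persistent disk might look like the sketch below; the paths under
/home/anand_vihar are illustrative assumptions, not taken from this thread.

  <configuration>
    <!-- Keep NameNode metadata out of /tmp so it survives reboots.
         Illustrative path; any persistent, writable directory works. -->
    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///home/anand_vihar/hdfs/name</value>
    </property>
    <!-- Likewise for DataNode block storage. -->
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:///home/anand_vihar/hdfs/data</value>
    </property>
    <!-- Single-node pseudo-distributed setup: one replica is enough. -->
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
  </configuration>

After changing the configuration, the new name directory has to be
initialized once with "hdfs namenode -format" before running start-dfs.sh
again. Formatting creates a fresh, empty namespace, so any data previously
stored in HDFS does not come back (here it was already lost when /tmp was
wiped).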