From: Manoj Babu <manoj444@gmail.com>
Date: Tue, 24 Dec 2013 21:03:20 +0530
Subject: Re: Getting error unrecognized option -jvm on starting nodemanager
To: user@hadoop.apache.org

Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock
acquired by nodename 7518@localhost.localdomain

Stop all running instances and then do the steps.

Cheers!
Manoj.
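A minimal sketch of those steps for a single-node setup, assuming the daemons
were started with the bundled scripts (the lock message above means a NameNode,
pid 7518, is still holding the storage directory):

    # stop everything so in_use.lock is released
    ./hadoop-daemon.sh stop namenode
    ./hadoop-daemon.sh stop datanode
    ./hadoop-daemon.sh stop secondarynamenode
    jps                        # should now list nothing but Jps

    # re-format, answering Y at the prompt (capitalization has mattered
    # in some versions), then start again
    hdfs namenode -format
    ./hadoop-daemon.sh start namenode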
On Tue, Dec 24, 2013 at 8:48 PM, Sitaraman Vilayannur
<vrsitaramanietflists@gmail.com> wrote:

> I did press Y, and tried it several times, and once more now.
> Sitaraman
>
> On Tue, Dec 24, 2013 at 8:38 PM, Nitin Pawar wrote:
>
>> See the error .. it says not formatted.
>> Did you press Y or y?
>> Try again :)
>>
>> On Tue, Dec 24, 2013 at 8:35 PM, Sitaraman Vilayannur
>> <vrsitaramanietflists@gmail.com> wrote:
>>
>>> Hi Nitin,
>>> Even after formatting using hdfs namenode -format, I keep seeing
>>> "namenode not formatted" in the logs when I try to start the namenode...
>>>
>>> 13/12/24 20:33:26 INFO namenode.FSNamesystem: supergroup=supergroup
>>> 13/12/24 20:33:26 INFO namenode.FSNamesystem: isPermissionEnabled=true
>>> 13/12/24 20:33:26 INFO namenode.NameNode: Caching file names occuring more than 10 times
>>> 13/12/24 20:33:26 INFO namenode.NNStorage: Storage directory /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode has been successfully formatted.
>>> 13/12/24 20:33:26 INFO namenode.FSImage: Saving image file /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
>>> 13/12/24 20:33:26 INFO namenode.FSImage: Image file of size 124 saved in 0 seconds.
>>> 13/12/24 20:33:26 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
>>> 13/12/24 20:33:26 INFO util.ExitUtil: Exiting with status 0
>>> 13/12/24 20:33:26 INFO namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>> ************************************************************/
>>>
>>> 2013-12-24 20:33:46,337 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock acquired by nodename 7518@localhost.localdomain
>>> 2013-12-24 20:33:46,339 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
>>> 2013-12-24 20:33:46,340 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
>>> 2013-12-24 20:33:46,340 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
>>> 2013-12-24 20:33:46,340 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
>>> 2013-12-24 20:33:46,340 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
>>> java.io.IOException: NameNode is not formatted.
>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
>>> 2013-12-24 20:33:46,342 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
>>> 2013-12-24 20:33:46,343 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>> /************************************************************
>>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>> ************************************************************/
>>>
>>> On Tue, Dec 24, 2013 at 3:13 PM, Nitin Pawar wrote:
>>>
>>>> The issue here is that you tried one version of hadoop and then changed
>>>> to a different version. You cannot do that directly with hadoop; you
>>>> need to follow a process when upgrading hadoop versions.
>>>>
>>>> For now, as you are just starting with hadoop, I would recommend just
>>>> running a dfs format and starting the hdfs again.
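One detail worth pulling out of the two log excerpts above: the successful
format wrote to the hadoop-0.23.10 data directory, while the NameNode that
then fails to start takes its lock under hadoop-2.2.0. That suggests the
format command and the startup script are reading different configurations.
Both should resolve dfs.namenode.name.dir to the same path; a sketch of the
relevant hdfs-site.xml entry, with the value purely illustrative:

    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///usr/local/Software/hadoop-2.2.0/data/hdfs/namenode</value>
    </property>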
>>>> On Tue, Dec 24, 2013 at 2:57 PM, Sitaraman Vilayannur
>>>> <vrsitaramanietflists@gmail.com> wrote:
>>>>
>>>>> When I run the namenode with the upgrade option I get the following
>>>>> error, and the namenode doesn't start...
>>>>>
>>>>> 2013-12-24 14:48:38,595 INFO org.apache.hadoop.hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
>>>>> 2013-12-24 14:48:38,595 INFO org.apache.hadoop.hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
>>>>> 2013-12-24 14:48:38,631 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
>>>>> 2013-12-24 14:48:38,632 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9000: starting
>>>>> 2013-12-24 14:48:38,633 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at: 192.168.1.2/192.168.1.2:9000
>>>>> 2013-12-24 14:48:38,633 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
>>>>> 2013-12-24 14:50:50,060 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
>>>>> 2013-12-24 14:50:50,062 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>> /************************************************************
>>>>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>>>> ************************************************************/
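Reading that log, the upgrade run appears to have gotten further than the
earlier attempts: the RPC server came up on port 9000 and the NameNode ran
for about two minutes until it received an external SIGTERM (signal 15, e.g.
from a stop script or a kill), so this exit was not an upgrade failure as
such. A sketch of the usual upgrade invocation, assuming the bundled 2.2.0
scripts and the log path quoted elsewhere in this thread:

    ./hadoop-daemon.sh start namenode -upgrade
    # leave the process alone and watch the .log file until it settles
    tail -f /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.log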
>>>>> On 12/24/13, Sitaraman Vilayannur <vrsitaramanietflists@gmail.com> wrote:
>>>>> > Found it,
>>>>> > I get the following error on starting the namenode in 2.2:
>>>>> >
>>>>> > 10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar
>>>>> > STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
>>>>> > STARTUP_MSG:   java = 1.7.0_45
>>>>> > ************************************************************/
>>>>> > 2013-12-24 13:25:48,876 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
>>>>> > 2013-12-24 13:25:49,042 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
>>>>> > 2013-12-24 13:25:49,102 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
>>>>> > 2013-12-24 13:25:49,102 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
>>>>> > 2013-12-24 13:25:49,232 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>>>> > 2013-12-24 13:25:49,375 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
>>>>> > 2013-12-24 13:25:49,410 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>>>>> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
>>>>> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
>>>>> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
>>>>> > 2013-12-24 13:25:49,422 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
>>>>> > 2013-12-24 13:25:49,432 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
>>>>> > 2013-12-24 13:25:49,432 INFO org.mortbay.log: jetty-6.1.26
>>>>> > 2013-12-24 13:25:49,459 WARN org.mortbay.log: Can't reuse /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08, using /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08_2787234685293301311
>>>>> > 2013-12-24 13:25:49,610 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
>>>>> > 2013-12-24 13:25:49,611 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
>>>>> > 2013-12-24 13:25:49,628 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
>>>>> > 2013-12-24 13:25:49,628 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
>>>>> > 2013-12-24 13:25:49,668 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read includes: HostSet( )
>>>>> > 2013-12-24 13:25:49,669 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read excludes: HostSet( )
>>>>> > 2013-12-24 13:25:49,670 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
>>>>> > 2013-12-24 13:25:49,672 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
>>>>> > 2013-12-24 13:25:49,672 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
>>>>> > 2013-12-24 13:25:49,673 INFO org.apache.hadoop.util.GSet: 2.0% max memory = 889 MB
>>>>> > 2013-12-24 13:25:49,673 INFO org.apache.hadoop.util.GSet: capacity      = 2^21 = 2097152 entries
>>>>> > 2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
>>>>> > 2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication         = 1
>>>>> > 2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication             = 512
>>>>> > 2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication             = 1
>>>>> > 2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams      = 2
>>>>> > 2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
>>>>> > 2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
>>>>> > 2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer        = false
>>>>> > 2013-12-24 13:25:49,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner             = sitaraman (auth:SIMPLE)
>>>>> > 2013-12-24 13:25:49,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup          = supergroup
>>>>> > 2013-12-24 13:25:49,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
>>>>> > 2013-12-24 13:25:49,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
>>>>> > 2013-12-24 13:25:49,682 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
>>>>> > 2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
>>>>> > 2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
>>>>> > 2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 889 MB
>>>>> > 2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: capacity      = 2^20 = 1048576 entries
>>>>> > 2013-12-24 13:25:49,802 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
>>>>> > 2013-12-24 13:25:49,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
>>>>> > 2013-12-24 13:25:49,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
>>>>> > 2013-12-24 13:25:49,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
>>>>> > 2013-12-24 13:25:49,805 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
>>>>> > 2013-12-24 13:25:49,805 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
>>>>> > 2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
>>>>> > 2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
>>>>> > 2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 889 MB
>>>>> > 2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: capacity      = 2^15 = 32768 entries
>>>>> > 2013-12-24 13:25:49,816 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode/in_use.lock acquired by nodename 19170@localhost.localdomain
>>>>> > 2013-12-24 13:25:49,861 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
>>>>> > 2013-12-24 13:25:49,964 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
>>>>> > 2013-12-24 13:25:49,965 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
>>>>> > 2013-12-24 13:25:49,965 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
>>>>> > 2013-12-24 13:25:49,965 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
>>>>> > java.io.IOException:
>>>>> > File system image contains an old layout version -39.
>>>>> > An upgrade to version -47 is required.
>>>>> > Please restart NameNode with -upgrade option.
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:221)
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
>>>>> > 2013-12-24 13:25:49,967 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
>>>>> > 2013-12-24 13:25:49,968 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>> > /************************************************************
>>>>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>>>> > ************************************************************/
>>>>> >
>>>>> > On 12/24/13, Sitaraman Vilayannur <vrsitaramanietflists@gmail.com> wrote:
>>>>> >> The line beginning with ulimit that I have appended below -- I thought
>>>>> >> that was the log file?
>>>>> >> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
>>>>> >> Sitaraman
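On that question: the file shown ends in .out, and the daemon scripts send
only stdout/stderr there, including a ulimit -a dump; the log4j records that
carry errors like the ones above go to the matching .log file in the same
directory. A quick way to see both, with the paths taken from the thread:

    ls /usr/local/Software/hadoop-2.2.0/logs/
    # hadoop-sitaraman-namenode-localhost.localdomain.log  <- log4j output (INFO/WARN/FATAL)
    # hadoop-sitaraman-namenode-localhost.localdomain.out  <- ulimit -a dump and stray stdout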
>>>>> >> On 12/24/13, Nitin Pawar <nitinpawar432@gmail.com> wrote:
>>>>> >>> Without a log it is very hard to guess what's happening.
>>>>> >>> Can you clean up the log directory, then start over and check for
>>>>> >>> the logs again?
>>>>> >>>
>>>>> >>> On Tue, Dec 24, 2013 at 11:44 AM, Sitaraman Vilayannur <
>>>>> >>> vrsitaramanietflists@gmail.com> wrote:
>>>>> >>>
>>>>> >>>> Hi Nitin,
>>>>> >>>> I moved to the release 2.2.0. On starting the node manager it
>>>>> >>>> remains silent, without errors, but the nodemanager doesn't
>>>>> >>>> start... while it does in the earlier 0.23 version.
>>>>> >>>>
>>>>> >>>> ./hadoop-daemon.sh start namenode
>>>>> >>>> starting namenode, logging to
>>>>> >>>> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
>>>>> >>>> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
>>>>> >>>> /usr/local/Software/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which
>>>>> >>>> might have disabled stack guard. The VM will try to fix the stack
>>>>> >>>> guard now.
>>>>> >>>> It's highly recommended that you fix the library with 'execstack -c
>>>>> >>>> <libfile>', or link it with '-z noexecstack'.
>>>>> >>>> [sitaraman@localhost sbin]$ jps
>>>>> >>>> 13444 Jps
>>>>> >>>> [sitaraman@localhost sbin]$ vi
>>>>> >>>> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
>>>>> >>>>
>>>>> >>>> ulimit -a for user sitaraman
>>>>> >>>> core file size          (blocks, -c) 0
>>>>> >>>> data seg size           (kbytes, -d) unlimited
>>>>> >>>> scheduling priority             (-e) 0
>>>>> >>>> file size               (blocks, -f) unlimited
>>>>> >>>> pending signals                 (-i) 135104
>>>>> >>>> max locked memory       (kbytes, -l) 32
>>>>> >>>> max memory size         (kbytes, -m) unlimited
>>>>> >>>> open files                      (-n) 1024
>>>>> >>>> pipe size            (512 bytes, -p) 8
>>>>> >>>> POSIX message queues     (bytes, -q) 819200
>>>>> >>>> real-time priority              (-r) 0
>>>>> >>>> stack size              (kbytes, -s) 10240
>>>>> >>>> cpu time               (seconds, -t) unlimited
>>>>> >>>> max user processes              (-u) 135104
>>>>> >>>> virtual memory          (kbytes, -v) unlimited
>>>>> >>>> file locks                      (-x) unlimited
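The stack-guard warning quoted above spells out its own fix. A sketch,
assuming the execstack tool is installed (the package name varies by
distribution); the warning is otherwise harmless, since the JVM repairs the
stack guard itself at load time:

    # clear the executable-stack flag on the native library
    execstack -c /usr/local/Software/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0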
>>>>> >>>> On 12/24/13, Nitin Pawar <nitinpawar432@gmail.com> wrote:
>>>>> >>>> > For now you can ignore this warning; it was your first program,
>>>>> >>>> > so you can try building other things and slowly run the commands
>>>>> >>>> > mentioned in the log message to fix these small warnings.
>>>>> >>>> >
>>>>> >>>> > On Tue, Dec 24, 2013 at 10:07 AM, Sitaraman Vilayannur <
>>>>> >>>> > vrsitaramanietflists@gmail.com> wrote:
>>>>> >>>> >
>>>>> >>>> >> Thanks Nitin, that worked.
>>>>> >>>> >> When I run the Pi example, I get the following warning at the
>>>>> >>>> >> end; what must I do about this warning? Thanks much for your help.
>>>>> >>>> >> Sitaraman
>>>>> >>>> >>
>>>>> >>>> >> inished in 20.82 seconds
>>>>> >>>> >> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
>>>>> >>>> >> /usr/local/Software/hadoop-0.23.10/lib/native/libhadoop.so.1.0.0 which
>>>>> >>>> >> might have disabled stack guard. The VM will try to fix the stack
>>>>> >>>> >> guard now.
>>>>> >>>> >> It's highly recommended that you fix the library with 'execstack -c
>>>>> >>>> >> <libfile>', or link it with '-z noexecstack'.
>>>>> >>>> >> 13/12/24 10:05:19 WARN util.NativeCodeLoader: Unable to load
>>>>> >>>> >> native-hadoop library for your platform... using builtin-java
>>>>> >>>> >> classes where applicable
>>>>> >>>> >> Estimated value of Pi is 3.14127500000000000000
>>>>> >>>> >> [sitaraman@localhost mapreduce]$
>>>>> >>>> >>
>>>>> >>>> >> On 12/23/13, Nitin Pawar <nitinpawar432@gmail.com> wrote:
>>>>> >>>> >> > Can you try starting the process as a non-root user?
>>>>> >>>> >> > Give proper permissions to the user and start it as a
>>>>> >>>> >> > different user.
>>>>> >>>> >> >
>>>>> >>>> >> > Thanks,
>>>>> >>>> >> > Nitin
>>>>> >>>> >> >
>>>>> >>>> >> > On Mon, Dec 23, 2013 at 2:15 PM, Sitaraman Vilayannur <
>>>>> >>>> >> > vrsitaramanietflists@gmail.com> wrote:
>>>>> >>>> >> >
>>>>> >>>> >> >> Hi,
>>>>> >>>> >> >> When I attempt to start the nodemanager I get the following
>>>>> >>>> >> >> error. Any help appreciated. I was able to start the resource
>>>>> >>>> >> >> manager, datanode, namenode and secondarynamenode.
>>>>> >>>> >> >>
>>>>> >>>> >> >> ./yarn-daemon.sh start nodemanager
>>>>> >>>> >> >> starting nodemanager, logging to
>>>>> >>>> >> >> /usr/local/Software/hadoop-0.23.10/logs/yarn-root-nodemanager-localhost.localdomain.out
>>>>> >>>> >> >> Unrecognized option: -jvm
>>>>> >>>> >> >> Error: Could not create the Java Virtual Machine.
>>>>> >>>> >> >> Error: A fatal exception has occurred. Program will exit.
>>>>> >>>> >> >> [root@localhost sbin]# emacs
>>>>> >>>> >> >> /usr/local/Software/hadoop-0.23.10/logs/yarn-root-nodemanager-localhost.localdomain.out
>>>>> >>>> >> >> &
>>>>> >>>> >> >> [4] 29004
>>>>> >>>> >> >> [root@localhost sbin]# jps
>>>>> >>>> >> >> 28402 SecondaryNameNode
>>>>> >>>> >> >> 30280 Jps
>>>>> >>>> >> >> 28299 DataNode
>>>>> >>>> >> >> 6729 Main
>>>>> >>>> >> >> 26044 ResourceManager
>>>>> >>>> >> >> 28197 NameNode
>>>>> >>>> >> >
>>>>> >>>> >> > --
>>>>> >>>> >> > Nitin Pawar
>>>>> >>>> >
>>>>> >>>> > --
>>>>> >>>> > Nitin Pawar
>>>>> >>>
>>>>> >>> --
>>>>> >>> Nitin Pawar
>>>>
>>>> --
>>>> Nitin Pawar
>>
>> --
>> Nitin Pawar
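A closing note on the original "-jvm" failure: it is consistent with a known
quirk of the 0.23-era bin/yarn script, which passes a "-jvm server" option
when the nodemanager is launched as root (apparently a leftover from an older
secure-launcher code path), and current JVMs reject that flag. That is why
Nitin's suggestion to run as a non-root user works. A sketch, with the user
name purely illustrative:

    # run the daemons as a dedicated non-root user
    useradd hadoop
    chown -R hadoop:hadoop /usr/local/Software/hadoop-0.23.10
    su - hadoop -c '/usr/local/Software/hadoop-0.23.10/sbin/yarn-daemon.sh start nodemanager'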