Subject: Re: Getting error unrecognized option -jvm on starting nodemanager
From: Sitaraman Vilayannur <vrsitaramanietflists@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 24 Dec 2013 13:28:06 +0530

Found it. I get the following error on starting the namenode in 2.2:

10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
2013-12-24 13:25:48,876 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2013-12-24 13:25:49,042 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2013-12-24 13:25:49,102 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2013-12-24 13:25:49,102 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2013-12-24 13:25:49,232 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform...
using builtin-java classes where applicable
2013-12-24 13:25:49,375 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2013-12-24 13:25:49,410 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2013-12-24 13:25:49,422 INFO org.apache.hadoop.http.HttpServer: dfs.webhdfs.enabled = false
2013-12-24 13:25:49,432 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50070
2013-12-24 13:25:49,432 INFO org.mortbay.log: jetty-6.1.26
2013-12-24 13:25:49,459 WARN org.mortbay.log: Can't reuse /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08, using /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08_2787234685293301311
2013-12-24 13:25:49,610 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
2013-12-24 13:25:49,611 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at: 0.0.0.0:50070
2013-12-24 13:25:49,628 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2013-12-24 13:25:49,628 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2013-12-24 13:25:49,668 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read includes: HostSet( )
2013-12-24 13:25:49,669 INFO org.apache.hadoop.hdfs.server.namenode.HostFileManager: read excludes: HostSet( )
2013-12-24 13:25:49,670 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2013-12-24 13:25:49,672 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2013-12-24 13:25:49,672 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2013-12-24 13:25:49,673 INFO org.apache.hadoop.util.GSet: 2.0% max memory = 889 MB
2013-12-24 13:25:49,673 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 1
2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 2
2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2013-12-24 13:25:49,677 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2013-12-24 13:25:49,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = sitaraman (auth:SIMPLE)
2013-12-24 13:25:49,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2013-12-24 13:25:49,681 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = true
2013-12-24 13:25:49,681 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2013-12-24 13:25:49,682 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: 1.0% max memory = 889 MB
2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2013-12-24 13:25:49,802 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2013-12-24 13:25:49,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2013-12-24 13:25:49,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2013-12-24 13:25:49,804 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2013-12-24 13:25:49,805 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2013-12-24 13:25:49,805 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: Computing capacity for map Namenode Retry Cache
2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory = 889 MB
2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2013-12-24 13:25:49,816 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode/in_use.lock acquired by nodename 19170@localhost.localdomain
2013-12-24 13:25:49,861 INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:50070
2013-12-24 13:25:49,964
INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2013-12-24 13:25:49,965 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2013-12-24 13:25:49,965 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2013-12-24 13:25:49,965 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: File system image contains an old layout version -39. An upgrade to version -47 is required. Please restart NameNode with -upgrade option.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:221)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
2013-12-24 13:25:49,967 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-12-24 13:25:49,968 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/

On 12/24/13, Sitaraman Vilayannur wrote:
> The line beginning with ulimit that I have appended below, I thought
> was the log file?
> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
> Sitaraman
> On 12/24/13, Nitin Pawar wrote:
>> Without the log it is very hard to guess what's happening.
>>
>> Can you clean up the log directory, then start over and check the
>> logs again?
>>
>> On Tue, Dec 24, 2013 at 11:44 AM, Sitaraman Vilayannur <
>> vrsitaramanietflists@gmail.com> wrote:
>>
>>> Hi Nitin,
>>> I moved to the release 2.2.0. On starting the node manager it
>>> remains silent, without errors, but the nodemanager doesn't start,
>>> while it does in the earlier 0.23 version.
>>>
>>> ./hadoop-daemon.sh start namenode
>>> starting namenode, logging to
>>> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
>>> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
>>> /usr/local/Software/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which
>>> might have disabled stack guard. The VM will try to fix the stack
>>> guard now.
>>> It's highly recommended that you fix the library with 'execstack -c
>>> <libfile>', or link it with '-z noexecstack'.
>>> [sitaraman@localhost sbin]$ jps
>>> 13444 Jps
>>> [sitaraman@localhost sbin]$ vi
>>> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
>>>
>>> ulimit -a for user sitaraman
>>> core file size          (blocks, -c) 0
>>> data seg size           (kbytes, -d) unlimited
>>> scheduling priority             (-e) 0
>>> file size               (blocks, -f) unlimited
>>> pending signals                 (-i) 135104
>>> max locked memory       (kbytes, -l) 32
>>> max memory size         (kbytes, -m) unlimited
>>> open files                      (-n) 1024
>>> pipe size            (512 bytes, -p) 8
>>> POSIX message queues     (bytes, -q) 819200
>>> real-time priority              (-r) 0
>>> stack size              (kbytes, -s) 10240
>>> cpu time               (seconds, -t) unlimited
>>> max user processes              (-u) 135104
>>> virtual memory          (kbytes, -v) unlimited
>>> file locks                      (-x) unlimited
>>>
>>> On 12/24/13, Nitin Pawar wrote:
>>> > For now you can ignore this warning.
>>> > It was your first program, so you can try building other things and
>>> > slowly run the commands mentioned in the log message to fix these
>>> > small warnings.
>>> >
>>> > On Tue, Dec 24, 2013 at 10:07 AM, Sitaraman Vilayannur <
>>> > vrsitaramanietflists@gmail.com> wrote:
>>> >
>>> >> Thanks Nitin, that worked.
>>> >> When I run the Pi example, I get the following warning at the end.
>>> >> What must I do about this warning? Thanks much for your help.
>>> >> Sitaraman
>>> >> Finished in 20.82 seconds
>>> >> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
>>> >> /usr/local/Software/hadoop-0.23.10/lib/native/libhadoop.so.1.0.0
>>> >> which might have disabled stack guard. The VM will try to fix the
>>> >> stack guard now.
>>> >> It's highly recommended that you fix the library with 'execstack -c
>>> >> <libfile>', or link it with '-z noexecstack'.
>>> >> 13/12/24 10:05:19 WARN util.NativeCodeLoader: Unable to load
>>> >> native-hadoop library for your platform...
>>> >> using builtin-java classes where applicable
>>> >> Estimated value of Pi is 3.14127500000000000000
>>> >> [sitaraman@localhost mapreduce]$
>>> >>
>>> >> On 12/23/13, Nitin Pawar wrote:
>>> >> > Can you try starting the process as a non-root user?
>>> >> > Give proper permissions to the user and start it as a different
>>> >> > user.
>>> >> >
>>> >> > Thanks,
>>> >> > Nitin
>>> >> >
>>> >> > On Mon, Dec 23, 2013 at 2:15 PM, Sitaraman Vilayannur <
>>> >> > vrsitaramanietflists@gmail.com> wrote:
>>> >> >
>>> >> >> Hi,
>>> >> >> When I attempt to start the nodemanager I get the following
>>> >> >> error. Any help appreciated. I was able to start the
>>> >> >> resourcemanager, datanode, namenode and secondarynamenode.
>>> >> >>
>>> >> >> ./yarn-daemon.sh start nodemanager
>>> >> >> starting nodemanager, logging to
>>> >> >> /usr/local/Software/hadoop-0.23.10/logs/yarn-root-nodemanager-localhost.localdomain.out
>>> >> >> Unrecognized option: -jvm
>>> >> >> Error: Could not create the Java Virtual Machine.
>>> >> >> Error: A fatal exception has occurred. Program will exit.
>>> >> >> [root@localhost sbin]# emacs
>>> >> >> /usr/local/Software/hadoop-0.23.10/logs/yarn-root-nodemanager-localhost.localdomain.out &
>>> >> >> [4] 29004
>>> >> >> [root@localhost sbin]# jps
>>> >> >> 28402 SecondaryNameNode
>>> >> >> 30280 Jps
>>> >> >> 28299 DataNode
>>> >> >> 6729 Main
>>> >> >> 26044 ResourceManager
>>> >> >> 28197 NameNode
>>> >> >
>>> >> > --
>>> >> > Nitin Pawar
>>> >>
>>> >
>>> > --
>>> > Nitin Pawar
>>
>> --
>> Nitin Pawar
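For the archive, the three fixes discussed in this thread can be collected into one shell sketch. This is a sketch under assumptions, not a verified recipe: the paths are the ones pasted in this thread (adjust for your installation), the `-upgrade` flag is the one the NameNode's own error message asks for, `execstack` is the tool the JVM stack-guard warning itself recommends, and running daemons as a non-root user is Nitin's suggestion for the `-jvm` error.

```shell
#!/bin/sh
# Sketch of the fixes from this thread. HADOOP_HOME is the path used by
# the original poster; change it for your own installation.
HADOOP_HOME=/usr/local/Software/hadoop-2.2.0

# 1. "File system image contains an old layout version -39. An upgrade
#    to version -47 is required." -- the fsimage was written by the old
#    0.23 install, so do a one-time restart with -upgrade, as the error
#    message instructs:
"$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode -upgrade

# 2. "might have disabled stack guard" JVM warning: clear the
#    executable-stack flag on the native library, as the warning
#    suggests (execstack ships with the prelink/execstack package):
execstack -c "$HADOOP_HOME/lib/native/libhadoop.so.1.0.0"

# 3. "Unrecognized option: -jvm" appeared only when the nodemanager was
#    started as root; start the daemons as a regular user instead
#    (the user name here is the one from this thread):
su - sitaraman -c "$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager"
```

These commands are environment-specific and assume the daemons are stopped before the upgrade restart; check the `.log` files (not just the `.out` files) under `$HADOOP_HOME/logs` after each step.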