hadoop-user mailing list archives

From Sitaraman Vilayannur <vrsitaramanietflists@gmail.com>
Subject Re: Getting error unrecognized option -jvm on starting nodemanager
Date Tue, 24 Dec 2013 23:02:32 GMT
Hi Manoj,
 jps says no instances are running:
[sitaraman@localhost sbin]$ jps
8934 Jps
You have new mail in /var/spool/mail/root
[sitaraman@localhost sbin]$
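
(A minimal sketch of making sure everything is really down before retrying, assuming the stock 2.2.0 sbin scripts with HADOOP_HOME set to the install in use, e.g. /usr/local/Software/hadoop-2.2.0:

    $HADOOP_HOME/sbin/stop-dfs.sh     # stops NameNode, DataNode, SecondaryNameNode
    $HADOOP_HOME/sbin/stop-yarn.sh    # stops ResourceManager and NodeManagers
    jps                               # should list only Jps afterwards
)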

On 12/24/13, Manoj Babu <manoj444@gmail.com> wrote:
> Lock on /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock
> acquired by nodename 7518@localhost.localdomain
>
> Stop all running instances and then do the steps.
>
> Cheers!
> Manoj.
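
(The lock message names the PID that holds it, 7518 above. A hedged way to check whether the lock is stale before reformatting:

    ps -p 7518 -o pid,cmd    # if nothing is listed, the holder is gone
    rm /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock   # only if stale
)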
>
>
> On Tue, Dec 24, 2013 at 8:48 PM, Sitaraman Vilayannur <
> vrsitaramanietflists@gmail.com> wrote:
>
>> I did press Y; I tried it several times, and once more just now.
>> Sitaraman
>>
>>
>> On Tue, Dec 24, 2013 at 8:38 PM, Nitin Pawar
>> <nitinpawar432@gmail.com>wrote:
>>
>>> See the error: it says not formatted.
>>> Did you press Y or y?
>>> Try again :)
>>>
>>>
>>> On Tue, Dec 24, 2013 at 8:35 PM, Sitaraman Vilayannur <
>>> vrsitaramanietflists@gmail.com> wrote:
>>>
>>>> Hi Nitin,
>>>>  Even after formatting using hdfs namenode -format, I keep seeing
>>>> "namenode not formatted" in the logs when I try to start the namenode:
>>>> 13/12/24 20:33:26 INFO namenode.FSNamesystem: supergroup=supergroup
>>>> 13/12/24 20:33:26 INFO namenode.FSNamesystem: isPermissionEnabled=true
>>>> 13/12/24 20:33:26 INFO namenode.NameNode: Caching file names occuring
>>>> more than 10 times
>>>> 13/12/24 20:33:26 INFO namenode.NNStorage: Storage directory
>>>> /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode has been
>>>> successfully
>>>> formatted.
>>>> 13/12/24 20:33:26 INFO namenode.FSImage: Saving image file
>>>> /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode/current/fsimage.ckpt_0000000000000000000
>>>> using no compression
>>>> 13/12/24 20:33:26 INFO namenode.FSImage: Image file of size 124 saved
>>>> in
>>>> 0 seconds.
>>>> 13/12/24 20:33:26 INFO namenode.NNStorageRetentionManager: Going to
>>>> retain 1 images with txid >= 0
>>>> 13/12/24 20:33:26 INFO util.ExitUtil: Exiting with status 0
>>>> 13/12/24 20:33:26 INFO namenode.NameNode: SHUTDOWN_MSG:
>>>>
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>>> ************************************************************/
>>>>
>>>>
>>>> 2013-12-24 20:33:46,337 INFO
>>>> org.apache.hadoop.hdfs.server.common.Storage: Lock on
>>>> /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode/in_use.lock acquired
>>>> by
>>>> nodename 7518@localhost.localdomain
>>>> 2013-12-24 20:33:46,339 INFO org.mortbay.log: Stopped
>>>> SelectChannelConnector@0.0.0.0:50070
>>>> 2013-12-24 20:33:46,340 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode
>>>> metrics system...
>>>> 2013-12-24 20:33:46,340 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
>>>> system
>>>> stopped.
>>>> 2013-12-24 20:33:46,340 INFO
>>>> org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
>>>> system
>>>> shutdown complete.
>>>> 2013-12-24 20:33:46,340 FATAL
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
>>>> join
>>>> java.io.IOException: NameNode is not formatted.
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:210)
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
>>>>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
>>>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
>>>> 2013-12-24 20:33:46,342 INFO org.apache.hadoop.util.ExitUtil: Exiting
>>>> with status 1
>>>> 2013-12-24 20:33:46,343 INFO
>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>
>>>> /************************************************************
>>>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>>> ************************************************************/
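
(Note that the two logs above point at different directories: the format succeeded under /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode, while the namenode that then fails with "not formatted" takes its lock under /usr/local/Software/hadoop-2.2.0/data/hdfs/namenode. That mismatch alone would explain the error. A minimal hdfs-site.xml sketch, assuming the 2.2.0 install is the one being started; the path is taken from the log above:

    <configuration>
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///usr/local/Software/hadoop-2.2.0/data/hdfs/namenode</value>
      </property>
    </configuration>

Format and start from this same install so both steps read the same directory.)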
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, Dec 24, 2013 at 3:13 PM, Nitin Pawar
>>>> <nitinpawar432@gmail.com>wrote:
>>>>
>>>>> The issue here is that you tried one version of Hadoop and then changed
>>>>> to a different version.
>>>>>
>>>>> You cannot do that directly with Hadoop; you need to follow an upgrade
>>>>> process when moving between Hadoop versions.
>>>>>
>>>>> For now, as you are just starting with Hadoop, I would recommend just
>>>>> running a dfs format and starting HDFS again.
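
(A minimal sketch of that suggestion, assuming the stock 2.2.0 layout and that losing the existing HDFS contents is acceptable:

    $HADOOP_HOME/bin/hdfs namenode -format   # answer Y at the prompt; wipes HDFS metadata
    $HADOOP_HOME/sbin/start-dfs.sh
    jps                                      # NameNode, DataNode, SecondaryNameNode should appear
)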
>>>>>
>>>>>
>>>>> On Tue, Dec 24, 2013 at 2:57 PM, Sitaraman Vilayannur <
>>>>> vrsitaramanietflists@gmail.com> wrote:
>>>>>
>>>>>> When I run the namenode with the upgrade option I get the following
>>>>>> error and the namenode doesn't start...
>>>>>> 2013-12-24 14:48:38,595 INFO org.apache.hadoop.hdfs.StateChange:
>>>>>> STATE* Network topology has 0 racks and 0 datanodes
>>>>>> 2013-12-24 14:48:38,595 INFO org.apache.hadoop.hdfs.StateChange:
>>>>>> STATE* UnderReplicatedBlocks has 0 blocks
>>>>>> 2013-12-24 14:48:38,631 INFO org.apache.hadoop.ipc.Server: IPC Server
>>>>>> Responder: starting
>>>>>> 2013-12-24 14:48:38,632 INFO org.apache.hadoop.ipc.Server: IPC Server
>>>>>> listener on 9000: starting
>>>>>> 2013-12-24 14:48:38,633 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: NameNode RPC up at:
>>>>>> 192.168.1.2/192.168.1.2:9000
>>>>>> 2013-12-24 14:48:38,633 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting
>>>>>> services
>>>>>> required for active state
>>>>>> 2013-12-24 14:50:50,060 ERROR
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: RECEIVED SIGNAL 15: SIGTERM
>>>>>> 2013-12-24 14:50:50,062 INFO
>>>>>> org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>> /************************************************************
>>>>>> SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>>>>> ************************************************************/
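
(Worth noting: in this run the namenode actually came up, with RPC up at 192.168.1.2:9000, and ran for about two minutes before receiving SIGTERM, i.e. something outside the process stopped it; it did not fail on its own. A quick check, as a sketch:

    jps                        # is a NameNode still listed?
    ps -ef | grep -i namenode  # or is an old instance holding the port/lock?
)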
>>>>>>
>>>>>>
>>>>>> On 12/24/13, Sitaraman Vilayannur <vrsitaramanietflists@gmail.com>
>>>>>> wrote:
>>>>>> > Found it,
>>>>>> >  I get the following error on starting namenode in 2.2
>>>>>> >
>>>>>> 10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar:/usr/local/Software/hadoop-0.23.10/contrib/capacity-scheduler/*.jar
>>>>>> > STARTUP_MSG:   build =
>>>>>> https://svn.apache.org/repos/asf/hadoop/common
>>>>>> > -r 1529768; compiled by 'hortonmu' on 2013-10-07T06:28Z
>>>>>> > STARTUP_MSG:   java = 1.7.0_45
>>>>>> > ************************************************************/
>>>>>> > 2013-12-24 13:25:48,876 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX
>>>>>> > signal handlers for [TERM, HUP, INT]
>>>>>> > 2013-12-24 13:25:49,042 INFO
>>>>>> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
>>>>>> > from
>>>>>> > hadoop-metrics2.properties
>>>>>> > 2013-12-24 13:25:49,102 INFO
>>>>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled
>>>>>> > snapshot
>>>>>> > period at 10 second(s).
>>>>>> > 2013-12-24 13:25:49,102 INFO
>>>>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
>>>>>> > system started
>>>>>> > 2013-12-24 13:25:49,232 WARN
>>>>>> > org.apache.hadoop.util.NativeCodeLoader:
>>>>>> > Unable to load native-hadoop library for your platform... using
>>>>>> > builtin-java classes where applicable
>>>>>> > 2013-12-24 13:25:49,375 INFO org.mortbay.log: Logging to
>>>>>> > org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>>>>> > org.mortbay.log.Slf4jLog
>>>>>> > 2013-12-24 13:25:49,410 INFO org.apache.hadoop.http.HttpServer:
>>>>>> > Added
>>>>>> > global filter 'safety'
>>>>>> > (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>>>>>> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer:
>>>>>> > Added
>>>>>> > filter static_user_filter
>>>>>> >
>>>>>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
>>>>>> > to context hdfs
>>>>>> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer:
>>>>>> > Added
>>>>>> > filter static_user_filter
>>>>>> >
>>>>>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
>>>>>> > to context static
>>>>>> > 2013-12-24 13:25:49,412 INFO org.apache.hadoop.http.HttpServer:
>>>>>> > Added
>>>>>> > filter static_user_filter
>>>>>> >
>>>>>> (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter)
>>>>>> > to context logs
>>>>>> > 2013-12-24 13:25:49,422 INFO org.apache.hadoop.http.HttpServer:
>>>>>> > dfs.webhdfs.enabled = false
>>>>>> > 2013-12-24 13:25:49,432 INFO org.apache.hadoop.http.HttpServer:
>>>>>> > Jetty
>>>>>> > bound to port 50070
>>>>>> > 2013-12-24 13:25:49,432 INFO org.mortbay.log: jetty-6.1.26
>>>>>> > 2013-12-24 13:25:49,459 WARN org.mortbay.log: Can't reuse
>>>>>> > /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08, using
>>>>>> > /tmp/Jetty_0_0_0_0_50070_hdfs____w2cu08_2787234685293301311
>>>>>> > 2013-12-24 13:25:49,610 INFO org.mortbay.log: Started
>>>>>> > SelectChannelConnector@0.0.0.0:50070
>>>>>> > 2013-12-24 13:25:49,611 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
>>>>>> > 0.0.0.0:50070
>>>>>> > 2013-12-24 13:25:49,628 WARN
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image
>>>>>> > storage directory (dfs.namenode.name.dir) configured. Beware of
>>>>>> > dataloss due to lack of redundant storage directories!
>>>>>> > 2013-12-24 13:25:49,628 WARN
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one
>>>>>> > namespace edits storage directory (dfs.namenode.edits.dir) configured.
>>>>>> > Beware of dataloss due to lack of redundant storage directories!
>>>>>> > 2013-12-24 13:25:49,668 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.HostFileManager: read includes:
>>>>>> > HostSet(
>>>>>> > )
>>>>>> > 2013-12-24 13:25:49,669 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.HostFileManager: read excludes:
>>>>>> > HostSet(
>>>>>> > )
>>>>>> > 2013-12-24 13:25:49,670 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager:
>>>>>> > dfs.block.invalidate.limit=1000
>>>>>> > 2013-12-24 13:25:49,672 INFO org.apache.hadoop.util.GSet: Computing
>>>>>> > capacity for map BlocksMap
>>>>>> > 2013-12-24 13:25:49,672 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
>>>>>> > 2013-12-24 13:25:49,673 INFO org.apache.hadoop.util.GSet: 2.0% max
>>>>>> > memory = 889 MB
>>>>>> > 2013-12-24 13:25:49,673 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
>>>>>> > 2013-12-24 13:25:49,677 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>>>>>> > dfs.block.access.token.enable=false
>>>>>> > 2013-12-24 13:25:49,677 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>>>>>> > defaultReplication         = 1
>>>>>> > 2013-12-24 13:25:49,677 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>>>>>> > maxReplication             = 512
>>>>>> > 2013-12-24 13:25:49,677 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>>>>>> > minReplication             = 1
>>>>>> > 2013-12-24 13:25:49,677 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>>>>>> > maxReplicationStreams      = 2
>>>>>> > 2013-12-24 13:25:49,677 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>>>>>> > shouldCheckForEnoughRacks  = false
>>>>>> > 2013-12-24 13:25:49,677 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>>>>>> > replicationRecheckInterval = 3000
>>>>>> > 2013-12-24 13:25:49,677 INFO
>>>>>> > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager:
>>>>>> > encryptDataTransfer        = false
>>>>>> > 2013-12-24 13:25:49,681 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner
>>>>>> >   = sitaraman (auth:SIMPLE)
>>>>>> > 2013-12-24 13:25:49,681 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup
>>>>>> >   = supergroup
>>>>>> > 2013-12-24 13:25:49,681 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> > isPermissionEnabled = true
>>>>>> > 2013-12-24 13:25:49,681 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled:
>>>>>> false
>>>>>> > 2013-12-24 13:25:49,682 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append
>>>>>> > Enabled:
>>>>>> > true
>>>>>> > 2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: Computing
>>>>>> > capacity for map INodeMap
>>>>>> > 2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
>>>>>> > 2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: 1.0% max
>>>>>> > memory = 889 MB
>>>>>> > 2013-12-24 13:25:49,801 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
>>>>>> > 2013-12-24 13:25:49,802 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
>>>>>> > occuring more than 10 times
>>>>>> > 2013-12-24 13:25:49,804 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> > dfs.namenode.safemode.threshold-pct = 0.9990000128746033
>>>>>> > 2013-12-24 13:25:49,804 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> > dfs.namenode.safemode.min.datanodes = 0
>>>>>> > 2013-12-24 13:25:49,804 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
>>>>>> > dfs.namenode.safemode.extension     = 30000
>>>>>> > 2013-12-24 13:25:49,805 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on
>>>>>> > namenode is enabled
>>>>>> > 2013-12-24 13:25:49,805 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will
>>>>>> > use 0.03 of total heap and retry cache entry expiry time is 600000
>>>>>> > millis
>>>>>> > 2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: Computing
>>>>>> > capacity for map Namenode Retry Cache
>>>>>> > 2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
>>>>>> > 2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet:
>>>>>> > 0.029999999329447746% max memory = 889 MB
>>>>>> > 2013-12-24 13:25:49,807 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
>>>>>> > 2013-12-24 13:25:49,816 INFO
>>>>>> > org.apache.hadoop.hdfs.server.common.Storage: Lock on
>>>>>> > /usr/local/Software/hadoop-0.23.10/data/hdfs/namenode/in_use.lock
>>>>>> > acquired by nodename 19170@localhost.localdomain
>>>>>> > 2013-12-24 13:25:49,861 INFO org.mortbay.log: Stopped
>>>>>> > SelectChannelConnector@0.0.0.0:50070
>>>>>> > 2013-12-24 13:25:49,964 INFO
>>>>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping
>>>>>> > NameNode
>>>>>> > metrics system...
>>>>>> > 2013-12-24 13:25:49,965 INFO
>>>>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
>>>>>> > system stopped.
>>>>>> > 2013-12-24 13:25:49,965 INFO
>>>>>> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
>>>>>> > system shutdown complete.
>>>>>> > 2013-12-24 13:25:49,965 FATAL
>>>>>> > org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode
>>>>>> > join
>>>>>> > java.io.IOException:
>>>>>> > File system image contains an old layout version -39.
>>>>>> > An upgrade to version -47 is required.
>>>>>> > Please restart NameNode with -upgrade option.
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:221)
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:787)
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:568)
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:443)
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:491)
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:684)
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:669)
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1254)
>>>>>> >       at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1320)
>>>>>> > 2013-12-24 13:25:49,967 INFO org.apache.hadoop.util.ExitUtil: Exiting
>>>>>> > with status 1
>>>>>> > 2013-12-24 13:25:49,968 INFO
>>>>>> > org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>>>>>> > /************************************************************
>>>>>> > SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
>>>>>> > ************************************************************/
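
(The exception above is explicit: the on-disk image has layout version -39, from the 0.23.x line, while this build expects -47, so an upgrade start is required. A hedged sketch of that invocation and its finalization, using the standard 2.x commands:

    sbin/hadoop-daemon.sh start namenode -upgrade
    # once everything looks healthy and you are satisfied:
    bin/hdfs dfsadmin -finalizeUpgrade
)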
>>>>>> >
>>>>>> > On 12/24/13, Sitaraman Vilayannur <vrsitaramanietflists@gmail.com>
>>>>>> wrote:
>>>>>> >> The line beginning with ulimit that I have appended below, I thought,
>>>>>> >> was the log file?
>>>>>> >> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
>>>>>> >> Sitaraman
>>>>>> >> On 12/24/13, Nitin Pawar <nitinpawar432@gmail.com> wrote:
>>>>>> >>> Without a log it's very hard to guess what's happening.
>>>>>> >>>
>>>>>> >>> Can you clean up the log directory and then start over, and check for
>>>>>> >>> the logs again?
>>>>>> >>>
>>>>>> >>>
>>>>>> >>> On Tue, Dec 24, 2013 at 11:44 AM, Sitaraman Vilayannur <
>>>>>> >>> vrsitaramanietflists@gmail.com> wrote:
>>>>>> >>>
>>>>>> >>>> Hi Nitin,
>>>>>> >>>>  I moved to the 2.2.0 release. On starting the node manager it
>>>>>> >>>> remains silent, without errors, but the nodemanager doesn't
>>>>>> >>>> start... while it does in the earlier 0.23 version.
>>>>>> >>>>
>>>>>> >>>> ./hadoop-daemon.sh start namenode
>>>>>> >>>> starting namenode, logging to
>>>>>> >>>> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
>>>>>> >>>> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
>>>>>> >>>> /usr/local/Software/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0 which
>>>>>> >>>> might have disabled stack guard. The VM will try to fix the stack
>>>>>> >>>> guard now.
>>>>>> >>>> It's highly recommended that you fix the library with 'execstack -c
>>>>>> >>>> <libfile>', or link it with '-z noexecstack'.
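
(A hedged one-liner for that stack-guard warning, using the library path from the output above; it requires the execstack tool to be installed, and the warning is otherwise harmless:

    execstack -c /usr/local/Software/hadoop-2.2.0/lib/native/libhadoop.so.1.0.0
)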
>>>>>> >>>> [sitaraman@localhost sbin]$ jps
>>>>>> >>>> 13444 Jps
>>>>>> >>>> [sitaraman@localhost sbin]$ vi
>>>>>> >>>> /usr/local/Software/hadoop-2.2.0/logs/hadoop-sitaraman-namenode-localhost.localdomain.out
>>>>>> >>>>
>>>>>> >>>>
>>>>>> >>>> ulimit -a for user sitaraman
>>>>>> >>>> core file size          (blocks, -c) 0
>>>>>> >>>> data seg size           (kbytes, -d) unlimited
>>>>>> >>>> scheduling priority             (-e) 0
>>>>>> >>>> file size               (blocks, -f) unlimited
>>>>>> >>>> pending signals                 (-i) 135104
>>>>>> >>>> max locked memory       (kbytes, -l) 32
>>>>>> >>>> max memory size         (kbytes, -m) unlimited
>>>>>> >>>> open files                      (-n) 1024
>>>>>> >>>> pipe size            (512 bytes, -p) 8
>>>>>> >>>> POSIX message queues     (bytes, -q) 819200
>>>>>> >>>> real-time priority              (-r) 0
>>>>>> >>>> stack size              (kbytes, -s) 10240
>>>>>> >>>> cpu time               (seconds, -t) unlimited
>>>>>> >>>> max user processes              (-u) 135104
>>>>>> >>>> virtual memory          (kbytes, -v) unlimited
>>>>>> >>>> file locks                      (-x) unlimited
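
(Two of these limits are commonly raised for Hadoop: open files at 1024 and max locked memory at 32 KB. A hedged /etc/security/limits.conf sketch; the values are conventional suggestions, not from this thread, and a re-login is needed for them to take effect:

    sitaraman  soft  nofile  32768
    sitaraman  hard  nofile  32768
)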
>>>>>> >>>>
>>>>>> >>>>
>>>>>> >>>> On 12/24/13, Nitin Pawar <nitinpawar432@gmail.com> wrote:
>>>>>> >>>> > For now you can ignore this warning.
>>>>>> >>>> > It was your first program, so you can try building other things and
>>>>>> >>>> > slowly run the commands mentioned in the log message to fix these
>>>>>> >>>> > small warnings.
>>>>>> >>>> >
>>>>>> >>>> >
>>>>>> >>>> > On Tue, Dec 24, 2013 at 10:07 AM, Sitaraman Vilayannur <
>>>>>> >>>> > vrsitaramanietflists@gmail.com> wrote:
>>>>>> >>>> >
>>>>>> >>>> >> Thanks Nitin, that worked.
>>>>>> >>>> >> When I run the Pi example, I get the following warning at the
>>>>>> >>>> >> end; what must I do about this warning? Thanks much for your help.
>>>>>> >>>> >> Sitaraman
>>>>>> >>>> >> Finished in 20.82 seconds
>>>>>> >>>> >> Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library
>>>>>> >>>> >> /usr/local/Software/hadoop-0.23.10/lib/native/libhadoop.so.1.0.0
>>>>>> >>>> >> which might have disabled stack guard. The VM will try to fix the
>>>>>> >>>> >> stack guard now.
>>>>>> >>>> >> It's highly recommended that you fix the library with 'execstack -c
>>>>>> >>>> >> <libfile>', or link it with '-z noexecstack'.
>>>>>> >>>> >> 13/12/24 10:05:19 WARN util.NativeCodeLoader: Unable to load
>>>>>> >>>> >> native-hadoop library for your platform... using builtin-java
>>>>>> >>>> >> classes where applicable
>>>>>> >>>> >> Estimated value of Pi is 3.14127500000000000000
>>>>>> >>>> >> [sitaraman@localhost mapreduce]$
>>>>>> >>>> >>
>>>>>> >>>> >> On 12/23/13, Nitin Pawar <nitinpawar432@gmail.com> wrote:
>>>>>> >>>> >> > Can you try starting the process as a non-root user?
>>>>>> >>>> >> > Give proper permissions to the user and start it as a different
>>>>>> >>>> >> > user.
>>>>>> >>>> >> >
>>>>>> >>>> >> > Thanks,
>>>>>> >>>> >> > Nitin
>>>>>> >>>> >> >
>>>>>> >>>> >> >
>>>>>> >>>> >> > On Mon, Dec 23, 2013 at 2:15 PM, Sitaraman Vilayannur <
>>>>>> >>>> >> > vrsitaramanietflists@gmail.com> wrote:
>>>>>> >>>> >> >
>>>>>> >>>> >> >> Hi,
>>>>>> >>>> >> >>  When I attempt to start the nodemanager I get the following
>>>>>> >>>> >> >> error. Any help appreciated. I was able to start the resource
>>>>>> >>>> >> >> manager, datanode, namenode and secondarynamenode.
>>>>>> >>>> >> >>
>>>>>> >>>> >> >>    ./yarn-daemon.sh start nodemanager
>>>>>> >>>> >> >> starting nodemanager, logging to
>>>>>> >>>> >> >> /usr/local/Software/hadoop-0.23.10/logs/yarn-root-nodemanager-localhost.localdomain.out
>>>>>> >>>> >> >> Unrecognized option: -jvm
>>>>>> >>>> >> >> Error: Could not create the Java Virtual Machine.
>>>>>> >>>> >> >> Error: A fatal exception has occurred. Program will exit.
>>>>>> >>>> >> >> [root@localhost sbin]# emacs
>>>>>> >>>> >> >> /usr/local/Software/hadoop-0.23.10/logs/yarn-root-nodemanager-localhost.localdomain.out
>>>>>> >>>> >> >> &
>>>>>> >>>> >> >> [4] 29004
>>>>>> >>>> >> >> [root@localhost sbin]# jps
>>>>>> >>>> >> >> 28402 SecondaryNameNode
>>>>>> >>>> >> >> 30280 Jps
>>>>>> >>>> >> >> 28299 DataNode
>>>>>> >>>> >> >> 6729 Main
>>>>>> >>>> >> >> 26044 ResourceManager
>>>>>> >>>> >> >> 28197 NameNode
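
(The '-jvm' option that trips this up appears to come from the 0.23-era yarn script, which passed it to the JVM only when the daemon was started as root, matching the root prompt above; modern JVMs reject the flag. A minimal sketch of Nitin's suggestion, with the user and group names as examples:

    # run the daemons as a regular user instead of root
    chown -R sitaraman:sitaraman /usr/local/Software/hadoop-0.23.10
    su - sitaraman
    cd /usr/local/Software/hadoop-0.23.10/sbin && ./yarn-daemon.sh start nodemanager
)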
>>>>>> >>>> >> >
>>>>>> >>>> >> >
>>>>>> >>>> >> >
>>>>>> >>>> >> > --
>>>>>> >>>> >> > Nitin Pawar
>>>>>> >>>> >> >
>>>>>> >>>> >>
>>>>>> >>>> >
>>>>>> >>>> >
>>>>>> >>>> >
>>>>>> >>>> > --
>>>>>> >>>> > Nitin Pawar
>>>>>> >>>> >
>>>>>> >>>>
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>
>>>>>> >>> --
>>>>>> >>> Nitin Pawar
>>>>>> >>>
>>>>>> >>
>>>>>> >
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Nitin Pawar
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Nitin Pawar
>>>
>>
>>
>
