hadoop-common-user mailing list archives

From Nitin Pawar <nitinpawar...@gmail.com>
Subject Re: Hadoop 1.0.3 setup
Date Mon, 09 Jul 2012 13:23:29 GMT
From the error, it looks like the port is already in use.
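A quick way to confirm is to probe the port from the shell. The sketch below uses bash's /dev/tcp pseudo-device, so it needs no extra tools; port 54310 matches the BindException in the log below, and you can change PORT to check the other daemons. If netstat is installed, `netstat -tlnp | grep 54310` will additionally show which process owns the port.

```shell
#!/bin/bash
# Check whether the NameNode RPC port (54310 in the log below) is already
# taken by trying to open a TCP connection to it. A successful connect
# means some process is listening there; a failure means the port is free.
PORT=54310
if (exec 3<>"/dev/tcp/localhost/$PORT") 2>/dev/null; then
    echo "port $PORT is in use"
else
    echo "port $PORT is free"
fi
```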

Can you please confirm that each of the services below operates on a
different port:
namenode
datanode
jobtracker
tasktracker
secondary namenode

There should not be any port shared by any of these services.
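For reference, the RPC endpoints in Hadoop 1.x are set by properties like the ones below. The values shown are the conventional single-node defaults (54310 matches the port in the error below), so check them against your own core-site.xml and mapred-site.xml rather than copying them blindly.

```xml
<!-- core-site.xml: NameNode RPC endpoint (the port in the BindException) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
</property>

<!-- mapred-site.xml: JobTracker RPC endpoint; must differ from the above -->
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
</property>
```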

On Mon, Jul 9, 2012 at 6:51 PM, prabhu K <prabhu.hadoop@gmail.com> wrote:

> Do you have any idea about the issue below?
>
> On Mon, Jul 9, 2012 at 5:29 PM, prabhu K <prabhu.hadoop@gmail.com> wrote:
>
> > Hi users,
> >
> > I have installed Hadoop 1.0.3 and completed the single-node setup.
> > When I run the start-all.sh script,
> >
> > I get the following output.
> >
> >
> > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/bin$ ./start-all.sh
> > *Warning: $HADOOP_HOME is deprecated.*
> >
> > starting namenode, logging to
> >
> /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-namenode-md-trngpoc1.out
> > localhost: starting datanode, logging to
> >
> /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-datanode-md-trngpoc1.out
> > localhost: starting secondarynamenode, logging to
> >
> /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-md-trngpoc1.out
> > starting jobtracker, logging to
> >
> /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-jobtracker-md-trngpoc1.out
> > localhost: starting tasktracker, logging to
> >
> /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-tasktracker-md-trngpoc1.out
> >
> >
> > When I run the jps command, I get the following output. The namenode,
> > datanode, and jobtracker are not in the jps list.
> >
> >
> > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/bin$ jps
> > 20620 TaskTracker
> > 20670 Jps
> > 20347 SecondaryNameNode
> >
> >
> >
> > When I look at the namenode log file, I see the following output:
> >
> > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/logs$ more
> > hadoop-hduser-namenode-md-trngpoc1.log
> > 2012-07-09 17:05:42,989 INFO
> > org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> > /************************************************************
> > STARTUP_MSG: Starting NameNode
> > STARTUP_MSG:   host = md-trngpoc1/10.5.114.110
> > STARTUP_MSG:   args = []
> > STARTUP_MSG:   version = 1.0.3
> > STARTUP_MSG:   build =
> > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> > 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > ************************************************************/
> > 2012-07-09 17:05:43,082 INFO
> > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> > hadoop-metrics2.properties
> > 2012-07-09 17:05:43,089 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> > MetricsSystem,sub=Stats registered.
> > 2012-07-09 17:05:43,090 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> > period at 10 second(s).
> > 2012-07-09 17:05:43,090 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
> system
> > started
> > 2012-07-09 17:05:43,169 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> ugi
> > registered.
> > 2012-07-09 17:05:43,174 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> jvm
> > registered.
> > 2012-07-09 17:05:43,175 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> > NameNode registered.
> > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet: VM
> > type       = 32-bit
> > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> > memory = 17.77875 MB
> > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet:
> > capacity      = 2^22 = 4194304 entries
> > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet:
> > recommended=4194304, actual=4194304
> > 2012-07-09 17:05:43,211 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
> > 2012-07-09 17:05:43,211 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> supergroup=supergroup
> > 2012-07-09 17:05:43,211 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> > isPermissionEnabled=true
> > 2012-07-09 17:05:43,216 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> > dfs.block.invalidate.limit=100
> > 2012-07-09 17:05:43,216 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> > isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
> > accessTokenLifetime=0 min(s)
> > 2012-07-09 17:05:43,352 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> > FSNamesystemStateMBean and NameNodeMXBean
> > 2012-07-09 17:05:43,365 INFO
> > org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> > occuring more than 10 times
> > 2012-07-09 17:05:43,372 INFO
> org.apache.hadoop.hdfs.server.common.Storage:
> > Number of files = 1
> > 2012-07-09 17:05:43,375 INFO
> org.apache.hadoop.hdfs.server.common.Storage:
> > Number of files under construction = 0
> > 2012-07-09 17:05:43,375 INFO
> org.apache.hadoop.hdfs.server.common.Storage:
> > Image file of size 112 loaded in 0 seconds.
> > 2012-07-09 17:05:43,375 INFO
> org.apache.hadoop.hdfs.server.common.Storage:
> > Edits file /app/hadoop_dir/hadoop/tmp/dfs/name/current/edits of size 4
> > edits # 0 loaded in 0
> > seconds.
> > 2012-07-09 17:05:43,376 INFO
> org.apache.hadoop.hdfs.server.common.Storage:
> > Image file of size 112 saved in 0 seconds.
> > 2012-07-09 17:05:43,614 INFO
> org.apache.hadoop.hdfs.server.common.Storage:
> > Image file of size 112 saved in 0 seconds.
> > 2012-07-09 17:05:43,844 INFO
> > org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
> > entries 0 lookups
> > 2012-07-09 17:05:43,844 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> > FSImage in 637 msecs
> > 2012-07-09 17:05:43,857 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
> blocks
> > = 0
> > 2012-07-09 17:05:43,857 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> > blocks = 0
> > 2012-07-09 17:05:43,857 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> > under-replicated blocks = 0
> > 2012-07-09 17:05:43,857 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> > over-replicated blocks = 0
> > 2012-07-09 17:05:43,857 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> > Safe mode termination scan for invalid, over- and under-replicated blocks
> > completed in 12 msec
> > 2012-07-09 17:05:43,857 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> > Leaving safe mode after 0 secs.
> > 2012-07-09 17:05:43,858 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> > Network topology has 0 racks and 0 datanodes
> > 2012-07-09 17:05:43,858 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> > UnderReplicatedBlocks has 0 blocks
> > 2012-07-09 17:05:43,863 INFO org.apache.hadoop.util.HostsFileReader:
> > Refreshing hosts (include/exclude) list
> > 2012-07-09 17:05:43,867 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
> > QueueProcessingStatistics: First cycle completed 0 blocks in 3 msec
> > 2012-07-09 17:05:43,867 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
> > QueueProcessingStatistics: Queue flush completed 0 blocks in 3 msec
> > processing time, 3 msec clock time, 1 cycles
> > 2012-07-09 17:05:43,867 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
> > QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
> > 2012-07-09 17:05:43,867 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
> > QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
> > processing time, 0 msec clock time, 1 cycles
> > 2012-07-09 17:05:43,867 INFO
> > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> > FSNamesystemMetrics registered.
> > 2012-07-09 17:05:43,874 WARN
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
> > thread received InterruptedException.
> > java.lang.InterruptedException: sleep interrupted
> > 2012-07-09 17:05:43,874 INFO
> > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> > transactions: 0 Total time for transactions(ms): 0 Number of transactions
> > batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> > 2012-07-09 17:05:43,875 INFO
> > org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
> > Monitor
> > java.lang.InterruptedException: sleep interrupted
> >         at java.lang.Thread.sleep(Native Method)
> >         at
> >
> org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
> >         at java.lang.Thread.run(Thread.java:662)
> > 2012-07-09 17:05:43,907 ERROR
> > org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException:
> > Problem binding to md-trngpoc1/10.5.114.110:54310 : Address already in use
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:227)
> >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
> >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
> >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
> >         at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
> >         at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
> >         at
> >
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
> >         at
> > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
> > Caused by: java.net.BindException: Address already in use
> >         at sun.nio.ch.Net.bind(Native Method)
> >         at
> > sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
> >         at
> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:225)
> >         ... 8 more
> >
> > 2012-07-09 17:05:43,908 INFO
> > org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> > /************************************************************
> > SHUTDOWN_MSG: Shutting down NameNode at md-trngpoc1/10.5.114.110
> > ************************************************************/
> >
> >
> > Please advise on this issue. What am I doing wrong?
> >
> > Thanks,
> > Prabhu
> >
>



-- 
Nitin Pawar
