hadoop-hdfs-user mailing list archives

From Suresh Srinivas <sur...@hortonworks.com>
Subject Re: Namenode and Jobtracker dont start
Date Wed, 18 Jul 2012 17:47:58 GMT
Can you share information on the Java version that you are using?
- Is it as obvious as some previous processes still running, so that the new
processes cannot bind to the port?
- Another pointer:
http://stackoverflow.com/questions/8360913/weird-java-net-socketexception-permission-denied-connect-error-when-running-groo
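To separate a Hadoop problem from an environment problem, it can help to try the same bind outside of Hadoop. The sketch below is only illustrative (the class name BindCheck and the port-0 default are my own choices, not anything from Hadoop); it goes through the same NIO bind path that fails in your stack traces:

```java
import java.net.InetSocketAddress;
import java.net.SocketException;
import java.nio.channels.ServerSocketChannel;

public class BindCheck {
    public static void main(String[] args) throws Exception {
        // Port 0 asks the OS for any free ephemeral port; pass the real
        // JobTracker/NameNode port on the command line to reproduce the bind.
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 0;
        ServerSocketChannel ch = ServerSocketChannel.open();
        try {
            // Same NIO path as the failing traces (sun.nio.ch.Net.bind).
            ch.socket().bind(new InetSocketAddress(port));
            System.out.println("bind ok on port " + ch.socket().getLocalPort());
        } catch (SocketException e) {
            System.out.println("bind failed: " + e.getMessage());
        } finally {
            ch.close();
        }
    }
}
```

Run it as the same user and on the same host as the daemon, passing the port from your fs.default.name / mapred.job.tracker settings (the exact port number depends on your config). If it also fails with "Permission denied", the problem is in the environment (a port you are not allowed to bind, a security policy, or a socket already in use), not in Hadoop itself.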

On Wed, Jul 18, 2012 at 7:29 AM, Björn-Elmar Macek
<macek@cs.uni-kassel.de> wrote:

> Hi,
>
> I have lately been running into problems since I started running Hadoop on
> a cluster:
>
> The setup is the following:
> 1 Computer is NameNode and Jobtracker
> 1 Computer is SecondaryNameNode
> 2 Computers are TaskTracker and DataNode
>
> I ran into problems running the wordcount example: the NameNode and
> JobTracker do not start properly; both have connection problems of some
> kind. This happens even though SSH is configured so that no password
> prompt appears when I connect from any node in the cluster to any other.
>
> Is there any reason why this happens?
>
> The logs look like the following:
> \________ JOBTRACKER________________________________________________
> 2012-07-18 16:08:05,808 INFO org.apache.hadoop.mapred.JobTracker:
> STARTUP_MSG:
> /****************************************************************
> STARTUP_MSG: Starting JobTracker
> STARTUP_MSG:   host = its-cs100.its.uni-kassel.de/141.51.205.10
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 1.0.2
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
> 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
> ****************************************************************/
> 2012-07-18 16:08:06,479 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
> loaded properties from hadoop-metrics2.properties
> 2012-07-18 16:08:06,534 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> MBean for source MetricsSystem,sub=Stats registered.
> 2012-07-18 16:08:06,554 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Scheduled snapshot period at 10 second(s).
> 2012-07-18 16:08:06,554 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> JobTracker metrics system started
> 2012-07-18 16:08:07,157 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> MBean for source QueueMetrics,q=default registered.
> 2012-07-18 16:08:10,395 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> MBean for source ugi registered.
> 2012-07-18 16:08:10,417 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
> Updating the current master key for generating delegation tokens
> 2012-07-18 16:08:10,436 INFO org.apache.hadoop.mapred.JobTracker:
> Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> 2012-07-18 16:08:10,438 INFO org.apache.hadoop.util.HostsFileReader:
> Refreshing hosts (include/exclude) list
> 2012-07-18 16:08:10,440 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
> Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> 2012-07-18 16:08:10,465 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
> Updating the current master key for generating delegation tokens
> 2012-07-18 16:08:10,510 INFO org.apache.hadoop.mapred.JobTracker:
> Starting jobtracker with owner as bmacek
> 2012-07-18 16:08:10,620 WARN org.apache.hadoop.mapred.JobTracker: Error
> starting tracker: java.net.SocketException: Permission denied
>     at sun.nio.ch.Net.bind(Native Method)
>     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:225)
>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
>     at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
>     at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
>     at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
>     at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
>     at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
>
> 2012-07-18 16:08:13,861 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Source name QueueMetrics,q=default already exists!
> 2012-07-18 16:08:13,885 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Source name ugi already exists!
> 2012-07-18 16:08:13,885 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
> Updating the current master key for generating delegation tokens
> 2012-07-18 16:08:13,910 INFO org.apache.hadoop.mapred.JobTracker:
> Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> 2012-07-18 16:08:13,911 INFO org.apache.hadoop.util.HostsFileReader:
> Refreshing hosts (include/exclude) list
> 2012-07-18 16:08:13,911 INFO org.apache.hadoop.mapred.JobTracker:
> Starting jobtracker with owner as bmacek
> 2012-07-18 16:08:13,912 WARN org.apache.hadoop.mapred.JobTracker: Error
> starting tracker: java.net.SocketException: Permission denied
>     at sun.nio.ch.Net.bind(Native Method)
>     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:225)
>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
>     at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
>     at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
>     at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
>     at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
>     at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
>
> 2012-07-18 16:08:13,912 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
> Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
> 2012-07-18 16:08:13,913 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
> Updating the current master key for generating delegation tokens
> 2012-07-18 16:08:21,348 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Source name QueueMetrics,q=default already exists!
> 2012-07-18 16:08:21,390 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Source name ugi already exists!
> 2012-07-18 16:08:21,390 INFO org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
> Updating the current master key for generating delegation tokens
> 2012-07-18 16:08:21,426 INFO org.apache.hadoop.mapred.JobTracker:
> Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> 2012-07-18 16:08:21,427 INFO org.apache.hadoop.util.HostsFileReader:
> Refreshing hosts (include/exclude) list
> 2012-07-18 16:08:21,427 INFO org.apache.hadoop.mapred.JobTracker:
> Starting jobtracker with owner as bmacek
> 2012-07-18 16:08:21,428 WARN org.apache.hadoop.mapred.JobTracker: Error
> starting tracker: java.net.SocketException: Permission denied
>     at sun.nio.ch.Net.bind(Native Method)
>     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:225)
>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
>     at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
>     at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
>     at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
>     at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
>     at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)
>
>
> \________ NAMENODE_________________________________________________
> 2012-07-18 16:07:58,759 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
> STARTUP_MSG:
> /****************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = its-cs100.its.uni-kassel.de/141.51.205.10
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 1.0.2
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
> 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
> ****************************************************************/
> 2012-07-18 16:07:59,738 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
> loaded properties from hadoop-metrics2.properties
> 2012-07-18 16:07:59,790 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> MBean for source MetricsSystem,sub=Stats registered.
> 2012-07-18 16:07:59,807 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> Scheduled snapshot period at 10 second(s).
> 2012-07-18 16:07:59,807 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl:
> NameNode metrics system started
> 2012-07-18 16:08:00,382 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> MBean for source ugi registered.
> 2012-07-18 16:08:00,454 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> MBean for source jvm registered.
> 2012-07-18 16:08:00,456 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> MBean for source NameNode registered.
> 2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: VM type
>       = 64-bit
> 2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> memory = 17.77875 MB
> 2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: capacity
>      = 2^21 = 2097152 entries
> 2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
> recommended=2097152, actual=2097152
> 2012-07-18 16:08:00,812 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> fsOwner=bmacek
> 2012-07-18 16:08:00,812 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> supergroup=supergroup
> 2012-07-18 16:08:00,824 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isPermissionEnabled=true
> 2012-07-18 16:08:00,846 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> dfs.block.invalidate.limit=100
> 2012-07-18 16:08:00,846 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
> accessTokenLifetime=0 min(s)
> 2012-07-18 16:08:02,746 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Registered FSNamesystemStateMBean and NameNodeMXBean
> 2012-07-18 16:08:02,868 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
> Caching file names occuring more than 10 times
> 2012-07-18 16:08:02,932 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files = 1
> 2012-07-18 16:08:02,963 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Number of files under construction = 0
> 2012-07-18 16:08:02,966 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 112 loaded in 0 seconds.
> 2012-07-18 16:08:02,975 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Edits file /home/work/bmacek/hadoop/master/current/edits of size 4
> edits # 0 loaded in 0 seconds.
> 2012-07-18 16:08:02,977 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 112 saved in 0 seconds.
> 2012-07-18 16:08:03,191 INFO org.apache.hadoop.hdfs.server.common.Storage:
> Image file of size 112 saved in 0 seconds.
> 2012-07-18 16:08:03,334 INFO org.apache.hadoop.hdfs.server.namenode.NameCache:
> initialized with 0 entries 0 lookups
> 2012-07-18 16:08:03,334 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Finished loading FSImage in 2567 msecs
> 2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Total number of blocks = 0
> 2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Number of invalid blocks = 0
> 2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Number of under-replicated blocks = 0
> 2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Number of over-replicated blocks = 0
> 2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Safe mode termination scan for invalid, over- and under-replicated blocks
> completed in 61 msec
> 2012-07-18 16:08:03,402 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Leaving safe mode after 2 secs.
> 2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> Network topology has 0 racks and 0 datanodes
> 2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> UnderReplicatedBlocks has 0 blocks
> 2012-07-18 16:08:03,472 INFO org.apache.hadoop.util.HostsFileReader:
> Refreshing hosts (include/exclude) list
> 2012-07-18 16:08:03,488 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in
> 1 msec
> 2012-07-18 16:08:03,490 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in
> 1 msec processing time, 1 msec clock time, 1 cycles
> 2012-07-18 16:08:03,490 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks
> in 0 msec
> 2012-07-18 16:08:03,490 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks
> in 0 msec processing time, 0 msec clock time, 1 cycles
> 2012-07-18 16:08:03,495 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter:
> MBean for source FSNamesystemMetrics registered.
> 2012-07-18 16:08:03,553 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException:
> sleep interrupted
> 2012-07-18 16:08:03,555 INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager:
> Interrupted Monitor
> java.lang.InterruptedException: sleep interrupted
>     at java.lang.Thread.sleep(Native Method)
>     at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
>     at java.lang.Thread.run(Thread.java:619)
> 2012-07-18 16:08:03,556 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> Number of transactions: 0 Total time for transactions(ms): 0Number of
> transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> 2012-07-18 16:08:03,594 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode:
> java.net.SocketException: Permission denied
>     at sun.nio.ch.Net.bind(Native Method)
>     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:225)
>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
>     at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
>     at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
>
> 2012-07-18 16:08:03,627 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
> SHUTDOWN_MSG:
> /****************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at its-cs100.its.uni-kassel.de/141.51.205.10
> ****************************************************************/
>



-- 
http://hortonworks.com/download/
