hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Error in starting tasktracker
Date Wed, 18 May 2011 13:42:04 GMT
Hello Deepak,

Since your problems appear to be more related to Cloudera's
distribution including Apache Hadoop, I'm moving the mail discussion
to the cdh-user@cloudera.org list. I've bcc'd
common-user@hadoop.apache.org. Please continue the discussion on
cdh-user@cloudera.org for this.

The folder issue is most likely the reason the JT fails to start the
way it does (I believe I've seen the same happen for a TT once). Reading
your log, the first startup attempt binds the RPC port and then dies on
the chmod of /var/log/hadoop-0.20/history/done; the JT then retries its
initialization inside the same JVM, and it is that second attempt which
reports "Address already in use" - which is also why netstat shows
nothing once the process has exited. Ensure that the permissions on the
logs folder are set so that the mapred user can write to it (I think it
ought to end up as rwxrwxr-x, i.e. 775). Is this a tarball or a package
install?
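
If this is a package install (which the /var/log path suggests), a
minimal sketch of the fix would be to recreate the missing history
directory before restarting the JT - the mapred:hadoop owner and the 775
mode below are my assumptions about the usual CDH layout, so adjust them
to whatever user your daemons actually run as:

  sudo mkdir -p /var/log/hadoop-0.20/history/done
  sudo chown -R mapred:hadoop /var/log/hadoop-0.20/history
  sudo chmod -R 775 /var/log/hadoop-0.20/history

With that in place the JobHistory init (the mkdirs/setPermission calls
in your stack trace) should no longer fail.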

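For the port-in-use message itself: netstat without -p will not show
which process (if any) is still holding the port, so before the next JT
start it may be worth checking with something like

  sudo netstat -tlnp | grep 8021
  sudo lsof -i :8021

(use 8023 if you keep the changed port). If neither shows a listener,
the bind failure is most likely the in-process retry described above
rather than another daemon sitting on the port.
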
On Wed, May 18, 2011 at 3:14 PM, Subhramanian, Deepak
<deepak.subhramanian@newsint.co.uk> wrote:
> Even though the log says the
> folder file:/usr/lib/hadoop-0.20/logs/history/done is created, I cannot see
> the folder in the directory. Is that the root cause of the error? Any
> thoughts?
>
>
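A quick way to confirm that is to look at what is actually on disk; on
the packaged layout /usr/lib/hadoop-0.20/logs is usually a symlink into
/var/log/hadoop-0.20, so the "Creating DONE folder" message and the
chmod failure below most likely refer to the same location:

  ls -ld /usr/lib/hadoop-0.20/logs
  ls -ld /var/log/hadoop-0.20/history

That should show whether the symlink exists and which part of the path
is missing.
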
> 2011-05-18 09:18:23,177 INFO org.apache.hadoop.http.HttpServer: Jetty bound
> to port 50030
> 2011-05-18 09:18:23,177 INFO org.mortbay.log: jetty-6.1.26
> 2011-05-18 09:18:24,574 INFO org.mortbay.log: Started
> SelectChannelConnector@0.0.0.0:50030
> 2011-05-18 09:18:24,575 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=JobTracker, sessionId=
> 2011-05-18 09:18:24,576 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
> up at: 8021
> 2011-05-18 09:18:24,576 INFO org.apache.hadoop.mapred.JobTracker: JobTracker
> webserver: 50030
> 2011-05-18 09:18:25,970 INFO org.apache.hadoop.ipc.Client: Retrying connect
> to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
> 2011-05-18 09:18:27,013 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
> up the system directory
> 2011-05-18 09:18:27,355 INFO org.apache.hadoop.mapred.JobHistory: Creating
> DONE folder at file:/usr/lib/hadoop-0.20/logs/history/done
> 2011-05-18 09:18:27,450 WARN org.apache.hadoop.util.NativeCodeLoader: Unable
> to load native-hadoop library for your platform... using builtin-java
> classes where applicable
> 2011-05-18 09:18:27,680 WARN org.apache.hadoop.mapred.JobTracker: Error
> starting tracker: org.apache.hadoop.util.Shell$ExitCodeException: chmod:
> cannot access `/var/log/hadoop-0.20/history/done': No such file or directory
>
>        at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
>        at org.apache.hadoop.util.Shell.run(Shell.java:182)
>        at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
>        at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
>        at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
>
>
>
>
> On 17 May 2011 18:32, Subhramanian, Deepak <
> deepak.subhramanian@newsint.co.uk> wrote:
>
>> I reinstalled everything and am able to start everything other than the
>> jobtracker. The jobtracker still reports the port as in use, even though I
>> verified with netstat that nothing is listening on that port.
>>
>> ipedited:/usr/lib/hadoop-0.20/logs/history # /usr/java/jdk1.6.0_25/bin/jps
>> 7435 SecondaryNameNode
>> 7517 TaskTracker
>> 7361 NameNode
>> 7632 Jps
>> 1872 Bootstrap
>> 7221 DataNode
>>
>>
>> 2011-05-17 17:25:10,277 INFO org.apache.hadoop.mapred.JobTracker:
>> STARTUP_MSG:
>> /************************************************************
>> STARTUP_MSG: Starting JobTracker
>> STARTUP_MSG:   host = ipedited
>> STARTUP_MSG:   args = []
>> STARTUP_MSG:   version = 0.20.2-cdh3u0
>> STARTUP_MSG:   build =  -r 81256ad0f2e4ab2bd34b04f53d25a6c23686dd14;
>> compiled by 'hudson' on Fri Mar 25 20:19:33 PDT 2011
>> ************************************************************/
>> 2011-05-17 17:25:10,895 INFO
>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>> Updating the current master key for generating delegation tokens
>> 2011-05-17 17:25:10,897 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
>> configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
>> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>> 2011-05-17 17:25:10,899 INFO org.apache.hadoop.util.HostsFileReader:
>> Refreshing hosts (include/exclude) list
>> 2011-05-17 17:25:10,961 INFO
>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>> Starting expired delegation token remover thread, tokenRemoverScan
>> Interval=60 min(s)
>> 2011-05-17 17:25:10,961 INFO
>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>> Updating the current master key for generating delegation tokens
>> 2011-05-17 17:25:11,106 INFO org.apache.hadoop.mapred.JobTracker: Starting
>> jobtracker with owner as mapred
>> 2011-05-17 17:25:11,211 INFO org.apache.hadoop.ipc.Server: Starting Socket
>> Reader #1 for port 8021
>> 2011-05-17 17:25:11,211 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>> Initializing RPC Metrics with hostName=JobTracker, port=8021
>> 2011-05-17 17:25:11,215 INFO
>> org.apache.hadoop.ipc.metrics.RpcDetailedMetrics: Initializing RPC Metrics
>> with hostName=JobTracker, port=8021
>> 2011-05-17 17:25:11,275 INFO org.mortbay.log: Logging to
>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>> org.mortbay.log.Slf4jLog
>> 2011-05-17 17:25:11,407 INFO org.apache.hadoop.http.HttpServer: Added
>> global filtersafety
>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>> 2011-05-17 17:25:11,439 INFO org.apache.hadoop.http.HttpServer: Port
>> returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
>> Opening the listener on 50030
>> 2011-05-17 17:25:11,440 INFO org.apache.hadoop.http.HttpServer:
>> listener.getLocalPort() returned 50030
>> webServer.getConnectors()[0].getLocalPort() returned 50030
>> 2011-05-17 17:25:11,440 INFO org.apache.hadoop.http.HttpServer: Jetty bound
>> to port 50030
>> 2011-05-17 17:25:11,440 INFO org.mortbay.log: jetty-6.1.26
>> 2011-05-17 17:25:12,504 INFO org.mortbay.log: Started
>> SelectChannelConnector@0.0.0.0:50030
>> 2011-05-17 17:25:12,506 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>> Initializing JVM Metrics with processName=JobTracker, sessionId=
>> 2011-05-17 17:25:12,576 INFO org.apache.hadoop.mapred.JobTracker:
>> JobTracker up at: 8021
>> 2011-05-17 17:25:12,576 INFO org.apache.hadoop.mapred.JobTracker:
>> JobTracker webserver: 50030
>> 2011-05-17 17:25:13,945 INFO org.apache.hadoop.ipc.Client: Retrying connect
>> to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
>> 2011-05-17 17:25:15,181 INFO org.apache.hadoop.mapred.JobTracker: Cleaning
>> up the system directory
>> 2011-05-17 17:25:15,648 INFO org.apache.hadoop.mapred.JobHistory: Creating
>> DONE folder at file:/usr/lib/hadoop-0.20/logs/history/done
>> 2011-05-17 17:25:15,651 WARN org.apache.hadoop.util.NativeCodeLoader:
>> Unable to load native-hadoop library for your platform... using builtin-java
>> classes where applicable
>> 2011-05-17 17:25:15,812 WARN org.apache.hadoop.mapred.JobTracker: Error
>> starting tracker: org.apache.hadoop.util.Shell$ExitCodeException: chmod:
>> cannot access `/var/log/hadoop-0.20/history/done': No such file or directory
>>
>>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
>>         at org.apache.hadoop.util.Shell.run(Shell.java:182)
>>         at
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
>>         at
>> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:508)
>>         at
>> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
>>         at
>> org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:319)
>>         at
>> org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
>>         at
>> org.apache.hadoop.mapred.JobHistory.initDone(JobHistory.java:370)
>>         at org.apache.hadoop.mapred.JobTracker$4.run(JobTracker.java:2311)
>>         at org.apache.hadoop.mapred.JobTracker$4.run(JobTracker.java:2309)
>>         at java.security.AccessController.doPrivileged(Native Method)
>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>         at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>>         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2308)
>>         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2043)
>>         at
>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:294)
>>         at
>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:286)
>>         at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4767)
>>
>> 2011-05-17 17:25:16,824 INFO
>> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
>> set up for Hadoop, not re-installing.
>> 2011-05-17 17:25:16,825 INFO
>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>> Updating the current master key for generating delegation tokens
>> 2011-05-17 17:25:16,845 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
>> configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
>> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>> 2011-05-17 17:25:16,845 INFO org.apache.hadoop.util.HostsFileReader:
>> Refreshing hosts (include/exclude) list
>> 2011-05-17 17:25:16,857 INFO
>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>> Starting expired delegation token remover thread, tokenRemoverScan
>> Interval=60 min(s)
>> 2011-05-17 17:25:16,952 INFO
>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>> Updating the current master key for generating delegation tokens
>> 2011-05-17 17:25:16,942 INFO org.apache.hadoop.mapred.JobTracker: Starting
>> jobtracker with owner as mapred
>> 2011-05-17 17:25:16,954 FATAL org.apache.hadoop.mapred.JobTracker:
>> java.net.BindException: Problem binding to localhost/127.0.0.1:8021 :
>> Address already in use
>>         at org.apache.hadoop.ipc.Server.bind(Server.java:230)
>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:319)
>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1510)
>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:539)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:500)
>>         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2136)
>>         at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2043)
>>         at
>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:294)
>>         at
>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:286)
>>         at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4767)
>> Caused by: java.net.BindException: Address already in use
>>         at sun.nio.ch.Net.bind(Native Method)
>>         at
>> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
>>         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>>         at org.apache.hadoop.ipc.Server.bind(Server.java:228)
>>         ... 9 more
>>
>> 2011-05-17 17:25:17,064 INFO org.apache.hadoop.mapred.JobTracker:
>> SHUTDOWN_MSG:
>>
>>
>>
>> On 17 May 2011 17:19, Subhramanian, Deepak <
>> deepak.subhramanian@newsint.co.uk> wrote:
>>
>>> Hi Harsh,
>>>
>>> I tried changing the port to 8023 and tried again without luck: it now says
>>> port 8023 is in use, but when I ran netstat, 8023 was not listed.
>>>
>>> I also have Oozie configured on this system. While working with Oozie the
>>> permissions of some directories got changed, so I had to reinstall everything
>>> again. I installed Hadoop in standalone mode first; there it was using ports
>>> 9000 and 9001, and I was still not able to start the tasktracker and
>>> jobtracker. I then switched the installation to pseudo-distributed mode, and
>>> it is again giving the error.
>>>
>>>
>>> 2011-05-17 16:03:21,288 INFO org.apache.hadoop.mapred.TaskTracker:
>>> STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting TaskTracker
>>> STARTUP_MSG:   host = ip-edited
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.20.2-cdh3u0
>>> STARTUP_MSG:   build =  -r 81256ad0f2e4ab2bd34b04f53d25a6c23686dd14;
>>> compiled by 'hudson' on Fri Mar 25 20:19:33 PDT 2011
>>> ************************************************************/
>>> 2011-05-17 16:03:22,608 INFO org.mortbay.log: Logging to
>>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>> org.mortbay.log.Slf4jLog
>>>  2011-05-17 16:03:22,737 INFO org.apache.hadoop.http.HttpServer: Added
>>> global filtersafety
>>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>>> 2011-05-17 16:03:22,792 INFO org.apache.hadoop.mapred.TaskLogsTruncater:
>>> Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
>>> 2011-05-17 16:03:22,796 INFO org.apache.hadoop.mapred.TaskTracker:
>>> Starting tasktracker with owner as mapred
>>> 2011-05-17 16:03:22,857 WARN org.apache.hadoop.util.NativeCodeLoader:
>>> Unable to load native-hadoop library for your platform... using builtin-java
>>> classes where applicable
>>> 2011-05-17 16:03:22,873 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>>> Initializing JVM Metrics with processName=TaskTracker, sessionId=
>>> 2011-05-17 16:03:22,900 INFO org.apache.hadoop.ipc.Server: Starting Socket
>>> Reader #1 for port 55559
>>> 2011-05-17 16:03:22,901 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>>> Initializing RPC Metrics with hostName=TaskTracker, port=55559
>>> 2011-05-17 16:03:22,974 INFO
>>> org.apache.hadoop.ipc.metrics.RpcDetailedMetrics: Initializing RPC Metrics
>>> with hostName=TaskTracker, port=55559
>>> 2011-05-17 16:03:22,977 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> Responder: starting
>>> 2011-05-17 16:03:22,978 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> listener on 55559: starting
>>> 2011-05-17 16:03:22,978 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> handler 0 on 55559: starting
>>> 2011-05-17 16:03:22,978 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> handler 1 on 55559: starting
>>> 2011-05-17 16:03:22,979 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> handler 2 on 55559: starting
>>> 2011-05-17 16:03:22,979 INFO org.apache.hadoop.mapred.TaskTracker:
>>> TaskTracker up at: localhost/127.0.0.1:55559
>>> 2011-05-17 16:03:22,979 INFO org.apache.hadoop.mapred.TaskTracker:
>>> Starting tracker tracker_ipedited
>>> 2011-05-17 16:03:22,987 INFO org.apache.hadoop.ipc.Server: IPC Server
>>> handler 3 on 55559: starting
>>> 2011-05-17 16:03:23,089 ERROR org.apache.hadoop.mapred.TaskTracker: Can
>>> not start task tracker because java.io.IOException: Call to
>>> localhost/127.0.0.1:8023 failed on local exception: java.io.IOException:
>>> Connection reset by peer
>>>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
>>>         at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>>>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>>>         at org.apache.hadoop.mapred.$Proxy4.getProtocolVersion(Unknown
>>> Source)
>>>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
>>>         at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:342)
>>>
>>> 2011-05-17 16:03:15,914 INFO org.apache.hadoop.mapred.JobTracker:
>>> STARTUP_MSG:
>>> /************************************************************
>>> STARTUP_MSG: Starting JobTracker
>>> STARTUP_MSG:   host = ip-edited
>>> STARTUP_MSG:   args = []
>>> STARTUP_MSG:   version = 0.20.2-cdh3u0
>>> STARTUP_MSG:   build =  -r 81256ad0f2e4ab2bd34b04f53d25a6c23686dd14;
>>> compiled by 'hudson' on Fri Mar 25 20:19:33 PDT 2011
>>> ************************************************************/
>>> 2011-05-17 16:03:16,439 INFO
>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>> Updating the current master key for generating delegation tokens
>>> 2011-05-17 16:03:16,502 INFO org.apache.hadoop.mapred.JobTracker:
>>> Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
>>> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>>> 2011-05-17 16:03:16,504 INFO org.apache.hadoop.util.HostsFileReader:
>>> Refreshing hosts (include/exclude) list
>>> 2011-05-17 16:03:16,508 INFO
>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>> Starting expired delegation token remover thread, tokenRemoverScan
>>> Interval=60 min(s)
>>> 2011-05-17 16:03:16,508 INFO
>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>> Updating the current master key for generating delegation tokens
>>> 2011-05-17 16:03:16,589 INFO org.apache.hadoop.mapred.JobTracker: Starting
>>> jobtracker with owner as mapred
>>> 2011-05-17 16:03:16,620 INFO org.apache.hadoop.ipc.Server: Starting Socket
>>> Reader #1 for port 8023
>>> 2011-05-17 16:03:16,620 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
>>> Initializing RPC Metrics with hostName=JobTracker, port=8023
>>> 2011-05-17 16:03:16,624 INFO
>>> org.apache.hadoop.ipc.metrics.RpcDetailedMetrics: Initializing RPC Metrics
>>> with hostName=JobTracker, port=8023
>>> 2011-05-17 16:03:16,760 INFO org.mortbay.log: Logging to
>>> org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
>>> org.mortbay.log.Slf4jLog
>>> 2011-05-17 16:03:16,840 INFO org.apache.hadoop.http.HttpServer: Added
>>> global filtersafety
>>> (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
>>> 2011-05-17 16:03:16,926 INFO org.apache.hadoop.http.HttpServer: Port
>>> returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
>>> Opening the listener on 50030
>>> 2011-05-17 16:03:16,927 INFO org.apache.hadoop.http.HttpServer:
>>> listener.getLocalPort() returned 50030
>>> webServer.getConnectors()[0].getLocalPort() returned 50030
>>> 2011-05-17 16:03:16,927 INFO org.apache.hadoop.http.HttpServer: Jetty
>>> bound to port 50030
>>> 2011-05-17 16:03:16,927 INFO org.mortbay.log: jetty-6.1.26
>>> 2011-05-17 16:03:18,093 INFO org.mortbay.log: Started
>>> SelectChannelConnector@0.0.0.0:50030
>>> 2011-05-17 16:03:18,095 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
>>> Initializing JVM Metrics with processName=JobTracker, sessionId=
>>> 2011-05-17 16:03:18,096 INFO org.apache.hadoop.mapred.JobTracker:
>>> JobTracker up at: 8023
>>> 2011-05-17 16:03:18,096 INFO org.apache.hadoop.mapred.JobTracker:
>>> JobTracker webserver: 50030
>>> 2011-05-17 16:03:19,351 INFO org.apache.hadoop.ipc.Client: Retrying
>>> connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s).
>>> 2011-05-17 16:03:20,641 INFO org.apache.hadoop.mapred.JobTracker: Creating
>>> the system directory
>>> 2011-05-17 16:03:21,064 INFO org.apache.hadoop.mapred.JobHistory: Creating
>>> DONE folder at file:/usr/lib/hadoop-0.20/logs/history/done
>>> 2011-05-17 16:03:21,067 WARN org.apache.hadoop.util.NativeCodeLoader:
>>> Unable to load native-hadoop library for your platform... using builtin-java
>>> classes where applicable
>>> 2011-05-17 16:03:21,114 WARN org.apache.hadoop.mapred.JobTracker: Error
>>> starting tracker: org.apache.hadoop.util.Shell$ExitCodeException: chmod:
>>> cannot access `/var/log/hadoop-0.20/history/done': No such file or directory
>>>
>>>         at org.apache.hadoop.util.Shell.runCommand(Shell.java:255)
>>>         at org.apache.hadoop.util.Shell.run(Shell.java:182)
>>>         at
>>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
>>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
>>>         at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
>>>         at
>>> org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:508)
>>>         at
>>> org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
>>>         at
>>> org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:319)
>>>         at
>>> org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:189)
>>>         at
>>> org.apache.hadoop.mapred.JobHistory.initDone(JobHistory.java:370)
>>>         at org.apache.hadoop.mapred.JobTracker$4.run(JobTracker.java:2311)
>>>         at org.apache.hadoop.mapred.JobTracker$4.run(JobTracker.java:2309)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>         at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>>>         at
>>> org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2308)
>>>         at
>>> org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2043)
>>>         at
>>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:294)
>>>         at
>>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:286)
>>>         at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4767)
>>>
>>> 2011-05-17 16:03:22,142 INFO
>>> org.apache.hadoop.security.UserGroupInformation: JAAS Configuration already
>>> set up for Hadoop, not re-installing.
>>> 2011-05-17 16:03:22,142 INFO
>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>> Updating the current master key for generating delegation tokens
>>> 2011-05-17 16:03:22,160 INFO org.apache.hadoop.mapred.JobTracker:
>>> Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
>>> limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>>> 2011-05-17 16:03:22,160 INFO org.apache.hadoop.util.HostsFileReader:
>>> Refreshing hosts (include/exclude) list
>>> 2011-05-17 16:03:22,297 INFO
>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>> Starting expired delegation token remover thread, tokenRemoverScan
>>> Interval=60 min(s)
>>> 2011-05-17 16:03:22,442 INFO
>>> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
>>> Updating the current master key for generating delegation tokens
>>> 2011-05-17 16:03:22,538 INFO org.apache.hadoop.mapred.JobTracker: Starting
>>> jobtracker with owner as mapred
>>> 2011-05-17 16:03:22,539 FATAL org.apache.hadoop.mapred.JobTracker:
>>> java.net.BindException: Problem binding to localhost/127.0.0.1:8023 :
>>> Address already in use
>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:230)
>>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:319)
>>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1510)
>>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:539)
>>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:500)
>>>         at
>>> org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2136)
>>>         at
>>> org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2043)
>>>         at
>>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:294)
>>>         at
>>> org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:286)
>>>         at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4767)
>>> Caused by: java.net.BindException: Address already in use
>>>         at sun.nio.ch.Net.bind(Native Method)
>>>         at
>>> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
>>>         at
>>> sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
>>>         at org.apache.hadoop.ipc.Server.bind(Server.java:228)
>>>         ... 9 more
>>>
>>> 2011-05-17 16:03:22,548 INFO org.apache.hadoop.mapred.JobTracker:
>>> SHUTDOWN_MSG:
>>>  /************************************************************
>>> SHUTDOWN_MSG: Shutting down JobTracker at ip
>>>
>>> On 17 May 2011 16:57, Harsh J <harsh@cloudera.com> wrote:
>>>
>>>> Deepak,
>>>>
>>>> From the logs it appears as if some service on your machine already
>>>> uses the specified 8021 port. Try shutting down whatever might be
>>>> using that if possible, or switch your JT's port to something else.
>>>>
>>>> On Tue, May 17, 2011 at 9:19 PM, Subhramanian, Deepak
>>>> <deepak.subhramanian@newsint.co.uk> wrote:
>>>> > Hi ,
>>>> >
>>>> > I am using CDH3 in pseudo-distributed mode and getting the following error
>>>> > while starting the task tracker and job tracker. Any suggestions?
>>>>
>>>> --
>>>> Harsh J
>>>>
>>>
>>>
>>>
>>> --
>>>
>>
>
>
> --
> Deepak Subhramanian
> Data & Analytics
> News International, Digital Technology
> Email: deepak.subhramanian@newsint.co.uk
>



-- 
Harsh J
