hadoop-common-user mailing list archives

From maha <m...@umail.ucsb.edu>
Subject Re: UI doesn't work
Date Tue, 28 Dec 2010 22:44:27 GMT
Thanks James. You may think those are obvious things, but they're not to me! Here is the update:

  1- I cleared the browser cache.
  2- I used the IP address in masters, slaves, mapred-site.xml and core-site.xml, but the logs still identify it as (( speed.cs.ucsb.edu/128.111.43.50 )).
  3- The namenode page (( http://128.111.43.50:50030/ )) redirected to --> (( http://128.111.43.50:50070/dfshealth.jsp )), which shows the 404 error. Is that a correct redirection?
  4- The JobTracker log shows something new:

   2010-12-28 14:15:11,870 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG:   host = speed.cs.ucsb.edu/128.111.43.50
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20
-r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-12-28 14:15:11,983 INFO org.apache.hadoop.mapred.JobTracker: Scheduler configured with
(memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks, limitMaxMemForReduceTasks)
(-1, -1, -1, -1)
2010-12-28 14:15:12,033 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics
with hostName=JobTracker, port=9001
2010-12-28 14:15:12,096 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log)
via org.mortbay.log.Slf4jLog
2010-12-28 14:15:12,290 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort()
before open() is -1. Opening the listener on 50030
2010-12-28 14:15:12,291 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned
50030 webServer.getConnectors()[0].getLocalPort() returned 50030
2010-12-28 14:15:12,291 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
2010-12-28 14:15:12,291 INFO org.mortbay.log: jetty-6.1.14
2010-12-28 14:18:28,261 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
2010-12-28 14:18:28,265 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics
with processName=JobTracker, sessionId=
2010-12-28 14:18:28,266 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 9001
2010-12-28 14:18:28,266 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2010-12-28 14:18:28,513 INFO org.apache.hadoop.mapred.JobTracker: Cleaning up the system directory
2010-12-28 14:18:28,577 INFO org.apache.hadoop.mapred.CompletedJobStatusStore: Completed job
store is inactive
2010-12-28 14:18:28,667 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2010-12-28 14:18:28,668 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 9001: starting
2010-12-28 14:18:28,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001: starting
2010-12-28 14:18:28,668 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9001: starting
2010-12-28 14:18:28,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9001: starting
2010-12-28 14:18:28,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 9001: starting
2010-12-28 14:18:28,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 9001: starting
2010-12-28 14:18:28,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9001: starting
2010-12-28 14:18:28,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 6 on 9001: starting
2010-12-28 14:18:28,672 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9001: starting
2010-12-28 14:18:28,673 INFO org.apache.hadoop.ipc.Server: IPC Server handler 8 on 9001: starting
2010-12-28 14:18:28,673 INFO org.apache.hadoop.mapred.JobTracker: Starting RUNNING
2010-12-28 14:18:28,673 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9001: starting
2010-12-28 14:18:28,684 WARN org.apache.hadoop.mapred.JobTracker: Serious problem, cannot
find record of 'previous' heartbeat for 'tracker_pinky.cs.ucsb.edu:localhost/127.0.0.1:56875';
reinitializing the tasktracker
2010-12-28 14:18:28,684 WARN org.apache.hadoop.ipc.Server: IPC Server Responder, call getProtocolVersion(org.apache.hadoop.mapred.JobSubmissionProtocol,
20) from 128.111.43.50:59775: output error

<<<<<<@@@@@<<<<<   Might this be because I forced the namenode to leave
SAFEMODE ?  >>>>>>>@@@@@>>>>>>>>
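(For reference, safe-mode state can be checked and, if necessary, left from the command line. This is a sketch assuming a Hadoop 0.20 install directory; `dfsadmin -safemode` is the relevant subcommand.)

```shell
# Check whether the namenode is still in safe mode
bin/hadoop dfsadmin -safemode get

# Block until it leaves safe mode on its own (usually preferable)
bin/hadoop dfsadmin -safemode wait

# Force it out only once the datanodes have reported their blocks
bin/hadoop dfsadmin -safemode leave
```

Forcing safe mode off before enough blocks have been reported can cause exactly this kind of confused-state warning, so `get`/`wait` is the safer habit.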

2010-12-28 14:18:28,696 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 9001 caught:
java.nio.channels.ClosedChannelException
	at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:144)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:342)
	at org.apache.hadoop.ipc.Server.channelWrite(Server.java:1195)
	at org.apache.hadoop.ipc.Server.access$1900(Server.java:77)
	at org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:613)
	at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:677)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:981)


<<<<<<@@@@@<<<<<  Why is another node (pinky) being used in
the following line, although dfsadmin -report shows only one datanode, which is speed? >>>>>>>@@@@@>>>>>>>>
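(One way to see where pinky comes from: the jobtracker registers a tasktracker from every host listed in conf/slaves when bin/start-mapred.sh runs, independently of which datanodes HDFS reports. A quick comparison, assuming the standard 0.20 layout:)

```shell
# TaskTrackers come from conf/slaves (read by bin/start-mapred.sh);
# datanodes are what dfsadmin reports. The two lists can differ.
cat conf/slaves

# Compare with the datanodes the namenode actually knows about:
bin/hadoop dfsadmin -report | grep '^Name:'
```

If pinky appears in conf/slaves but not in the dfsadmin report, it is running a tasktracker without a (live) datanode, which would match the log lines above.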

2010-12-28 14:18:28,744 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/pinky.cs.ucsb.edu
2010-12-28 14:18:28,872 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/speed.cs.ucsb.edu
2010-12-28 14:18:29,385 INFO org.apache.hadoop.mapred.JobTracker: Initializing job_201012281415_0001
2010-12-28 14:18:29,386 INFO org.apache.hadoop.mapred.JobInProgress: Initializing job_201012281415_0001
2010-12-28 14:18:29,585 INFO org.apache.hadoop.mapred.JobInProgress: Input size for job job_201012281415_0001
= 459393. Number of splits = 21
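(To rule out the browser and any redirect issues when checking the UIs remotely, a raw TCP probe of the web ports is enough. A minimal bash sketch; the host and ports are the ones from this thread, and the `/dev/tcp` pseudo-device requires bash:)

```shell
#!/usr/bin/env bash
# Report whether anything is listening on host:port, using bash's
# built-in /dev/tcp redirection (no curl or telnet needed).
check_port() {
  local host=$1 port=$2
  # The redirect fails (and the subshell exits nonzero) if nothing
  # is listening on host:port.
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Example (run from the remote machine; "open" means the daemon is
# at least listening, so any 404 is a web-app problem, not a network one):
#   check_port 128.111.43.50 50070   # namenode web UI
#   check_port 128.111.43.50 50030   # jobtracker web UI
```

If both ports report "open" from the remote machine but the browser still shows a 404, the problem is in the servlet side (e.g. a missing or mismatched webapps directory), not connectivity.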



 I'd like to add:

  I'm connecting remotely to the cluster, which uses NFS.

    Maha







On Dec 28, 2010, at 1:37 PM, James Seigel wrote:

> Nope, just on my iPhone I thought you'd tried a different port ( bad memory :) )
> 
> Try accessing it with an ip address you get from doing an ipconfig on
> the machine.
> 
> Then look at the logs and see if there are any errors or indications
> that it is being hit properly.
> 
> Does your browser follow redirects properly?  As well try clearing the
> cache on your browser.
> 
> Sorry for checking out the obvious stuff but sometimes it is :).
> 
> Cheers
> James
> 
> Sent from my mobile. Please excuse the typos.
> 
> On 2010-12-28, at 2:30 PM, maha <maha@umail.ucsb.edu> wrote:
> 
>> Hi James,
>> 
>>  I'm accessing  ---> http://speed.cs.ucsb.edu:50030/  for the job tracker and port 50070 for the name node, just like the Hadoop quick start.
>> 
>> Did you mean to change the port in my mapred-site.xml file ?
>> 
>> <property>
>>   <name>mapred.job.tracker</name>
>>   <value>speed.cs.ucsb.edu:9001</value>
>> </property>
>> 
>> 
>> Maha
>> 
>> 
>> On Dec 28, 2010, at 1:01 PM, James Seigel wrote:
>> 
>>> For job tracker go to port 50030 see if that helps
>>> 
>>> James
>>> 
>>> Sent from my mobile. Please excuse the typos.
>>> 
>>> On 2010-12-28, at 1:36 PM, maha <maha@umail.ucsb.edu> wrote:
>>> 
>>>> James said:
>>>> 
>>>> Is the job tracker running on that machine?    YES
>>>> Is there a firewall in the way?  I don't think so, because it used to work for me. How can I check that?
>>>> 
>>>> ========================================================================================================================================
>>>> Harsh said:
>>>> 
>>>> Did you do any ant operation on your release copy of Hadoop prior to
>>>> starting it, by the way?
>>>> 
>>>> NO, I get the following error:
>>>> 
>>>> BUILD FAILED
>>>> /cs/sandbox/student/maha/hadoop-0.20.2/build.xml:316: Unable to find a javac compiler;
>>>> com.sun.tools.javac.Main is not on the classpath.
>>>> Perhaps JAVA_HOME does not point to the JDK.
>>>> It is currently set to "/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0/jre"
>>>> 
>>>> I had to change JAVA_HOME to point to --> /usr/lib/jvm/jre-1.6.0-openjdk because I used to get an error when trying to run a jar file. The error was:
>>>> 
>>>>> bin/hadoop: line 258: /etc/alternatives/java/bin/java: Not a directory
>>>>> bin/hadoop: line 289: /etc/alternatives/java/bin/java: Not a directory
>>>>> bin/hadoop: line 289: exec: /etc/alternatives/java/bin/java: cannot
>>>>> execute: Not a directory
>>>> 
>>>> 
>>>> ========================================================================================================================================
>>>> Adarsh said:
>>>> 
>>>> logs of namenode + jobtracker
>>>> 
>>>> <<<<< namenode log >>>>
>>>> 
>>>> [maha@speed logs]$ cat hadoop-maha-namenode-speed.cs.ucsb.edu.log
>>>> 2010-12-28 12:23:25,006 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
STARTUP_MSG:
>>>> /************************************************************
>>>> STARTUP_MSG: Starting NameNode
>>>> STARTUP_MSG:   host = speed.cs.ucsb.edu/128.111.43.50
>>>> STARTUP_MSG:   args = []
>>>> STARTUP_MSG:   version = 0.20.2
>>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20
-r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>>> ************************************************************/
>>>> 2010-12-28 12:23:25,126 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing
RPC Metrics with hostName=NameNode, port=9000
>>>> 2010-12-28 12:23:25,130 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
Namenode up at: speed.cs.ucsb.edu/128.111.43.50:9000
>>>> 2010-12-28 12:23:25,133 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing
JVM Metrics with processName=NameNode, sessionId=null
>>>> 2010-12-28 12:23:25,134 INFO org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics:
Initializing NameNodeMeterics using context object:org.apache.hadoop.metrics.spi.NullContext
>>>> 2010-12-28 12:23:25,258 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
fsOwner=maha,grad
>>>> 2010-12-28 12:23:25,258 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
supergroup=supergroup
>>>> 2010-12-28 12:23:25,258 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
>>>> 2010-12-28 12:23:25,269 INFO org.apache.hadoop.hdfs.server.namenode.metrics.FSNamesystemMetrics:
Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
>>>> 2010-12-28 12:23:25,270 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
Registered FSNamesystemStatusMBean
>>>> 2010-12-28 12:23:25,316 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files = 6
>>>> 2010-12-28 12:23:25,323 INFO org.apache.hadoop.hdfs.server.common.Storage:
Number of files under construction = 0
>>>> 2010-12-28 12:23:25,323 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 551 loaded in 0 seconds.
>>>> 2010-12-28 12:23:25,323 INFO org.apache.hadoop.hdfs.server.common.Storage:
Edits file /tmp/hadoop-maha/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
>>>> 2010-12-28 12:23:25,358 INFO org.apache.hadoop.hdfs.server.common.Storage:
Image file of size 551 saved in 0 seconds.
>>>> 2010-12-28 12:23:25,711 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
Finished loading FSImage in 542 msecs
>>>> 2010-12-28 12:23:25,715 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe
mode ON.
>>>> The ratio of reported blocks 0.0000 has not reached the threshold 0.9990.
Safe mode will be turned off automatically.
>>>> 2010-12-28 12:23:25,834 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log)
via org.mortbay.log.Slf4jLog
>>>> 2010-12-28 12:23:25,901 INFO org.apache.hadoop.http.HttpServer: Port returned
by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on
50070
>>>> 2010-12-28 12:23:25,902 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort()
returned 50070 webServer.getConnectors()[0].getLocalPort() returned 50070
>>>> 2010-12-28 12:23:25,902 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50070
>>>> 2010-12-28 12:23:25,902 INFO org.mortbay.log: jetty-6.1.14
>>>> 2010-12-28 12:23:26,360 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50070
>>>> 2010-12-28 12:23:26,360 INFO org.apache.hadoop.hdfs.server.namenode.NameNode:
Web-server up at: 0.0.0.0:50070
>>>> 2010-12-28 12:23:26,360 INFO org.apache.hadoop.ipc.Server: IPC Server Responder:
starting
>>>> 2010-12-28 12:23:26,362 INFO org.apache.hadoop.ipc.Server: IPC Server listener
on 9000: starting
>>>> 2010-12-28 12:23:26,362 INFO org.apache.hadoop.ipc.Server: IPC Server handler
0 on 9000: starting
>>>> 2010-12-28 12:23:26,366 INFO org.apache.hadoop.ipc.Server: IPC Server handler
1 on 9000: starting
>>>> 2010-12-28 12:23:26,369 INFO org.apache.hadoop.ipc.Server: IPC Server handler
2 on 9000: starting
>>>> 2010-12-28 12:23:26,370 INFO org.apache.hadoop.ipc.Server: IPC Server handler
3 on 9000: starting
>>>> 2010-12-28 12:23:26,370 INFO org.apache.hadoop.ipc.Server: IPC Server handler
5 on 9000: starting
>>>> 2010-12-28 12:23:26,370 INFO org.apache.hadoop.ipc.Server: IPC Server handler
6 on 9000: starting
>>>> 2010-12-28 12:23:26,370 INFO org.apache.hadoop.ipc.Server: IPC Server handler
7 on 9000: starting
>>>> 2010-12-28 12:23:26,370 INFO org.apache.hadoop.ipc.Server: IPC Server handler
8 on 9000: starting
>>>> 2010-12-28 12:23:26,371 INFO org.apache.hadoop.ipc.Server: IPC Server handler
4 on 9000: starting
>>>> 2010-12-28 12:23:26,372 INFO org.apache.hadoop.ipc.Server: IPC Server handler
9 on 9000: starting
>>>> 
>>>> <<<<< JobTracker log >>>>
>>>> 
>>>> [maha@speed logs]$ cat hadoop-maha-jobtracker-speed.cs.ucsb.edu.log
>>>> 2010-12-28 12:23:29,321 INFO org.apache.hadoop.mapred.JobTracker: STARTUP_MSG:
>>>> /************************************************************
>>>> STARTUP_MSG: Starting JobTracker
>>>> STARTUP_MSG:   host = speed.cs.ucsb.edu/128.111.43.50
>>>> STARTUP_MSG:   args = []
>>>> STARTUP_MSG:   version = 0.20.2
>>>> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20
-r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
>>>> ************************************************************/
>>>> 2010-12-28 12:23:29,443 INFO org.apache.hadoop.mapred.JobTracker: Scheduler
configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
limitMaxMemForReduceTasks) (-1, -1, -1, -1)
>>>> 2010-12-28 12:23:29,487 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing
RPC Metrics with hostName=JobTracker, port=9001
>>>> 2010-12-28 12:23:29,559 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log)
via org.mortbay.log.Slf4jLog
>>>> 2010-12-28 12:23:29,745 INFO org.apache.hadoop.http.HttpServer: Port returned
by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on
50030
>>>> 2010-12-28 12:23:29,746 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort()
returned 50030 webServer.getConnectors()[0].getLocalPort() returned 50030
>>>> 2010-12-28 12:23:29,746 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50030
>>>> 2010-12-28 12:23:29,746 INFO org.mortbay.log: jetty-6.1.14
>>>> 
>>>> 
>>>> 
>>>>    Thanks guys for your help,
>>>>         Maha
>>>> 
>>>> 
>> 

