hadoop-common-dev mailing list archives

From "Johan Oskarson (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-347) Implement HDFS content browsing interface
Date Fri, 21 Jul 2006 10:11:14 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-347?page=comments#action_12422603 ] 
            
Johan Oskarson commented on HADOOP-347:
---------------------------------------

This patch is causing problems for me: if a computer has a second dfs data dir in the config,
the DataNode doesn't start properly because of:

Exception in thread "main" java.io.IOException: Problem starting http server
        at org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:182)
        at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:170)
        at org.apache.hadoop.dfs.DataNode.makeInstanceForDir(DataNode.java:1045)
        at org.apache.hadoop.dfs.DataNode.run(DataNode.java:999)
        at org.apache.hadoop.dfs.DataNode.runAndWait(DataNode.java:1015)
        at org.apache.hadoop.dfs.DataNode.main(DataNode.java:1066)
Caused by: org.mortbay.util.MultiException[java.net.BindException: Address already in use]
        at org.mortbay.http.HttpServer.doStart(HttpServer.java:731)
        at org.mortbay.util.Container.start(Container.java:72)
        at org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:159)
        ... 5 more

I noticed there is code in the start method to pick a new port if the MultiException is thrown;
however, it doesn't seem to work.
For now I've just moved this.infoServer.start(); in DataNode.java up one line so that the exception
is caught and ignored, since I can't use the dfs web interface anyway (the dfs nodes are all behind
a gateway).
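
The retry I would expect from the start method is just a loop that bumps the port number when the
bind fails. A minimal sketch of that pattern (plain java.net only, not the actual Jetty-based
StatusHttpServer code; PortRetry and bindWithRetry are names I made up):

    import java.io.IOException;
    import java.net.BindException;
    import java.net.ServerSocket;

    public class PortRetry {
        // Try successive ports starting at basePort until one binds,
        // giving up after maxTries attempts.
        static ServerSocket bindWithRetry(int basePort, int maxTries) throws IOException {
            IOException last = new BindException("no ports tried");
            for (int i = 0; i < maxTries; i++) {
                try {
                    return new ServerSocket(basePort + i);
                } catch (BindException e) {
                    last = e; // port in use, move on to the next one
                }
            }
            throw last;
        }
    }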

> Implement HDFS content browsing interface
> -----------------------------------------
>
>                 Key: HADOOP-347
>                 URL: http://issues.apache.org/jira/browse/HADOOP-347
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: dfs
>    Affects Versions: 0.1.0, 0.2.0, 0.1.1, 0.3.0, 0.4.0, 0.2.1, 0.3.1, 0.3.2
>            Reporter: Devaraj Das
>         Assigned To: Devaraj Das
>             Fix For: 0.5.0
>
>         Attachments: content_browsing.patch, fsnamesystem.patch
>
>
> Implement HDFS content browsing interface over HTTP. Clients would connect to the NameNode,
> which would send a redirect to a random DataNode. The DataNode, via the dfs client, would proxy
> to the NameNode for metadata browsing and to other DataNodes for content. One can also view the
> local blocks on any DataNode. Head and Tail will be provided as shorthands for viewing the first
> block and the last block of a file.
> For full file viewing, the data displayed per HTTP request will be a block, with PREV/NEXT
> links. The block size for viewing can be a configurable parameter (the user sets it via the
> web browser) to the HTTP server (e.g., 256 KB can be the default block size for viewing files).
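
To make the redirect step in the description above concrete, here is a rough sketch of what a
NameNode-side handler could look like (the servlet class, the /browse path and the dir parameter
are names I'm making up for illustration; they are not taken from content_browsing.patch):

    import java.io.IOException;
    import java.util.List;
    import java.util.Random;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative only: redirect a browse request arriving at the NameNode
    // to a randomly chosen DataNode, which then proxies metadata and content.
    public class BrowseRedirectServlet extends HttpServlet {
        private final List<String> dataNodeAddrs; // "host:infoPort" entries, supplied by the caller
        private final Random rand = new Random();

        public BrowseRedirectServlet(List<String> dataNodeAddrs) {
            this.dataNodeAddrs = dataNodeAddrs;
        }

        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // Pick a random DataNode and hand over the path being browsed.
            String target = dataNodeAddrs.get(rand.nextInt(dataNodeAddrs.size()));
            resp.sendRedirect("http://" + target + "/browse?dir=" + req.getParameter("dir"));
        }
    }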

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
