hadoop-common-dev mailing list archives

From "Marco Nicosia (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-5010) Replace HFTP/HSFTP with plain HTTP/HTTPS
Date Wed, 14 Jan 2009 02:56:59 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marco Nicosia updated HADOOP-5010:

       Component/s:     (was: contrib/hdfsproxy)
     Fix Version/s: 0.20.0

bq. Counting the listPaths servlet, there are already two interfaces for browsing HDFS over
HTTP, aren't there? This seems to be asking for a way to manipulate HDFS without the Hadoop
jar. If reading is sufficient, then the HFTP servlets should suffice for hand-rolled tools

Reading is sufficient (per my original request). I didn't know there was a combination
of HTTP requests that would allow an HTTP client to get directory listings and file data.

Do listPaths and the .../data/... servlet respect the dfs.web.ugi directive? (Then again,
this is what the HDFS proxy was invented for, so permissions should be a non-issue.) When Hadoop
becomes Kerberized, these servlets will need to require credentials over HTTP.
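For concreteness, a hand-rolled HTTP-only client would hit URLs along these lines. This is a hedged sketch: the /listPaths and /data servlet paths and the ugi query parameter follow the HFTP convention, but the exact host/port values and parameter handling here are assumptions, not documented API.

```python
# Sketch of the servlet URLs an HTTP-only HDFS reader might construct.
# Assumptions: /listPaths returns an XML directory listing, /data streams
# raw file bytes, and identity is passed via a ugi= query parameter.
from urllib.parse import urlencode

def listing_url(namenode, path, ugi=None):
    """URL for an XML directory listing of `path` (assumed /listPaths servlet)."""
    query = "?" + urlencode({"ugi": ugi}) if ugi else ""
    return "http://%s/listPaths%s%s" % (namenode, path, query)

def data_url(host, path, ugi=None):
    """URL for the raw bytes of the file at `path` (assumed /data servlet)."""
    query = "?" + urlencode({"ugi": ugi}) if ugi else ""
    return "http://%s/data%s%s" % (host, path, query)
```

Any generic HTTP client (curl, wget, a browser) could then read these URLs, which is the point of the original request.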

bq. but they need to be documented.

Yes. Switching this bug to a documentation task.

> Replace HFTP/HSFTP with plain HTTP/HTTPS
> ----------------------------------------
>                 Key: HADOOP-5010
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5010
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: documentation
>    Affects Versions: 0.18.0
>            Reporter: Marco Nicosia
>             Fix For: 0.20.0
> In HADOOP-1563, [~cutting] wrote:
> bq. The URI for this should be something like hftp://host:port/a/b/c, since, while HTTP
will be used as the transport, this will not be a FileSystem for arbitrary HTTP urls.
> Recently, we've been talking about implementing an HDFS proxy (HADOOP-4575) which would
be a secure way to make HFTP/HSFTP available. In so doing, we may even remove HFTP/HSFTP from
being offered on the HDFS itself (that's another discussion).
> In the case of the HDFS proxy, does it make sense to do away with the artificial HFTP/HSFTP
protocols and instead simply offer standard HTTP and HTTPS? That would allow non-HDFS-specific
clients, as well as the use of standard HTTP infrastructure such as load balancers.
> NB, to the best of my knowledge, HFTP is only documented on the [distcp|http://hadoop.apache.org/core/docs/current/distcp.html]
page, and HSFTP is not documented at all?

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
