hadoop-common-dev mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5010) Replace HFTP/HSFTP with plain HTTP/HTTPS
Date Tue, 13 Jan 2009 22:56:59 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12663523#action_12663523 ]

Chris Douglas commented on HADOOP-5010:
---------------------------------------

bq. HFTP is only documented on the distcp page, and HSFTP is not documented at all?

HSFTP is the same protocol (the same server) over an SSL connector; we can speak of them interchangeably.
The HFTP protocol is not documented outside of its FileSystem implementation, which should
be remedied, but the premise of this issue seems ill-defined.
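
To make the point concrete, here is a minimal sketch (not from the issue; the host names, ports, and path are placeholders, and 50070/50470 are only the usual defaults): the client code is identical for both schemes, only the URI changes.

{code:java}
import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HftpReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Plain connector: read-only access to HDFS over HTTP.
    FileSystem hftp = FileSystem.get(URI.create("hftp://namenode:50070/"), conf);

    // SSL connector: the same servlets and the same client code path,
    // only the scheme (and port) differ.
    FileSystem hsftp = FileSystem.get(URI.create("hsftp://namenode:50470/"), conf);

    // Stream a file exactly as with any other FileSystem implementation.
    InputStream in = hftp.open(new Path("/a/b/c"));
    try {
      // consume bytes...
    } finally {
      in.close();
    }
  }
}
{code}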

I don't know what "plain", "pure", and "standard" HTTP refer to in a filesystem context,
if not adherence to an RFC for which there are already tools. If not WebDAV, then either some
other standard must be chosen, or we define our own conventions for listing directories, writing/appending
to files, deleting resources, managing permissions, etc. Unless we also want to write a client
(which returns us to where we started), are there better options than picking a standard and
(partially?) implementing it?

bq. Here the focus seems to be on a servlet that implements the server-side of this for HDFS.
That seems reasonable. It would also be browsable, which is nice.

Counting the listPaths servlet, there are already two interfaces for browsing HDFS over HTTP,
aren't there? This seems to be asking for a way to manipulate HDFS without the Hadoop jar.
If reading is sufficient, then the HFTP servlets should suffice for hand-rolled tools, but
they need to be documented.
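
As a rough illustration of what such a hand-rolled tool might look like, a reader needs nothing beyond java.net, assuming the /listPaths and /data servlet paths that HftpFileSystem itself requests; the host, port, paths, and ugi parameter below are placeholders, not documented interfaces.

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HandRolledHftpReader {
  public static void main(String[] args) throws Exception {
    // Directory listing: the namenode's listPaths servlet returns XML.
    URL listing = new URL("http://namenode:50070/listPaths/a/b?ugi=hadoopuser,supergroup");
    HttpURLConnection conn = (HttpURLConnection) listing.openConnection();
    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    for (String line; (line = in.readLine()) != null; ) {
      System.out.println(line);
    }
    in.close();

    // File contents: the data servlet streams the file (redirecting to a
    // datanode under the covers); read its stream the same way as above.
    URL data = new URL("http://namenode:50070/data/a/b/c?ugi=hadoopuser,supergroup");
  }
}
{code}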

> Replace HFTP/HSFTP with plain HTTP/HTTPS
> ----------------------------------------
>
>                 Key: HADOOP-5010
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5010
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: contrib/hdfsproxy
>    Affects Versions: 0.18.0
>            Reporter: Marco Nicosia
>
> In HADOOP-1563, [~cutting] wrote:
> bq. The URI for this should be something like hftp://host:port/a/b/c, since, while HTTP will be used as the transport, this will not be a FileSystem for arbitrary HTTP urls.
> Recently, we've been talking about implementing an HDFS proxy (HADOOP-4575) which would be a secure way to make HFTP/HSFTP available. In so doing, we may even remove HFTP/HSFTP from being offered on the HDFS itself (that's another discussion).
> In the case of the HDFS proxy, does it make sense to do away with the artificial HFTP/HSFTP protocols, and instead simply offer standard HTTP and HTTPS? That would allow non-HDFS-specific clients, as well as using various standard HTTP infrastructure, such as load balancers, etc.
> NB, to the best of my knowledge, HFTP is only documented on the [distcp|http://hadoop.apache.org/core/docs/current/distcp.html] page, and HSFTP is not documented at all?

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

