hadoop-common-dev mailing list archives

From "Dawid Weiss (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (HADOOP-2534) File manager frontend for Hadoop DFS (with proof of concept).
Date Thu, 06 Mar 2008 09:22:58 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-2534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dawid Weiss resolved HADOOP-2534.
---------------------------------

    Resolution: Won't Fix

As pointed out on the mailing list, there are better alternatives for accessing DFS (such as
mounting it via WebDAV).

> File manager frontend for Hadoop DFS (with proof of concept).
> -------------------------------------------------------------
>
>                 Key: HADOOP-2534
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2534
>             Project: Hadoop Core
>          Issue Type: Wish
>          Components: dfs, io
>            Reporter: Dawid Weiss
>         Attachments: upload.png
>
>
> I had trouble classifying this, but since it's neither an improvement nor a task, I thought
> I'd put it under "wishes". I like the command line, but using hadoop fs -X ... leaves my
> fingers hurting after a while. I thought it would be great to have a file-manager-like front
> end to DFS. So I modified muCommander (Java-based) a little bit and voila -- it works _great_,
> especially for browsing, uploading and deleting stuff.
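
For context, this is roughly what such a front end boils down to underneath: a minimal sketch
against the org.apache.hadoop.fs.FileSystem API, with made-up local and DFS paths used purely
for illustration.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DfsBrowseSketch {
        public static void main(String[] args) throws Exception {
            // Picks up the default DFS address from the Hadoop configuration on the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Browse: list the contents of the DFS root.
            for (FileStatus status : fs.listStatus(new Path("/"))) {
                System.out.println(status.getPath() + "\t" + status.getLen());
            }

            // Upload: copy a local file into DFS (illustrative paths).
            fs.copyFromLocalFile(new Path("/tmp/report.txt"), new Path("/user/demo/report.txt"));

            // Delete: remove the uploaded file again (false = not recursive).
            fs.delete(new Path("/user/demo/report.txt"), false);

            fs.close();
        }
    }
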
> I uploaded the binary and WebStart-launchable version here:
> http://project.carrot2.org/varia/mucommander-hdfs
> Look at the screenshots; they will give you an idea of how it works. I had some thoughts
> about publishing the source code -- muCommander is GPLed... so I guess it can't live in
> Hadoop's repository anyway, no matter what we do. If you need the sources, let me know.
> Finally, a few thoughts stemming from the coding session:
>     * The DF utility does not work under Windows. This has been addressed recently on the
>       mailing list (HADOOP-33), so I guess it's not a big issue.
>     * I support the claim that it would be sensible to introduce a client interface to DFS
>       and provide two implementations -- one with intelligent spooling on local disk (using
>       DF) and one with a simpler form of spooling (in /tmp, for example); a rough sketch of
>       what such an interface could look like follows after this list. Note the funky shape
>       of the upload chart above, which results from the delay between spooling and chunk
>       upload. I don't know whether this can be worked around in any way.
>     * An incompatible protocol version causes exceptions. Since the protocol changes quite
>       frequently (isn't it at version 20 at the moment?), a way of choosing the protocol
>       version used to connect to Hadoop, and of keeping the most recent versions around,
>       would be very useful for external clients; a toy sketch of that idea also follows
>       below.
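
A rough, purely hypothetical sketch of the client interface meant in the second point above;
none of these names exist in Hadoop -- they only illustrate the two-implementation idea
(free-space-aware spooling versus plain temp-dir spooling).

    import java.io.File;
    import java.io.IOException;

    // Hypothetical interface -- not part of Hadoop. Callers ask for a local spool
    // file, write data into it, then hand it back for (chunked) upload to DFS.
    interface DfsUploadSpooler {
        File allocateSpoolFile(long expectedSize) throws IOException;
        void uploadAndRelease(File spoolFile, String dfsDestination) throws IOException;
    }

    // Variant 1: chooses a spool directory by checking free space first
    // (e.g. by shelling out to DF, which is what makes this Windows-unfriendly today).
    class DiskAwareSpooler implements DfsUploadSpooler {
        public File allocateSpoolFile(long expectedSize) throws IOException {
            // ... pick a local volume with enough free space, create a temp file there
            throw new UnsupportedOperationException("sketch only");
        }
        public void uploadAndRelease(File spoolFile, String dfsDestination) throws IOException {
            throw new UnsupportedOperationException("sketch only");
        }
    }

    // Variant 2: no free-space intelligence, always spools under the system temp dir.
    class TmpDirSpooler implements DfsUploadSpooler {
        public File allocateSpoolFile(long expectedSize) throws IOException {
            return File.createTempFile("dfs-spool-", ".tmp");
        }
        public void uploadAndRelease(File spoolFile, String dfsDestination) throws IOException {
            // ... stream the spooled bytes to DFS here, then clean up locally
            spoolFile.delete();
        }
    }
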
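
Equally hypothetical, for the last point: a client could keep one adapter per known protocol
version and pick whichever matches what the server reports. Nothing below is real Hadoop API;
it only sketches the "keep the most recent versions around" idea.

    import java.io.IOException;
    import java.util.Map;
    import java.util.TreeMap;

    // Hypothetical: one adapter per supported wire-protocol version, chosen at connect time.
    interface DfsProtocolAdapter {
        long protocolVersion();
        void connect(String host, int port) throws IOException;
    }

    class DfsProtocolRegistry {
        private final Map<Long, DfsProtocolAdapter> adapters = new TreeMap<Long, DfsProtocolAdapter>();

        void register(DfsProtocolAdapter adapter) {
            adapters.put(adapter.protocolVersion(), adapter);
        }

        // Pick the adapter matching the version the server reports, or fail with a clear message.
        DfsProtocolAdapter forServerVersion(long serverVersion) throws IOException {
            DfsProtocolAdapter adapter = adapters.get(serverVersion);
            if (adapter == null) {
                throw new IOException("No client adapter for protocol version " + serverVersion);
            }
            return adapter;
        }
    }
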

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

