hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2885) Restructure the hadoop.dfs package
Date Mon, 10 Mar 2008 20:32:47 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12577168#action_12577168 ]

Doug Cutting commented on HADOOP-2885:

Sanjay asks: "which of the two interfaces is hdfs's interface?"

For HDFS to date, the advertised public interface is fs.FileSystem.  We've talked about someday,
once we feel the wire protocol is stable, making it a public interface too, to permit Java-free
clients, but we're not there yet.  Making the wire protocol public would substantially constrain
its ability to evolve.
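To illustrate the distinction (a minimal sketch with hypothetical names, not Hadoop's actual classes): clients bind to the Java API, so the wire encoding behind it can change between releases without breaking them.

```java
// Toy sketch: the advertised surface is a Java interface; the wire format
// behind it is a private detail that each release is free to change.
interface ClientApi {                       // stands in for the public API
    String readFile(String path);
}

class WireV1Client implements ClientApi {   // one private wire encoding
    public String readFile(String path) { return decode("v1|" + path); }
    private String decode(String frame) { return frame.substring(frame.indexOf('|') + 1); }
}

class WireV2Client implements ClientApi {   // a later, incompatible encoding
    public String readFile(String path) { return decode("v2#" + path); }
    private String decode(String frame) { return frame.substring(frame.indexOf('#') + 1); }
}
```

Code written against ClientApi runs unchanged with either client; publishing the wire format itself would freeze exactly the part that is still evolving.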

(1) is my first choice.

Folks can easily repackage jars, so the number of jars should not be a big factor in this.
This issue is primarily about what's public and what's private, and HDFS's implementation
should be private.

The discrepancy with KFS and S3 seems reasonable: HDFS is explicitly designed to implement
Hadoop's FileSystem API, while KFS and S3 are not and need some adapter code.  That adapter
code is simple enough that we can include it in core.  We do not include their entire implementations
in core, and HDFS requires no adapter code, since it directly implements the FileSystem
API.  These differences account for the discrepancy.
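The adapter situation can be sketched as follows (toy types, not Hadoop's real classes): HDFS's client implements the FileSystem contract directly, while a foreign store exposes its own client API and needs a thin adapter to fit.

```java
// Toy sketch of direct implementation vs. adapter code.
abstract class ToyFileSystem {                    // stands in for fs.FileSystem
    abstract byte[] read(String path);
}

class ToyDfs extends ToyFileSystem {              // HDFS-style: implements directly
    byte[] read(String path) { return ("dfs:" + path).getBytes(); }
}

class ForeignStoreClient {                        // a KFS/S3-style native client
    byte[] get(String key) { return ("store:" + key).getBytes(); }
}

class ForeignStoreAdapter extends ToyFileSystem { // the thin adapter kept in core
    private final ForeignStoreClient client = new ForeignStoreClient();
    byte[] read(String path) { return client.get(path); }
}
```

Only the small adapter class belongs in core; the foreign store's full implementation, like HDFS's, can live elsewhere and stay private.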

So I don't see any of (1)'s cons as significant.

Eric says: "it would be terrific if we did not need to recompile a client to run against two
dot releases of hadoop".  That has more to do with the stability of the abstract FileSystem
API than with changes to HDFS's wire protocol.  We should already guarantee that.  Our backward-compatibility
goal is that, if an application compiles against release X without warnings, it should be
able to upgrade to X+1 without recompilation, but will have to recompile and fix new warnings
before upgrading to X+2.  However, we've not always met this goal...
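The deprecation mechanics behind that policy can be sketched like this (hypothetical names, not an actual Hadoop API): in release X+1 the old method still works but is marked deprecated, so callers compiled against X keep running, while recompiling surfaces a warning they must fix before X+2 removes the method.

```java
// Toy sketch of the X / X+1 / X+2 compatibility policy.
class FileInfo {
    /** Old accessor: still functional in X+1, but deprecated.  Binaries built
     *  against X keep working; recompiling produces a deprecation warning. */
    @Deprecated
    long getLen() { return length(); }

    /** Replacement introduced in X+1; callers must migrate before X+2,
     *  when getLen() would be removed. */
    long length() { return 42L; }
}
```

The guarantee thus spans exactly one release: no recompilation needed for X to X+1, but warnings must be cleaned up before X+2.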

> Restructure the hadoop.dfs package
> ----------------------------------
>                 Key: HADOOP-2885
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2885
>             Project: Hadoop Core
>          Issue Type: Sub-task
>          Components: dfs
>            Reporter: Sanjay Radia
>            Assignee: Sanjay Radia
>            Priority: Minor
>             Fix For: 0.17.0
>         Attachments: Prototype dfs package.png
> This Jira proposes restructuring the package hadoop.dfs.
> 1. Move all server-side and internal protocols (NN-DN etc.) to hadoop.dfs.server.*
> 2. Further breakdown of dfs.server.
> - dfs.server.namenode.*
> - dfs.server.datanode.*
> - dfs.server.balancer.*
> - dfs.server.common.* - stuff shared between the various servers
> - dfs.protocol.*  - internal protocol between DN, NN and Balancer etc.
> 3. Client interface:
> - hadoop.dfs.DistributedFileSystem.java
> - hadoop.dfs.ChecksumDistributedFileSystem.java
> - hadoop.dfs.HftpFileSystem.java
> - hadoop.dfs.protocol.* - the client side protocol

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
