hadoop-hdfs-issues mailing list archives

From "James Clampffer (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-11106) libhdfs++: Some refactoring to better organize files
Date Thu, 02 Mar 2017 23:52:45 GMT

     [ https://issues.apache.org/jira/browse/HDFS-11106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

James Clampffer updated HDFS-11106:
    Attachment: HDFS-11106.HDFS-8707.001.patch

New patch (001) ready to review.  Just mechanical changes to restructure the RPC code, mostly
pulling things out of rpc_engine.h and rpc_connection.cc.

> libhdfs++: Some refactoring to better organize files
> ----------------------------------------------------
>                 Key: HDFS-11106
>                 URL: https://issues.apache.org/jira/browse/HDFS-11106
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: hdfs-client
>            Reporter: James Clampffer
>            Assignee: James Clampffer
>         Attachments: HDFS-11106.HDFS-8707.000.patch, HDFS-11106.HDFS-8707.001.patch
> I propose splitting some of the files that have grown wild over time into files that
align with more specific functionality.  It's probably best to do this in a few pieces so
it doesn't invalidate anyone's in-progress patches.  Here's what I have in mind; I'm looking
for feedback on whether 1) it's not worth doing for some reason, or 2) it will break your
patch and you'd like this to wait.  I'd also like to consolidate related functions, mostly
protobuf helpers, that are spread around the library into dedicated files.
> Targets (can split each into a separate patch):
> * (done in patch 000, committed) separate the implementation of operations from the async
shim code in files like filesystem.cc (make a filesystem_shims.cc).  The shims are just
boilerplate code that only needs to change if the signature of its async counterpart changes.
> * (done in patch 000, committed) merge base64.cc into util.cc; base64.cc only contains
a single utility function. 
> * (done in patch 000, committed) rename hdfs_public_api.h/cc to hdfs_ioservice.h/cc.
 Originally all of the implementation declarations of the public API classes like FileSystemImpl
were going to live in here.  Currently only the hdfs::IoServiceImpl lives in there and the
other Impl classes have their own dedicated files. 
> * split hdfs.cc into hdfs.cc and hdfs_ext.cc.  There is already a separate hdfs_ext.h for
the C bindings to libhdfs++-specific extensions, so the implementations of those that currently
live in hdfs.cc would be moved out.  This just makes things a little cleaner.
> * split apart various RPC code based on classes.  Things like Request and RpcConnection
get defined in rpc_engine.h and then implemented in a handful of files, which gets confusing
to navigate: why, for example, would one expect Request's implementation to be in rpc_connection.cc?
> * Move all of the protobuf<->C++ struct conversion helpers and protobuf wire serialization/deserialization
functions into a single file.  This gives us fewer protobuf header includes and less accidental
duplication of these sorts of functions.
> Like any refactoring, some of this comes down to personal preference.  My hope is that
by breaking these into smaller patches/commits, relatively fast progress can be made on the
changes everyone agrees on, while anything people are concerned about can be worked out in
a way that satisfies everyone.

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
