hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-2916) Refactor src structure, but leave package structure alone
Date Wed, 14 May 2008 22:11:55 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-2916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-2916:
---------------------------------

    Attachment: convert-patch.sed

> This patch will break every src/java patch file, no?
Any such re-org will break most patches, irrespective of whether it is a small or big reorg.
We have to deal with it. 

The attached convert-patch.sed script converts an old patch to a new one (usage: {sed -f convert-patch.sed
< old.patch > new.patch}).
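The script itself is not reproduced in this message, but for a straightforward directory-level move the rules would look something like the sketch below. This is a hypothetical illustration of the approach, not the attached file; it only rewrites the old src/java path prefixes that appear in patch headers and hunks.

    # hypothetical sketch of a convert-patch.sed rule set (the attached file may differ).
    # Rewrite old src/java paths to the new per-component layout.
    # Order matters: the mapred and dfs rules must run before the core catch-all.
    s|src/java/org/apache/hadoop/mapred/|src/mapred/org/apache/hadoop/mapred/|g
    s|src/java/org/apache/hadoop/dfs/|src/hdfs/org/apache/hadoop/dfs/|g
    s|src/java/org/apache/hadoop/|src/core/org/apache/hadoop/|g

A path-only rule set like this cannot tell client-side dfs classes (which move to src/core) from server-side ones (which move to src/hdfs), so the real script presumably handles dfs files individually; that is the kind of extra complexity the next paragraph refers to.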

With the larger re-org, where individual files move and probably get renamed, the conversion
script would need to be a lot more complex.

I am going to test the conversion a little bit. It should cover 99% of the patches.


> Refactor src structure, but leave package structure alone
> ---------------------------------------------------------
>
>                 Key: HADOOP-2916
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2916
>             Project: Hadoop Core
>          Issue Type: Sub-task
>          Components: dfs
>            Reporter: Sanjay Radia
>            Assignee: Raghu Angadi
>         Attachments: convert-patch.sed, HADOOP-2916.patch, svn-commands.sh, svn-commands.sh
>
>
> This Jira proposes that the src structure be split as below.
> The package structure remains the same for this Jira. (Package renaming is part of other JIRAs such as HADOOP-2885.)
> The idea is that the src will be split BEFORE the package restructuring.
> The new proposed src structure is:
> src/test - unchanged
> src/java - will no longer exist; its content will be moved to one of core, hdfs, or mapred
> src/core - this will contain the core classes that hadoop applications need to link against.
>   It will contain client side libraries of all fs file systems: local, hdfs, kfs, etc.
>   jar name hadoop_core.jar
>    src/core/org.apache.hadoop.{conf, fs, filecache, io, ipc, log, metrics, net, record, security, tools, util}
>    src/core/org.apache.hadoop.dfs - this will contain only the client side parts of dfs.
>                    HADOOP-2885 will rename package dfs to package fs.hdfs
> src/hdfs/org.apache.hadoop.dfs - this will contain only the server side of hdfs.
>       HADOOP-2885 will rename package dfs to package fs.hdfs later; a compatible dfs.DistributedFileSystem will be left for compatibility.
>    jar name hadoop_hdfs.jar - this jar can be used to launch NNs and DNs etc.
> src/mapred/org.apache.hadoop.mapred.*
>    Initially one jar: hadoop_mapred.jar
>    Later this may be split into client-side and server-side jars.
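The restructuring quoted above maps naturally onto a handful of history-preserving svn moves. Below is a hypothetical sketch of the kind of commands the attached svn-commands.sh might contain; the actual attachment may differ, and the dfs split between client and server classes would need per-file moves.

    # hypothetical sketch only; the attached svn-commands.sh may differ.
    # Create the new top-level trees, then move whole packages out of src/java
    # with svn move so that file history is preserved.
    svn mkdir --parents src/core/org/apache src/hdfs/org/apache/hadoop src/mapred/org/apache/hadoop
    svn move src/java/org/apache/hadoop/mapred src/mapred/org/apache/hadoop/mapred
    svn move src/java/org/apache/hadoop/dfs    src/hdfs/org/apache/hadoop/dfs
    svn move src/java/org/apache/hadoop        src/core/org/apache/hadoop
    # The client-side dfs classes would then be moved back from src/hdfs to
    # src/core one file at a time, since client and server share the dfs package.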

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

