hadoop-hdfs-issues mailing list archives

From "Zhe Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7285) Erasure Coding Support inside HDFS
Date Thu, 26 Feb 2015 19:42:15 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14339006#comment-14339006 ]

Zhe Zhang commented on HDFS-7285:
---------------------------------

We had another meetup last Friday (2/20). Below is a summary, followed by a plan to generate
a functional prototype.

*Attendees*: [~zhzhan], [~jingzhao], and [~drankye]

*Summary of BlockInfo extension*
# The following diagram illustrates the extension of {{BlockInfo}} to handle striped block groups (I recreated it from a whiteboard drawing; a minimal Java sketch of the same hierarchy is given after this list). This was mainly contributed by Jing; thanks again for the great work!
{code}
                BlockInfo
               /   |     \
BlockInfoStriped   |      BlockInfoContiguous
       |           |            |
       |       BlockInfoUC?     |
       |       /         \      |
BlockInfoStripedUC       BlockInfoContiguousUC
{code}
# {{BlockInfoStriped}} and {{BlockInfoContiguous}} are already created under HDFS-7743 and
HDFS-7716
# {{BlockInfoStripedUC}} and {{BlockInfoContiguousUC}} are being created under HDFS-7749. The current plan is to keep them separate despite some code duplication; a later effort will abstract out a common {{BlockInfoUC}} class.
# HDFS-7837, together with part of HDFS-7749, handles persisting {{BlockInfo}} variants in multiple places:
#* BlockManager
#* INodeFile
#* FSImage
#* Editlog
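
To make the diagram concrete, here is a minimal Java sketch of how the hierarchy could be laid out. It is an assumption-level illustration, not the actual HDFS-7716/HDFS-7743/HDFS-7749 code; any field names beyond the class names in the diagram are made up for illustration.
{code}
// Sketch only: the real BlockInfo extends org.apache.hadoop.hdfs.protocol.Block.
abstract class BlockInfo {
  // bookkeeping shared by both layouts, e.g. the owning file and the
  // DataNode storages that hold the replicas / internal blocks
}

class BlockInfoContiguous extends BlockInfo {
  // replication-based layout: each storage holds a full replica
}

class BlockInfoStriped extends BlockInfo {
  // striped layout: each storage holds one internal block of the group,
  // so the index of that block within the group must be tracked
  private short dataBlockNum;    // illustrative, e.g. 10 data blocks
  private short parityBlockNum;  // illustrative, e.g. 4 parity blocks
}

// Under-construction variants, kept separate for now per HDFS-7749;
// a common BlockInfoUC may be abstracted out later.
class BlockInfoContiguousUC extends BlockInfoContiguous { }
class BlockInfoStripedUC extends BlockInfoStriped { }
{code}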

*Remaining NameNode tasks*
# {{LocatedBlocks}} should be extended for the striped reader (HDFS-7853); a rough sketch of one possible shape follows this list
# Initial XAttr structure for EC configuration (HDFS-7839)
# Other tasks, including HDFS-7369, do not block creating an initial prototype and should
have a lower priority.
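
On the first item, one possible shape (purely illustrative, not the HDFS-7853 design) is for each striped entry to carry the locations of the internal blocks of a group together with their indices, so a striped reader knows which part of the group each DataNode holds:
{code}
// Hypothetical striped location entry; names are assumptions.
class LocatedStripedBlockSketch {
  long blockGroupId;      // ID of the striped block group
  long blockGroupBytes;   // total user bytes covered by the group
  String[] storageIDs;    // storage holding each internal block (parallel array)
  int[] blockIndices;     // index of each internal block within the group
}
{code}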

*DataNode high level thoughts*
# The NN will select a DN as the ECWorker in charge of recovering the lost data or parity block. That worker node might or might not be the same as the storage target (e.g., the ECWorker should have a powerful CPU)
# At this stage we should use simple logic that assumes the ECWorker is the final target: it constructs the recovered block and stores it locally, before pushing it to the next targets if necessary (a rough sketch follows)
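
A rough Java sketch of that simple recovery flow; class and method names are assumptions, not the HDFS-7285 worker implementation:
{code}
import java.io.IOException;

class ECWorkerSketch {
  void recover(long blockGroupId, int missingIndex) throws IOException {
    // 1. Read enough surviving internal blocks of the group to decode the
    //    missing data or parity block (k reads for a k+m schema).
    byte[][] survivors = readSurvivingInternalBlocks(blockGroupId, missingIndex);

    // 2. Decode locally (e.g. Reed-Solomon) to reconstruct the lost block.
    byte[] recovered = decode(survivors, missingIndex);

    // 3. Store the recovered block on this DataNode first ...
    storeLocally(blockGroupId, missingIndex, recovered);

    // 4. ... then push it on only if the NN chose a different final target.
    pushToNextTargetIfNeeded(blockGroupId, missingIndex, recovered);
  }

  // Placeholders standing in for DataNode I/O and the erasure codec.
  private byte[][] readSurvivingInternalBlocks(long groupId, int missing) { return new byte[0][]; }
  private byte[] decode(byte[][] inputs, int missing) { return new byte[0]; }
  private void storeLocally(long groupId, int index, byte[] data) { }
  private void pushToNextTargetIfNeeded(long groupId, int index, byte[] data) { }
}
{code}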

*EC policies*
# A set of default EC schemas should be embedded as part of HDFS
# An interface should be provided to define new EC schemas (either through the command line or by manipulating and refreshing an XML file); a sketch of what a schema definition could carry follows this list
# EC and block layout (striping vs. contiguous) should be two orthogonal configuration dimensions: in the next phase we can enable contiguous+EC. At this phase we can assume the striping layout when EC is enabled.
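
As an illustration of what a schema definition could carry, here is a hedged Java sketch; the class and field names are assumptions rather than an agreed-on interface:
{code}
// Hypothetical value class for an EC schema; not the actual HDFS-7285 API.
final class ECSchemaSketch {
  final String name;          // e.g. "RS-10-4"
  final String codecName;     // e.g. "rs" for Reed-Solomon
  final int numDataUnits;     // data blocks per group, e.g. 10
  final int numParityUnits;   // parity blocks per group, e.g. 4

  ECSchemaSketch(String name, String codecName, int numDataUnits, int numParityUnits) {
    this.name = name;
    this.codecName = codecName;
    this.numDataUnits = numDataUnits;
    this.numParityUnits = numParityUnits;
  }

  // A default like this could be embedded in HDFS, while new schemas are
  // added via a CLI command or an XML file refreshed at runtime.
  static final ECSchemaSketch RS_10_4 = new ECSchemaSketch("RS-10-4", "rs", 10, 4);
}
{code}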

*Plan for a PoC prototype*
# An initial PoC prototype should contain the following features (an end-to-end client-side sketch appears at the end of this comment):
#* Configure a file to be stored in striping + EC format
#* Client requests the NN to allocate and persist the striped block groups
#* NN returns the located striped block group
#* Client writes to the allocated DNs in a striping fashion
#* NN correctly processes striped block reports
#* Blocks in the striped block group can go through the {{UNDER_CONSTRUCTION}} -> {{COMMITTED}} -> {{COMPLETE}} state machine. {{UNDER_RECOVERY}} doesn't have to be supported at this stage.
#* Client can close the file
#* Client can read back the content correctly
#* _Optional_: File system states and metrics are correctly updated -- fsimage, edit logs,
quota, etc.
# I think the following JIRAs should be resolved for the prototype:
#* HDFS-7749: need to fix a few Jenkins test failures
#* HDFS-7837
#* HDFS-7853
#* HDFS-7839
#* HDFS-7782

It's quite likely that this list is incomplete, so please feel free to add to it. Thanks!
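
To make the write/read flow in the feature list concrete, below is a hedged client-side sketch. The write and read calls are the standard {{FileSystem}} API; the EC-configuration step is only a placeholder, since the actual xattr/CLI mechanism is exactly what HDFS-7839 is meant to define.
{code}
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StripedWriteReadPoC {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path dir = new Path("/ec-zone");
    fs.mkdirs(dir);
    // Placeholder: mark the directory/file as striped + EC. The real
    // mechanism (xattr name, admin command) is to be defined in HDFS-7839.

    Path file = new Path(dir, "striped-file");
    byte[] data = new byte[10 * 1024 * 1024];   // arbitrary test payload
    try (FSDataOutputStream out = fs.create(file)) {
      out.write(data);   // client writes to the allocated DNs in striping fashion
    }                    // close commits/completes the block group

    byte[] back = new byte[data.length];
    try (FSDataInputStream in = fs.open(file)) {
      in.readFully(back);  // client reads the content back
    }
    System.out.println("content matches: " + Arrays.equals(data, back));
  }
}
{code}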

> Erasure Coding Support inside HDFS
> ----------------------------------
>
>                 Key: HDFS-7285
>                 URL: https://issues.apache.org/jira/browse/HDFS-7285
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Weihua Jiang
>            Assignee: Zhe Zhang
>         Attachments: ECAnalyzer.py, ECParser.py, HDFSErasureCodingDesign-20141028.pdf,
HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf,
fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce the storage overhead without sacrificing data reliability, compared to the existing HDFS 3-replica approach. For example, if we use a 10+4 Reed-Solomon coding, we can tolerate the loss of any 4 blocks, with the storage overhead being only 40%. This makes EC a quite attractive alternative for big data storage, particularly for cold data.

> Facebook had a related open source project called HDFS-RAID. It used to be one of the contrib packages in HDFS but was removed as of Hadoop 2.0 for maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends on MapReduce to do encoding and decoding tasks; 2) it can only be used for cold files that will not be appended to anymore; 3) the pure-Java EC coding implementation is extremely slow in practical use. For these reasons, it might not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that gets rid of any external dependencies, making it self-contained and independently maintainable. This design layers the EC feature on top of the storage-type support and takes compatibility with existing HDFS features such as caching, snapshots, encryption, and high availability into account. The design will also support different EC coding schemes, implementations, and policies for different deployment scenarios. By utilizing advanced libraries (e.g. the Intel ISA-L library), an implementation can greatly improve the performance of EC encoding/decoding and make the EC solution even more attractive. We will post the design document soon.



