hadoop-hdfs-issues mailing list archives

From "Jing Zhao (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6584) Support Archival Storage
Date Thu, 18 Sep 2014 18:15:37 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14139273#comment-14139273 ]

Jing Zhao commented on HDFS-6584:

Failures of TestEncryptionZonesWithKMS, TestWebHdfsFileSystemContract, and TestPipelinesFailover
are also seen in other Jenkins runs and should be unrelated. The failure of TestOfflineEditsViewer
is expected since we need to update the editsStored binary file. The failure of TestStorageMover
cannot be reproduced on my local machine (I ran the test 100 times but still could not reproduce
the failure). Maybe it's related to the Jenkins environment. We can track it in a separate jira.

I think the feature is ready to be merged into trunk once the vote is closed. [~szetszwo],
can you close the vote on the dev mailing list?

> Support Archival Storage
> ------------------------
>                 Key: HDFS-6584
>                 URL: https://issues.apache.org/jira/browse/HDFS-6584
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: balancer, namenode
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>         Attachments: HDFS-6584.000.patch, HDFSArchivalStorageDesign20140623.pdf, HDFSArchivalStorageDesign20140715.pdf,
archival-storage-testplan.pdf, h6584_20140907.patch, h6584_20140908.patch, h6584_20140908b.patch,
h6584_20140911.patch, h6584_20140911b.patch, h6584_20140915.patch, h6584_20140916.patch,
h6584_20140917.patch, h6584_20140917b.patch, h6584_20140918.patch, h6584_20140918b.patch
> In most Hadoop clusters, as more and more data is stored for longer periods, the
demand for storage is outstripping the demand for compute. Hadoop needs a cost-effective and
easy-to-manage solution to meet this demand for storage. The current solutions are:
> - Delete old, unused data. This comes at the operational cost of identifying unnecessary
data and deleting it manually.
> - Add more nodes to the clusters. This adds unnecessary compute capacity to the cluster
along with the storage capacity.
> Hadoop needs a solution that decouples growing storage capacity from compute capacity. Nodes
with denser, less expensive storage and low compute power are becoming available and can be
used as cold storage in the clusters. Based on policy, data can be moved from hot storage to
cold storage. Adding more nodes to the cold tier then grows storage independent of the
compute capacity in the cluster.
