hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6584) Support Archival Storage
Date Fri, 19 Sep 2014 11:34:47 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140347#comment-14140347 ]

Hudson commented on HDFS-6584:
------------------------------

SUCCESS: Integrated in Hadoop-Yarn-trunk #685 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/685/])
Fix hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt for HDFS-6584 after (szetszwo: rev fd3cddf3640a0dbd14556368ae4c6e803083bcfc)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
Revise the HDFS-6584 entry CHANGES.txt. (szetszwo: rev 5d01a684a38a765eabec53ce88687f1808b6c956)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Support Archival Storage
> ------------------------
>
>                 Key: HDFS-6584
>                 URL: https://issues.apache.org/jira/browse/HDFS-6584
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: balancer, namenode
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>         Attachments: HDFS-6584.000.patch, HDFSArchivalStorageDesign20140623.pdf, HDFSArchivalStorageDesign20140715.pdf, archival-storage-testplan.pdf, h6584_20140907.patch, h6584_20140908.patch, h6584_20140908b.patch, h6584_20140911.patch, h6584_20140911b.patch, h6584_20140915.patch, h6584_20140916.patch, h6584_20140916.patch, h6584_20140917.patch, h6584_20140917b.patch, h6584_20140918.patch, h6584_20140918b.patch
>
>
> In most Hadoop clusters, as more and more data is stored for longer periods, the demand for storage is outstripping the demand for compute. Hadoop needs a cost-effective and easy-to-manage solution to meet this demand for storage. The current solutions are:
> - Delete old, unused data. This comes at the operational cost of identifying unnecessary data and deleting it manually.
> - Add more nodes to the cluster. This adds unnecessary compute capacity along with the storage capacity.
> Hadoop needs a solution to decouple growing storage capacity from compute capacity. Nodes with denser, less expensive storage and low compute power are becoming available and can be used as cold storage in the clusters. Based on policy, data can be moved from hot storage to cold storage. Adding more nodes to the cold storage tier grows storage independently of the compute capacity in the cluster.
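
A minimal sketch of the policy-based hot/cold split described above, assuming the DistributedFileSystem#setStoragePolicy API and the COLD policy introduced by this feature, and a cluster whose DataNodes are configured with ARCHIVE-typed volumes; the path and policy name here are illustrative, not from the issue itself.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class ColdStorageExample {
      public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS points at an HDFS cluster and that some
        // DataNode volumes are tagged as [ARCHIVE] in dfs.datanode.data.dir.
        Configuration conf = new Configuration();
        DistributedFileSystem dfs =
            (DistributedFileSystem) FileSystem.get(conf);

        // Mark a directory of aged data as COLD so that its replicas are
        // expected to live on ARCHIVE storage. Path is hypothetical.
        dfs.setStoragePolicy(new Path("/data/archive/2013"), "COLD");

        // Setting the policy does not move existing replicas; the Mover tool
        // (hdfs mover -p /data/archive/2013) migrates them to match it.
      }
    }

The same policy can be set from the command line with "hdfs storagepolicies -setStoragePolicy", so new cold-storage nodes can absorb aged data without adding compute to the hot tier.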



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
