hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6584) Support Archival Storage
Date Sat, 27 Sep 2014 11:36:41 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150548#comment-14150548 ]

Hudson commented on HDFS-6584:
------------------------------

FAILURE: Integrated in Hadoop-Yarn-trunk #693 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/693/])
CHANGES.txt: move HDFS-6584 and its subtasks to Release 2.6.0. (szetszwo: rev a6049aa994dbddc3f640d2121e3aabb3a363943d)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Support Archival Storage
> ------------------------
>
>                 Key: HDFS-6584
>                 URL: https://issues.apache.org/jira/browse/HDFS-6584
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: balancer, namenode
>            Reporter: Tsz Wo Nicholas Sze
>            Assignee: Tsz Wo Nicholas Sze
>             Fix For: 2.6.0
>
>         Attachments: HDFS-6584.000.patch, HDFSArchivalStorageDesign20140623.pdf, HDFSArchivalStorageDesign20140715.pdf,
> archival-storage-testplan.pdf, h6584_20140907.patch, h6584_20140908.patch, h6584_20140908b.patch,
> h6584_20140911.patch, h6584_20140911b.patch, h6584_20140915.patch, h6584_20140916.patch, h6584_20140916.patch,
> h6584_20140917.patch, h6584_20140917b.patch, h6584_20140918.patch, h6584_20140918b.patch
>
>
> In most Hadoop clusters, as more and more data is stored for longer periods, the demand for storage is outstripping the demand for compute. Hadoop needs a cost-effective and easy-to-manage solution to meet this demand for storage. The current solutions are:
> - Delete old, unused data. This comes at the operational cost of identifying unnecessary data and deleting it manually.
> - Add more nodes to the cluster. Along with storage capacity, this adds unnecessary compute capacity to the cluster.
> Hadoop needs a solution that decouples growing storage capacity from compute capacity. Nodes with denser, less expensive storage and low compute power are becoming available and can be used as cold storage in clusters. Based on policy, data can be moved from hot storage to cold storage. Adding more nodes to the cold storage tier grows storage independently of the compute capacity in the cluster.
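The policy-based movement described above is exposed to clients as block storage policies (HOT, WARM, COLD) that can be set on a path. Below is a minimal Java sketch of tagging an aged dataset as COLD, assuming a Hadoop 2.6.0+ client; the path /data/2013, the class name, and the cluster address are illustrative and not taken from this issue.

// Minimal sketch, assuming a 2.6.0+ cluster whose cold DataNodes expose
// ARCHIVE-tagged volumes. The path "/data/2013" is illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class MarkDatasetCold {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // fs.defaultFS must point at the target cluster, e.g. hdfs://namenode:8020.
    FileSystem fs = FileSystem.get(conf);
    DistributedFileSystem dfs = (DistributedFileSystem) fs;

    // Tag the directory with the COLD policy so its replicas are expected on
    // ARCHIVE storage. New blocks follow the policy on write; blocks already
    // written are migrated later by the Mover tool (e.g. hdfs mover -p /data/2013).
    dfs.setStoragePolicy(new Path("/data/2013"), "COLD");

    dfs.close();
  }
}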



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
