hadoop-hdfs-issues mailing list archives

From "Arpit Agarwal (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
Date Sat, 16 May 2015 16:21:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDFS-8157:
--------------------------------
       Resolution: Fixed
    Fix Version/s: 2.8.0
     Release Note: This change requires setting the dfs.datanode.max.locked.memory configuration
key to use the HDFS Lazy Persist feature. Its value caps the combined off-heap memory used for
blocks held in RAM by both caching and lazy persist writes.
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Committed for 2.8.0. Thanks for the reviews.
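For readers updating their DataNode configuration, a minimal illustrative hdfs-site.xml fragment follows; the 256 MB value is a hypothetical example, not taken from the patch.

```xml
<!-- hdfs-site.xml: illustrative value only.
     dfs.datanode.max.locked.memory is specified in bytes and, per this
     change, is shared between centrally cached blocks and lazy persist
     (RAM disk) writes. -->
<property>
  <name>dfs.datanode.max.locked.memory</name>
  <!-- 256 MB; must not exceed the DataNode user's locked-memory
       ulimit (ulimit -l), or the DataNode will fail to start. -->
  <value>268435456</value>
</property>
```

Note that leaving this key at its default of 0 disables both HDFS in-memory caching and, after this change, the Lazy Persist write path.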

> Writes to RAM DISK reserve locked memory for block files
> --------------------------------------------------------
>
>                 Key: HDFS-8157
>                 URL: https://issues.apache.org/jira/browse/HDFS-8157
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8157.01.patch, HDFS-8157.02.patch, HDFS-8157.03.patch, HDFS-8157.04.patch
>
>
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will reserve locked memory via the FsDatasetCache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
