hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8157) Writes to RAM DISK reserve locked memory for block files
Date Tue, 05 May 2015 21:08:01 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14529275#comment-14529275 ]

Colin Patrick McCabe commented on HDFS-8157:

Thanks for this, [~arpitagarwal].

I don't think we should add {{DataNode#skipNativeIoCheckForTesting}}. To simulate locking
memory without adding a dependency on NativeIO, just create a custom cache manipulator that
always returns true from {{verifyCanMlock}}. Some other unit tests already do this.
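For illustration, the test-only manipulator could look like this minimal standalone sketch (the class names are simplified stand-ins for Hadoop's {{NativeIO.POSIX.CacheManipulator}} hook, not the real API):

```java
// Simplified stand-in for the cache-manipulator hook (hypothetical names;
// the real hook lives in org.apache.hadoop.io.nativeio).
class CacheManipulator {
    // In production this would consult the native mlock capability check.
    public boolean verifyCanMlock() {
        return false; // stands in for "NativeIO not loaded"
    }
}

// Test-only manipulator: pretends mlock is always possible, so unit tests
// can exercise the locked-memory accounting without NativeIO present.
class TestCacheManipulator extends CacheManipulator {
    @Override
    public boolean verifyCanMlock() {
        return true;
    }
}
```

A test would then install the test manipulator in place of the default one before starting the DataNode.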

{{public void releaseReservedSpace(long bytesToRelease, boolean releaseLockedMemory);}}
I would rather have a separate function for releasing the memory than overload the meaning
of this one.
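A minimal sketch of what I mean, with separate release paths (the names and the trivial accounting below are illustrative, not from the patch):

```java
// Illustrative only: track disk-space and locked-memory reservations
// separately, so each release method has a single, unambiguous meaning.
class ReservationTracker {
    private long reservedDiskBytes;
    private long lockedMemoryBytes;

    void reserve(long diskBytes, long memoryBytes) {
        reservedDiskBytes += diskBytes;
        lockedMemoryBytes += memoryBytes;
    }

    // Releases disk-space reservation only.
    void releaseReservedSpace(long bytesToRelease) {
        reservedDiskBytes -= bytesToRelease;
    }

    // Releases locked memory only -- a separate method instead of a
    // boolean flag on releaseReservedSpace.
    void releaseLockedMemory(long bytesToRelease) {
        lockedMemoryBytes -= bytesToRelease;
    }

    long getReservedDiskBytes() { return reservedDiskBytes; }
    long getLockedMemoryBytes() { return lockedMemoryBytes; }
}
```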

Maybe I am missing something, but I don't understand the purpose behind {{releaseRoundDown}}.
 Why would we round down to a page size when allocating or releasing memory?
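For reference, rounding down to a page boundary is plain integer truncation; here is a self-contained sketch of the arithmetic I assume {{releaseRoundDown}} performs (the fixed 4 KB page size is an assumption; the real code would query the OS for it):

```java
class PageMath {
    // Typical x86 page size; an assumption for this sketch -- the real
    // code would ask the operating system for the actual page size.
    static final long PAGE_SIZE = 4096;

    // Round a byte count down to a page boundary, e.g. 5000 -> 4096.
    static long roundDownToPageSize(long count) {
        return count - (count % PAGE_SIZE);
    }
}
```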

> Writes to RAM DISK reserve locked memory for block files
> --------------------------------------------------------
>                 Key: HDFS-8157
>                 URL: https://issues.apache.org/jira/browse/HDFS-8157
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode
>            Reporter: Arpit Agarwal
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-8157.01.patch
> Per discussion on HDFS-6919, the first step is that writes to RAM disk will reserve locked
> memory via the FsDatasetCache.

This message was sent by Atlassian JIRA
