hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4817) make HDFS advisory caching configurable on a per-file basis
Date Mon, 07 Oct 2013 13:32:57 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13788137#comment-13788137 ]

Hudson commented on HDFS-4817:
------------------------------

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1545 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1545/])
HDFS-4817. Moving changelog to Release 2.2.0 section to reflect the backport. (acmurthy: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529751)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> make HDFS advisory caching configurable on a per-file basis
> -----------------------------------------------------------
>
>                 Key: HDFS-4817
>                 URL: https://issues.apache.org/jira/browse/HDFS-4817
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>    Affects Versions: 3.0.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.2.0
>
>         Attachments: HDFS-4817.001.patch, HDFS-4817.002.patch, HDFS-4817.004.patch, HDFS-4817.006.patch, HDFS-4817.007.patch, HDFS-4817.008.patch, HDFS-4817.009.patch, HDFS-4817.010.patch, HDFS-4817-b2.1.001.patch
>
>
> HADOOP-7753 and related JIRAs introduced some performance optimizations for the DataNode. One of them was readahead: when readahead is enabled, the DataNode starts reading the next bytes it expects to need from the block file before the client requests them, which helps hide the latency of rotational media and sends larger reads down to the device. Another optimization was "drop-behind," which removes a file's data from the Linux page cache once it is no longer needed.
> Using {{dfs.datanode.drop.cache.behind.writes}} and {{dfs.datanode.drop.cache.behind.reads}} can improve performance substantially on many MapReduce jobs. In our internal benchmarks, we have seen speedups of 40% on certain workloads. This is because, when we know the block data will not be read again any time soon, keeping it out of memory leaves more memory for the other processes on the system. See HADOOP-7714 for more benchmarks. (A configuration sketch follows this quoted description.)
> We would like to enable these settings on a per-file or per-client basis, rather than on the DataNode as a whole. This would allow more users to actually make use of them. It would also be good to add unit tests for the drop-cache code path, to ensure that it is functioning as we expect. (A per-stream sketch follows below.)
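
The two DataNode-side keys quoted above normally live in hdfs-site.xml on each DataNode. As a minimal sketch of how a unit test could exercise them: the key names come from the description itself, while the MiniDFSCluster setup, file path, and sizes below are illustrative and not taken from the attached patches.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class DropBehindConfigSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // DataNode-wide drop-behind switches named in the issue description.
        conf.setBoolean("dfs.datanode.drop.cache.behind.writes", true);
        conf.setBoolean("dfs.datanode.drop.cache.behind.reads", true);

        // Single-DataNode mini cluster so the settings are actually picked up.
        MiniDFSCluster cluster =
            new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        try {
          FileSystem fs = cluster.getFileSystem();
          // Data written now is a candidate for eviction from the DataNode's
          // page cache once it reaches disk (drop-behind is a hint to the OS,
          // not a guarantee).
          try (FSDataOutputStream out = fs.create(new Path("/drop-behind-demo"))) {
            out.write(new byte[64 * 1024]);
          }
        } finally {
          cluster.shutdown();
        }
      }
    }

Because drop-behind is only an advisory hint, a test built along these lines would typically assert on DataNode-side counters or log output rather than inspecting the page cache directly.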

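For the per-file, per-client behavior the description asks for, the client-facing shape is a pair of per-stream hints on FSDataInputStream and FSDataOutputStream. The sketch below assumes the setReadahead(Long) and setDropBehind(Boolean) methods (the CanSetReadahead/CanSetDropBehind interfaces of later Hadoop releases); treat the exact names as an assumption rather than a quote from the patches attached here.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class PerStreamCachingHints {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/per-stream-caching-demo"); // illustrative path

        // Writer side: ask that this file's data be dropped from the page
        // cache behind the write, without touching the DataNode-wide defaults.
        try (FSDataOutputStream out = fs.create(p)) {
          out.setDropBehind(true); // assumed CanSetDropBehind API; may throw
                                   // UnsupportedOperationException if unsupported
          out.write(new byte[4 * 1024 * 1024]);
        }

        // Reader side: request larger readahead for this one sequential scan
        // and drop pages behind the read, since we will not revisit them.
        try (FSDataInputStream in = fs.open(p)) {
          in.setReadahead(4L * 1024 * 1024); // assumed CanSetReadahead API
          in.setDropBehind(true);
          byte[] buf = new byte[64 * 1024];
          while (in.read(buf) != -1) {
            // consume sequentially
          }
        }
      }
    }

The appeal of a per-stream hint is that a single scan-heavy job can opt in without changing cache behavior for every other reader and writer on the same DataNode.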


--
This message was sent by Atlassian JIRA
(v6.1#6144)
