hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4879) Add "blocked ArrayList" collection to avoid CMS full GCs
Date Sat, 07 Sep 2013 13:43:54 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13761039#comment-13761039 ]

Hudson commented on HDFS-4879:
------------------------------

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1541 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1541/])
HDFS-4879. Add BlockedArrayList collection to avoid CMS full GCs (Contributed by Todd Lipcon)
(cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1520667)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/ChunkedArrayList.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestChunkedArrayList.java

                
> Add "blocked ArrayList" collection to avoid CMS full GCs
> --------------------------------------------------------
>
>                 Key: HDFS-4879
>                 URL: https://issues.apache.org/jira/browse/HDFS-4879
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.0.0, 2.0.4-alpha
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>             Fix For: 2.3.0
>
>         Attachments: hdfs-4879.txt, hdfs-4879.txt, hdfs-4879.txt, hdfs-4879.txt
>
>
> We recently saw an issue where a large deletion was issued, causing 25M blocks to be
> collected during {{deleteInternal}}. Currently, the list of collected blocks is an
> ArrayList, meaning that we had to allocate a contiguous 25M-entry array (~400MB). After a
> NN has been running for a long time, the old generation may become fragmented such that
> it's hard to find a 400MB contiguous chunk of heap.
> In general, we should try to design the NN such that the only large objects are long-lived
> and created at startup time. We can improve this particular case (and perhaps some others)
> by introducing a new List implementation made of a linked list of arrays, each of which is
> size-limited (e.g. to 1MB).
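
For illustration, here is a minimal Java sketch of the chunked-list idea described above:
elements are appended into a chain of small, fixed-capacity ArrayList chunks, so collecting
25M entries produces many small allocations rather than one ~400MB contiguous array. The
class name ChunkedListSketch, the default chunk capacity, and the append-only surface are
assumptions for illustration only, not the ChunkedArrayList actually committed in this change
(see the file list above).

import java.util.AbstractList;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

/**
 * Illustrative sketch only (not the committed ChunkedArrayList): a list
 * backed by a chain of small, fixed-capacity chunks so that adding N
 * elements never requires a single contiguous N-entry backing array.
 */
public class ChunkedListSketch<E> extends AbstractList<E> {
  // Entries per chunk; a real implementation would size this so each
  // backing array stays well under the problematic allocation size.
  private static final int DEFAULT_CHUNK_CAPACITY = 8 * 1024;

  private final int chunkCapacity;
  private final LinkedList<List<E>> chunks = new LinkedList<>();
  private int size = 0;

  public ChunkedListSketch() {
    this(DEFAULT_CHUNK_CAPACITY);
  }

  public ChunkedListSketch(int chunkCapacity) {
    this.chunkCapacity = chunkCapacity;
  }

  @Override
  public boolean add(E element) {
    List<E> last = chunks.peekLast();
    if (last == null || last.size() == chunkCapacity) {
      // Start a new fixed-capacity chunk instead of growing one huge array.
      last = new ArrayList<E>(chunkCapacity);
      chunks.addLast(last);
    }
    last.add(element);
    size++;
    return true;
  }

  @Override
  public E get(int index) {
    if (index < 0 || index >= size) {
      throw new IndexOutOfBoundsException("index " + index + ", size " + size);
    }
    // Walk the chunks to find the one containing this index.
    int remaining = index;
    for (List<E> chunk : chunks) {
      if (remaining < chunk.size()) {
        return chunk.get(remaining);
      }
      remaining -= chunk.size();
    }
    throw new IllegalStateException("unreachable");
  }

  @Override
  public int size() {
    return size;
  }
}

Keeping each chunk small (the description above suggests limiting each backing array to
roughly 1MB) means every allocation remains easy to place even in a fragmented old
generation, and iteration simply walks the chunks in order, which fits the collect-then-scan
pattern used during deletion.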

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
