hadoop-hdfs-issues mailing list archives

From "Daryn Sharp (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4879) Add "blocked ArrayList" collection to avoid CMS full GCs
Date Thu, 06 Jun 2013 22:32:21 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677607#comment-13677607 ]

Daryn Sharp commented on HDFS-4879:
-----------------------------------

bq.  I started out down this path, but given that the target use cases today only require
accumulating entries and then enumerating them, I didn't want to add a bunch of unused code
for future use cases we haven't found yet.

How about a middle-of-the-road approach: implement it as a list but throw {{UnsupportedOperationException}}
for the unimplemented methods?  Then it becomes a drop-in replacement that doesn't require
changing all the data types in the code.
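
A minimal sketch of what that could look like ({{AppendOnlyChunkedList}} and the chunk size here are hypothetical, not from the attached patch): implement only add/size/iteration, and let every other {{List}} method fall through to {{UnsupportedOperationException}}, mostly via the {{AbstractList}} defaults.

{code}
import java.util.AbstractList;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;

/**
 * Sketch only: an append-and-iterate List. Everything else throws
 * UnsupportedOperationException, mostly via the AbstractList defaults.
 */
public class AppendOnlyChunkedList<T> extends AbstractList<T> {
  private static final int CHUNK_SIZE = 8 * 1024; // illustrative fixed chunk size

  private final List<List<T>> chunks = new ArrayList<List<T>>();
  private List<T> lastChunk = null;
  private int size = 0;

  @Override
  public boolean add(T e) {
    // Start a new chunk lazily, or once the current chunk is full.
    if (lastChunk == null || lastChunk.size() >= CHUNK_SIZE) {
      lastChunk = new ArrayList<T>(CHUNK_SIZE);
      chunks.add(lastChunk);
    }
    size++;
    return lastChunk.add(e);
  }

  @Override
  public Iterator<T> iterator() {
    // Walk the chunks in order without going through get(int).
    final Iterator<List<T>> outer = chunks.iterator();
    return new Iterator<T>() {
      private Iterator<T> inner = Collections.<T>emptyList().iterator();
      public boolean hasNext() {
        while (!inner.hasNext() && outer.hasNext()) {
          inner = outer.next().iterator();
        }
        return inner.hasNext();
      }
      public T next() { hasNext(); return inner.next(); }
      public void remove() { throw new UnsupportedOperationException(); }
    };
  }

  @Override
  public int size() { return size; }

  @Override
  public T get(int index) {
    // Random access intentionally unsupported until a caller actually needs it.
    throw new UnsupportedOperationException();
  }
}
{code}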

bq. {quote}Consider removing the multiple calls to addChunk() that seed the main list by folding
the logic into add? It could add a new chunk if the list is either empty, or full per the existing
full-chunk logic.{quote}
bq. I'm not following what you mean here. Which code path are you talking about?

I.e., by adding an isEmpty check, I think the ctor no longer needs to add a chunk.  It's a minor
suggestion.

{code}
public boolean add(T e) {
  // Create the first chunk lazily, or a new one once the current chunk is full.
  if (chunks.isEmpty() || lastChunk.size() >= lastChunkCapacity) {
    // Grow ~1.5x like ArrayList, but never past the per-chunk cap.
    int newCapacity = lastChunkCapacity + (lastChunkCapacity >> 1);
    addChunk(Math.min(newCapacity, maxChunkSize));
  }
  return lastChunk.add(e);
}
{code}
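
For reference, the fields and {{addChunk}} helper that the snippet above assumes might look roughly like this; this is a sketch only (names and defaults are hypothetical, the attachment may differ):

{code}
// Hypothetical backing fields and helper assumed by the add() sketch above.
private final List<List<T>> chunks = new ArrayList<List<T>>();
private List<T> lastChunk = null;
private int lastChunkCapacity = 32;          // seed capacity for the first chunk
private final int maxChunkSize = 8 * 1024;   // upper bound on entries per chunk

private void addChunk(int capacity) {
  lastChunk = new ArrayList<T>(capacity);
  chunks.add(lastChunk);
  lastChunkCapacity = capacity;
}
{code}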

Regarding the capacity increase, I understand ArrayList does it to avoid excessive reallocs. 
In this impl, wouldn't a uniform chunk size be more desirable?  Otherwise the last few
chunks of an extremely large list will be huge.  I don't have a strong opinion either way.
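
Concretely, the uniform-size alternative would just drop the growth step, e.g. (sketch, same hypothetical fields as above):

{code}
public boolean add(T e) {
  // Every chunk gets the same bounded size; no geometric growth.
  if (chunks.isEmpty() || lastChunk.size() >= maxChunkSize) {
    addChunk(maxChunkSize);
  }
  return lastChunk.add(e);
}
{code}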
                
> Add "blocked ArrayList" collection to avoid CMS full GCs
> --------------------------------------------------------
>
>                 Key: HDFS-4879
>                 URL: https://issues.apache.org/jira/browse/HDFS-4879
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.0.0, 2.0.4-alpha
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>         Attachments: hdfs-4879.txt, hdfs-4879.txt
>
>
> We recently saw an issue where a large deletion was issued which caused 25M blocks to
> be collected during {{deleteInternal}}. Currently, the list of collected blocks is an ArrayList,
> meaning that we had to allocate a contiguous 25M-entry array (~400MB). After a NN has been
> running for a long time, the old generation may become fragmented such that it's
> hard to find a 400MB contiguous chunk of heap.
> In general, we should try to design the NN such that the only large objects are long-lived
> and created at startup time. We can improve this particular case (and perhaps some others)
> by introducing a new List implementation which is made of a linked list of arrays, each of
> which is size-limited (e.g. to 1MB).

