hadoop-hdfs-issues mailing list archives

From "Tomasz Nykiel (Created) (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-2476) More CPU efficient data structure for under-replicated/over-replicated/invalidate blocks
Date Thu, 20 Oct 2011 02:40:12 GMT
More CPU efficient data structure for under-replicated/over-replicated/invalidate blocks

                 Key: HDFS-2476
                 URL: https://issues.apache.org/jira/browse/HDFS-2476
             Project: Hadoop HDFS
          Issue Type: Sub-task
          Components: name-node
            Reporter: Tomasz Nykiel
            Assignee: Tomasz Nykiel

This patch introduces two hash-based data structures for storing under-replicated, over-replicated,
and invalidated blocks.

1. LightWeightHashSet
2. LightWeightLinkedSet

Currently, all these cases use java.util.TreeSet, which adds unnecessary overhead.
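The trade-off can be illustrated with the standard collections: TreeSet keeps its elements sorted at O(log n) per operation, but the replication queues only need membership tests, insertions, and removals, for which a hash set's expected O(1) operations suffice. The class and variable names below are purely illustrative, not code from the patch.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class TreeSetVsHashSet {
    public static void main(String[] args) {
        // TreeSet maintains sorted order, paying O(log n) per add/contains/remove.
        Set<Long> tree = new TreeSet<>();
        // HashSet gives expected O(1) per operation, with no ordering guarantee.
        Set<Long> hash = new HashSet<>();
        for (long blockId : new long[] {42L, 7L, 19L}) {
            tree.add(blockId);
            hash.add(blockId);
        }
        // The block queues never consume their elements in sorted order,
        // so the ordering that TreeSet maintains is wasted work here.
        System.out.println(tree);              // [7, 19, 42] - sorted
        System.out.println(hash.contains(7L)); // true, in expected O(1)
    }
}
```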

The main bottlenecks addressed by this patch are:
- cluster instability periods, when these queues (especially under-replicated) tend to grow quite large,
- initial cluster startup, when the queues are initialized after leaving safemode,
- block reports,
- explicit acks for block addition and deletion.

1. The introduced structures are CPU-optimized.
2. They shrink and expand according to current capacity.
3. Add/contains/delete operations are performed in O(1) time (unlike the current O(log n) for TreeSet).
4. The sets are equipped with fast access methods for polling a number of elements (get+remove),
which are used for handling the queues.
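A minimal sketch of the described interface, approximated here with java.util.LinkedHashSet; the real LightWeightLinkedSet in the patch is a custom structure, and the class and method names below are illustrative only:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.List;

// Illustrative sketch: a set of block IDs with expected O(1) operations
// and a pollN method matching the "get+remove" access pattern described.
class PollableBlockSet {
    private final LinkedHashSet<Long> blocks = new LinkedHashSet<>();

    boolean add(long blockId)      { return blocks.add(blockId); }      // expected O(1)
    boolean contains(long blockId) { return blocks.contains(blockId); } // expected O(1)
    boolean remove(long blockId)   { return blocks.remove(blockId); }   // expected O(1)

    // Get and remove up to n elements in one pass, in insertion order.
    List<Long> pollN(int n) {
        List<Long> polled = new ArrayList<>(n);
        Iterator<Long> it = blocks.iterator();
        while (it.hasNext() && polled.size() < n) {
            polled.add(it.next());
            it.remove();
        }
        return polled;
    }

    int size() { return blocks.size(); }
}
```

Polling batches this way lets the queue-handling code drain a bounded number of blocks per pass without a separate lookup for each removal.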

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

