hadoop-hdfs-issues mailing list archives

From "jiraposter@reviews.apache.org (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-2476) More CPU efficient data structure for under-replicated/over-replicated/invalidate blocks
Date Thu, 20 Oct 2011 22:56:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-2476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13132161#comment-13132161 ]

jiraposter@reviews.apache.org commented on HDFS-2476:
-----------------------------------------------------


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/2515/
-----------------------------------------------------------

Review request for Hairong Kuang.


Summary
-------

This patch introduces two hash data structures for storing under-replicated, over-replicated
and invalidated blocks.

1. LightWeightHashSet
2. LightWeightLinkedSet

Currently, in all these cases, we use java.util.TreeSet, which adds unnecessary overhead.

The main bottlenecks addressed by this patch are:
- periods of cluster instability, when these queues (especially under-replicated) tend to grow quite drastically,
- initial cluster startup, when the queues are initialized after leaving safemode,
- block reports,
- explicit acks for block addition and deletion.

1. The introduced structures are CPU-optimized.
2. They shrink and expand according to the number of stored elements.
3. Add/contains/delete ops are performed in O(1) time (versus the current O(log n) with TreeSet).
4. The sets are equipped with fast access methods for polling a number of elements (get+remove), which are used for handling the queues; a sketch of this polling behaviour follows below.
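
To make the polling behaviour concrete, here is a minimal, hypothetical sketch (not the patch's actual LightWeightHashSet/LightWeightLinkedSet code) of a set offering expected O(1) add/contains/remove plus an insertion-ordered pollN. For brevity it wraps java.util.LinkedHashSet; as described above, the real classes maintain their own hash table that shrinks and expands with usage rather than delegating to a JDK collection, so all names and details below are illustrative only.

  // Illustrative sketch only (not the patch's actual LightWeightLinkedSet):
  // it mimics the described behaviour -- expected O(1) add/contains/remove,
  // insertion order, and pollN (get+remove of up to n elements) -- by
  // wrapping java.util.LinkedHashSet.
  import java.util.ArrayList;
  import java.util.Iterator;
  import java.util.LinkedHashSet;
  import java.util.List;

  public class PollableLinkedSet<T> {
    private final LinkedHashSet<T> set = new LinkedHashSet<T>();

    public boolean add(T element)      { return set.add(element); }      // expected O(1)
    public boolean contains(T element) { return set.contains(element); } // expected O(1)
    public boolean remove(T element)   { return set.remove(element); }   // expected O(1)
    public int size()                  { return set.size(); }

    /** Returns and removes up to n elements, oldest first (get+remove). */
    public List<T> pollN(int n) {
      List<T> polled = new ArrayList<T>(Math.min(n, set.size()));
      Iterator<T> it = set.iterator();
      while (it.hasNext() && polled.size() < n) {
        polled.add(it.next());
        it.remove();
      }
      return polled;
    }
  }

With such a method, the queue handlers described above can poll a fixed-size batch of blocks per pass instead of iterating the whole set; again, class and method names here are hypothetical.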


This addresses bug HDFS-2476.
    https://issues.apache.org/jira/browse/HDFS-2476


Diffs
-----

  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/InvalidateBlocks.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/UnderReplicatedBlocks.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNodeRpcServer.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NamenodeFsck.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSck.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LightWeightHashSet.java PRE-CREATION
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/LightWeightLinkedSet.java PRE-CREATION
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestListCorruptFileBlocks.java 1187124
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestLightWeightHashSet.java PRE-CREATION
  trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/util/TestLightWeightLinkedSet.java PRE-CREATION

Diff: https://reviews.apache.org/r/2515/diff


Testing
-------

Provided JUnit tests.
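
The tests themselves are not inlined in this request (see TestLightWeightHashSet and TestLightWeightLinkedSet in the diff). Purely as a hypothetical illustration of the kind of properties such tests need to cover (set semantics plus ordered polling), checks against the sketch above might look like this:

  // Hypothetical JUnit 4 sketch; the actual TestLightWeightHashSet and
  // TestLightWeightLinkedSet in the diff exercise the real
  // org.apache.hadoop.hdfs.util classes directly.
  import static org.junit.Assert.assertEquals;
  import static org.junit.Assert.assertFalse;
  import static org.junit.Assert.assertTrue;

  import java.util.List;
  import org.junit.Test;

  public class TestPollableLinkedSet {

    @Test
    public void testAddContainsRemove() {
      PollableLinkedSet<Integer> set = new PollableLinkedSet<Integer>();
      assertTrue(set.add(1));
      assertFalse(set.add(1));            // duplicates are rejected
      assertTrue(set.contains(1));
      assertTrue(set.remove(1));
      assertFalse(set.contains(1));
    }

    @Test
    public void testPollNRemovesOldestFirst() {
      PollableLinkedSet<Integer> set = new PollableLinkedSet<Integer>();
      for (int i = 0; i < 10; i++) {
        set.add(i);
      }
      List<Integer> polled = set.pollN(3);          // get+remove the first three
      assertEquals(3, polled.size());
      assertEquals(Integer.valueOf(0), polled.get(0));
      assertEquals(7, set.size());                  // polled elements were removed
    }
  }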


Thanks,

Tomasz


> More CPU efficient data structure for under-replicated/over-replicated/invalidate blocks
> ----------------------------------------------------------------------------------------
>
>                 Key: HDFS-2476
>                 URL: https://issues.apache.org/jira/browse/HDFS-2476
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: name-node
>            Reporter: Tomasz Nykiel
>            Assignee: Tomasz Nykiel
>         Attachments: hashStructures.patch, hashStructures.patch-2
>
>
> This patch introduces two hash data structures for storing under-replicated, over-replicated and invalidated blocks.
> 1. LightWeightHashSet
> 2. LightWeightLinkedSet
> Currently in all these cases we are using java.util.TreeSet which adds unnecessary overhead.
> The main bottlenecks addressed by this patch are:
> -cluster instability times, when these queues (especially under-replicated) tend to grow quite drastically,
> -initial cluster startup, when the queues are initialized, after leaving safemode,
> -block reports,
> -explicit acks for block addition and deletion
> 1. The introduced structures are CPU-optimized.
> 2. They shrink and expand according to current capacity.
> 3. Add/contains/delete ops are performed in O(1) time (unlike current log n for TreeSet).
> 4. The sets are equipped with fast access methods for polling a number of elements (get+remove), which are used for handling the queues.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

