hadoop-hdfs-issues mailing list archives

From "Yi Liu (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-8862) BlockManager#excessReplicateMap should use a HashMap
Date Tue, 18 Aug 2015 01:31:45 GMT

     [ https://issues.apache.org/jira/browse/HDFS-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yi Liu updated HDFS-8862:
-------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 2.8.0
           Status: Resolved  (was: Patch Available)

> BlockManager#excessReplicateMap should use a HashMap
> ----------------------------------------------------
>
>                 Key: HDFS-8862
>                 URL: https://issues.apache.org/jira/browse/HDFS-8862
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: namenode
>            Reporter: Yi Liu
>            Assignee: Yi Liu
>             Fix For: 2.8.0
>
>         Attachments: HDFS-8862.001.patch
>
>
> Per [~cmccabe]'s comments in HDFS-8792, this JIRA is to discuss improving {{BlockManager#excessReplicateMap}}.
> It's true that a HashMap never shrinks when elements are removed, but a TreeMap entry has to store more references (left, right, parent) than a HashMap entry (only next). Even when removals leave some buckets empty, an empty HashMap bucket is just a {{null}} reference (4 bytes), so the two are close on that point. On the other hand, the key of {{excessReplicateMap}} is the datanode UUID, so the number of entries is almost fixed, and HashMap's memory footprint is better than TreeMap's in this case. More importantly, HashMap's search/insert/remove performance is clearly better than TreeMap's. Since we don't need the keys sorted, we should use HashMap instead of TreeMap.
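
The trade-off described above can be sketched with a minimal stand-in for the map (the real {{excessReplicateMap}} maps a datanode UUID string to a set of blocks; the types and method below are simplified assumptions, not the actual HDFS code). Both map types answer the same queries; HashMap does so in O(1) expected time versus TreeMap's O(log n), and TreeMap's only extra feature, sorted key order, is unused here:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.UUID;

public class ExcessMapSketch {
    // Simplified stand-in: datanode UUID -> block IDs with excess replicas.
    // (The real map uses Block objects and a lightweight set implementation.)
    static void track(Map<String, Set<Long>> excess, String dnUuid, long blockId) {
        excess.computeIfAbsent(dnUuid, k -> new HashSet<>()).add(blockId);
    }

    public static void main(String[] args) {
        // HashMap: O(1) expected lookup/insert/remove; iteration order unspecified.
        Map<String, Set<Long>> byHash = new HashMap<>();
        // TreeMap: O(log n) lookup/insert/remove; each entry also carries
        // left/right/parent references, and it keeps keys sorted -- an
        // ordering nothing in this use case needs.
        Map<String, Set<Long>> byTree = new TreeMap<>();

        String dn = UUID.randomUUID().toString();
        track(byHash, dn, 1001L);
        track(byTree, dn, 1001L);

        // Both maps hold the same data; only cost and ordering differ.
        System.out.println(byHash.get(dn).equals(byTree.get(dn))); // true
    }
}
```

Since the set of datanodes is nearly fixed in size, the HashMap's bucket array stays stable, so the "never shrinks" behavior mentioned above is not a practical concern.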



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
