hadoop-hdfs-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient
Date Wed, 03 Dec 2014 18:22:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14233280#comment-14233280 ]

Suresh Srinivas commented on HDFS-7435:
---------------------------------------

[~kihwal], agree with you. We are on the same page.

[~daryn], some comments:
bq. All I'm saying is that the wire protocol may not need to be built around that implementation
detail - but I think a middle ground is
{ block-count, chunk-count, blocks[], chunk-count, blocks[], ... }
Have you looked at Jing's patch? Can you post comments on why it cannot be enhanced to add
chunk-count? I believe it's close to being ready with the additional changes. Is there a need
to do a brand new patch?
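For readers following the thread, the chunked framing Daryn proposes ({ block-count, chunk-count, blocks[], chunk-count, blocks[], ... }) can be sketched in Java. This is an illustrative sketch only, not Jing's patch and not the actual HDFS wire protocol; the class and method names are hypothetical:

```java
// Illustrative sketch of the chunked framing from the discussion:
// a leading total block count, then (chunk-count, longs...) groups.
// The receiver can size a primitive array up front from the total,
// so no ArrayList growth and no boxing on the decode path.
import java.util.ArrayList;
import java.util.List;

public class ChunkedReportSketch {
    static final int CHUNK = 4; // tiny for demonstration; a real impl would use thousands

    /** Encode: { blockCount, chunkLen, values..., chunkLen, values..., ... }. */
    static List<Long> encode(long[] blocks) {
        List<Long> out = new ArrayList<>();
        out.add((long) blocks.length);
        for (int i = 0; i < blocks.length; i += CHUNK) {
            int len = Math.min(CHUNK, blocks.length - i);
            out.add((long) len);
            for (int j = 0; j < len; j++) {
                out.add(blocks[i + j]);
            }
        }
        return out;
    }

    /** Decode into a primitive array sized from the leading count: no reallocation. */
    static long[] decode(List<Long> in) {
        int pos = 0;
        int total = in.get(pos++).intValue();
        long[] blocks = new long[total];
        int filled = 0;
        while (filled < total) {
            int len = in.get(pos++).intValue();
            for (int j = 0; j < len; j++) {
                blocks[filled++] = in.get(pos++);
            }
        }
        return blocks;
    }
}
```

The point of the leading block-count is exactly the one debated above: it lets the decoder preallocate, while the per-chunk counts let the encoder stream without buffering the whole report.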

bq. In your use case, I'm presuming each disk is a storage. If yes, each unencoded storage
report will be ~4MB, ~1.5MB encoded. The DN heap must be >64GB, easily more for HA, so
a handful of MB appears to pale in comparison to how much memory the DN wastes building the
reports. 
I do not follow these numbers. BTW, even for very large nodes (< 200 TB), DataNodes can
be run with just an 8 GB heap. I am not sure about the > 64 GB number you are mentioning.

> PB encoding of block reports is very inefficient
> ------------------------------------------------
>
>                 Key: HDFS-7435
>                 URL: https://issues.apache.org/jira/browse/HDFS-7435
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, namenode
>    Affects Versions: 2.0.0-alpha, 3.0.0
>            Reporter: Daryn Sharp
>            Assignee: Daryn Sharp
>            Priority: Critical
>         Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, HDFS-7435.patch
>
>
> Block reports are encoded as a PB repeating long.  Repeating fields use an {{ArrayList}}
> with a default capacity of 10.  A block report containing tens or hundreds of thousands of
> longs (3 for each replica) is extremely expensive, since the {{ArrayList}} must reallocate
> many times.  Also, decoding repeating fields boxes the primitive longs, which must then be unboxed.
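The reallocation cost the description refers to can be made concrete. This sketch (not Hadoop code) simulates {{java.util.ArrayList}}'s growth policy, which starts from a default capacity of 10 and grows the backing array by 1.5x each time it fills:

```java
// Illustrative sketch, not Hadoop code: simulates ArrayList's growth
// policy (newCapacity = oldCapacity + oldCapacity/2, default initial
// capacity 10) to count backing-array reallocations while appending
// n elements, as happens when decoding a repeated field of n longs.
public class ReallocCost {
    static int reallocations(int n) {
        int cap = 10, count = 0;
        for (int size = 0; size < n; size++) {
            if (size == cap) {      // backing array is full: grow by 1.5x
                cap += cap >> 1;
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // 100,000 replicas * 3 longs each = 300,000 repeated-field entries
        System.out.println(reallocations(300_000)); // prints 26
    }
}
```

Each of those 26 reallocations is a fresh allocation plus a full copy of the boxed {{Long}} references accumulated so far, on top of boxing every one of the 300,000 longs; a decoder that knows the count up front could fill a single preallocated {{long[]}} instead.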



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
