hadoop-hdfs-issues mailing list archives

From "Arpit Agarwal (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-5153) Datanode should stagger block reports from individual storages
Date Wed, 15 Jan 2014 20:08:20 GMT

     [ https://issues.apache.org/jira/browse/HDFS-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal updated HDFS-5153:

    Status: Patch Available  (was: Open)

> Datanode should stagger block reports from individual storages
> --------------------------------------------------------------
>                 Key: HDFS-5153
>                 URL: https://issues.apache.org/jira/browse/HDFS-5153
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 3.0.0
>            Reporter: Arpit Agarwal
>         Attachments: HDFS-5153.01.patch
> When the number of blocks on the DataNode grows large we start running into a few issues:
> # Block reports take a long time to process on the NameNode. In testing we have seen
> that a block report with 6 million blocks takes close to one second to process on the
> NameNode. The NameSystem write lock is held during this time.
> # We start hitting the default protobuf message limit of 64MB somewhere around 10 million
> blocks. While we can increase the message size limit, it already takes over 7 seconds to
> serialize/deserialize a block report of this size.
> HDFS-2832 introduced the concept of a DataNode as a collection of storages, i.e. the
> NameNode is aware of all the volumes (storage directories) attached to a given DataNode.
> This makes it easy to split block reports from the DN by sending one report per storage
> directory, mitigating the above problems.
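
A minimal sketch of the staggering idea described above (this is illustrative only, not the attached HDFS-5153 patch; the class and method names are hypothetical): once reports are split per storage, the DataNode can assign each storage its own send offset within the reporting window so the per-storage reports reach the NameNode spread out rather than as one burst of write-lock acquisitions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative sketch, not HDFS code: stagger one block report per
// storage directory across a reporting window instead of sending one
// monolithic report for the whole DataNode.
public class StaggeredBlockReports {

    /**
     * Assigns each storage a send offset inside the reporting window.
     * Storage i gets the i-th slot of the window plus random jitter
     * within that slot, so reports are spread out rather than arriving
     * back to back.
     */
    public static List<Long> scheduleOffsets(int numStorages, long windowMs, Random rng) {
        List<Long> offsets = new ArrayList<>();
        long slot = Math.max(1, windowMs / Math.max(1, numStorages));
        for (int i = 0; i < numStorages; i++) {
            long jitter = rng.nextInt((int) slot); // jitter within the slot
            offsets.add(i * slot + jitter);
        }
        return offsets;
    }

    public static void main(String[] args) {
        // Four storages, a 1-second window: each report lands in its own
        // 250 ms slot instead of all four arriving at once.
        List<Long> offsets = scheduleOffsets(4, 1000L, new Random());
        for (int i = 0; i < offsets.size(); i++) {
            System.out.println("storage-" + i + " report at +" + offsets.get(i) + " ms");
        }
    }
}
```

The slot-plus-jitter split keeps any two storages' reports at least roughly one slot apart while still randomizing exact arrival times across DataNodes.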

This message was sent by Atlassian JIRA
