hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-5788) listLocatedStatus response can be very large
Date Thu, 23 Jan 2014 17:04:43 GMT

     [ https://issues.apache.org/jira/browse/HDFS-5788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee updated HDFS-5788:

       Resolution: Fixed
    Fix Version/s: 2.4.0
     Hadoop Flags: Reviewed
           Status: Resolved  (was: Patch Available)

Thanks for working on the issue, Nathan. I've committed it to trunk and branch-2.

> listLocatedStatus response can be very large
> --------------------------------------------
>                 Key: HDFS-5788
>                 URL: https://issues.apache.org/jira/browse/HDFS-5788
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.0.0, 0.23.10, 2.2.0
>            Reporter: Nathan Roberts
>            Assignee: Nathan Roberts
>             Fix For: 3.0.0, 2.4.0
>         Attachments: HDFS-5788.patch
> Currently we limit the size of listStatus requests to a default of 1000 entries. This
> works fine except in the case of listLocatedStatus, where the location information can be
> quite large. As an example, for a directory with 7000 entries, 4 blocks each, and 3-way
> replication, a listLocatedStatus response is over 1 MB. This can chew up very large
> amounts of memory in the NN if lots of clients try to do this simultaneously.
> Seems like it would be better if we also considered the amount of location information
> being returned when deciding how many files to return.
> Patch will follow shortly.
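The idea described in the issue, budgeting a response by total block locations rather than by a fixed entry count, can be sketched as follows. This is a hypothetical illustration, not the actual HDFS-5788 patch; the class name, method, and budget value are invented for the example.

```java
import java.util.Arrays;

// Hypothetical sketch (not the actual HDFS patch): size a listLocatedStatus
// batch by the total number of block locations (blocks * replication) it
// would carry, instead of a fixed count of 1000 entries.
public class LocatedBatchSizer {

    // Returns how many directory entries to include so that the running
    // total of block locations stays within locationBudget. Always returns
    // at least one entry so the listing can make progress even when a
    // single entry exceeds the budget.
    static int entriesWithinBudget(int[] blocksPerEntry, int replication,
                                   int locationBudget) {
        int locations = 0;
        for (int i = 0; i < blocksPerEntry.length; i++) {
            locations += blocksPerEntry[i] * replication;
            if (locations > locationBudget && i > 0) {
                return i; // stop before the entry that would exceed the budget
            }
        }
        return blocksPerEntry.length;
    }

    public static void main(String[] args) {
        // The scenario from the issue: 7000 entries, 4 blocks each, 3-way
        // replication, i.e. 84000 locations in a single unbounded response.
        int[] dir = new int[7000];
        Arrays.fill(dir, 4);
        // With a (made-up) budget of 12000 locations, each batch holds 1000
        // entries (1000 * 4 * 3 = 12000), so the NN sends several smaller
        // responses instead of one very large one.
        System.out.println(entriesWithinBudget(dir, 3, 12000));
    }
}
```

The key property is that directories of small, unreplicated files still return large batches, while directories of large, heavily replicated files are paged in smaller chunks.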

This message was sent by Atlassian JIRA
