hadoop-common-dev mailing list archives

From "Christian Kunz (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-713) dfs list operation is too expensive
Date Tue, 13 Nov 2007 01:51:50 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Kunz updated HADOOP-713:
----------------------------------

    Priority: Blocker  (was: Major)

Changing to blocker based on conversation with Sameer.

> dfs list operation is too expensive
> -----------------------------------
>
>                 Key: HADOOP-713
>                 URL: https://issues.apache.org/jira/browse/HADOOP-713
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.8.0
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>            Priority: Blocker
>
> A list request to dfs returns an array of DFSFileInfo. A DFSFileInfo of a directory contains
> a field called contentsLen, indicating its size, which gets computed on the namenode side
> by recursively going through its subdirectories. During this computation, the whole dfs
> directory tree is locked.
> The list operation is used heavily by DFSClient for listing a directory, getting a file's
> size and number of replicas, and getting the size of dfs. Only the last operation needs the
> field contentsLen to be computed.
> To reduce its cost, we can add a flag to the list request. contentsLen is computed only if
> the flag is set. By default, the flag is false.
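
A minimal sketch of the proposed flag, using hypothetical names (Node, FileInfo, list, subtreeLength are illustrative stand-ins, not the actual NameNode or DFSFileInfo code): the expensive recursive size walk over a directory's subtree runs only when the caller asks for it, and is skipped by default.

// Hypothetical illustration of an on-demand contentsLen computation.
import java.util.ArrayList;
import java.util.List;

public class ListFlagSketch {

    /** Stand-in for a namespace node: a file has a length, a directory has children. */
    static class Node {
        final String name;
        final long fileLength;       // 0 for directories
        final List<Node> children;   // null for plain files
        Node(String name, long fileLength) { this.name = name; this.fileLength = fileLength; this.children = null; }
        Node(String name, List<Node> children) { this.name = name; this.fileLength = 0; this.children = children; }
        boolean isDir() { return children != null; }
    }

    /** Stand-in for the per-entry listing result described in the issue. */
    static class FileInfo {
        final String path;
        final long contentsLen;      // -1 when not computed
        FileInfo(String path, long contentsLen) { this.path = path; this.contentsLen = contentsLen; }
    }

    /**
     * List a directory's children. The recursive size walk runs only when
     * computeContentsLen is true, mirroring the proposed flag on the list request.
     */
    static List<FileInfo> list(Node dir, boolean computeContentsLen) {
        List<FileInfo> result = new ArrayList<>();
        for (Node child : dir.children) {
            long len = child.isDir()
                    ? (computeContentsLen ? subtreeLength(child) : -1)
                    : child.fileLength;
            result.add(new FileInfo(child.name, len));
        }
        return result;
    }

    /** The recursive walk that the flag lets callers skip. */
    static long subtreeLength(Node node) {
        if (!node.isDir()) return node.fileLength;
        long total = 0;
        for (Node c : node.children) total += subtreeLength(c);
        return total;
    }

    public static void main(String[] args) {
        Node root = new Node("/", new ArrayList<>(List.of(
                new Node("a.txt", 100),
                new Node("dir", new ArrayList<>(List.of(new Node("b.txt", 200)))))));
        // Cheap listing: no recursion, directory sizes left unset.
        for (FileInfo f : list(root, false)) System.out.println(f.path + " " + f.contentsLen);
        // Full listing: recursion computes directory sizes.
        for (FileInfo f : list(root, true)) System.out.println(f.path + " " + f.contentsLen);
    }
}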

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

