hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-79) listFiles optimization
Date Wed, 15 Mar 2006 02:41:56 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-79?page=comments#action_12370447 ] 

Konstantin Shvachko commented on HADOOP-79:

No, it was not really a bottleneck.
Interesting about the profiling. Is it JMX or something else?
Yes, DFSFileInfo is for reporting only, so it does not save space.
It might save some code though, since with one field you
probably don't need two different functions to extract it.

> listFiles optimization
> ----------------------
>          Key: HADOOP-79
>          URL: http://issues.apache.org/jira/browse/HADOOP-79
>      Project: Hadoop
>         Type: Improvement
>   Components: dfs
>     Reporter: Konstantin Shvachko
>     Assignee: Konstantin Shvachko
>      Fix For: 0.1
>  Attachments: DFSFileInfo.patch
> In FSDirectory.getListing(), looking at the line
> listing[i] = new DFSFileInfo(curName, cur.computeFileLength(), cur.computeContentsLength(),
> 1. computeContentsLength() actually calls computeFileLength(), so the
> file length is calculated twice.
> 2. isDir() searches for the INode (starting from the rootDir) that has actually been
> found just two lines above; note that the tree is locked by that time.
> I propose a simple optimization for this, see attachment.
> 3. A related question: Why DFSFileInfo needs 2 separate fields len for file length and
> contentsLen for directory contents size? It looks like these fields are mutually exclusive,
> and we can use just one, interpreting it one way or another with respect to the value
of isDir.
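
The three points above can be sketched as follows. This is a hypothetical illustration, not the actual Hadoop 0.1 source: the names Node, FileInfoSketch, and length() are assumptions standing in for INode, DFSFileInfo, and the two compute*Length() methods.

```java
// Hypothetical sketch of the proposed optimization: compute the length once,
// reuse the node already found during the directory scan (no re-traversal
// from the root), and keep a single length field interpreted via isDir().

class Node {
    private final boolean isDir;
    private final long length; // file length, or contents size for a directory

    Node(boolean isDir, long length) {
        this.isDir = isDir;
        this.length = length;
    }

    boolean isDir() { return isDir; }

    // One accessor instead of separate computeFileLength()/computeContentsLength().
    long length() { return length; }
}

class FileInfoSketch {
    final String name;
    final boolean isDir;
    final long length; // file length, or directory contents size, per isDir

    FileInfoSketch(String name, Node node) {
        this.name = name;
        this.isDir = node.isDir();   // reuse the node we already hold
        this.length = node.length(); // length computed exactly once
    }

    public static void main(String[] args) {
        FileInfoSketch file = new FileInfoSketch("/a", new Node(false, 42L));
        FileInfoSketch dir = new FileInfoSketch("/b", new Node(true, 1024L));
        System.out.println(file.length + " " + dir.isDir); // prints "42 true"
    }
}
```

Because the two length fields are mutually exclusive, collapsing them to one loses no information as long as callers consult isDir() before interpreting the value.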

This message is automatically generated by JIRA.
