hadoop-common-dev mailing list archives

From "Pete Wyckoff (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3797) FUSE module chokes on directories with lots (10,000+ or so) files
Date Mon, 21 Jul 2008 17:45:31 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12615325#action_12615325 ]

Pete Wyckoff commented on HADOOP-3797:
--------------------------------------

Hi Craig,

On a directory with 30,000 files, the module's memory and CPU usage climb dramatically; I killed it
after 15 minutes.  With a few hundred or a thousand files it's just slow, but at tens of thousands
of files it becomes unusable.

-- pete


> FUSE module chokes on directories with lots (10,000+ or so) files
> -----------------------------------------------------------------
>
>                 Key: HADOOP-3797
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3797
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: contrib/fuse-dfs
>            Reporter: Pete Wyckoff
>
> For some reason, fuse is calling getattr for every file after doing a readdir. The readdir
> supplies the same info, so there's no reason for the getattr calls (that I can see), and it
> does not do this for subdirectories.
> I don't know why it's doing this, so I sent an email to the fuse development list.
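
A minimal sketch may make the redundancy concrete. In the FUSE 2.x high-level API, the readdir
callback hands each entry name to a filler that also accepts a struct stat, so the attributes
coming back from the directory listing are already in hand at readdir time. The
hdfs_list_directory() helper and dfs_entry record below are hypothetical stand-ins, not the actual
fuse-dfs symbols; this illustrates the call pattern only, not the fuse-dfs implementation.

/* Sketch of a FUSE 2.x high-level readdir handler for a DFS-backed
 * filesystem.  hdfs_list_directory() and struct dfs_entry are
 * hypothetical placeholders for the listing code, not fuse-dfs symbols. */
#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <errno.h>

struct dfs_entry {
    const char *name;
    struct stat st;        /* attributes already known from the listing */
};

/* Hypothetical: NULL-terminated array of entries for 'path', or NULL on error. */
extern struct dfs_entry **hdfs_list_directory(const char *path);

static int dfs_readdir(const char *path, void *buf, fuse_fill_dir_t filler,
                       off_t offset, struct fuse_file_info *fi)
{
    (void) offset;
    (void) fi;

    struct dfs_entry **entries = hdfs_list_directory(path);
    if (entries == NULL)
        return -ENOENT;

    filler(buf, ".", NULL, 0);
    filler(buf, "..", NULL, 0);

    for (struct dfs_entry **e = entries; *e != NULL; e++) {
        /* Each entry's attributes are passed to the filler here, yet the
         * kernel still issues a separate getattr per name afterwards,
         * which is the behavior this issue reports. */
        filler(buf, (*e)->name, &(*e)->st, 0);
    }
    return 0;
}

One possible factor, offered only as a guess: the FUSE 2.x high-level library uses the filler's
stat only for the directory entry itself (inode number and file type), not to prime the attribute
cache, so a later stat of each name still reaches the filesystem as a getattr call.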

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

