hadoop-common-dev mailing list archives

From Doug Cutting <cutt...@apache.org>
Subject Re: maximum number of files on 1GB heap memory?
Date Mon, 02 Oct 2006 18:27:10 GMT
김형준 wrote:
> I measured inode memory usage.
> It uses about 70 to 100 bytes per file.
> This means that the NameNode can serve only about 10 million files.
> (1GB memory / 100 bytes = 10,737,418 files)

If files averaged 1GB, then this would permit about 10 petabytes, right? 
That's a pretty big filesystem!
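
For concreteness, here is a back-of-envelope version of that estimate in 
Java.  The 100-bytes-per-file metadata cost and the 1 GB average file size 
are just the figures assumed in this thread, not measured values:

    // Rough capacity estimate for a 1 GB NameNode heap, assuming
    // ~100 bytes of heap per file and files averaging 1 GB each.
    public class NameNodeCapacityEstimate {
      public static void main(String[] args) {
        long heapBytes = 1L << 30;      // 1 GB of NameNode heap
        long bytesPerFile = 100;        // assumed per-file metadata cost
        long avgFileSize = 1L << 30;    // assumed average file size (1 GB)

        long maxFiles = heapBytes / bytesPerFile;           // ~10.7 million files
        long totalBytes = maxFiles * avgFileSize;           // aggregate data size
        double petabytes = totalBytes / Math.pow(1024, 5);  // ~10.2 PB

        System.out.printf("max files: %d, capacity: %.1f PB%n",
                          maxFiles, petabytes);
      }
    }
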

> In general, a Web search engine has over 100 million files.
> Is there any other way to handle so many files?

A web search engine might need to support billions of pages, but 
typically pages are not stored one per file; rather, they are stored in 
larger files, each containing many thousands of pages.
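
As one illustration (not the only way to do it), many fetched pages could 
be packed into a single HDFS file using SequenceFile, keyed by URL.  The 
output path and the shape of the page data below are made up for the 
example; only the Hadoop classes are real:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class PagePacker {
      // Writes (url, html) pairs into one SequenceFile, so the NameNode
      // tracks a single inode instead of one per page.
      public static void writePages(Iterable<String[]> pages) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path("/crawl/segment-00000/pages.seq"); // example path

        SequenceFile.Writer writer =
            SequenceFile.createWriter(fs, conf, out, Text.class, Text.class);
        try {
          for (String[] page : pages) {        // page[0] = url, page[1] = html
            writer.append(new Text(page[0]), new Text(page[1]));
          }
        } finally {
          writer.close();
        }
      }
    }

Fewer, larger files keep the NameNode's memory footprint bounded and also 
suit MapReduce's streaming access pattern much better than millions of 
tiny files would.
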

Doug
