hadoop-hdfs-user mailing list archives

From Marcos Ortiz <mlor...@uci.cu>
Subject Re: Block size in HDFS
Date Fri, 10 Jun 2011 16:01:37 GMT
On 06/10/2011 10:35 AM, Pedro Costa wrote:
> Hi,
>
> If I define HDFS to use blocks of 64 MB, and I store a 1KB
> file in HDFS, will this file occupy 64MB in HDFS?
>
> Thanks,
HDFS is not very efficient at storing small files, because each file is
stored in its own block (of 64 MB in your case), and the metadata for
every block is held in memory by the NameNode. But note that this 1KB
file will only use 1KB of disk space: blocks are not padded out to
their full size.
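To see why the NameNode memory matters more than the disk space here, a
rough back-of-the-envelope sketch (the ~150 bytes per namespace object
is a commonly cited ballpark, not an exact figure, and `namenode_bytes`
is a made-up helper for illustration):

```python
# Assumed ballpark: each file entry and each block entry costs on the
# order of 150 bytes of NameNode heap. Actual cost varies by version.
BYTES_PER_NAMESPACE_OBJECT = 150

def namenode_bytes(num_files, blocks_per_file=1):
    """Approximate NameNode memory used by the given files."""
    objects = num_files + num_files * blocks_per_file  # file + block entries
    return objects * BYTES_PER_NAMESPACE_OBJECT

# One million 1 KB files (~1 GB of raw data), one block each:
small = namenode_bytes(1_000_000)
# Roughly the same ~1 GB stored as sixteen 64 MB files:
large = namenode_bytes(16)

print(small)  # 300000000 -> on the order of 300 MB of NameNode heap
print(large)  # 4800 -> a few KB of NameNode heap
```

The disk usage is about the same in both cases; it is the per-file
metadata in the NameNode's memory that makes small files expensive.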

For many small files, you can use Hadoop archives (HAR files), which
pack them into far fewer HDFS objects.
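A sketch of creating and reading such an archive with the `hadoop
archive` tool (the paths `/user/pedro/small-files` and
`/user/pedro/archives` are made up for this example):

```shell
# Pack everything under /user/pedro/small-files into files.har;
# -p gives the parent path the sources are relative to.
hadoop archive -archiveName files.har -p /user/pedro/small-files /user/pedro/archives

# The archive is then readable through the har:// filesystem scheme:
hadoop fs -ls har:///user/pedro/archives/files.har
```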
Regards

-- 
Marcos Luís Ortíz Valmaseda
  Software Engineer (UCI)
  http://marcosluis2186.posterous.com
  http://twitter.com/marcosluis2186


