hadoop-hdfs-dev mailing list archives

From Praveen Sripati <praveensrip...@gmail.com>
Subject Number of bytes per checksum
Date Fri, 24 Jun 2011 14:24:41 GMT

Hi,

Why is the checksum computed per io.bytes.per.checksum bytes (default 512) 
instead of over the complete block at once (dfs.block.size defaults to 
67108864)? If a block is corrupt, the entire block has to be 
re-replicated anyway. Isn't it more efficient to do the checksum for 
the complete block at once?
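
For reference, here is a rough Java sketch of the per-chunk scheme I mean. 
This is not the actual HDFS code; the chunk size and use of CRC-32 are just 
illustrative of checksumming every io.bytes.per.checksum bytes rather than 
the whole block:

import java.util.zip.CRC32;

// Conceptual sketch only (not HDFS source): checksum a block in
// io.bytes.per.checksum-sized chunks, so each small range of the
// block gets its own checksum instead of one checksum per block.
public class ChunkedChecksumSketch {
    static final int BYTES_PER_CHECKSUM = 512; // io.bytes.per.checksum default

    // Returns one CRC-32 value per 512-byte chunk of the block data.
    static long[] checksumChunks(byte[] blockData) {
        int chunks = (blockData.length + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
        long[] crcs = new long[chunks];
        for (int i = 0; i < chunks; i++) {
            int off = i * BYTES_PER_CHECKSUM;
            int len = Math.min(BYTES_PER_CHECKSUM, blockData.length - off);
            CRC32 crc = new CRC32();
            crc.update(blockData, off, len);
            crcs[i] = crc.getValue();
        }
        return crcs;
    }
}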

-- 
Thanks,
Praveen

