hadoop-hdfs-user mailing list archives

From Lior Schachter <li...@infolinks.com>
Subject hdfs block size cont.
Date Thu, 17 Mar 2011 13:10:25 GMT
Hi,
If I have big gzip files (>> block size), will MapReduce split a single
file into multiple blocks and send them to different mappers?
The behavior I currently see is that one map task is opened per file (and
not per block).
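
For context, my understanding is that TextInputFormat treats gzip as
non-splittable, and that the decision looks roughly like the sketch below
(written against the 0.21+ API, where SplittableCompressionCodec exists;
on 0.20 the presence of any codec at all disables splitting). If that is
right, it would explain the one-map-per-file behavior I see:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;
    import org.apache.hadoop.io.compress.SplittableCompressionCodec;

    public class SplitCheck {
      public static void main(String[] args) {
        Configuration conf = new Configuration();
        Path file = new Path(args[0]);  // e.g. one of my .gz input files

        // The input format asks the codec factory which codec (if any)
        // matches the file suffix; .gz resolves to GzipCodec.
        CompressionCodec codec =
            new CompressionCodecFactory(conf).getCodec(file);

        // A file is split only if it is uncompressed or its codec is
        // splittable (e.g. bzip2). GzipCodec is not, so the whole .gz
        // file becomes one split and therefore one mapper.
        boolean splittable =
            (codec == null) || (codec instanceof SplittableCompressionCodec);
        System.out.println(file + " splittable: " + splittable);
      }
    }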

I would also appreciate it if you could share your experience in choosing
a block size (how it relates to overall HDFS capacity and to the amount of
data a job processes).
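
To make the question concrete: I know the block size can be overridden per
file at create time, as in the sketch below (the 256 MB figure is just a
placeholder, not a recommendation), but I am unsure what values people pick
in practice and why:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical 256 MB block size for one large file; the
        // cluster-wide default (dfs.block.size) stays untouched.
        long blockSize = 256L * 1024 * 1024;
        FSDataOutputStream out = fs.create(
            new Path(args[0]),                         // destination path
            true,                                      // overwrite
            conf.getInt("io.file.buffer.size", 4096),  // io buffer size
            fs.getDefaultReplication(),
            blockSize);
        out.close();
      }
    }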


Thanks,
Lior
