hadoop-mapreduce-user mailing list archives

From "S, Manoj" <mano...@intel.com>
Subject RE: issues with decrease the default.block.size
Date Fri, 10 May 2013 02:57:46 GMT
http://search-hadoop.com/m/pF9001VX6SH/default.block.size&subj=Re+about+block+size
http://search-hadoop.com/m/HItS5IClD21/block+size&subj=Newbie+question+on+block+size+calculation

http://www.bodhtree.com/blog/2012/09/28/hadoop-how-to-manage-huge-numbers-of-small-files-in-hdfs/

http://wiki.apache.org/hadoop/HowManyMapsAndReduces
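To make the trade-off concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the links above). Each block is tracked as an object in NameNode heap, so a smaller block size multiplies the metadata load for the same data volume; note also that a file smaller than the block size only occupies one block anyway, so shrinking the block size does not by itself fix a small-files problem. The ~150 bytes of heap per block is a commonly quoted rule of thumb, not an exact figure:

```python
# Rough illustration: how block size affects the number of HDFS blocks
# (and hence NameNode metadata) for a fixed data volume.
# The ~150 bytes/block figure is an assumed rule of thumb.

def block_count(total_bytes, block_size):
    """Number of HDFS blocks needed to hold total_bytes (ceiling division)."""
    return -(-total_bytes // block_size)

MB = 1024 ** 2
TB = 1024 ** 4

data = 1 * TB
for bs in (16 * MB, 64 * MB, 128 * MB):
    blocks = block_count(data, bs)
    heap_mb = blocks * 150 / MB  # assumed ~150 bytes of NameNode heap per block
    print(f"block size {bs // MB:>3} MB -> {blocks:>6} blocks, ~{heap_mb:.1f} MB heap")
```

For 1 TB of data, dropping from 128 MB to 16 MB blocks multiplies the block count (and the map-task count for a full scan) by eight, which is the kind of overhead the links above discuss.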

Thanks,
Manoj

From: YouPeng Yang [mailto:yypvsxf19870706@gmail.com]
Sent: Thursday, May 09, 2013 9:13 PM
To: user@hadoop.apache.org
Subject: issues with decrease the default.block.size


Hi all,

     I am going to set up a new Hadoop environment. Because there are lots of small
 files, I would like to decrease the default block size to 16 MB
rather than merging the files into large enough ones (e.g. using SequenceFiles).
    I want to ask: are there any bad influences or issues with this?

Regards

