hadoop-common-user mailing list archives

From maha <m...@umail.ucsb.edu>
Subject Spilled Records
Date Tue, 22 Feb 2011 02:51:58 GMT
Hello everyone,

 Do "spilled records" mean that the sort buffer is not large enough to hold all of the
input records in memory, so some records are written to local disk?

 If so, I tried raising io.sort.mb from the default of 100 to 200, but the number of
spilled records stayed the same. Why?
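For context, this is the change I made, sketched as a mapred-site.xml fragment (assuming the classic pre-YARN configuration keys; the same value can also be passed per job with -D io.sort.mb=200):

```xml
<!-- mapred-site.xml: raise the map-side sort buffer from 100 MB to 200 MB -->
<property>
  <name>io.sort.mb</name>
  <value>200</value>
</property>
```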

 Also, might changing io.sort.record.percent from .8 to .9 produce unexpected exceptions?

Thank you,