hadoop-hdfs-user mailing list archives

From: donal0412 <donal0...@gmail.com>
Subject: dfs.write.packet.size set to 2G
Date: Tue, 08 Nov 2011 07:32:29 GMT
Hi,
I want to store lots of files in HDFS; each file is <= 2G in size.
I don't want the files to be split into blocks, because I need a whole
file while processing it, and I don't want to have to transfer its
blocks to one node at processing time.
An easy way to do this would be to set dfs.write.packet.size to 2G. I wonder
if someone has similar experience, or knows whether this is practicable.
Will there be performance problems when the packet size is set to such a large value?
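
For concreteness, the kind of client-side override I have in mind looks
roughly like the sketch below. The path and payload are illustrative, and
I am assuming the property is read as a Java int, in which case 2G
(2147483648) would overflow and Integer.MAX_VALUE is the largest value
that fits:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BigPacketWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Illustrative: raise the client write packet size as far as
            // it will go. If the client reads this property as an int,
            // Integer.MAX_VALUE is the ceiling, not a full 2G.
            conf.setInt("dfs.write.packet.size", Integer.MAX_VALUE);

            FileSystem fs = FileSystem.get(conf);
            FSDataOutputStream out =
                fs.create(new Path("/tmp/whole-file.bin"));
            out.writeBytes("example payload"); // stand-in for the real data
            out.close();
            fs.close();
        }
    }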

Thanks!
donal
