hadoop-hdfs-user mailing list archives

From John Lilley <john.lil...@redpoint.net>
Subject RE: HDFS buffer sizes
Date Fri, 24 Jan 2014 14:34:57 GMT
Ah, I see, it is a constant in CommonConfigurationKeysPublic.java:
  public static final int IO_FILE_BUFFER_SIZE_DEFAULT =
Are there benefits to increasing this for large reads or writes?
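[For readers wanting to experiment: the constant above backs the io.file.buffer.size property, which can be overridden in core-site.xml. A minimal sketch follows; the 128 KB value is illustrative only, not a recommendation from this thread.]

```xml
<!-- core-site.xml: override the default read/write buffer size
     (io.file.buffer.size, the property behind IO_FILE_BUFFER_SIZE_DEFAULT).
     131072 (128 KB) is an illustrative value, not a tested recommendation. -->
<property>
  <name>io.file.buffer.size</name>
  <value>131072</value>
</property>
```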

From: Arpit Agarwal [mailto:aagarwal@hortonworks.com]
Sent: Thursday, January 23, 2014 3:31 PM
To: user@hadoop.apache.org
Subject: Re: HDFS buffer sizes

HDFS does not appear to use dfs.stream-buffer-size.

On Thu, Jan 23, 2014 at 6:57 AM, John Lilley <john.lilley@redpoint.net> wrote:
What is the interaction between dfs.stream-buffer-size and dfs.client-write-packet-size?
I see that the default for dfs.stream-buffer-size is 4K.  Does anyone have experience using
larger buffers to optimize large writes?
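[Of the two properties asked about, dfs.client-write-packet-size is the one that can be tuned in hdfs-site.xml; it sets the size of the packets a client sends along the write pipeline. A minimal sketch, with an illustrative value only:]

```xml
<!-- hdfs-site.xml: dfs.client-write-packet-size controls the packet size
     used by the client on the HDFS write pipeline (default 64 KB).
     131072 (128 KB) below is illustrative, not a recommendation. -->
<property>
  <name>dfs.client-write-packet-size</name>
  <value>131072</value>
</property>
```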

NOTICE: This message is intended for the use of the individual or entity to which it is addressed
and may contain information that is confidential, privileged and exempt from disclosure under
applicable law. If the reader of this message is not the intended recipient, you are hereby
notified that any printing, copying, dissemination, distribution, disclosure or forwarding
of this communication is strictly prohibited. If you have received this communication in error,
please contact the sender immediately and delete it from your system. Thank You.
