hadoop-hdfs-issues mailing list archives

From "Gopal V (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4070) DFSClient ignores bufferSize argument & always performs small writes
Date Wed, 17 Oct 2012 16:04:04 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13477984#comment-13477984 ]

Gopal V commented on HDFS-4070:
-------------------------------

The packet size property is global, which makes it impossible for a client to open streams with different buffer sizes.

I am suggesting using the bufferSize argument of fs.create() to derive a suitable packet size for that stream, as sketched below.
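
As a rough sketch of the difference (assuming the hadoop-1.x client, where the packet size comes from the dfs.write.packet.size configuration key and is read once per DFSClient; the paths and values here are illustrative, not from a patch):

{code}
// Sketch only: the configuration knob is per client, not per stream.
// Assumes hadoop-1.x, where DFSClient reads "dfs.write.packet.size" from the conf.
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PacketSizeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Global setting: applies to every stream opened by this client.
    conf.setInt("dfs.write.packet.size", 1024 * 1024);

    FileSystem fs = FileSystem.get(conf);

    // The bufferSize argument is already per call; this proposal would feed it
    // into the packet-size calculation for each stream.
    OutputStream small = fs.create(new Path("/tmp/benchmark/small"), true, 64 * 1024);
    OutputStream large = fs.create(new Path("/tmp/benchmark/large"), true, 1024 * 1024);

    // Today both streams end up with the same (global) packet size,
    // despite the different buffer sizes requested above.
    small.close();
    large.close();
  }
}
{code}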

Right now for hadoop-1.x, my math says 

{code}

// align bufferSize down to a multiple of bytesPerChecksum (power-of-two chunk size assumed)
bufferSize &= ~(bytesPerChecksum - 1);

writePacketSize = (bufferSize - bytesPerChecksum)
                  + checksumSize * ((bufferSize - bytesPerChecksum) / bytesPerChecksum)
                  + bytesPerChecksum / checksumSize + PKT_HEADER_LEN + SIZE_OF_INTEGER;
{code}

which is where the 1056405 derives from 1048576 (i.e. a packet then contains exactly aligned 1048576-byte writes to the block file and 8192-byte checksum writes).
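
As a sanity check, here is the same math as a runnable snippet; the constants below are what I believe the hadoop-1.x defaults to be (512-byte checksum chunks, 4-byte CRC32 checksums, 21-byte packet header, 4-byte length integer) and should be treated as assumptions rather than values quoted from the code:

{code}
// Worked example of the formula above, under assumed hadoop-1.x defaults.
public class PacketSizeMath {
  public static void main(String[] args) {
    int bufferSize = 1024 * 1024;   // 1048576, as passed to fs.create()
    int bytesPerChecksum = 512;     // io.bytes.per.checksum default (assumed)
    int checksumSize = 4;           // CRC32 checksum size (assumed)
    int PKT_HEADER_LEN = 21;        // packet header length (assumed)
    int SIZE_OF_INTEGER = 4;

    bufferSize &= ~(bytesPerChecksum - 1);  // align down; 1048576 is already aligned

    int writePacketSize = (bufferSize - bytesPerChecksum)
        + checksumSize * ((bufferSize - bytesPerChecksum) / bytesPerChecksum)
        + bytesPerChecksum / checksumSize
        + PKT_HEADER_LEN + SIZE_OF_INTEGER;

    // 1048064 + 8188 + 128 + 25 = 1056405
    System.out.println(writePacketSize);
  }
}
{code}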

Working on a patch, will put it up for review.
                
> DFSClient ignores bufferSize argument & always performs small writes
> --------------------------------------------------------------------
>
>                 Key: HDFS-4070
>                 URL: https://issues.apache.org/jira/browse/HDFS-4070
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 1.0.3, 2.0.3-alpha
>         Environment: RHEL 5.5 x86_64 (ec2)
>            Reporter: Gopal V
>            Priority: Minor
>
> The following code illustrates the issue at hand 
> {code}
>  protected void map(LongWritable offset, Text value, Context context)
>      throws IOException, InterruptedException {
>    OutputStream out = fs.create(new Path("/tmp/benchmark/", value.toString()), true, 1024*1024);
>    int i;
>    for (i = 0; i < 1024*1024; i++) {
>      out.write(buffer, 0, 1024);
>    }
>    out.close();
>    context.write(value, new IntWritable(i));
>  }
> {code}
> This code is run as a single map-only task with an input file on disk and map-output to disk.
> {{# su - hdfs -c 'hadoop jar /tmp/dfs-test-1.0-SNAPSHOT-job.jar  file:///tmp/list file:///grid/0/hadoop/hdfs/tmp/benchmark'}}
> In the data node's disk access patterns, the following consistent pattern was observed irrespective of the bufferSize provided.
> {code}
> 21119 read(58,  <unfinished ...>
> 21119 <... read resumed> "\0\1\0\0\0\0\0\0\0034\212\0\0\0\0\0\0\0+\220\0\0\0\376\0\262\252ux\262\252u"..., 65557) = 65557
> 21119 lseek(107, 0, SEEK_CUR <unfinished ...>
> 21119 <... lseek resumed> )             = 53774848
> 21119 write(107, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65024 <unfinished ...>
> 21119 <... write resumed> )             = 65024
> 21119 write(108, "\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux"..., 508 <unfinished ...>
> 21119 <... write resumed> )             = 508
> {code}
> Here fd 58 is the incoming socket, 107 is the blk file and 108 is the .meta file.
> The DFS packet size ignores the bufferSize argument, and writes suffer from suboptimal syscall & disk performance because of the default 64 KB packet value, as is evident from the interrupted read/write operations above.
> Changing the packet size to a more optimal 1056405 bytes results in a decent jump in performance, by cutting down on disk & network IOPS.
> h3. Average time (milliseconds) for a 10 GB write as 10 files in a single map task
> ||timestamp||65536-byte packets||1056252-byte packets||
> |1350469614|88530|78662|
> |1350469827|88610|81680|
> |1350470042|92632|78277|
> |1350470261|89726|79225|
> |1350470476|92272|78265|
> |1350470696|89646|81352|
> |1350470913|92311|77281|
> |1350471132|89632|77601|
> |1350471345|89302|81530|
> |1350471564|91844|80413|
> That is, on average, an increase from ~115 MB/s to ~130 MB/s from modifying the global packet size setting.
> This suggests that there is value in adapting the user-provided buffer sizes to hadoop packet sizing, per stream.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
