hadoop-hdfs-dev mailing list archives

From "Adam Fuchs (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-7380) unsteady and slow performance when writing to file with block size >2GB
Date Fri, 07 Nov 2014 22:38:33 GMT
Adam Fuchs created HDFS-7380:

             Summary: unsteady and slow performance when writing to file with block size >2GB
                 Key: HDFS-7380
                 URL: https://issues.apache.org/jira/browse/HDFS-7380
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 2.4.0
            Reporter: Adam Fuchs
         Attachments: BenchmarkWrites.java

Appending to a large file with block size > 2GB can lead to periods of very poor performance
(4x slower than optimal). I found this issue when looking at Accumulo write performance in
ACCUMULO-3303. I wrote a small test application to isolate the performance issue down to a few
basic API calls (attached as BenchmarkWrites.java). A description of the execution can be found here: https://issues.apache.org/jira/browse/ACCUMULO-3303?focusedCommentId=14202830&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14202830
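The attached BenchmarkWrites.java is not reproduced in this message; below is a minimal sketch of the kind of write loop that can exhibit the issue, assuming a plain FileSystem.create() call with an explicit block size above 2GB. The class name, path, buffer size, and totals are illustrative, not the attached benchmark.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch, not the attached BenchmarkWrites.java.
public class LargeBlockWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Block size larger than 2GB -- the condition under which the slowdown was observed.
        long blockSize = 4L * 1024 * 1024 * 1024; // 4GB
        Path path = new Path("/tmp/large-block-test"); // illustrative path

        byte[] buffer = new byte[64 * 1024];           // 64KB per write call
        long totalBytes = 8L * 1024 * 1024 * 1024;     // write 8GB total
        long reportInterval = 256L * 1024 * 1024;      // report throughput every 256MB
        long written = 0;
        long nextReport = reportInterval;
        long start = System.currentTimeMillis();

        // create(Path, overwrite, bufferSize, replication, blockSize)
        try (FSDataOutputStream out =
                 fs.create(path, true, 64 * 1024, (short) 3, blockSize)) {
            while (written < totalBytes) {
                out.write(buffer);
                written += buffer.length;
                if (written >= nextReport) {
                    long elapsedMs = System.currentTimeMillis() - start;
                    System.out.printf("%d MB written, %.1f MB/s%n",
                        written / (1024 * 1024),
                        (written / (1024.0 * 1024.0)) / (elapsedMs / 1000.0));
                    nextReport += reportInterval;
                }
            }
        }
    }
}
{code}

With a loop like this, the per-interval throughput readings are what show the unsteady behavior: some intervals run near the expected rate while others drop to roughly a quarter of it.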

The specific Hadoop version was as follows:
[root@n1 ~]# hadoop version
Subversion git@github.com:hortonworks/hadoop.git -r 9e5db004df1a751e93aa89b42956c5325f3a4482
Compiled by jenkins on 2014-04-27T22:28Z
Compiled with protoc 2.5.0
From source with checksum 9e788148daa5dd7934eb468e57e037b5
This command was run using /usr/lib/hadoop/hadoop-common-

