accumulo-notifications mailing list archives

From "Josh Elser (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ACCUMULO-2668) slow WAL writes
Date Tue, 15 Apr 2014 04:33:15 GMT

    [ https://issues.apache.org/jira/browse/ACCUMULO-2668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13969208#comment-13969208
] 

Josh Elser commented on ACCUMULO-2668:
--------------------------------------

Some rough numbers from benchmarking this with a single continuous ingest client
across two different machines.

1. 8core (4 physical, hyperthreaded), 16G RAM, SSD. Saw an ingest rate of ~90K keyvalue/s
pre-patch and ~140K keyvalue/s post-patch
2. 8core (no-HT), 32G RAM, Spinning disks. Saw an ingest rate of ~60K keyvalue/s
pre-patch and ~110K keyvalue/s post-patch

With that in mind, it's pretty clear to me that this is desperately needed before 1.6.0 is
released.

For those who haven't looked at the patch: the default "slow" implementation previously
in use called write(byte) on the output stream for every byte in the provided byte[].
The patch changes this to pass the write(byte[], int, int) call through to the wrapped
OutputStream, which is much more efficient since it performs one bulk write instead of
a method call per byte.
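A minimal sketch of that kind of override (this is an illustration of the technique, not the actual Accumulo patch; the no-op flush() is assumed from the class name, and the constructor and any other members are omitted):

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// FilterOutputStream's inherited write(byte[], int, int) loops and calls
// write(int) once per byte. Overriding it to delegate the whole range to
// the wrapped stream turns N single-byte calls into one bulk call.
class NoFlushOutputStream extends FilterOutputStream {

  NoFlushOutputStream(OutputStream out) {
    super(out);
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    out.write(b, off, len); // bulk write instead of the per-byte default
  }

  @Override
  public void flush() {
    // Intentionally a no-op (assumed from the class name): flushing is
    // deferred until the caller explicitly syncs the underlying stream.
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    try (NoFlushOutputStream nfos = new NoFlushOutputStream(sink)) {
      byte[] payload = "hello".getBytes();
      nfos.write(payload, 0, payload.length); // one call into the sink
    }
    System.out.println(sink.size()); // 5 bytes written in a single pass
  }
}
```

The same byte[] reaches the wrapped stream either way; only the number of calls into it changes, which is where the throughput difference comes from.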


> slow WAL writes
> ---------------
>
>                 Key: ACCUMULO-2668
>                 URL: https://issues.apache.org/jira/browse/ACCUMULO-2668
>             Project: Accumulo
>          Issue Type: Bug
>    Affects Versions: 1.6.0
>            Reporter: Jonathan Park
>            Assignee: Jonathan Park
>            Priority: Blocker
>              Labels: 16_qa_bug
>             Fix For: 1.6.1
>
>         Attachments: noflush.diff
>
>
> During continuous ingest, we saw over 70% of our ingest time taken up by writes to the
WAL. When we ran the DfsLogger in isolation (created one outside of the Tserver), we saw
~25MB/s throughput as opposed to nearly 100MB/s from just writing directly to an HDFS OutputStream
(computed by taking the estimated size of the mutations sent to the DfsLogger class divided
by the time it took for it to flush + sync the data to HDFS).
> After investigating, we found one possible culprit was the NoFlushOutputStream. It is
a subclass of java.io.FilterOutputStream but does not override the write(byte[], int, int)
method signature. The javadoc indicates that subclasses of the FilterOutputStream should provide
a more efficient implementation.
> I've attached a small diff that illustrates and addresses the issue but this may not
be how we ultimately want to fix it.
> As a side note, I may be misreading the implementation of DfsLogger, but it looks like
we always make use of the NoFlushOutputStream, even if encryption isn't enabled. There appears
to be a faulty check in the DfsLogger.open() implementation that I don't believe can be satisfied
(line 384).



--
This message was sent by Atlassian JIRA
(v6.2#6252)
