accumulo-dev mailing list archives

From keith-turner <...@git.apache.org>
Subject [GitHub] accumulo pull request #176: Wrote blog post about durability and performance
Date Mon, 31 Oct 2016 13:34:50 GMT
Github user keith-turner commented on a diff in the pull request:

    https://github.com/apache/accumulo/pull/176#discussion_r85738347
  
    --- Diff: _posts/blog/2016-10-28-durability-performance.md ---
    @@ -0,0 +1,136 @@
    +---
    +title: "Durability Performance Implications"
    +date: 2016-10-28 17:00:00 +0000
    +author: Keith Turner
    +---
    +
    +## Overview
    +
    +Accumulo stores recently written data in a sorted in-memory map.  Before data
    +is added to this map, it's written to an unsorted write-ahead log (WAL).  If a
    +Tablet Server dies, the recently written data is recovered from the WAL.
    +
    +When data is written to Accumulo, the following happens (a client-side sketch
    +follows the list):
    +
    + * The client sends a batch of mutations to a Tablet Server
    + * The Tablet Server does the following:
    +   * Writes the mutations to the Tablet Server's WAL
    +   * Syncs or flushes the WAL
    +   * Adds the mutations to the sorted in-memory map
    +   * Reports success back to the client
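    +
    +Below is a minimal sketch of the client side of this exchange.  The connector
    +setup, table name, and column values are placeholders, not taken from a real
    +deployment:
    +
    +```java
    +import org.apache.accumulo.core.client.BatchWriter;
    +import org.apache.accumulo.core.client.BatchWriterConfig;
    +import org.apache.accumulo.core.client.Connector;
    +import org.apache.accumulo.core.data.Mutation;
    +import org.apache.accumulo.core.data.Value;
    +
    +public class WriteExample {
    +  // 'conn' is assumed to be an existing Connector and 'mytable' an existing table
    +  static void write(Connector conn) throws Exception {
    +    BatchWriter bw = conn.createBatchWriter("mytable", new BatchWriterConfig());
    +    Mutation m = new Mutation("row1");
    +    m.put("fam", "qual", new Value("val".getBytes()));
    +    bw.addMutation(m);
    +    bw.close(); // blocks until the Tablet Server reports success
    +  }
    +}
    +```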
    +
    +The sync/flush step above moves data written to the WAL from memory to disk.
    +Write-ahead logs are stored in HDFS.  HDFS supports two ways of forcing data to
    +disk for an open file: `hsync` and `hflush`.
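    +
    +To make the two calls concrete, here is a bare sketch against the HDFS API
    +(the file path is a placeholder; this is not Accumulo's WAL code):
    +
    +```java
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.fs.FSDataOutputStream;
    +import org.apache.hadoop.fs.FileSystem;
    +import org.apache.hadoop.fs.Path;
    +
    +public class SyncVsFlush {
    +  public static void main(String[] args) throws Exception {
    +    FileSystem fs = FileSystem.get(new Configuration());
    +    try (FSDataOutputStream out = fs.create(new Path("/tmp/example-wal"))) {
    +      out.write("mutation data".getBytes());
    +      out.hflush(); // fast: data is pushed to OS buffers on each datanode
    +      out.write("more mutation data".getBytes());
    +      out.hsync();  // slower: data is forced to disk on each datanode
    +    }
    +  }
    +}
    +```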
    +
    +## HDFS Sync/Flush Details
    +
    +When `hflush` is called on a WAL, it does not ensure data is on disk.  It only
    +ensures that data is in OS buffers on each datanode and on its way to disk.  As
    +a result, calls to `hflush` are very fast.  If a WAL is replicated to 3
    +datanodes, then data may be lost if all three machines reboot.  If only the
    +datanode process dies, that's OK, because the data was already flushed to the
    +OS.  The machines have to reboot for data loss to occur.
    +
    +To avoid data loss in the event of a reboot, `hsync` can be called.  This
    +ensures data is written to disk on all datanodes before returning.  When using
    +`hsync` for the WAL, if Accumulo reports success to a user, it means the data
    +is on disk.  However, `hsync` is much slower than `hflush`, and the way it's
    +implemented exacerbates the problem.  For example, `hflush` may take 1ms while
    +`hsync` may take 50ms.  This difference will impact writes to Accumulo, and in
    +some situations it can be mitigated with larger write buffers in Accumulo (see
    +the sketch below).
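    +
    +One way to amortize the cost of `hsync` is to send larger, less frequent
    +batches from the client.  A sketch of raising the `BatchWriter` buffer limits
    +(the exact values and the table name are illustrative, not recommendations):
    +
    +```java
    +import java.util.concurrent.TimeUnit;
    +import org.apache.accumulo.core.client.BatchWriter;
    +import org.apache.accumulo.core.client.BatchWriterConfig;
    +import org.apache.accumulo.core.client.Connector;
    +
    +public class LargeBufferExample {
    +  // 'conn' is assumed to be an existing Connector
    +  static BatchWriter createWriter(Connector conn) throws Exception {
    +    BatchWriterConfig cfg = new BatchWriterConfig();
    +    cfg.setMaxMemory(100 * 1024 * 1024);    // buffer up to ~100MB of mutations
    +    cfg.setMaxLatency(2, TimeUnit.MINUTES); // let the buffer fill before sending
    +    return conn.createBatchWriter("mytable", cfg);
    +  }
    +}
    +```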
    +
    +HDFS keeps checksum data internally by default.  Datanodes store checksum data
    +in a separate file in the local filesystem.  This means that when `hsync` is
    +called on a WAL, two files must be synced on each datanode, which doubles the
    +time.  To make matters even worse, when the two files are synced, the local
    +filesystem metadata is also synced.  Depending on the local filesystem and its
    +configuration, syncing the metadata may or may not take additional time.  In
    +the worst case, we need to wait for four sync operations at the local
    +filesystem level on each datanode.  One thing I am not sure about is whether
    +these sync operations occur in parallel on the replicas on different
    +datanodes.  Let's hope they occur in
    --- End diff --
    
    It's certainly not very useful.  Thinking I can do one of the following:
     * Omit it
     * Research it
     * Change it to ask anyone who might know the answer to this mystery to let us know.
    
    Researching it would be ideal, but I am thinking of going with the last option
    and asking for help.  If there is a reader w/ expertise, they could probably
    answer this much more quickly than me.


