hbase-issues mailing list archives

From "Kannan Muthukkaruppan (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-2283) row level atomicity
Date Wed, 10 Mar 2010 05:05:27 GMT

    [ https://issues.apache.org/jira/browse/HBASE-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12843436#action_12843436 ]

Kannan Muthukkaruppan commented on HBASE-2283:
----------------------------------------------

Thanks for your input.

I have the code changes in place to support this in an upward-compatible way. The serialized format of a KeyValue
starts with an "int" length, and that length is now overloaded for versioning: if the length is the
special value -1, the rest of the data is interpreted in the new format; otherwise it is interpreted
in the old format.

An HLog entry could now be either:

<HLogKey>:<KeyValue>
or,
<HLogKey>:<-1 <# of edits, <KeyValue1>, <KeyValue2>, ...>>
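
To make the branching concrete, here is a rough reader-side sketch of that sentinel check. WalEditReader, the stand-in KeyValue type and the helper method below are illustrative only, not the actual HBase code:

import java.io.DataInputStream;
import java.io.IOException;

// Sketch only: shows how a reader could branch on the -1 length sentinel described above.
// KeyValue is a minimal stand-in type; none of this is the actual HBase code.
public class WalEditReader {

    static final int NEW_FORMAT_MARKER = -1;

    KeyValue[] readEdits(DataInputStream in) throws IOException {
        int first = in.readInt();
        if (first == NEW_FORMAT_MARKER) {
            // New format: -1 marker, number of edits, then each serialized KeyValue.
            int numEdits = in.readInt();
            KeyValue[] edits = new KeyValue[numEdits];
            for (int i = 0; i < numEdits; i++) {
                edits[i] = readKeyValue(in.readInt(), in);
            }
            return edits;
        }
        // Old format: 'first' is simply the length of a single serialized KeyValue.
        return new KeyValue[] { readKeyValue(first, in) };
    }

    private KeyValue readKeyValue(int length, DataInputStream in) throws IOException {
        byte[] bytes = new byte[length];
        in.readFully(bytes);
        return new KeyValue(bytes);
    }

    // Minimal stand-in for the real KeyValue class.
    static class KeyValue {
        final byte[] bytes;
        KeyValue(byte[] bytes) { this.bytes = bytes; }
    }
}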

I think the overall fix is pretty much code complete. I will put up the patch after some
basic testing (hopefully by tomorrow), and then continue more detailed testing in parallel.
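
For reference, a minimal sketch of the ordering the fix is meant to enforce: one WAL append covering the whole row, then the sync, and only then the memstore updates. The Wal, Memstore and KeyValue types below are illustrative stand-ins, not the real HBase classes:

import java.io.IOException;
import java.util.List;

// Sketch only: append all of a row's edits to the WAL as one atomic entry, sync,
// and only then apply them to the memstore. Wal, Memstore and KeyValue are
// illustrative interfaces/types, not the real HBase classes.
public class AtomicRowPut {

    interface Wal {
        void appendAll(List<KeyValue> rowEdits) throws IOException; // one entry for the whole row
        void sync() throws IOException;                             // HDFS sync (HDFS-200)
    }

    interface Memstore {
        void add(KeyValue edit);
    }

    static class KeyValue { }

    void put(Wal wal, Memstore memstore, List<KeyValue> rowEdits) throws IOException {
        // 1. One append covering every column family touched by this row's Put.
        wal.appendAll(rowEdits);
        // 2. Sync before acknowledging; if this throws, nothing has reached the memstore,
        //    so readers never see uncommitted data (Issue #2 below).
        wal.sync();
        // 3. Only now make the edits visible to readers.
        for (KeyValue edit : rowEdits) {
            memstore.add(edit);
        }
    }
}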





> row level atomicity 
> --------------------
>
>                 Key: HBASE-2283
>                 URL: https://issues.apache.org/jira/browse/HBASE-2283
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: Kannan Muthukkaruppan
>            Priority: Blocker
>             Fix For: 0.20.4, 0.21.0
>
>
> The flow during an HRegionServer.put() seems to be the following. [For now, let's just
> consider a single-row Put containing edits to multiple column families/columns.]
> HRegionServer.put() does a:
>         HRegion.put();
>        syncWal()  (the HDFS sync call).  /* this is assuming we have HDFS-200 */
> HRegion.put() does a:
>   for each column family 
>   {
>       HLog.append(all edits to the column family);
>       write all edits to Memstore;
>   }
> HLog.append() does a:
>   foreach edit in a single column family {
>     doWrite()
>   }
> doWrite() does a:
>    this.writer.append().
> There seem to be two related issues here that could result in inconsistencies.
> Issue #1: A put() does a bunch of HLog.append() calls. These in turn do a bunch of "write"
> calls on the underlying DFS stream. If we crash after having written out some appends to
> DFS, recovery will run and apply a partial transaction to the memstore.
> Issue #2: The updates to the memstore should happen after the sync rather than before. Otherwise,
> there is the danger that the write to DFS (sync) fails for some reason and we return an
> error to the client, but we have already taken edits into the memstore. So subsequent reads
> will serve uncommitted data.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

