hbase-issues mailing list archives

From "Kannan Muthukkaruppan (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-2283) row level atomicity
Date Mon, 15 Mar 2010 19:15:27 GMT

    [ https://issues.apache.org/jira/browse/HBASE-2283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12845468#action_12845468 ]

Kannan Muthukkaruppan commented on HBASE-2283:
----------------------------------------------

Currently, with my patch, TestGetClosestAtOrBefore:testUsingMetaAndBinary() (in regionserver) is broken.

I debugged this a bit, and it seems that my change has somehow broken the interaction between the scanner and delete. What are the expected semantics when a delete happens in the middle of a scan, as the test does here:

{code}
    // Region name for the first region of table "C"; start scanning there.
    byte [] firstRowInC = HRegionInfo.createRegionName(Bytes.toBytes("" + 'C'),
      HConstants.EMPTY_BYTE_ARRAY, HConstants.ZEROES);
    Scan scan = new Scan(firstRowInC);
    s = mr.getScanner(scan);
    try {
      List<KeyValue> keys = new ArrayList<KeyValue>();
      while (s.next(keys)) {
        // Delete the row just returned, while the scanner is still open.
        mr.delete(new Delete(keys.get(0).getRow()), null, false);
        keys.clear();
      }
    } finally {
      s.close();
    }
{code}

Is the scanner expected to have snapshot semantics (i.e., not be affected by deletes that happen while it is open)? With my patch, the scanner seems to be affected by deletes (I still need to debug why) -- but I was curious to hear whether the old behavior is the expected one.
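
For concreteness, here is a minimal toy sketch (plain Java, not HBase code; the class and method names are made up for illustration) of what snapshot semantics would mean: every edit is stamped with a sequence id, a scanner fixes its read point once when it opens, and any later edit, including a mid-scan delete like the one in the test above, stays invisible to that scanner:

{code}
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Toy model (not HBase code) of snapshot-semantics scanning: edits are
// multi-versioned and stamped with a sequence id; a scanner fixes its
// read point when it opens and only sees versions at or below it.
public class SnapshotScanSketch {
  private long nextSeqId = 0;

  // row -> (seqId -> value); a null value is a delete tombstone.
  private final NavigableMap<String, NavigableMap<Long, String>> store =
      new TreeMap<String, NavigableMap<Long, String>>();

  public void put(String row, String value) {
    versions(row).put(++nextSeqId, value);
  }

  // A delete is just another stamped edit; older versions stay readable.
  public void delete(String row) {
    versions(row).put(++nextSeqId, null);
  }

  private NavigableMap<Long, String> versions(String row) {
    NavigableMap<Long, String> v = store.get(row);
    if (v == null) {
      v = new TreeMap<Long, String>();
      store.put(row, v);
    }
    return v;
  }

  // Snapshot-semantics scan: the read point is fixed once, up front.
  public void scanSnapshot() {
    long readPoint = nextSeqId;
    for (Map.Entry<String, NavigableMap<Long, String>> e : store.entrySet()) {
      // The newest version at or below the read point wins; edits made
      // after the scan opened (including deletes) are invisible to it.
      Map.Entry<Long, String> latest = e.getValue().floorEntry(readPoint);
      if (latest != null && latest.getValue() != null) {
        System.out.println(e.getKey() + " => " + latest.getValue());
      }
    }
  }
}
{code}

(Concurrency control is out of scope in this sketch; the point is only the read-point check in scanSnapshot().)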




> row level atomicity 
> --------------------
>
>                 Key: HBASE-2283
>                 URL: https://issues.apache.org/jira/browse/HBASE-2283
>             Project: Hadoop HBase
>          Issue Type: Bug
>            Reporter: Kannan Muthukkaruppan
>            Priority: Blocker
>             Fix For: 0.20.4, 0.21.0
>
>         Attachments: rowLevelAtomicity_2283_v1.patch
>
>
> The flow during an HRegionServer.put() seems to be the following. [For now, let's just consider a single-row Put containing edits to multiple column families/columns.]
> HRegionServer.put() does a:
>     HRegion.put();
>     syncWal();  /* the HDFS sync call; this assumes we have HDFS-200 */
> HRegion.put() does a:
>   for each column family 
>   {
>       HLog.append(all edits to the column family);
>       write all edits to Memstore;
>   }
> HLog.append() does a:
>   foreach edit in a single column family {
>     doWrite()
>   }
> doWrite() does a:
>    this.writer.append().
> There seem to be two related issues here that could result in inconsistencies.
> Issue #1: A put() does a bunch of HLog.append() calls. These in turn do a bunch of "write" calls on the underlying DFS stream. If we crash after having written out some appends to DFS, recovery will run and apply a partial transaction to the memstore.
> Issue #2: The updates to the memstore should happen after the sync rather than before. Otherwise, there is the danger that the write to DFS (sync) fails for some reason & we return an error to the client, but we have already applied edits to the memstore. So subsequent reads will serve uncommitted data.
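
For the record, here is a hypothetical sketch (not the actual patch; the Wal and Memstore interfaces and their signatures are made up for illustration) of an ordering that would avoid both issues above: all edits of one put go into the log as a single record, and the memstore is touched only after the sync succeeds:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical ordering (not the actual patch): one atomic log record
// per put, and memstore updates strictly after the WAL sync.
public class AtomicPutSketch {

  /** Stand-in for HLog; these methods are assumptions, not real signatures. */
  interface Wal {
    void append(List<Edit> allEditsOfOnePut) throws IOException; // one record per put
    void sync() throws IOException;                              // the HDFS-200 sync
  }

  /** Stand-in for the Memstore. */
  interface Memstore {
    void add(Edit edit);
  }

  static class Edit {
    final byte[] family, qualifier, value;
    Edit(byte[] family, byte[] qualifier, byte[] value) {
      this.family = family;
      this.qualifier = qualifier;
      this.value = value;
    }
  }

  static void put(Wal wal, Memstore memstore, List<Edit> edits) throws IOException {
    // Issue #1: a single append for the whole row mutation, rather than
    // one append per column family, so log recovery is all-or-nothing.
    wal.append(new ArrayList<Edit>(edits));

    // Issue #2: sync before touching the memstore. If the sync throws,
    // the client gets an error and no uncommitted edit is ever readable.
    wal.sync();

    for (Edit edit : edits) {
      memstore.add(edit);
    }
  }
}
{code}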

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

