hbase-user mailing list archives

From Rural Hunter <ruralhun...@gmail.com>
Subject Re: data write/read consistency issue
Date Mon, 25 Jan 2016 04:33:15 GMT
No. The code logic is like this:
The main method:

String rowKey = "...";
addTagColumn(rowKey, "tag_" + id);
List tags = getTagColumns(rowKey);
// here I had to add retry logic to ensure the tags list contains the id just added

The addTagColumn method just does a simple HTable.put.
The getTagColumns method uses a ColumnPrefixFilter to get all the columns
starting with "tag_".
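The retry workaround mentioned above can be sketched as a small self-contained Java demo. Everything here is hypothetical stand-in code, not the real HTable-backed methods: TagStore and getTagsWithRetry are invented names, and the in-memory store only simulates a delayed-visibility read to show the loop's shape, it does not reproduce actual HBase behavior.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TagRetryDemo {

    // Hypothetical stand-in for the real HTable-backed add/get methods.
    interface TagStore {
        void addTagColumn(String rowKey, String tag);
        List<String> getTagColumns(String rowKey);
    }

    // Retry loop: re-read until the tag we just wrote is visible,
    // or until maxRetries attempts are exhausted.
    static List<String> getTagsWithRetry(TagStore store, String rowKey,
                                         String expectedTag, int maxRetries,
                                         long backoffMillis) throws InterruptedException {
        List<String> tags = store.getTagColumns(rowKey);
        for (int attempt = 0; attempt < maxRetries && !tags.contains(expectedTag); attempt++) {
            Thread.sleep(backoffMillis);
            tags = store.getTagColumns(rowKey);
        }
        return tags;
    }

    public static void main(String[] args) throws InterruptedException {
        // In-memory store that makes a write visible only on the third read,
        // simulating the stale-read symptom described in the thread.
        Map<String, List<String>> rows = new HashMap<>();
        List<String> pending = new ArrayList<>();
        int[] readsUntilVisible = {2};
        TagStore store = new TagStore() {
            public void addTagColumn(String rowKey, String tag) {
                pending.add(tag);
            }
            public List<String> getTagColumns(String rowKey) {
                if (readsUntilVisible[0]-- <= 0) {
                    rows.computeIfAbsent(rowKey, k -> new ArrayList<>()).addAll(pending);
                    pending.clear();
                }
                return new ArrayList<>(rows.getOrDefault(rowKey, Collections.emptyList()));
            }
        };

        store.addTagColumn("row1", "tag_42");
        List<String> tags = getTagsWithRetry(store, "row1", "tag_42", 5, 10);
        System.out.println(tags.contains("tag_42"));
    }
}
```

Note that a single HBase client re-reading its own write on one row should not normally need such a loop, which is why the question of a concurrent delete (or a filter problem) comes up below.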

The other possibility is that some other process removed the new id
between the add and get calls, but we checked the other code paths and
that seems unlikely. Or could there be a problem with the ColumnPrefixFilter?

2016-01-23 3:42 GMT+08:00 Stack <stack@duboce.net>:

> On Fri, Jan 22, 2016 at 1:51 AM, Rural Hunter <ruralhunter@gmail.com>
> wrote:
>
> > Hi,
> >
> > I have a hbase cluster with 7 servers at version 0.98.13-hadoop2,
> > dfs.replication=2.
> > In a write session, we update some data. Then in a new read session
> > immediately, we read the data using Get class and found it sometimes
> > returns the old version of the data(before the update).
> > We have to add a retry-loop in the read session to read the correct
> value.
> > Is this a normal behavior of hbase cluster?
> >
>
>
> No.  Tell us more. For sure the Get and Write are not concurrent with
> perhaps the Get happening before the update?
> St.Ack
>
