hbase-user mailing list archives

From Bing Li <lbl...@gmail.com>
Subject Re: Is it correct and required to keep consistency this way?
Date Thu, 20 Sep 2012 03:06:38 GMT
Sorry, I didn't keep the exceptions. I will post them if I see them again.

But after marking the writing methods as "synchronized", the exceptions were
gone.

I am a little confused. HTable is the interface for writing and reading data
in HBase. If it is not thread-safe, doesn't that mean locking must be set as
shown in my code?
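The usual alternative to a global lock here is one HTable instance per thread: separate instances can be used concurrently even though a single shared instance cannot. Below is a minimal sketch of the per-thread pattern via ThreadLocal, using a hypothetical stand-in class in place of a real HTable so it runs without an HBase cluster.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class PerThreadClient {
    // Hypothetical stand-in for a non-thread-safe client such as HTable.
    static class Client {
        private final long ownerId = Thread.currentThread().getId();
        boolean ownedByCurrentThread() {
            return ownerId == Thread.currentThread().getId();
        }
    }

    // Each thread lazily creates its own Client on first access,
    // so no two threads ever share one instance.
    static final ThreadLocal<Client> CLIENT = ThreadLocal.withInitial(Client::new);

    // Runs `tasks` jobs on a small pool; each job checks that the Client
    // it sees was created by the thread that is using it.
    public static boolean runWorkers(int tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Boolean>> results = new ArrayList<>();
        for (int i = 0; i < tasks; i++) {
            results.add(pool.submit(() -> CLIENT.get().ownedByCurrentThread()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        boolean allOwned = true;
        for (Future<Boolean> r : results) {
            allOwned &= r.get();
        }
        return allOwned;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runWorkers(16)
                ? "each thread used its own client"
                : "sharing detected");
    }
}
```

With a real HTable, the ThreadLocal initializer would construct the table from your configuration and table name; the point is only that each worker thread gets its own instance, so no locking around put/getScanner is needed for thread safety of the client itself.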

Thanks so much!
Bing

On Thu, Sep 20, 2012 at 11:00 AM, Bijieshan <bijieshan@huawei.com> wrote:

> Yes, it should be safe. What you need to pay attention to is that HTable is
> not thread-safe. What were the exceptions?
>
> Jieshan
> -----Original Message-----
> From: Bing Li [mailto:lblabs@gmail.com]
> Sent: Thursday, September 20, 2012 10:52 AM
> To: user@hbase.apache.org
> Cc: hbase-user@hadoop.apache.org; Zhouxunmiao
> Subject: Re: Is it correct and required to keep consistency this way?
>
> Dear Jieshan,
>
> Thanks so much for your reply!
>
> Right now no locking is set on the reading methods in my system, and that
> seems to work fine.
>
> But I noticed exceptions when no locking was put on the writing methods. If
> multiple threads write to HBase concurrently, do you think it is safe
> without locking?
>
> Best regards,
> Bing
>
> On Thu, Sep 20, 2012 at 10:22 AM, Bijieshan <bijieshan@huawei.com> wrote:
>
> > If I read your mail correctly, you want to keep reads and writes from
> > running in parallel at the application level. You can use a
> > ReentrantReadWriteLock if that is your intention, but it is not
> > recommended: HBase has its own mechanism (MVCC) to manage read/write
> > consistency. When a scan starts, the latest data that has not yet been
> > committed through MVCC may not be visible (depending on your
> > configuration).
> >
> > Jieshan
> > -----Original Message-----
> > From: Bing Li [mailto:lblabs@gmail.com]
> > Sent: Thursday, September 20, 2012 10:02 AM
> > To: hbase-user@hadoop.apache.org; user
> > Subject: Is it correct and required to keep consistency this way?
> >
> > Dear all,
> >
> > Sorry for sending this email multiple times! An error in the previous
> > email has been corrected.
> >
> > I am not sure whether it is correct and necessary to keep consistency as
> > follows when writing to and reading from HBase. Your help is highly
> > appreciated.
> >
> > Best regards,
> > Bing
> >
> >         // Writing
> >         public void AddOutgoingNeighbor(String hostNodeKey, String groupKey,
> >                 int timingScale, String neighborKey)
> >         {
> >                 List<Put> puts = new ArrayList<Put>();
> >
> >                 byte[] outgoingRowKey = Bytes.toBytes(
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_ROW +
> >                         Tools.GetAHash(hostNodeKey + groupKey + timingScale + neighborKey));
> >
> >                 Put hostNodeKeyPut = new Put(outgoingRowKey);
> >                 hostNodeKeyPut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_HOST_NODE_KEY_COLUMN,
> >                         Bytes.toBytes(hostNodeKey));
> >                 puts.add(hostNodeKeyPut);
> >
> >                 Put groupKeyPut = new Put(outgoingRowKey);
> >                 groupKeyPut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_GROUP_KEY_COLUMN,
> >                         Bytes.toBytes(groupKey));
> >                 puts.add(groupKeyPut);
> >
> >                 Put topGroupKeyPut = new Put(outgoingRowKey);
> >                 topGroupKeyPut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_TOP_GROUP_KEY_COLUMN,
> >                         Bytes.toBytes(GroupRegistry.WWW().GetParentGroupKey(groupKey)));
> >                 puts.add(topGroupKeyPut);
> >
> >                 Put timingScalePut = new Put(outgoingRowKey);
> >                 timingScalePut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_TIMING_SCALE_COLUMN,
> >                         Bytes.toBytes(timingScale));
> >                 puts.add(timingScalePut);
> >
> >                 Put neighborKeyPut = new Put(outgoingRowKey);
> >                 neighborKeyPut.add(NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_NEIGHBOR_KEY_COLUMN,
> >                         Bytes.toBytes(neighborKey));
> >                 puts.add(neighborKeyPut);
> >
> >                 try
> >                 {
> >                         // Locking is here
> >                         this.lock.writeLock().lock();
> >                         this.neighborTable.put(puts);
> >                 }
> >                 catch (IOException e)
> >                 {
> >                         e.printStackTrace();
> >                 }
> >                 finally
> >                 {
> >                         // Release the lock even if put() throws
> >                         this.lock.writeLock().unlock();
> >                 }
> >         }
> >
> >         // Reading
> >         public Set<String> GetOutgoingNeighborKeys(String hostNodeKey,
> >                 int timingScale)
> >         {
> >                 List<Filter> outgoingNeighborsList = new ArrayList<Filter>();
> >
> >                 SingleColumnValueFilter hostNodeKeyFilter = new SingleColumnValueFilter(
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_HOST_NODE_KEY_COLUMN,
> >                         CompareFilter.CompareOp.EQUAL,
> >                         new SubstringComparator(hostNodeKey));
> >                 hostNodeKeyFilter.setFilterIfMissing(true);
> >                 outgoingNeighborsList.add(hostNodeKeyFilter);
> >
> >                 SingleColumnValueFilter timingScaleFilter = new SingleColumnValueFilter(
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_FAMILY,
> >                         NeighborStructure.NODE_OUTGOING_NEIGHBOR_TIMING_SCALE_COLUMN,
> >                         CompareFilter.CompareOp.EQUAL,
> >                         new BinaryComparator(Bytes.toBytes(timingScale)));
> >                 timingScaleFilter.setFilterIfMissing(true);
> >                 outgoingNeighborsList.add(timingScaleFilter);
> >
> >                 FilterList outgoingNeighborFilter = new FilterList(outgoingNeighborsList);
> >                 Scan scan = new Scan();
> >                 scan.setFilter(outgoingNeighborFilter);
> >                 scan.setCaching(Parameters.CACHING_SIZE);
> >                 scan.setBatch(Parameters.BATCHING_SIZE);
> >
> >                 Set<String> neighborKeySet = Sets.newHashSet();
> >                 ResultScanner scanner = null;
> >                 try
> >                 {
> >                         // Lock is here
> >                         this.lock.readLock().lock();
> >                         scanner = this.neighborTable.getScanner(scan);
> >                         for (Result result : scanner)
> >                         {
> >                                 for (KeyValue kv : result.raw())
> >                                 {
> >                                         String qualifier = Bytes.toString(kv.getQualifier());
> >                                         if (qualifier.equals(NeighborStructure.NODE_OUTGOING_NEIGHBOR_NEIGHBOR_KEY_STRING_COLUMN))
> >                                         {
> >                                                 neighborKeySet.add(Bytes.toString(kv.getValue()));
> >                                         }
> >                                 }
> >                         }
> >                 }
> >                 catch (IOException e)
> >                 {
> >                         e.printStackTrace();
> >                 }
> >                 finally
> >                 {
> >                         // Close the scanner and release the lock even on failure
> >                         if (scanner != null)
> >                         {
> >                                 scanner.close();
> >                         }
> >                         this.lock.readLock().unlock();
> >                 }
> >                 return neighborKeySet;
> >         }
> >
>
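One general note on the locking pattern discussed above: unlock() belongs in a finally block, so that an exception in the guarded code can never leave the lock held. Here is a minimal self-contained sketch of that idiom with ReentrantReadWriteLock, using plain java.util.concurrent and no HBase types.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class GuardedWrite {
    private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();
    private static int value = 0;

    // Write under the write lock; the finally block guarantees the lock
    // is released even when the guarded code throws.
    static void write(int v, boolean fail) {
        LOCK.writeLock().lock();
        try {
            if (fail) {
                throw new RuntimeException("simulated I/O failure");
            }
            value = v;
        } finally {
            LOCK.writeLock().unlock();
        }
    }

    // Read under the read lock, with the same release pattern.
    static int read() {
        LOCK.readLock().lock();
        try {
            return value;
        } finally {
            LOCK.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(42, false);
        try {
            write(7, true);   // throws, but still releases the write lock
        } catch (RuntimeException ignored) {
        }
        // The lock is free again, so this read does not deadlock.
        System.out.println(read());   // prints 42
    }
}
```

If unlock() sits inside the try block instead, an IOException from put() or getScanner() skips it, and every later caller blocks forever on lock(); that failure mode looks very much like a hang rather than an exception, which makes it easy to miss.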
