hbase-user mailing list archives

From: Wayne <wav...@gmail.com>
Subject: Re: Open Scanner Latency
Date: Mon, 31 Jan 2011 22:15:43 GMT
On Mon, Jan 31, 2011 at 4:54 PM, Stack <stack@duboce.net> wrote:

> On Mon, Jan 31, 2011 at 1:38 PM, Wayne <wav100@gmail.com> wrote:
> > After doing many tests (10k serialized scans) we see that on average
> > opening the scanner takes 2/3 of the read time if the read is fresh
> > (scannerOpenWithStop=~35ms, scannerGetList=~10ms).
>
> I saw that this w/e.  The getScanner takes all the time.  Tracing, it
> did not seem to be locating regions in the cluster; I suspect it is
> down in StoreScanner when we seek all the StoreFiles.  I didn't look
> beyond that (this w/e, that is).
>

Very interesting. We have written our own Thrift method getRowsWithColumns,
which opens a scanner, does one get (we know how many rows we need), and
then closes, without ever having to store the scanner. We have yet to push it
to the cluster. We saw the benefit of never having to store the scanner, but
I did not consider that it might be our problem here. We will find out if
this is the problem. Why would StoreScanner be slow the first time around
and then speed up?
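For anyone curious, the shape of it is roughly the following, sketched
against the native Java client rather than our actual Thrift handler (the
table, family, and row names here are made up):

import java.io.IOException;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class GetRowsWithColumns {
  // Open a scanner, grab exactly nbRows in one round trip, and close it
  // right away -- the scanner is never parked server-side between calls.
  public static List<Result> getRowsWithColumns(HTable table, byte[] startRow,
      byte[] stopRow, byte[] family, int nbRows) throws IOException {
    Scan scan = new Scan(startRow, stopRow);
    scan.addFamily(family);
    scan.setCaching(nbRows);          // fetch the whole batch in one RPC
    ResultScanner scanner = table.getScanner(scan);
    try {
      return Arrays.asList(scanner.next(nbRows));
    } finally {
      scanner.close();                // nothing left to look up or time out
    }
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");   // hypothetical table name
    List<Result> rows = getRowsWithColumns(table,
        Bytes.toBytes("row0000"), Bytes.toBytes("row0100"),
        Bytes.toBytes("d"), 100);
    System.out.println("fetched " + rows.size() + " rows");
    table.close();
  }
}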

>
> What if you do full table scan of the data that you want hot on a
> period (making sure you Scan with the skip cache button OFF)?  Is the
> data you want cached all in one table?  Does marking the table
> in-memory help?
>

We have the block cache turned off for all of our tables. A full table scan
would be very expensive and time-consuming. Especially since it seems to be
flushed quickly, the cost/benefit would be negligible, I assume. We have lots
of tables we need to do this with, and memory-based is not an option. We read
a lot of our data, and a smaller cache actually seemed worse in initial tests.
We have enough problems with memory; we are happy to do disk reads (we just
want our bottleneck to be disk i/o).
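For reference, both the block cache and the in-memory flag are
per-column-family settings; set at table-creation time they look roughly
like this (table and family names here are invented):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class CreateTableNoCache {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTableDescriptor desc = new HTableDescriptor("mytable"); // hypothetical name
    HColumnDescriptor family = new HColumnDescriptor("d");   // hypothetical family
    family.setBlockCacheEnabled(false); // skip the block cache, as we do
    // family.setInMemory(true);        // Stack's alternative: keep this family's blocks cached
    desc.addFamily(family);
    new HBaseAdmin(conf).createTable(desc);
  }
}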

>
> > A read's latency for our type of usage pattern should be based
> > primarily on disk i/o latency and not looking around for where the
> > data is located in the cluster. Adding SSD disks wouldn't help us much
> > at all to lower read latency given what we are seeing.
> >
>
> You think that it's locating data in the cluster?  Long-lived clients
> shouldn't be doing lookups; they should have cached all seen region
> locations, unless the region moved.  Do you think that is what is
> happening, Wayne?
>

I think it is finding it; it works fast once hot, but our write load is so
heavy we assume it pushes the location out of memory. Even if I wait 10
minutes to rerun the test, the numbers for the open scanner start to creep up
quickly. Our test is opening 10k separate scanner objects, reading data, and
then closing.
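The test loop is nothing fancy; roughly the shape below, sketched here
against the native Java client rather than our Thrift code (table name and
row keys are invented):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class OpenScannerBench {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "mytable");      // hypothetical table name
    long openTotal = 0, readTotal = 0;
    for (int i = 0; i < 10000; i++) {                // 10k serialized scans
      byte[] start = Bytes.toBytes(String.format("row%08d", i));
      byte[] stop  = Bytes.toBytes(String.format("row%08d", i + 1));
      long t0 = System.currentTimeMillis();
      ResultScanner scanner = table.getScanner(new Scan(start, stop));
      long t1 = System.currentTimeMillis();          // "open scanner" cost
      scanner.next(100);                             // one batched read
      scanner.close();
      long t2 = System.currentTimeMillis();          // "read + close" cost
      openTotal += (t1 - t0);
      readTotal += (t2 - t1);
    }
    System.out.println("avg open ms: " + openTotal / 10000.0
        + ", avg read ms: " + readTotal / 10000.0);
    table.close();
  }
}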

>
> Here is an interesting article on SSDs and Cassandra:
> http://blog.kosmix.com/?p=1445  Speculation is that SSDs don't really
> improve latency given the size of reads done by cass (and hbase) but
> rather, they help keep latency about constant when there are lots of
> contending clients; i.e. maybe we could have one cluster at SU only if
> we used SSDs.
>

We were actually looking to go to 2.5" 10k disks (VelociRaptor) in those
nifty SuperMicro quad nodes, but from what I have seen, this or SSDs would
not have much effect.

>
> St.Ack
>
