hbase-user mailing list archives

From steven zhuang <steven.zhuang.1...@gmail.com>
Subject Re: how to do fast scan on huge table
Date Mon, 29 Mar 2010 04:16:46 GMT
thanks, you guys,
            Actually, we have thought of precomputing some results
for queries before the user runs them; that's an option too, a good one.
But I still want to know how powerful HBase can be in this "big table" case.
           Since there will be more tables and more queries, I am planning to
use RMI to develop a framework with 10-20 servers running various user
queries. One thing I am not sure of is how many scanners can run on a
table simultaneously; is there a limit?
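The multi-scanner framework described above can be sketched roughly as follows. This is a minimal illustration of partitioning a sorted key space into ranges and scanning each range in its own worker; an in-memory dict stands in for the HBase table, and all names here are hypothetical, not HBase API:

```python
# Sketch: split a sorted table's row-key space into ranges and scan
# each range concurrently, one worker per range (analogous to one
# HBase Scan with start/stop rows per worker).
from concurrent.futures import ThreadPoolExecutor

# Zero-padded keys so lexicographic order matches numeric order,
# like well-designed HBase row keys.
table = {f"row{i:05d}": i for i in range(1000)}
keys = sorted(table)

def scan_range(start, stop):
    """Scan rows with start <= key < stop and aggregate their values."""
    return sum(table[k] for k in keys if start <= k < stop)

# Four non-overlapping key ranges, one "scanner" each.
bounds = ["row00000", "row00250", "row00500", "row00750", "row99999"]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(scan_range, bounds[:-1], bounds[1:]))

total = sum(partials)  # combine the per-range results
```

In real HBase the practical ceiling on concurrent scanners is set by region server capacity (RPC handler threads, scanner leases) rather than a hard per-table limit, so the client-side fan-out above should be sized to the cluster.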

On Sun, Mar 28, 2010 at 2:22 AM, Andrew Purtell <apurtell@apache.org> wrote:

> A really good suggestion. We advocate and use this extensively.
> When the queries (or some reasonable subset) can be anticipated and some
> amount of lag is acceptable, then you can periodically run a MR job that
> precomputes answers to anticipated queries and writes them to a table that
> you will use to service real-time queries. This trades space (the disk
> needed to store the precomputed results) for time (the time necessary to
> run the analytic query over the data).
> Many BI and OLAP use cases are satisfied by this approach.
>   - Andy
> > From: Karthik K <oss.akk@gmail.com>
> > Subject: Re: how to do fast scan on huge table
> > To: hbase-user@hadoop.apache.org
> > Date: Friday, March 26, 2010, 10:17 PM
> >
> > Another option might be to do an M-R job to compress the
> > input space down to the output space, so the result is
> > amenable to being served directly to the web-app. Of course,
> > that depends on the output space size, the need for
> > real-time-ness of the data, and the acceptable latency lag
> > (of the app) between the write and the read.
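The precompute pattern Andy and Karthik describe can be sketched as follows. This is a minimal stand-in, not HBase code: a plain function plays the periodic MapReduce job, a dict plays the precomputed results table, and a point lookup plays the real-time Get. All names are hypothetical:

```python
# Sketch of the space-for-time trade: a batch job aggregates the raw
# data into a small results "table" ahead of time, so the serving path
# is a single lookup instead of a full scan.

raw_events = [("user1", 3), ("user2", 5), ("user1", 7), ("user3", 1)]

def batch_precompute(events):
    """MapReduce stand-in: aggregate per-user totals offline."""
    results = {}
    for user, value in events:
        results[user] = results.get(user, 0) + value
    return results  # in HBase this would be written to a results table

results_table = batch_precompute(raw_events)  # run periodically

def serve_query(user):
    """Real-time path: one point lookup against precomputed results."""
    return results_table.get(user, 0)
```

The acceptable staleness is set by how often the batch job reruns; queries between runs see the previous run's answers, which is the latency lag Karthik mentions.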
