Subject: Re: hbase & lucene
From: Jamie Johnson
To: hbase-user@hadoop.apache.org
Date: Sat, 16 May 2009 20:31:24 -0400

Thanks, I am looking into this. Perhaps my question should have been a little different, though. If I am starting from scratch, with nothing in the DB, and want to keep a Lucene index for the information I add to HBase, is there a way to build the index on insertion instead of running a MapReduce job after the data has been inserted? Or is there a way to index only data added after a certain date, so the job can be run periodically instead of having to reindex everything? Again, I am very new to this, so any help would be appreciated.
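For context, a minimal sketch of the "index on insertion" idea, assuming the Lucene 2.4-era API; the InsertTimeIndexer class and its field names are hypothetical (not an existing HBase API), and the HBase write itself is elided:

  import java.io.IOException;

  import org.apache.lucene.analysis.standard.StandardAnalyzer;
  import org.apache.lucene.document.Document;
  import org.apache.lucene.document.Field;
  import org.apache.lucene.index.IndexWriter;
  import org.apache.lucene.store.FSDirectory;

  // Hypothetical helper: add a row to the Lucene index at the same time it is
  // written to HBase, instead of rebuilding the whole index with MapReduce.
  public class InsertTimeIndexer {

    private final IndexWriter writer;

    public InsertTimeIndexer(String indexPath) throws IOException {
      // Lucene 2.4-era constructor; newer versions use FSDirectory.open()
      // and an IndexWriterConfig instead.
      writer = new IndexWriter(FSDirectory.getDirectory(indexPath),
          new StandardAnalyzer(), true, IndexWriter.MaxFieldLength.UNLIMITED);
    }

    public void index(String rowKey, String column, String value) throws IOException {
      // ... write the same cell to HBase here (e.g. via the HBase client) ...

      Document doc = new Document();
      // Store the row key so a search hit can be looked up again in HBase.
      doc.add(new Field("rowkey", rowKey, Field.Store.YES, Field.Index.NOT_ANALYZED));
      doc.add(new Field(column, value, Field.Store.NO, Field.Index.ANALYZED));
      writer.addDocument(doc);
    }

    public void close() throws IOException {
      writer.close();   // flushes and commits the pending documents
    }
  }

Note that Lucene allows only one open IndexWriter per index directory, so inserts from many clients would have to funnel through a single indexing process (or write to separate per-client indexes).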
On Sat, May 16, 2009 at 4:31 PM, tim robertson wrote:
> There is an IndexTableReduce class in the HBase source that you might
> want to look at.
>
> Here is a basic example of usage:
>
>   BuildTableIndex bti = new BuildTableIndex();
>   JobConf conf = new JobConf(TestBuildLucene.class);
>   conf = bti.createJob(conf, 1, 1, "/tmp/lucene-hbase", "occurrence",
>       "raw:CatalogueNo");
>   try {
>     JobClient.runJob(conf);
>   } catch (IOException e) {
>     e.printStackTrace();
>   }
>
>   // do a term query
>   try {
>     IndexReader reader = IndexReader.open("/tmp/lucene-hbase/part-00000");
>     // ...
>   } catch (IOException e) {
>     e.printStackTrace();
>   }
>
> Cheers,
> Tim
>
>
> On Sat, May 16, 2009 at 10:03 PM, Jamie Johnson wrote:
> > I have seen several pages (most over a year old) which make reference to
> > building Lucene indexes against HBase. Is there any updated documentation
> > which can be used to do this, or an old document which is still valid? I am
> > new to the HBase world, so any help on this would be greatly appreciated.
> >
> > Jamie
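For completeness, a hedged sketch of what the term query against that index might look like; the field name is assumed to match the column passed to createJob, the catalogue number is made up, and the Lucene 2.4-era search API is assumed:

  import java.io.IOException;

  import org.apache.lucene.document.Document;
  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.index.Term;
  import org.apache.lucene.search.Filter;
  import org.apache.lucene.search.IndexSearcher;
  import org.apache.lucene.search.ScoreDoc;
  import org.apache.lucene.search.TermQuery;
  import org.apache.lucene.search.TopDocs;

  public class TermQueryExample {
    public static void main(String[] args) throws IOException {
      // Open one of the index shards written by the MapReduce job.
      IndexReader reader = IndexReader.open("/tmp/lucene-hbase/part-00000");
      IndexSearcher searcher = new IndexSearcher(reader);

      // Field name assumed to match the indexed column ("raw:CatalogueNo");
      // "12345" is a made-up catalogue number.
      TermQuery query = new TermQuery(new Term("raw:CatalogueNo", "12345"));
      TopDocs hits = searcher.search(query, (Filter) null, 10);

      for (ScoreDoc sd : hits.scoreDocs) {
        Document doc = searcher.doc(sd.doc);
        System.out.println(doc);   // contains the stored fields, e.g. the row key
      }

      searcher.close();
      reader.close();
    }
  }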