Subject: Re: Retrieving large numbers of documents from several disks in parallel
From: Robert Bart
To: java-user@lucene.apache.org
Date: Tue, 27 Dec 2011 18:14:57 -0800

Erick,

Thanks for your reply! You are probably right to question how many Documents we are retrieving. We know it isn't ideal, but significantly reducing that number would require us to completely rebuild our system. Before we do that, we were just wondering if there was anything in the Lucene API, or elsewhere, that we were missing that might help.

Search times for a single document from each index seem to run about 100ms. For normal queries (e.g. 3K Documents per index), essentially the entire search time is spent loading documents (i.e. going from Fields to Strings). Our post-processing takes under 1s, and is done only after all docs are loaded.

The problem with our current setup is that we need to retrieve a large sample of Documents and aggregate them in certain ways before we can tell which ones we really needed and which ones we didn't. This doesn't sound like something a Filter would be able to do, so I'm not sure how well we could push it down into the search process. Avoiding this aggregation step is what would require us to completely redesign the system - but maybe that's just what we'll have to do.
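For reference, here is a stripped-down sketch of what each per-disk thread currently does (simplified from our real code; the "assertion" field name is just a placeholder for our actual stored fields):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.lucene.document.Document;
    import org.apache.lucene.search.Filter;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;

    // One instance of this runs per disk, on its own thread, against
    // that disk's IndexSearcher (Lucene 3.4).
    class DiskWorker {
      List<String> loadHits(IndexSearcher searcher, Query query, Filter filter)
          throws IOException {
        TopDocs top = searcher.search(query, filter, 3000);
        List<String> results = new ArrayList<String>(top.scoreDocs.length);
        for (ScoreDoc sd : top.scoreDocs) {
          Document doc = searcher.doc(sd.doc); // one random disk read per hit
          results.add(doc.get("assertion"));   // placeholder stored field
        }
        return results;
      }
    }

Essentially all of the 30 seconds goes into the searcher.doc() calls. Would a FieldSelector (e.g. passing new MapFieldSelector("assertion") to doc()) make those reads any cheaper? Since that one field makes up most of each Document, I suspect not, but maybe I'm misunderstanding what doc() actually reads from disk.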
Anyway, thanks for your help! Any other suggestions would be appreciated, but if there is no (relatively) easy solution, that's OK.

Rob

On Thu, Dec 22, 2011 at 4:51 AM, Erick Erickson wrote:

> I call into question why you "retrieve and materialize as many as 3,000 Documents from each index in order to display a page of results to the user". You have to be doing some post-processing, because displaying 12,000 documents to the user is completely useless.
>
> I wonder if this is an "XY problem", see: http://people.apache.org/~hossman/#xyproblem
>
> You're seeking all over each disk for 3,000 documents, which will take time no matter what - especially if you're loading a bunch of fields.
>
> So let's back up a bit and ask why you think you need all those documents. Is it something you could push down into the search process?
>
> Also, 250M docs/index is a lot of docs. Before continuing, it would be useful to know your raw search performance if you, say, fetched one document from each partition, keeping in mind Lance's comment that the first searches load up a bunch of caches and will be slow. And, as he says, you can get around that with autowarming.
>
> But before going there, let's understand the root of the problem: is it search speed, or just loading all those documents and then doing your post-processing?
>
> Best
> Erick
>
> On Thu, Dec 22, 2011 at 3:16 AM, Lance Norskog wrote:
> > Is each index optimized?
> >
> > From my vague grasp of Lucene file formats, I think you want to sort the documents by segment document id, which is the order of documents on the disk. This lets you materialize documents in their on-disk order.
> >
> > Solr (and other apps) generally use a separate thread per task and separate index-reading classes (not sure which any more).
> >
> > As to the cold start: how many terms are there? You are loading them into the field cache, right? Solr has a feature called "auto-warming" which automatically runs common queries each time it reopens an index.
> >
> > On Wed, Dec 21, 2011 at 11:11 PM, Paul Libbrecht wrote:
> >> Michael,
> >>
> >> From a physical point of view, it would seem that the order in which the documents are read is very significant for the reading speed (my feeling is that the random-access seeks are the issue).
> >>
> >> You could:
> >> - move to a RAM disk or SSD to see if it makes a difference?
> >> - use something other than a searcher, which might do it better (pure speculation: does a hit collector make a difference?)
> >>
> >> Hope it helps.
> >>
> >> paul
> >>
> >> On 22 Dec 2011, at 03:45, Robert Bart wrote:
> >>
> >>> Hi All,
> >>>
> >>> I am running Lucene 3.4 in an application that indexes about 1 billion factual assertions (Documents) from the web over four separate disks, so that each disk has a separate index of about 250 million documents. The Documents are relatively small, less than 1KB each. These indexes provide data to our web demo (http://openie.cs.washington.edu), where a typical search needs to retrieve and materialize as many as 3,000 Documents from each index in order to display a page of results to the user.
> >>>
> >>> In the worst case, a new, uncached query takes around 30 seconds to complete, with all four disks I/O-bottlenecked during most of this time. My implementation uses a separate Thread per disk to (1) call IndexSearcher.search(Query query, Filter filter, int n) and (2) process the Documents returned from IndexSearcher.doc(int).
> >>> Since 30 seconds seems like a long time to retrieve 3,000 small Documents, I am wondering if I am overlooking something simple somewhere.
> >>>
> >>> Is there a better method for retrieving documents in bulk?
> >>>
> >>> Is there a better way of parallelizing indexes on separate disks than to use a MultiReader (which doesn't seem to parallelize the task of materializing Documents)?
> >>>
> >>> Any other suggestions? I have tried some of the basic ideas on the Lucene wiki, such as leaving the IndexSearcher open for the life of the process (a servlet). Any help would be greatly appreciated!
> >>>
> >>> Rob
> >
> > --
> > Lance Norskog
> > goksron@gmail.com

--
Rob